In October, Defense Secretary Leon Panetta gave a speech at the Intrepid Sea, Air and Space Museum in New York where he stated “cyberspace is the new frontier.” Ears really perked up when he said that cyber actors have already infiltrated America’s critical infrastructure, including the computers that operate chemical plants, the electric grid, and water facilities. Such collective attacks could result in a cyber Pearl Harbor, he said.
The candor with which government leaders are referring to the cyber threat isn’t surprising. There have been countless memoranda, reports, and speeches detailing the need to bolster the cybersecurity workforce as a result of continued cybersecurity breaches in federal IT systems. In the coming year and beyond, agencies want to hire more cybersecurity professionals to fill this growing need.
But agencies face a big problem – the shortfall of skilled professionals. A recent report by the Homeland Security Department’s cyber skills task force detailed the need for 600 new cybersecurity professionals in the near term who have mission-critical skills. Finding these professionals is difficult given that agencies have to compete with government contractors and others in the private sector to hire them. While the DHS cyber skills report largely focused on the need for technical competencies, another report released by the Government Accountability Office in November 2011 found that nearly every agency has experienced difficulty in hiring cyber workers.
Agencies across government will need to train or hire more people with the right skills in the coming year and beyond. To make up for this shortfall, most experts recommend creating a pipeline of applicants by starting early in the primary school grades to get students interested in STEM fields.
But there’s another challenge associated with recruiting skilled workers: Most of those going into STEM fields are men. Only 13 percent of US cybersecurity professionals are women. Further, the number of women enrolling in computer science degree programs is actually decreasing. In 1985, 37 percent of computer science graduates were women; in 2005, women made up only 22 percent. Despite the growing need for cybersecurity professionals, female enrollment in the fields necessary to get into these jobs continues to decrease. In fact, in 2010 only about 18 percent of undergraduates in STEM fields were women.
Women, being about half the population, are a largely untapped resource for cybersecurity recruitment. In a recent study, women claimed they encountered more institutional barriers to entering cybersecurity fields. For example, the largely male-oriented “hacker” culture prevalent in IT can be hard for girls and women to penetrate and leaves them with fewer opportunities for building mentor-mentee relationships. A lack of interest in STEM fields also contributes to low participation numbers. Perhaps owing to the lack of women in science and math teaching professions, the gap appears in middle school and widens in high school.
Creating a pipeline of cybersecurity applicants will involve more than scholarships and competitions to recruit the professionals the government will need. Addressing the barriers to entry for women would open up a group of potential applicants largely overlooked in the past.
Find more insights and trends that the federal IT community will be facing in 2013 and beyond in a recent study released for the Government Business Council. And visit Nextgov Prime on December 3rd to continue the conversation with leading thinkers, members of Congress and other experts in technology.
Carving up the Raspberry Pi
Back in 2011, I wrote about a new kind of computer that game designer and engineer David Braben was developing that would cost just $25. Because it came hot on the heels of the E2 Green PC, an effort that many people felt produced underwhelming results, folks were rightfully skeptical that anything worthwhile could be developed in the computing field for about the cost of a nice meal in a restaurant. Many of the people who commented on my original article made valid points about the limited types of display technology and networking options that could be attached to such a cheap PC.
Last December, however, the Raspberry Pi was made available for purchase and sold out in the initial run, and the Raspberry Pi Foundation went into a second production run. One commenter that time around said he was able to pick one up, was impressed with its performance, and wondered if there were some potential government uses, such as storing the little computers inside a vault that could be unlocked to reestablish communications and infrastructure in the event of a disaster.
Since then, people have been coming up with a variety of ways to make use of the computer. CNN recently reported on some of the amazing tasks the $25 computer is performing now thanks to its tech-savvy users. The list includes everything from displaying a train schedule to running an interactive network of weather and air quality stations. Some users even created their own private mobile phone network, driven by a single Raspberry Pi.
I think we tend to get a little too caught up sometimes in the speeds and feeds of the latest computing gear. Sure, if a Raspberry Pi can run an application, then a faster, more expensive computer can probably drive it better. (The Pi is about equivalent to a 300 MHz Pentium II processor, though it can be overclocked to 800 MHz, according to Raspberry Pi’s FAQ.)
But if a solid $25 system can make things work, why not use it? I've not been able to find any purely public-sector applications that use a Raspberry Pi, though I can think of a few where it might fit in nicely. It could be used to control remote cameras or sensors, for example, or to send images from a weather balloon or drone. And, of course, its adaptability has a lot of educational potential.
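To make the camera-and-sensor idea concrete, here is a minimal sketch of the sort of thing a Pi could run. It is purely illustrative: the pin number, the PIR motion sensor, and the use of the RPi.GPIO library and the raspistill camera tool are assumptions, not a description of any deployed system.

```python
# Hypothetical sketch of a Pi watching a PIR motion sensor and grabbing a
# photo when motion is detected. The pin number, the sensor, and the use of
# RPi.GPIO and the raspistill tool are assumptions for illustration only.
import subprocess
import time

import RPi.GPIO as GPIO

PIR_PIN = 4  # hypothetical BCM pin the sensor's output line is wired to

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)

try:
    while True:
        if GPIO.input(PIR_PIN):  # sensor reports motion
            filename = time.strftime("capture-%Y%m%d-%H%M%S.jpg")
            subprocess.call(["raspistill", "-o", filename])  # stock camera tool
            time.sleep(10)   # crude debounce so the SD card isn't flooded
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```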
If anyone knows of any other good potential uses for a $25 computer, or is planning to try to enlist the Pi for government service, let us know.
Posted by John Breeden II on Apr 02, 2013 at 9:39 AM
Iris Scans: Security Technology In Action
Iris-based security scans are the stuff of sci-fi movies, but NIST research shows how the technology can now be used in the real world to reliably identify individuals.
Sci-fi films routinely lead viewers to believe that scanning an individual's iris is a proven way to identify them, but in practice, the results haven't always been 100% dependable. One of the most significant challenges isn't the technology, but how slight changes in the structure of the iris can throw off calculations used in comparing images of the human eye.
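For context on what those calculations typically look like: most deployed iris-recognition systems (following Daugman's approach, which the NIST piece does not spell out) reduce each iris image to a binary "iris code" and score a pair of codes by their fractional Hamming distance. The toy sketch below shows the idea; real systems use codes thousands of bits long, occlusion masks, and rotation compensation, and the threshold here is illustrative only.

```python
# Toy illustration of how iris templates are commonly compared: each iris is
# reduced to a binary "iris code" and two codes are scored by fractional
# Hamming distance (the share of bits that disagree). The 0.32 threshold is
# illustrative, not an operational value.

def hamming_distance(code_a, code_b):
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must be the same length")
    disagreements = sum(a != b for a, b in zip(code_a, code_b))
    return disagreements / len(code_a)

def same_person(code_a, code_b, threshold=0.32):
    # A small distance means the bits mostly agree: likely the same iris.
    return hamming_distance(code_a, code_b) < threshold

enrolled = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # toy 16-bit code
probe    = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # two bits differ

print(hamming_distance(enrolled, probe))  # 0.125 -> under threshold, a match
print(same_person(enrolled, probe))       # True
```

This also shows why small structural changes matter: each changed bit nudges the distance toward the decision threshold.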
The long-term stability of the iris' distinguishing characteristics, critical for biometric identification, had come under question when a recent study of several hundred subjects found that iris recognition becomes increasingly difficult over a period of three years, consistent with an aging effect.
The latest in an ongoing series of studies of iris recognition for biometric identification, however, refutes that. Scientists at the National Institute of Standards and Technology (NIST) have found that the unique characteristics of the iris in the average person do not change for at least nine years. The results of the study, conducted by researchers in NIST's Information Access division, suggest that iris recognition of average individuals will remain viable for decades. They also imply that identity program managers may not need to recapture iris images as frequently, which factors into the total overall cost of maintaining iris recognition systems.
The new study by NIST researchers used two large operational data sets, including one of nearly 8,000 recurrent travelers across the Canadian-American border, involving millions of images. The travelers, like the woman pictured here in a photograph supplied by the Canadian Border Services Agency, use an iris identification system to confirm their identity. The system is part of a joint Canadian and American program to help people move quickly across the border. The study examined images that had been captured at least four years and up to nine years previously. NIST researchers found no evidence of a widespread aging effect.
NIST has been working with a variety of organizations to help improve the use of iris recognition systems. In that vein, it established the Iris Exchange program in 2008. The program has sought to establish standards for iris recognition, as well as the development and deployment of systems used to capture and identify iris images. Sponsors of the program include the FBI's Criminal Justice Information System Division and the Office of Biometric Identity Management in the Department of Homeland Security.
After several days of Curiosity being in safe mode, NASA engineers have resumed communication with their Mars rover.
NASA announced that engineers are working to restore the robotic rover, which landed on Mars in August 2012, to full working mode.
Curiosity is stable and not in danger.
The rover, which discovered that ancient Mars held key chemicals needed for life as well as evidence of ancient water flows, put itself into safe mode on July 2.
NASA said Curiosity's team has not yet pinpointed why the robot took that precautionary step, which meant it stopped all activities that weren't necessary for keeping itself running on a basic level.
Engineers are requesting the rover send home data that should help them diagnose the problem.
"Engineers are working to determine the cause of safe-mode entry," NASA noted on its website. "Preliminary information indicates an unexpected mismatch between camera software and data-processing software in the main computer."
Before this computer glitch, Curiosity put itself into safe mode on three other occasions -- all of them in 2013.
NASA sent Curiosity to Mars to try to find evidence that the Red Planet had at any point in its history been capable of sustaining life, even in microbial form.
The rover hit that goal in its first year on Mars when it discovered evidence that more than 3 billion years ago at least one area of the planet had fresh-water lakes and rivers.
Just last week, NASA approved a two-year extension to Curiosity's work.
NASA also has the rover Opportunity working on Mars. Opportunity's sibling, Spirit, has stopped working.
This story, "NASA reconnects with Mars rover Curiosity" was originally published by Computerworld.
Voice-user interface (VUI) technology such as Siri, Cortana and Amazon’s Echo has advanced to a point where voice recognition can be used as an authentication alternative to passwords.
High-profile technology investor Mary Meeker recently dedicated a large section of her annual report on the state of the internet to the lift-off of voice-user interface technology.
This tech makes human interaction with computers possible through speech. While VUI has been around for decades, the technology has made massive strides over the years and its improving accuracy continues to raise its profile.
In the 1970s, machines could recognise words with just 10% accuracy; by 2010 that had reached around 70%, and today it stands at approximately 90%.
Now it’s only a matter of time before accuracy is no longer an issue – for instance Google has announced that it’s working to ensure its speech recognition software will work with even the thickest of accents.
Access with voice authentication
As VUI accuracy continues to improve, it will naturally become a form of authentication as all voices have subtle differences, similar to fingerprints.
Consumer-facing organisations are already starting to trial voice recognition as a method of authentication.
A number of UK banks are seeing the need for this and have introduced new voice-related security measures.
Findings from a survey carried out by Pindrop showed that over half of respondents felt that no bank was fully secure, and 59% said they would leave their bank if they thought another one was more secure.
Voice recognition is also finding its way into the workplace, with employees being able to access work profiles and systems by simply speaking.
However, using voice authentication presents some clear concerns. Chief among these are the security implications: how will voice authentication be added, changed and shared organisation-wide?
Today, 95% of employees use a typed password to access email and devices such as phones and laptops.
As a result, forgotten-password requests are among the most common tickets raised with IT help desks. The emergence of voice authentication should offer a reprieve to the IT department, freeing it up to address more pressing tasks.
Taking one step at a time
The most astute and forward-thinking IT leaders will be paying close attention to developments in VUI. However, adoption should be cautious and staged in order to limit the security risks to the business – if there are speed bumps during implementation, new security operations can be tested, re-tested and contained if necessary.
Gradual adoption is how enterprises have traditionally met the introduction of new technology such as the rise of the smartphone.
IT teams have traditionally been nervous about allowing employees to access work email on their personal devices. In the same way, as organisations recognise the advantages of voice authentication and race to gain competitive advantage, they should slow down and find the solution that best fits.
The role of two-factor authentication
Two-factor authentication adds a second level of verification to an account log-in.
In traditional password authentication, the additional credential can be a further piece of information known by the user such as a phone number, or owned by the user like a biometric.
The added layer of security can trump the action of a hacker with access to the first authentication information. But there is a dilemma with passwords in that hackers can pretend to be users and request to recover sign-in credentials – which is much harder with voice recognition.
Two-factor authentication is part of the larger movement of maturing multi-factor authentication. For instance, biometrics are one way to solve the credential recovery issue, but they cannot detect fraud on their own.
So whilst VUI will be a key piece in the authentication puzzle, enterprises cannot look to voice as a direct and sole replacement for authentication. No matter how good the VUI is today, it is still always better to have multi-factor authentication.
It may not stop every attacker, but creating multiple layers certainly creates a roadblock for many of them. Even KITT got hacked once.
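As a rough illustration of that layering, the sketch below combines a hypothetical voice-match score with a standard time-based one-time password (RFC 6238). The voice_match() stub and its 0.85 threshold are placeholders for illustration, not Pindrop's or any vendor's actual scoring method.

```python
# Sketch of layering factors: a hypothetical voice-match score plus a standard
# time-based one-time password (RFC 6238). voice_match() and its 0.85
# threshold are placeholders, not any vendor's real speaker-verification logic.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time() // interval))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def voice_match(sample, enrolled_template):
    """Placeholder: a real system would return a speaker-verification score."""
    return 0.91

def authenticate(sample, template, secret_b32, submitted_code):
    voice_ok = voice_match(sample, template) >= 0.85
    code_ok = hmac.compare_digest(totp(secret_b32), submitted_code)
    return voice_ok and code_ok   # both layers must pass
```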
Sourced by Matt Peachey, VP/GM International at Pindrop
While research and spending on unmanned aircraft technology are ramping up, many questions around safety and privacy need to be answered by government and the industry before drones will routinely fly in public airspace.
Specifically, the Government Accountability Office has issued a report saying worldwide spending on unmanned aircraft could hit $89.1 billion over the next decade, but that worries over national security, privacy, and interference with Global Positioning System (GPS) signals have not been resolved and may influence acceptance of routine access for unmanned aircraft in the national airspace system.
Background: What the drone invasion looks like
The GAO said it issued a report with similar findings in 2008, and those issues remain largely unaddressed. There are seven core concerns, from the most recent GAO report:
- 1. The inability of unmanned aircraft to detect, sense, and avoid other aircraft and airborne objects in a manner similar to "see and avoid" by a pilot in a manned aircraft. To date, no suitable technology has been deployed that would provide unmanned systems with the capability to sense and avoid other aircraft and airborne objects and to comply completely with FAA regulatory requirements of the national airspace system. However, research and development efforts by FAA, DOD, NASA, and MITRE, among others, suggest that potential solutions to the sense and avoid obstacle may be available in the near term. With no pilot to scan the sky, most UAS do not have an on-board capability to directly "see" other aircraft. Consequently, unmanned aircraft must possess the capability to sense and avoid an object using on-board equipment, or within the line-of-sight of a human on the ground or in a chase aircraft, or by other means, such as ground-based sense and avoid. Since 2008, FAA and other federal agencies have managed several research activities to support meeting the sense and avoid requirements. DOD officials said the Department of the Army is working on a ground-based system that will detect other airborne objects and allow the pilot to direct the UAS to maneuver to a safe location. The Army has successfully tested one system, but it may not be usable on all types of drones.
- 2. Vulnerabilities in the command and control of UAS operations: Ensuring uninterrupted command and control for both small and large drones remains a key obstacle for safe and routine integration into the national airspace system. Since drones fly based on pre-programmed flight paths and by commands from a pilot-operated ground control station, the ability to maintain the integrity of command and control signals is critically important to ensure that the UAS operates as expected and as intended. FAA and MITRE have been researching solutions to lost link, but the standardization of lost link procedures, for both small and large UAS, has not been finalized. In a "lost link" scenario, the command and control link between the UAS and the ground control station is broken because of either environmental or technological issues, which could lead to loss of control of the UAS. To address this type of situation, drones generally have pre-programmed maneuvers that may direct the aircraft to first hover or circle in the airspace for a certain period of time to reestablish its radio link. If the link is not reestablished, then the UAS will return to "home" or the location from which it was launched, or execute an unintentional flight termination at its current location. It is important that air traffic controllers know where and how all aircraft are operating so they can ensure the safety of the aircraft.
- 3. Progress has been made in obtaining additional dedicated radio-frequency spectrum for unmanned operations, but additional dedicated spectrum, including satellite spectrum, is still needed to ensure secure and continuous communications for both small and large operations. The unmanned industry is working to develop and validate hardware and standards for communications operating in allocated spectrum. Specifically, according to NASA, it is developing, in conjunction with Rockwell Collins, a radio for control and a non-payload communications data link that would provide secure communications. In addition, FAA's UAS Research Management Plan identified 13 activities designed to mitigate command, control, and communication obstacles. One effort focused on characterizing the capacity and performance impact of UAS operations on air-traffic-control communications systems. In addition, a demonstration led by Embry-Riddle Aeronautical University in 2010 simulated a national airspace communications system to demonstrate the process and ability of a UAS pilot to establish alternate voice communications with air traffic control if the primary radio communications link were lost. NASA is also performing additional command and control research. As part of its 5-year UAS Integration in the National Airspace System Project, NASA is working to develop and verify a communications system prototype to support the allocation of spectrum for safe UAS operations.
- 4. FAA and NASA are taking steps to ensure the reliability of both small and large UAS by developing a certification process specific to UAS. Currently, FAA has a process and regulations in place for certifying any new aircraft type and allowing it access to the national airspace system. Drone stakeholders the GAO interviewed stated that this process is costly and manpower intensive, and does not assure certification. One manufacturer that tried certifying a UAS through this process noted that it took one year and cost $1 million to permit a single airframe to have access to the national airspace system. According to the FAA, another manufacturer recently started this process.
- 5. Standards-making bodies are working to develop safety, reliability, and performance standards for drones. The complexities of the issues to be addressed and the lack of operational and safety data have hindered the standards development process. Minimum aviation system performance standards (MASPS) and minimum operational performance standards (MOPS) are needed in the areas of: operational and navigational performance; command and control communications; and sense and avoid capabilities. As of June 2012, the FAA was still defining the data fields it needed and how the data will be used to support the development of performance or certification standards and the regulatory process for drones. FAA officials have since communicated their data requirements to DOD and also provided us with a list of general data requirements. Furthermore, FAA officials also noted that the agency currently has a contract with MITRE to address these data challenges in fiscal year 2013.
- 6. According to FAA, its draft Notice of Proposed Rule Making (NPRM) that would define and govern how small UAS would potentially operate in the national airspace system will be issued at the end of 2012. FAA regulations govern the routine operation of most aircraft in the national airspace system. However, these regulations do not contain provisions that explicitly address issues relating to drones. As the GAO highlighted in its 2008 report, existing regulations may need to be modified to address the unique characteristics of UAS to prevent "undue harm to manned aircraft." Today, UAS continue to operate as exceptions to the regulatory framework rather than being governed by it. Without specific and permanent regulations for safe operation of UAS, federal stakeholders, including DOD, continue to face challenges and limitations on their UAS operations. The lack of final regulations could hinder the acceleration of safe and routine integration of UAS into the national airspace system.
- 7. As the FAA and others continue to address the challenges to UAS integration, they must do so with the expected changes to the operations of the national airspace system as a result of the FAA's NextGen air traffic system in mind. As unmanned operations are expected to proliferate, it is important that they are able to safely operate in the NextGen environment.
Disease 'networks' mimic web
Studying the spread of virulent computer viruses may prove useful in understanding the spread of disease and the ability of ecosystems to handle disturbances, researchers say.
“In terms of computer networks, one of the clear points that comes out of the analysis is that if you have this scale-free network, most transmission can be traced to the most highly connected nodes. So this is a clear implication for how to prevent the transfer of viruses: You concentrate on the most highly connected nodes,” says Alun Lloyd, a researcher at the Institute for Advanced Study. Lloyd and Oxford University’s Robert May reported their findings in the journal Science on May 18.
Computer and biological networks have similar structures that affect how disturbances such as electronic viruses propagate through them. Computer networks are "scale-free" networks, meaning most nodes of the network have relatively few connections to other nodes, while a small number have many connections.
For instance, a university, Internet service provider or large company like Microsoft will have thousands or millions of connections to other points in the network, while a home computer may have only one. So, a virus that hits an individual’s PC is likely to propagate more slowly than one that invades Microsoft, since there are fewer links to exploit.
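A small simulation makes Lloyd's point concrete. Assuming the networkx library, the sketch below builds a scale-free (Barabási–Albert) network and compares an outbreak when the highest-degree hubs are immunized against one where the same number of random nodes are immunized; the parameters are arbitrary and only meant to show the direction of the effect.

```python
# Toy simulation of the "protect the hubs" point, assuming the networkx
# library. On a scale-free (Barabasi-Albert) graph, immunizing the
# highest-degree nodes curbs an outbreak far more than immunizing the same
# number of random nodes. All parameters are arbitrary.
import random
import networkx as nx

def outbreak_size(graph, immune, p_transmit=0.2):
    """SIR-style cascade: each newly infected node gets one chance to infect
    each susceptible, non-immune neighbor with probability p_transmit."""
    candidates = [n for n in graph if n not in immune]
    infected = {random.choice(candidates)}
    frontier = list(infected)
    while frontier:
        nxt = []
        for node in frontier:
            for nbr in graph.neighbors(node):
                if nbr not in infected and nbr not in immune \
                        and random.random() < p_transmit:
                    infected.add(nbr)
                    nxt.append(nbr)
        frontier = nxt
    return len(infected)

g = nx.barabasi_albert_graph(2000, 2)        # scale-free contact network
budget = 100                                 # nodes we can afford to immunize
hubs = {n for n, _ in sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:budget]}
randoms = set(random.sample(list(g.nodes), budget))

trials = 200
print("immunize hubs:  ", sum(outbreak_size(g, hubs) for _ in range(trials)) / trials)
print("immunize random:", sum(outbreak_size(g, randoms) for _ in range(trials)) / trials)
```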
The computer case mimics what happens in the spread of diseases in the real world. With sexually transmitted diseases like AIDS, "a few individuals such as prostitutes have very high numbers of partners," the researchers wrote.
On the ecological front, so many processes are involved that the conclusions are not so clear. The model might be used to develop plans for protecting endangered species. "A food web may be one of those networks, so there are interactions in species where the nodes are the species and the links are that one species eats this one and competes with another one," Lloyd says. "The stability of the ecosystem might depend on these links.
"With some species," Lloyd says, "you could remove it quite easily and it might not have much of an effect. But a species with a lot of links might have a very large effect."
A few years ago, IBM scientist Jeffrey Kephart anticipated that the study of this interconnectedness (called topology in mathematics) might yield important theoretical conclusions about population biology and epidemiology. “For example, in this heyday of HIV, we are admonished daily by educators about the dangers of promiscuous activity, yet until recently there were no quantitative theoretical studies of how the spread of disease depends upon the detailed network of contacts between individuals,” Kephart wrote.
"Digital organisms" may be preferable subjects in the study of disease, he wrote, because they can be more easily controlled experimentally.
Lloyd and May’s findings differ somewhat from the conclusions drawn in related research and reported in Physical Review Letters by Romualdo Pastor-Satorras and Alessandro Vespignani. Even at very low levels of infection, a computer virus will spread widely, they concluded in their paper.
But Lloyd says their work used a model in which the infected node could be reinfected and continue to spread the virus, the Typhoid Mary of computer viruses. In humans, with most viruses, once a person is infected, he or she gains some immunity. In computers, the most highly connected nodes are usually those that are most sophisticated in dealing with virus infections, he says.
I live in one of the most wired parts of the United States—the San Francisco Bay Area—but for the presidential election, I’ve already voted by mail. On a piece of paper. From the comfort of my living room. Between folks like me who vote by mail and everyone else who votes by marking paper in some way, we comprise about two-thirds of all American voters. Approximately 25 percent of all Americans, however, will use paperless and electronic voting machines to cast their ballots on November 6.
Around the world though, these percentages don't hold. An increasing number of countries are beginning to tackle e-voting with gusto. Estonia, Switzerland, Spain, Brazil, Australia, India, Canada, and a handful of other countries have all held elections through the use of electronic voting machines in recent years.
E-voting was supposed to solve many of the problems inherent in traditional paper voting: it’s difficult for illiterate people to vote, it’s difficult to get physical paper out to all corners of a country (voters abroad can submit their ballot much more easily), tabulating the results takes too much time, physical ballot stuffing or ballot swapping can occur with little or no verification. With an electronic ballot, it’s also, of course, easier to tweak ballots in other languages or to make them available to blind or deaf voters. As recently as August 2012, advocates in Pakistan and the Philippines called for the expansion of e-voting in their respective countries.
Currently, there are four major types of e-voting around the world that are worth keeping an eye on: Brazil’s homegrown direct recording electronic (DRE) setup, Australia’s open-source software, Estonia’s Internet voting, and a Spanish startup’s efforts to expand what’s been called "crypto-voting." Each of these approaches has its own unique set of problems, but the primary obstacles they present for many voting officials and computer scientists are the difficulty of verifying source code and the expense.
From dictatorship to e-voting in just over a decade
Surprisingly, Brazil has one of the world’s oldest electronic voting systems, dating way back to 1996. While Brazil certainly is a vibrant (and huge, at 195 million people) democracy, it’s a rapidly developing country—you do know it’s the B in BRIC, right? Brazil has gone through significant economic and political change in recent decades. It wasn’t until 1985 that the country was rid of its military dictatorship, yet, just over a decade later, the country had implemented a locally designed and produced electronic voting system.
As recently as 1996, 15 percent of the country's population could not read or write. That meant a significant portion of the country (over 23 million Brazilians at the time) was effectively disenfranchised from voting.
The DRE machine, known locally as an urna, is about the size of two or three stacked hardback books, and it has a small screen on one side with a keypad on the other side. The machine displays a list of candidates, along with their pictures and the numbers associated with them. Voters use the keypad to type in their preferred number—the device only allows one number to be pressed at a time.
Voters then receive a printed stub confirming that he or she voted. Each DRE device has two flash cards, which store a digital record of the vote count. The cards are removed at the end of the election and the vote totals are sent electronically to the Regional Electoral Office, where national vote counts are tallied within just several hours.
"Nowadays we have 450,000 digital ballot boxes in Brazil," Antonio Esio from the Regional Electoral Office in Sao Paulo, told the BBC in 2008. "We are making more each year because the number of voters is increasing around six percent every election."
Before the electronic system, voters were required to hand-write the complete names of the candidates and their parties—something many illiterate people were unable to do.
"By adopting it, you are enfranchising voters who might be disenfranchised by complicated ballots," Tiago Peixoto, a Brazilian researcher with the ICT4Gov program at the World Bank, told Ars.
However, by 2002, some critics in Brazil countered that by relying on an electronic device, there was little actual voter verification. To use industry parlance, there was no way to verify that the vote was cast as intended and counted as it was cast. So printers were added, which showed the vote on a piece of paper protected behind plastic. Two years later, Brazil eliminated the printers, as they were too costly. The printers were slated to be back (Google Translate) for the 2014 election, but they have since been suspended a second time.
By 2008, the entire software running on the DRE machines was rewritten by developers contracted by the Brazilian Superior Electoral Court. Six months prior to any election, people who have been accredited by the Court are allowed to come in-person, "in an environment controlled by the Superior Electoral Court," where experts can examine the source code, under a nondisclosure agreement.
Diego Aranha, a professor of computer science at the University of Brasilia, was one such expert. But, he said, he and his team were only given five hours in which to examine millions of lines of code—nowhere near adequate to perform a proper audit.
One major flaw he found was that the digital votes are randomly shuffled, as a way to provide extra security while in storage. However, the algorithm to provide that randomness is given a non-random seed: the timestamp.
"I made this assumption because I know how many times people have got this wrong," he told Ars. "They used a really, really bad pseudo-random number generator available: the seed was a timestamp in seconds. This is mission-critical software! This is our software for our democracy."
Despite these problems, so far, Brazil has used its DRE system in its various iterations for nearly two decades without any major political dispute over their use.
In an academic paper published in a forthcoming book, Aranha concluded: "The necessity of installing a scientifically sound and continuous evaluation of the system, performed by independent specialists from industry or academia becomes evident and should contribute to the improvement of the security measures adopted by the voting equipment."
Looking inside the black box Down Under
"It's a black box." So goes the common refrain from computer scientists and cryptographers who work on electronic voting. In other words, no one can be completely certain the computer code running on a given device does exactly what it’s said to. Worse still, no one can ever know the software running on the voter’s computer is precisely the same version of the software that was initially certified.
But for over a decade, the Australian Capital Territory has figured out a way to solve this problem (in use across a handful of voting locations): just make the software open source. The software runs on older PCs running Linux and offers ballots in 12 languages. There are also ballots available for illiterate, blind, or deaf voters.
Each voter receives a barcode that is read by a scanner attached to the computer. Once the code is scanned, it resets the software to be ready to receive a vote. Once the ballot is complete, the card is swiped a second time to cast that ballot. The barcodes are not connected to an individual voter, but the software is designed to only allow one vote per voter. The votes are counted electronically, digitally signed, and sent to a server on a local network.
"We wanted to make it something that people would find trustworthy," said Phillip Green, the electoral commissioner for the territory, in a recent interview with Ars.
"We've likened it to a normal election process where if you're doing it by hand, everything is available to scrutiny," Green said. "We shouldn't have a black box, where you don't know what it does. Open source code was the way to solve the transparency issue. So we get the code audited by a professional company and they're looking for areas in the code that what comes in doesn't come out and that there's nothing in there that would allow someone to maliciously change votes."
In addition, there’s a software keylogger making sure what’s typed in actually matches the votes that were recorded, as a way to prevent fraud. Green added the IT faculty at the Australian National University in Canberra use the source code frequently as a security auditing exercise for its students. This system has run more or less without any problems since 2001.
But if it’s so great, why don’t other states and territories Down Under use it? There’s no real reason, but like in the United States, state and territory voting laws and regulations are set at the state level. The ACT has chosen to go open-source, and there’s nothing stopping the country’s bigger states, like Victoria or New South Wales, from doing the same.
The decision largely has to do with size and expense. The ACT, Australia’s smallest territory by population, is home to about 365,000 people. (My home city of Oakland, California is bigger!) Only about two-thirds of the population are voters. Nationally, the country has around 15 million voters—so ACT voters represent less than three percent of all voters nationally.
“There's no practical reason why it couldn't work [in] these [states], but it's a hardware [question],” Green added.
"We're getting out of our system cheaply by borrowing hardware. We're part of [the] ACT government computer system and we get monitors that are coming off refresh cycles. We either get the new ones before they get them or the old ones coming off; we're borrowing monitors. We get out of it pretty cheaply by trying to find cheap and innovative ways, and because we've only got five voting locations, we can get away with that. [Other states] might want 50 to 60 sites, and would have difficulty borrowing equipment. It’s several thousand dollars per machine by the time you get the hardware together."
Still, despite the success of the open-source e-voting setup, Green says its days may be numbered. Even though he has his doubts about the security and openness of Internet-based setups, he believes that it, not open-source e-voting, will "be the way of the future." After all, Internet-based systems can reduce the cost of hardware by allowing people to just use their own computers.
“We’re looking at it for 2016,” he said in a resigned tone.
RATs – Remote Access Trojans – are often used by cyber attackers to maintain a foothold in the infected computers and make them do things unbeknownst to their owners.
But, in order to do that and not be spotted, RATs must employ a series of obfuscation techniques.
Take for example the FAKEM RAT variants recently analyzed by Trend Micro researchers: in order to blend in, some try to make their network traffic look like Windows Messenger and Yahoo! Messenger traffic, and others as HTML.
Usually delivered via spear-phishing emails, the malware, once executed, copies itself into the %System% folder.
When the malware contacts and sends information to remote servers, the traffic begins with headers similar to actual Windows Messenger and Yahoo! Messenger traffic. But inspecting the traffic that follows those headers clearly shows its malicious nature.
The communication between the compromised computer and the RAT’s controller is also encrypted. The RAT starts by sending out information about the compromised system, and can then receive simple codes and commands that make it execute code, go to sleep, run shell commands, let the attacker browse directories, access saved passwords, and more.
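Because the payload behind the spoofed headers is encrypted, it tends to look statistically random, which suggests one rough detection heuristic: flag flows that open with a plaintext messenger-style banner but whose following bytes have high entropy. The sketch below illustrates the idea; the banner string and threshold are made up, and this is not Trend Micro's detection logic.

```python
# Rough, hypothetical heuristic of the kind a defender might apply: a flow
# that opens with a plaintext messenger-style banner but whose following
# bytes look statistically random (high entropy, as encrypted data does)
# deserves a closer look. The banner and threshold are made up; this is not
# Trend Micro's detection logic.
import math
from collections import Counter

def shannon_entropy(data):
    """Entropy in bits per byte: near 0 for constant data, near 8 for random."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in Counter(data).values())

def looks_suspicious(payload, banner=b"MSG ", threshold=7.0):
    if not payload.startswith(banner):
        return False                 # does not claim to be this protocol
    body = payload[len(banner):]
    return len(body) > 64 and shannon_entropy(body) > threshold

# A genuine messenger payload body is mostly readable text (low entropy);
# an encrypted RAT payload hiding behind the same banner scores near 8.
```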
“Now that popular RATs like Gh0st and PoisonIvy have become well-known and can easily be detected, attackers are looking for methods to blend in with legitimate traffic,” the researchers noted.
“While it is possible to distinguish the network traffic FAKEM RAT variants produce for the legitimate protocols they aim to spoof, doing so in the context of a large network may not be easy. The RAT’s ability to mask the traffic it produces may be enough to provide attackers enough cover to survive longer in a compromised environment.”
Science 2.0 evolves
Editor's note: This story was updated at 10:11 a.m. Nov. 8, 2007. Please go to Corrections & Clarifications to see what has changed.
Researchers from the National Center for Supercomputing Applications (NCSA) at the University of Illinois are turning to Web 2.0 tools to improve how scientists worldwide can work together and conduct research.
NCSA programmers have created a Web portal called the CyberCollaboratory that provides researchers with blogs, geospatial applications and other collaborative tools to make the scientific process more transparent and efficient.
“You do your thing as a professor, you write it up in a paper, and it goes off into the literature and…maybe somebody will make a change in their research based on your paper a couple of years down the road,” said James Myers, associate director of cyber environments at NCSA. “What we are trying to do is sort of short-circuit that cycle and make it so that as soon as you have a new idea or a result it can get posted…and get used.”
Concerning the big picture, NCSA portal developers hope that by speeding the rate at which scientists share raw and quality-controlled data, people will see the benefits of research much more quickly as accurate modeling becomes more available to decision-makers. They also expect that by including digital metadata about the origins of raw data and scientific analyses, scientists and policy-makers will be able to ensure that models are current and complete.
For example, about 400 scientists nationwide use the CyberCollaboratory as they work on a National Science Foundation-funded project to explore how natural and human-induced changes to the environment affect water quantity and quality.
Scientists working on the Water and Environmental Research Systems Network have been using the online tools to share documents, organize discussions and report their activities in blogs, said Barbara Minsker, the project’s principal investigator and professor of civil and environmental engineering at the University of Illinois.
The online tools help scientists from 11 test sites nationwide, where water data is collected and analyzed, better coordinate research, Minsker said. Eventually, the provenance features that track the origins of research mean people will get credit for their work and, in turn, be more likely to share their work, she said.
The CyberCollaboratory portal was built using Liferay's open-source portal software. NCSA officials chose the platform because it lets them add new online capabilities as research methods and technologies evolve during projects that could last decades, Myers said. Participating scientists are encouraged to change and alter analytic tools then share them with other scientists.
NCSA's use of Liferay open-source software and interoperable Java protocols is critical to the long-term success of the project, said Bryan Cheung, the company’s chief executive officer.
Organizers said the geospatial and mapping tools in the CyberCollaboratory will allow researchers to study areas such as water resources on a broader basis — a primary goal of the Water and Environmental Research Systems Network project.
“They fund academic research in these areas now, but this would be something that pushes academia to look at things at a continental scale and global issues as opposed to site-by-site issues,” Myers said.
Ben Bain is a reporter for Federal Computer Week.
You've got all the bells and whistles when it comes to network firewalls and your building's security has a state-of-the-art access system. You've invested in the technology. But a social engineering attack could bypass all those defenses.
Say two fire inspectors show up at your office, show their badges and ask for a walkthrough—you're legally required to give them access to do their job. They ask a lot of questions, they take electrical readings at various wall outlets, they examine wiring under desks. Thorough, aren't they? Problem is, in this case they're really security consultants doing a social engineering 'penetration test' and grabbing access cards, installing keystroke loggers, and generally getting away with as much of your business's private information as they can get their hands on. (See How to rob a bank for details from this real-world example.)
Social engineers, or criminals who take advantage of human behavior to pull off a scam, aren't worried about a badge system. They will just walk right in and confidently ask someone to help them get inside. And that firewall? It won't mean much if your users are tricked into clicking on a malicious link they think came from a Facebook friend.
In this article, we outline the common tactics social engineers often use, and give you tips on how to ensure your staff is on guard.
Last updated September 27, 2012. See below for the following topics:
- What is social engineering?
- How is my company at risk?
- Sneaky stuff. Give me some specific examples of what social engineers say or do.
- Why do people fall for social engineering techniques?
- How can I educate our employees to prevent social engineering?
- Are there any tools that can help?
- Looks like this is an important security issue. Tell me more!
What is social engineering?
Social engineering is essentially the art of gaining access to buildings, systems or data by exploiting human psychology, rather than by breaking in or using technical hacking techniques. For example, instead of trying to find a software vulnerability, a social engineer might call an employee and pose as an IT support person, trying to trick the employee into divulging his password.
Famous hacker Kevin Mitnick helped popularize the term 'social engineering' in the '90s, although the idea and many of the techniques have been around as long as there have been scam artists of any sort. (Watch the video to see social-engineering expert Chris Nickerson size up one building's perimeter security)
How is my company at risk?
Social engineering has proven to be a very successful way for a criminal to "get inside" your organization. In the example given above, once a social engineer has a trusted employee's password, he can simply log in and snoop around for sensitive data. Another try might be to scam someone out of an access card or code in order to physically get inside a facility, whether to access data, steal assets, or even to harm people.
Chris Nickerson, founder of Lares, a Colorado-based security consultancy, conducts 'red team testing' for clients using social engineering techniques to see where a company is vulnerable. Nickerson detailed for CSO how easy it is to get inside a building without question.
In one penetration test, Nickerson used current events, public information available on social network sites, and a $4 Cisco shirt he purchased at a thrift store to prepare for his illegal entry. The shirt helped him convince building reception and other employees that he was a Cisco employee on a technical support visit. Once inside, he was able to give his other team members illegal entry as well. He also managed to drop several malware-laden USBs and hack into the company's network, all within sight of other employees. Read Anatomy of a Hack to follow Nickerson through this exercise.
In What it's like to steal someone's identity, professional pen tester Chris Roberts, founder of One World Labs, says he too often meets people who assume they have nothing worth stealing.
"So many people look at themselves or the companies they work for and think, 'Why would somebody want something from me? I don't have any money or anything anyone would want,'?" he said. "While you may not, if I can assume your identity, you can pay my bills. Or I can commit crimes in your name. I always try to get people to understand that no matter who the heck you are, or who you represent, you have a value to a criminal."
Sneaky stuff. Give me some specific examples of what social engineers say or do.
Criminals will often take weeks and months getting to know a place before even coming in the door or making a phone call. Their preparation might include finding a company phone list or org chart and researching employees on social networking sites like LinkedIn or Facebook.
In the case of Roberts, he was asked to conduct a pen test for a client who was a high-net-worth individual to see how easy it would be to steal from him. He used a basic internet search to find an email address for the individual. From there, it snowballed.
Useful Books on Social Engineering!
By Hadnagy and Wilson (Wiley, Dec 2010)
"This book covers, in detail, the world's first framework for social engineering."
By Johnny Long et al (Syngress 2008)
"Whether breaking into buildings or slipping past industrial-grade firewalls, my goal has always been the same: extract the informational secrets using any means necessary."
"We searched for the e-mail address online were able to find a telephone number because he had posted in a public forum using both," said Roberts. "On this forum, he was looking for concert tickets and had posted his telephone number on there to be contacted about buying tickets from a potential seller."
The phone number turned out to be an office number and Roberts called pretending to be a publicist. From there he was able to obtain a personal cell phone number, a home address, and, eventually, mortgage information. The point being that from one small bit of information, a social engineer can compile an entire profile on a target and seem convincing. By the time Roberts was done with his pen test, he knew where the person's kids went to school and even was able to pull a Bluetooth signal from his residence.
Once a social engineer is ready to strike, knowing the right thing to say, knowing whom to ask for, and having confidence are often all it takes for an unauthorized person to gain access to a facility or sensitive data, according to Nickerson.
The goal is always to gain the trust of one or more of your employees. In Mind Games: How Social Engineers Win Your Confidence, Brian Brushwood, host of the Internet video series Scam School, describes some of the tricks scam artists use to gain that trust, which can vary depending on the communication medium:
-- On the phone:
A social engineer might call and pretend to be a fellow employee or a trusted outside authority (such as law enforcement or an auditor).
According to Sal Lifrieri, a 20-year veteran of the New York City Police Department who now educates companies on social engineering tactics through an organization called Protective Operations, the criminal tries to make the person feel comfortable with familiarity. They might learn the corporate lingo so the person on the other end thinks they are an insider. Another successful technique involves recording the "hold" music a company uses when callers are left waiting on the phone. See more such tricks in Social Engineering: Eight Common Tactics.
-- In the office:
"Can you hold the door for me? I don't have my key/access card on me." How often have you heard that in your building? While the person asking may not seem suspicious, this is a very common tactic used by social engineers.
In the same exercise where Nickerson used his thrift-shop shirt to get into a building, he had a team member wait outside near the smoking area where employees often went for breaks. Assuming this person was simply a fellow office-smoking mate, real employees let him in the back door without question. "A cigarette is a social engineer's best friend," said Nickerson. He also points out other places where social engineers can get in easily in 5 Security Holes at the Office.
This kind of thing goes on all the time, according to Nickerson. The tactic is also known as tailgating. Many people just don't ask others to prove they have permission to be there. But even in places where badges or other proof is required to roam the halls, fakery is easy, he said.
"I usually use some high-end photography to print up badges to really look like I am supposed to be in that environment. But they often don't even get checked. I've even worn a badge that said right on it 'Kick me out' and I still was not questioned."
Social networking sites have opened a whole new door for social engineering scams, according to Graham Cluley, senior technology consultant with U.K.-based security firm Sophos. One of the latest involves the criminal posing as a Facebook "friend." But one can never be certain the person they are talking to on Facebook is actually the real person, he noted. Criminals are stealing passwords, hacking accounts and posing as friends for financial gain.
One popular tactic used recently involved scammers hacking into Facebook accounts and sending a message on Facebook claiming to be stuck in a foreign city and in need of money.
"The claim is often that they were robbed while traveling and the person asks the Facebook friend to wire money so everything can be fixed," said Cluley.
"If a person has chosen a bad password, or had it stolen through malware, it is easy for a con to wear that cloak of trustability," he said. "Once you have access to a person's account, you can see who their spouse is, where they went on holiday the last time. It is easy to pretend to be someone you are not."
See 9 Dirty Tricks: Social Engineers Favorite Pick-up Lines for more examples.
Social engineers also take advantage of current events and holidays to lure victims. In Cyber Monday: 3 online shopping scams and 7 Scroogeworthy scams for the holidays, security experts warn that social engineers often take advantage of holiday shopping trends by poisoning search results and planting bad links in sites. They might also go as far as to set up a fake charity in the hope of gaining some cash from a Christmas donation.
Why do people fall for social engineering techniques?
People are fooled every day by these cons because they haven't been adequately warned about social engineers. As CSO blogger Tom Olzak points out, human behavior is always the weakest link in any security program. And who can blame them? Without the proper education, most people won't recognize a social engineer's tricks because they are often very sophisticated.
Social engineers use a number of psychological tactics on unsuspecting victims. As Bushwood outlines in Mind Games, successful social engineers are confident and in control of the conversation. They simply act like they belong in a facility, even if they should not be, and their confidence and body posture puts others at ease.
This is your brain on social engineering
Brian Brushwood is really good at tricking people. So good he founded a website called "Scam School".
Brushwood understands how social engineers mislead people. Four basic principles:
- They project confidence. Instead of sneaking around, they proactively approach people and draw attention to themselves.
- They give you something. Even a small favor creates trust and a perception of indebtedness.
- They use humor. It's endearing and disarming.
- They make a request and offer a reason. Psych 101 research shows people are likely to respond to any reasoned request.
Read the details in Mind games: How social engineers win your confidence
"People running concert security often aren't even looking for badges," said Brushwood. "They are looking for posture. They can always tell who is a fan trying to sneak back and catch a glimpse of the star and who is working the event because they seem like they belong there."
Social engineers will also use humor and compliments in a conversation. They may even give a small gift to a gate-keeping employee, like a receptionist, to curry favor for the future. These are often successful ways to gain a person's trust, said Bushwood, because 'liking' and 'feeling the need to reciprocate' are both fixed-action patterns that humans naturally employ under the right circumstances.
Online, many social engineering scams are taking advantage of both human fear and curiosity. Links that ask "Have you seen this video of you?' are impossible to resist if you aren't aware it is simply a social engineer looking to trap you into clicking on a bad link.
Successful phishing attacks often warn that "Your bank account has been breached! Click here to log in and verify your account." Or "You have not paid for the item you recently won on eBay. Please click here to pay." This ploy plays to a person's concerns about negative impact on their eBay score. | <urn:uuid:14645eee-5d92-4348-abb2-1cabd92f90d7> | CC-MAIN-2017-09 | http://www.csoonline.com/article/2124681/leadership-management/security-awareness-social-engineering-the-basics.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171166.18/warc/CC-MAIN-20170219104611-00635-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.968542 | 2,733 | 2.75 | 3 |
The U.S. National Science Foundation (NSF) has banned a researcher for using supercomputer resources to generate bitcoin.
In the semiannual report to Congress by the NSF Office of Inspector General, the organization said it received reports of a researcher who was using NSF-funded supercomputers at two universities to mine bitcoin.
Mining is a process to generate the digital currency that involves complex calculations. Bitcoin can be converted to traditional currencies, and 1 bitcoin was worth roughly US$654 on Friday, according to indexes on CoinDesk.
The computationally intensive mining took up about $150,000 worth of NSF-supported computer use at the two universities to generate bitcoins worth about $8,000 to $10,000, according to the report. It did not name the researcher or the universities.
The universities told the NSF that the work was unauthorized, reporting that the researcher accessed the computers remotely, even using a mirror site in Europe, possibly to conceal his identity.
The researcher said he was simply conducting tests, Inspector General Allison Lerner's office wrote in the report, which covers six months to March 31.
"The researcher's access to all NSF-funded supercomputer resources was terminated," the office wrote. "In response to our recommendation, NSF suspended the researcher government-wide."
The office, which is tasked with promoting efficiency in NSF programs and detecting cases of fraud, did not release other details of the case.
It did not immediately respond to a request for more information.
The incident follows a similar case in February in which a researcher at Harvard University was caught using supercomputer resources to mine dogecoin, a recently launched virtual currency.
The researcher was barred from accessing the computer resources. | <urn:uuid:1f8670ba-ef05-4e82-960a-e5369bf5e493> | CC-MAIN-2017-09 | http://www.cio.com/article/2375684/internet/us-researcher-banned-for-mining-bitcoin-using-university-supercomputers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00159-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966123 | 357 | 2.71875 | 3 |
The evolving 3-D printing process excites people for several reasons. The technology's capable of making complex shapes quickly, and it's a great way to produce parts to test for form and function in early manufacturing stages. However, most operators currently use plastic or light metal alloys for relatively large objects.
But what if the technology could print human organs and microfibers on a large scale?
Scientists are already printing tiny strips of living tissue, and they hope to print entire human organs as the technology grows in sophistication. In a process called "bioprinting," doctors could use isolated organs and tissue to test vaccines and other biological agents without worrying about harming animals or relying on inaccurate modeling programs. And the process, once perfected, could produce entire body parts for patient transplants.
According to CNN, 3-D bioprinting involves harvesting living cells from biopsies or stem cells before allowing them to multiply in a petri dish. Scientists feed this "biological ink" into a 3D printer that converts the cells into a 3-D shape that may integrate with existing tissue when placed inside of or onto a host body.
Gastroenterologist Dr. Jorge Rakela essentially told CNN that the technology could transform medicine. "This is an exciting new area of medicine," he said. "It has the potential for being a very important breakthrough.
The world's zeal for 3-D printing will increase, as will the medical community's involvement. According to Bloomberg, the market for 3-D printing reached $777 million in 2012, and it may grow to $8.4 billion in 2025 as medical applications come into play.
Current applications hold promise, but some incorporate non-organic issue for a cybernetic result. Princeton scientists 3-D printed a bionic ear last year that could hear beyond a regular human's natural ability. They printed human cells and nanoparticles, and bonded them with antenna and cartilage to create the body part. They created an ear that heard radio frequencies a million times higher than human ears can. Princeton researcher Michael McAlpine told Mashable that it was intended for demonstration purposes rather than actual application.
"The idea of this was: Can you take a normal, healthy, average human and give them [a] superpower that they wouldn't normally have?" he said.
Other researchers also are developing the technology to produce microscopic materials. Harvard scientist Jennifer Lewis and her students have printed microscopic components, including electrodes, that could be used to make lithium-ion batteries. This year, they also manufactured a patch of tissue with blood-vessel-like material inside that can carry actual blood.
She's adapted 3-D printing to make it more sophisticated, with "inks" comprising materials that are more diverse than plastic and metal, and also high-precision printing platforms with fine nozzles.
Lewis told the Wyss Institute last year that her team's approach was "distinct from commercially available 3-D printers because of its materials flexibility, precision and high throughput."
3-D printing's evolution will continue for the foreseeable future, especially when it comes to organic tissue. A huge limitation to the advancement of 3-D printing of organic tissue has been supplying them with blood throughout the process. Additionally, living tissue is more complex than anything else that's currently being created. But enthusiasts have reason to hope with developments like Lewis's blood vessel work.
Anthony Vicari, an analyst at Boston-based Lux Research, told Bloomberg that 3-D printed organs are possible, but it will be a while before they become reality.
"Organs are foreseeable, but that's a long-term goal," Vicari said. "That requires not just the better printing technology, but much better understanding of tissue engineering." | <urn:uuid:5087b268-c9fe-464e-940b-467bc195e502> | CC-MAIN-2017-09 | http://www.govtech.com/videos/Will-3-D-Printing-Produce-Human-Organs-Nearly-from-Scratch.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00511-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957061 | 769 | 3.671875 | 4 |
Strategic Insights:Heart Valve Replacement Market - India
- 1 Introduction
- 2 Current Worldwide Market
- 3 Market in India
- 3.1 Strategic Outlook
- 3.2 Overview of market
- 3.3 Regulatory landscape
- 3.4 Reimbursement landscape
- 3.5 Technology landscape
- 3.6 Distribution landscape
- 3.7 Pricing landscape
- 3.8 Key hospitals & institutions
- 3.8.1 Narayana Hrudayala Hospitals
- 3.8.2 Sri Jayadeva Institute of Cardiovascular Sciences and Research
- 3.9 Key Opinion Leaders
- 3.10 Market Drivers
- 3.11 Recent Trends
- 3.12 Recent Developments
- 4 Key Players
- 5 Recommendations
- 6 Strategy formulation
For blood to go in only one direction, forward, it must pass through the heart valves, which function as one-way doors, opening and shutting with each beat of the heart. Just as there are four chambers to the heart, there are four heart valves. Blood must pass through one of these valves each time it leaves a chamber.
The Four Heart Valves:
- Tricuspid: The tricuspid valve is named because it has three leaflets. It is located between the right atrium and right ventricle.
- Pulmonary: The pulmonary valve is named because it is located below the pulmonary artery, between the right ventricle and the pulmonary artery.
- Mitral: The mitral valve is named because it looks like an upside down bishop's hat or mitre. It is the only heart valve with two leafets; all of the others have three. It is located between the left atrium and left ventricle.
- Aortic: The aortic valve is named because it is located below the aorta, between the left ventricle and aorta.
The two valves located between the atria and ventricles, the tricuspid and mitral valves, are known as atrioventricular valves. The two other valves, the pulmonary and aortic, are sometimes called semilunar valves, because each of those valves has leaflets that are shaped like half-moons.
Types of heart valves
When someone has to have a heart valve replaced, there are a few things that are done to determine what type of valve the patient will recieve. The patient could recieve one of the following valves: mechanical valves, tissue valves, homograft valves, or allograft valves. These all have there advantages and disadvantages.
Tissue Valves: A tissue valve is another field of valves that are taken from an animal and put into human hearts. These kinds of valves are chemically treated for safety and are prepared for the human heart (St. Jude Medical, Inc., 2007). Since these valves are weak, they are reinforced with a frame or stent to make them stronger, and to support the valve. The valves that aren’t reinforced are called stentless valves (St. Jude Medical, Inc., 2007).
These types of valves aren’t a good choice for younger patients because they wear out quickly. They wear out because they stretch when the demand of blood flow increases (Aortic valve replacement, 2007). Another reason as to why they aren’t used often is because when these valves wear out, the patient will have to under go another operation to get a new valve implanted to replace the previous one. These valves last, on average, 10-15 years in the less active patients such as the elderly, while in the younger and more active patients, they wear out a lot faster (Aortic valve replacement, 2007).
Mechanical Valves: Mechanical valves are designed to mimic a real heart valve (St. Jude Medical, Inc., 2007) and to outlast the patient (Aortic valve replacement, 2007). All versions have a ring to support the leaflets (flaps) like a natural valve and has a thin polyester mesh cuff on the circumference of the valve for easier implantation. This is for easier implantation (St. Jude Medical, Inc., 2007). These valves are not controlled electronically but naturally. As the heart beats, the mechanical valve opens and closes (St. Jude Medical, Inc., 2007). These valves have been proven to last several hundred years by being stress-tested (Aortic valve replacement, 2007).
Homograft Valves: The third type of valve is a homograft valve. This a valve that is taken form a human donor (St. Jude Medical, Inc., 2007; Encyclopedia of Medicine, 2006).These donor valves are only given to patients who will deteriorate rapidly because of a narrowing of the passageway between the aorta and that left ventricle (Encyclopedia of Medicine, 2006) This type of valve is better for pregnant women and children (St. Jude Medical, Inc., 2007. Unlike most valves, this type of valve does not require anticoagulation therapy over a long time (long-term) (St. Jude Medical, Inc., 2007).Durability of a homograft is approximately the same as a tissue valves (Aortic valve replacement, 2007).These valves are sometime, but rarely, taking from the patients own pulmonic valve (Encyclopedia of Medicine, 2006).
Allograft Valves:The fourth type of valve is an allograft valve. These valves are usually taken from pig's aortic valve (Encyclopedia of Medicine, 2006). They are chemically treated before they are put into a human heart. The life span of one of these valves is about 7-15 years, depending of the patient (Encyclopedia of Medicine, 2006). Because of the short life span of this valve, it is generally given to the older patients (Encyclopedia of Medicine, 2006).
Disorders treated by heart valves
- Valvular Stenosis:This occurs when a valve opening is smaller than normal due to stiff or fused leaflets. The narrowed opening may make the heart work very hard to pump blood through it. This can lead to heart failure and other symptoms (see below). All four valves can be stenotic (hardened, restricting blood flow); the conditions are called tricuspid stenosis, pulmonic stenosis, mitral stenosis or aortic stenosis.
- Valvular Regurgitation:This occurs when a valve does not close tightly. If the valves do not seal, some blood will leak backwards across the valve. As the leak worsens, the heart has to work harder to make up for the leaky valve, and less blood may flow to the rest of the body. Depending on which valve is affected, the conditioned is called tricuspid regurgitation, pulmonary regurgitation, mitral regurgitation or aortic regurgitation.
Heart valve procedures
Procedure of Heart Valve Surgery
Heart valve surgery means repair or replacement of the diseased valves. In the surgery, some valves are repaired or mended to do its work properly. Replacement means removal of the diseased valves by a new valve. The procedures of heart valve surgery are :
- Valve Repairing : In the valve repair surgery, a ring is sewn around the opening of the valve to make tighter. The surgeons may cut the other parts or may separate and shorten it to help the valve open and close right.
- Valve Replacement : Sometimes by mending the valves, it is not possible to cure the unhealthy valve, and then replacement is required to get back its normal function. A prosthetic valve is used to replace. There are two types of prosthetic valves.
- Mechanical valves : These types of valves are made from man-made materials. While heart surgeons’ use this valve, lifetime therapy with an anticoagulant is prescribed to the patient.
- Biological (tissue) valves : The surgeons take biological valves from pig, cow or human donors. The longevity of biological valves is less than the mechanical valves.
Heart valve procedures by technique
Transcatheter Aortic Valve Implantation (TAVI)
Description This technique involves insertion of a miniaturized valve through a catheter from the groin. The deployed valve is later inflated at the site of the aortic valve. The entire procedure is conducted under general anesthesia and takes about an hour. It is a non-surgical procedure. In TAVI inner organs are accessed via needle-puncture of the skin, rather than by using a scalpel.
Procedures in India TAVI is still in nascent trial stage in India. In mid of March, a team of doctors at Delhi's Fortis Hospital headed by Dr Ashok Seth operated three patients using TAVI.
Cost of surgery in India The cost of procedure is 29,350 USD which include the cost of valve 21,500 USD (approx).
Prevalence After the age of 75 years, 5% population is at the risk of developing a problem in their heart valve, out of which 35% are not suitable for surgery. If not treated, 50% of them will not survive for more than two years.
The Ross Procedure is a type of specialized aortic valve surgery where the patient's diseased aortic valve is replaced with his or her own pulmonary valve. The pulmonary valve is then replaced with cryopreserved cadaveric pulmonary valve. In children and young adults, or older particularly active patients, this procedure offers several advantages over traditional aortic valve replacement with manufactured prostheses.
Source:University of Southern California
- 1,500 Ross procedures are performed annually on a global basis. In US, this number is around 1,000.
Current Worldwide Market
- Aortic Valve segment represents 55% of the overall market. However, with 35-50% of patients suffering from severe aortic stenosis considered at high risk for surgery, the current number of patients eligible for TAVI procedures is 200,000 worldwide.
- The TAVI segment thus represents a $2B market opportunity. According to various sources, this market size will be reached in 2014.
- The Brazilian, Russian, Indian, and Chinese (BRIC) heart valve device market—comprising sales of heart valve replacement (mechanical, tissue, and transcatheter aortic valve replacement [TAVR]) and heart valve repair (annuloplasty) devices—was valued at nearly $180 million in 2011 and will expand through 2016, driven primarily by rising heart valve procedure volumes.
- Rapid economic growth and an aging population, which are increasing both the prevalence of valvular heart disease and patients’ ability to pay for treatment, constitute the primary drivers of growth in the BRIC heart valve device market.
- The patient population will also expand as government funding for health care infrastructure improves the accessibility and affordability of the procedures for patients across all BRIC nations.
- Rising penetration of tissue heart valves will contribute further to market growth due to the premium price of these devices compared to mechanical heart valves.
|Apr 2012||St. Jude Medical||Japan||The new Trifecta aortic stented, pericardial tissue valve has been implanted in procedures atOsaka University Hospital and Saitama Medical University International Medical Center||Medcity News|
|Nov 2011||Edward Lifesciences||US||The Sapien Transcatheter Heart Valve will provide some people with this condition who can’t undergo open heart surgery with the option of valve replacement||Wallstreet Journal|
|July 2011||Sorin Group||Europe||Mitroflow Aortic Pericardial Heart Valve||Sorin|
|Jan 2011||Sorin Group||Europe||Innovative Self-Anchoring Aortic Heart Valve, Perceval™ S||Sorin|
|May 2010||CryoLife||US||Cryovalve SG Pulmonary Human Heart Valve (and Conduit)||FDA|
Market in India
Overview of market
- The Indian market for heart valves was about 30,000 a year and a sizeable portion of that is being met by the TTK-Chitra valves.
Source:Senior Executive at TTK Chitra Hindu Article
- Indian government is working on a comprehensive regulatory framework for the medical device sector because it has lacked a formal regulatory system for many years. Medical devices are currently either regulated as drugs or simply left unregulated.
- In June 2009, the Drug Consultative Committee (DCC) and the Drug Technical Advisory Board (DTAB) approved new formal regulations for India's medical devices sector. The Health Ministry is set to issue the notification of these new regulations in the near future.
According to the final draft of the newly proposed regulations, all medical devices have been broadly classified into the following categories:
- Class A Devices: Low risk devices that include gloves and operating room utensils;
- Class B Devices: Low to medium risk devices such as needles, surgical knives, and syringes;
- Class C Devices: Moderate to high risk devices such as radiation equipment and heart–lung machines
- Class D Devices: Very high risk and life supporting devices such as implantable pacemakers and defibrillators.
Heart Valves notified as “drugs”: As per the notice dated 16/May/2005 from The Ministry of Health and Family Welfare, Govt. of India has notified Heart Valve devices to be considered as drugs under Section 3, Clause (b) . Sub clause (iv) of the Drugs and Cosmetics Act, notification number s.o.1468 (E). CDSCO: Medicines in India are regulated by CDSCO - Central Drugs Standard Control Organization. Under Ministry of Health and Family Welfare. Headed by Directorate General of Health Services CDSCO regulates the Pharmaceutical Products through DCGI - Drugs Controller General of India at Chair.
|Registration Certificates issued for the Heart Valves along with their manufacturing sites and Indian Authorized agents in since 2010|
|Date||Name of Indian Agent||Name of Manufacturer||Name of the Device||File No.||R. C. No.||Validity of the Registration Certificate|
|Jan. 2012 to Feb 2012||M/s. St. Jude Medical India Private Limited, Plot No. 18 & 19 Laxminagar, behind TB Hospital, Hyderabad-500038||M/s. St. Jude Medical Puerto Rico LLC, Lot 20-B, St. Cagaus Puerto Rico 00725||1. St. Jude Medical Mechanical Heart Valve 2.SJM Master Series (Rotatable)-Aortic + 9||31-28-MD/2006-DC (Re-Reg. 2_||MD-28||30-06-2015|
|Jan. 2011 to 20th December 2011||M/s India Medtronic Pvt. Ltd., 1241, Solitaire Corporate Park, Building Number 12, 4th Floor, Anheri-Ghatkopar Link Road, Andheri (E), Mumbai- 400094||M/s Medtronic Inc., 710 Medtronic Parkway N. E. Minneapolis MN 55432 USA having manufacturing premises at M/s Medtronic ATS Medical Inc., 3905 Annapolis Lane, Suite 105 Minneapolis, MN – 5547, USA||2. Open Pivot Aortic Valved Graft (AVG)||31-892-MD/2010-DC||MD-893||31-12-2014|
|Jan. 2011 to 20th December 2011||M/s Edward Lifesciences (India) Pvt. Ltd., E.F. 201-204, Remi Biz Court, Plot No. 9, Off Veera Desai Road, Andheri West, Mumbai- 400058||M/s Edward Lifesciences LLC, One Edwards Way, Irvine CA, USA 92614-5686||1. Carpentier-Edwards Bioprosthetic Valved Conduit||31-93-MD/2006-DC (Re-Registration 2010) (End. 1)||MD- 93||31-01-2013|
|Jan. 2011 to 20th December 2011||M/s Edward Lifesciences (India) Pvt. Ltd., E.F. 201-204, Remi Biz Court, Plot No. 9, Off Veera Desai Road, Andheri West, Mumbai- 400059||M/s Edward Lifesciences LLC, One Edwards Way, Irvine CA, USA 92614-5687||2. Edwards MC Tricuspid Annuloplasty System||31-93-MD/2006-DC (Re-Registration 2010) (End. 1)||MD- 94||31-01-2014|
|Jan. 2011 to 20th December 2011||M/s India Medtronic Pvt. Ltd., 1241, Solitaire Corporate Park, Building Number 12, 4th Floor, Anheri-Ghatkopar Link Road, Andheri (E), Mumbai- 400093||M/s Medtronic Inc., 710 Medtronic Parkway N. E. Minneapolis MN 55432 USA having manufacturing premises at M/s Medtronic Mexico S. de R.L de C.V. Avenida paseo del Cucapah 10510, Parque Industrial EI Lago, Tijuana, B.C. 22570< Mexico||1. Sprinter rapid Exchange Balloon Dilatation Catheter||31-381-MD/2007-DC (Re-Reg. 2010)||MD-381||14-02-2014|
|Jan. 2011 to 20th December 2011||M/s India Medtronic Pvt. Ltd., 1241, Solitaire Corporate Park, Building Number 12, 4th Floor, Anheri-Ghatkopar Link Road, Andheri (E), Mumbai- 400094||M/s Medtronic Inc., 710 Medtronic Parkway N. E. Minneapolis MN 55432 USA having manufacturing premises at M/s Medtronic Mexico S. de R.L de C.V. Avenida paseo del Cucapah 10510, Parque Industrial EI Lago, Tijuana, B.C. 22570< Mexico||2. Sprinter legent RX Balloon Dilatation Catheter||31-381-MD/2007-DC (Re-Reg. 2010)||MD-382||14-02-2015|
|Jan. 2011 to 20th December 2011||M/s India Medtronic Pvt. Ltd., 1241, Solitaire Corporate Park, Building Number 12, 4th Floor, Anheri-Ghatkopar Link Road, Andheri (E), Mumbai- 400095||M/s Medtronic Inc., 710 Medtronic Parkway N. E. Minneapolis MN 55432 USA having manufacturing premises at M/s Medtronic Mexico S. de R.L de C.V. Avenida paseo del Cucapah 10510, Parque Industrial EI Lago, Tijuana, B.C. 22570< Mexico||3. Melody Transcatheter Pulmonary Valve||31-381-MD/2007-DC (Re-Reg. 2010)||MD-383||14-02-2016|
|Jan. 2011 to 20th December 2011||M/s. St. Jude Medical India Private Limited, A & B, 2nd Floor, Brij Tarang, Greenland, Begumpet, Hyderabad-500016||M/s. St. Jude Medical Cardiology Division Inc, DBA 177 County Rod, B East St. Paul, MN 55117, USA||Trifecta Valve Aortic (19mm-27mm)||31-26-MD/2006-DC (Re-Reg. 2009 (End 02)||MD-26||30-06-2012|
- Aarogyasri in Andhra Pradesh State
- Jeevandayi Yojana in Maharashtra State
- Kalignar's Insurance Scheme
- It is a flagship scheme of all health initiatives of the State Government with a mission to provide quality healthcare to the poor. The aim of the Government is to achieve "Health for All" in Andhra Pradesh state.
- In 2007, the Andhra Pradesh government launched ’Aarogyasri’, a community health insurance scheme for the poor (Under this scheme, the hospitals received a fixed amount for valve replacement operations).
- The Maharashtra state government provides financial assistance to people falling in the below poverty line (BPL) category for treatment of various diseases.
- Under this scheme, doctors perform major operations within an upper limit of Rs 1.5 lakh.
Kalignar's Insurance Scheme
- Tamil Nadu launched the “Kalaignar’s Insurance Scheme for Life Saving Treatments” for families with an annual income less than Rs. 72,000.
- Each family will enjoys benefits up to Rs. 1 lakh for certain procedures in private hospitals and pay wards in government hospitals.
- Private insurance company Star Insurance, contracted to implement the scheme, has entered into contracts with a number of hospitals in private health care centres and hospitals throughout the State. There will be a minimum of six hospitals in each district and 15 hospitals in the major cities. The government will pay the premium of Rs. 500 per annum. A total of Rs. 517.30 crore is the allotment for the current financial year.
In India, the reimbursement rate for procedures varies from one insurance company to another. Typically, Heart Valve Replacement procedure falls under major illness category. The amount of reimbursement is typically dependent on the sum assured.
- United India Assurance - Pre and Post Hospitalisation expenses payable in respect of any illness shall be the actual expenses incurred subject to a maximum of 10% of the Sum Insured whichever is less. For major illnesses, the expenses are settled on a co-pay of 80:20 ratio. The co-pay of 20% will be charged as a total package applicable on the admissible claim amount.
- Life Insurance Corporation - For major cardiovascular surgical procedures like Valve replacement surgery, open heart surgery for vale repair and heart by-pass surgery, up to 100% of sum assured could be claimed.
- ICICI Lombard Insurance - For critical illnesses like Coronary artery bypass graft surgery and Heart valve replacement surgery, the insured is entitled to the lumpsum benefit of the 100% of the sum insured for. The insurance sum may vary between $12,000 - $24,000.
- The reimbursement for medical devices like valve, stent, pacemaker etc. are evaluated on a case-to-case basis.
An artificial heart valve is a device implanted in the heart of a patient with heart valvular disease. Natural heart valves become dysfunctional for a variety of pathological causes. When one of the four heart valves malfunctions, the medical choice may be to replace the natural valve with an artificial valve.
There are two main types of artificial heart valves:
- Mechanical valves - prosthetics designed to replicate the function of the natural valves of the human heart.
- Biological valves - valves of animals, like pigs, which undergo several chemical procedures in order to make them suitable for implantation in the human heart.
One of the greatest biomedical engineering challenges today is to develop an implantable device that resists the natural conditions to which heart valves are subjected, without eliciting host reactions that would impair their function. Currently, no artificial heart valve device, either mechanical or tissue-derived, fulfills the required prerequisites for an ideal heart valve.
Regenerative medicine approaches to heart valve replacement:
Regenerative medicine is based on principle of using the patient’s own cells and extracellular matrix components to restore or replace tissues and organs that have failed. Modern approaches to heart valve regenerative medicine include several research methodologies with the most intensely researched approaches being:
- the use of decellularized tissues as scaffolds for in situ regeneration
- construction of tissue equivalents in the laboratory before implantation, and
- use of scaffolds preseeded with stem cells.
The regenerative medicine approach is however still in its nascent stages.
Future and perspesctives:
Effective treatments of valvular disease continues to present multiple challenges. The exciting lines of investigation in this area are:
- Finding causes and developing nonsurgical therapy approaches for valvular disease
- Improvement of current artificial devices
- Regenerative medicine approaches
- This technique involves replacing diseased aortic and mitral vales with the patient's own pulmonary valve and valves collected from cadavers replace the pulmonary valve.
Arkalgud Sampath Kumar- Biography
Percutaneous Transcatheter Aortic Valve Implantation (TAVI)
- In TAVI, a replacement valve is passed through a hole in the groin by a puncture of the femoral artery and advanced up to the ascending aorta of the patient. It substitutes for a more invasive procedure in which the chest is opened. The survival is equivalent, but the risk of stroke is higher.
Dr. Ashok Seth, Fortis Healthcare, Delhi
Transcatheter Pulmonary Valve (TPV) Therapy
- Transcatheter pulmonary valve therapy or Percutaneous pulmonary valve implantation (PPVI) treats narrowed or leaking pulmonary valve conduits without open-heart surgery.
- With transcatheter pulmonary valve therapy, a catheter (a thin, hollow tube) holding an artificial heart valve is inserted into a vein in the leg and guided up to the heart. The heart valve is attached to a wire frame that expands with the help of balloons to deliver the valve. Once the new heart valve is in position, it begins to work immediately.
Value Chain- Heart Valves
Sales Force Structure
Distributors & Stockists
- Distribution channel generally consists of the company, distributor & Hospital while Doctors being the key influencer regarding which valves should be procured. Patients are rarely aware of brands generally go by the Doctors choice which is conjunction with their paying capacity & need of the surgery.
- Big private chain of Hospitals sometimes bypass distributor and directly deal with manufacturing company. Discussed in detail in pricing section.
- Distributors are majorly concentrated in metropolitans like New Delhi, Chennai, Mumbai, ,Kolkata ,Bangalore, Hyderabad etc. and also cater to regions nearby.
- Commercial activities are done by stockist while companies sales force provide technical support to doctors , addresses the all issues faced to streamline the process & take feedback at every level in supply chain.
Distribution channel at government hospitals:
- Hospital floats a tender on basis of their requirement. The stockist quote price for 6 months/ yearly for heart valves. Generally authorized stockist gets discount on the product from the company hence are able to bag deals easily.
- Pricing: Rate contract is signed with the stockist and all the procedures would be charged as per the “rate” mentioned in the contract .Procurement is done on demand basis.
- Exceptions: If the doctor feels that he needs a Heart Value which has new technology and is still not in "rate contract" list, he can issue a local purchase for that ,hence a new entrant with unique offering can make waves.
- Hospitals keep an inventory of different valves from different manufacturers. The number of heart valve from each manufacturer is actually determined by the surgeons.
Procurement of heart valve at the time of surgery:
- When a valve is recommended for a patient, the patient has to go procurement dept. to get their valve, the procurement dept then informs the vendor and only after this the vendor would raise the invoice. This is unique because even if the valve is sitting in the hospital, invoice is raised only when it is ready to be implanted into the patient. This is same for every vendor.
- For insurance related patients, the patients have to go to the credit cell of the hospital and then when the credit cell gives a green sign (after the formalities with insurance vendor and patient) the procurement dept. asks to raise the invoice.
- Commission varies from company to company but generally the distributor take (20-25%),hospitals take (20-25%).
- Doctors take (10%),perfusionist takes (2%) how ever the data regarding the cut taken by doctors & perfutionists varies from nil to a few percentage in monetary /non monetary terms which generally happens under the table.
The major cost of the surgery includes cost of the
- Medical devices
Medical device The heart valve typically costs from INR22,000 ($420) to INR200,000($3,800). The most cost effective valve is manufactured by TTK. The valve is called TTK Chitra. Depending upon the type( mechanical, tissue, percutaneous etc)of valve the price could go up to more than INR10 lakhs($18,800).
Procedure The hospitals generally charge a fixed amount on money for the whole procedure. In Tier 1 cities it is INR2 lakhs($3,800) in most of the hospitals.
The ratio of cost of device to procedure generally is 30% to 70%.
Each hospital in the chain has a pricing committee which decides the price of valve based on factors such as :
- Handling charges
- Benefit to the patients
Hence price of same valve may be different in a hospital of same chain
- Some hospitals (generally a chain) have a central committee which directly negotiates with the company on the purchasing price of the Heart Valves
- Purchasing committee has a lot of bargaining power as they deal in huge volumes (Example – Fortis group)
Key hospitals & institutions
Narayana Hrudayala Hospitals
Narayana Hrudayalaya is founded by one of the India’s oldest construction company “Shankar Narayana Construction Company”. Narayana Hrudayalaya group currently has 5000 beds in India and aims to have 30,000 beds in the next 5 years in India to become the one of the largest healthcare player in the country.
Narayana Hrudayalaya - Highlights
- The largest cancer hospital in the country at the Bangalore campus - 1,400-bed cancer and multispecialty hospital.
- Largest number of Pediatric Heart Surgeries in the world.
- Largest number of Heart Valve Replacements in the world for the year 2007.
- Over 32 heart surgeries performed in a day.
- World leader in endovascular interventions for aneurysm of aorta.
- First hospital in Asia to implant a 3rd generation artificial heart.
- Working on a mission to do a heart operation for US$800 from point of admission to point of discharge in next 3 years.
NH Institute of Cardiac Sciences, Bangalore
Narayana Hrudayalaya is located close to the Electronics City of Bangalore covering 26 acres of land with a building to accommodate 1000 beds, 26 operation theaters and infrastructure to perform 70 heart surgeries a day. Within the first 5 years of commissioning this institution, currently 25 heart surgeries are done on a daily basis, out of them about 30% are on children with heart problem. Rest of them is adult open-heart surgeries.
The institute is one of the world’s largest pediatric heart hospitals. It is the brainchild of renowned cardiac surgeon Dr. Devi Shetty, who performed over 15,000 heart operations.
Heart Valve Procedures
The Ross Procedure
The Ross Procedure, also known as Pulmonary valve translocation, was developed by Donald Ross in 1967.This operation uses the patient’s own pulmonary valve and part of the main pulmonary artery as a unit to replace the aortic valve and ascending aorta. A homograft valve is harvested from a cadaver, is then placed in the pulmonary position. The pulmonary valve is identical in shape, size, and in fact stronger than the aortic valve and is therefore an ideal replacement for the diseased aortic valve. Narayana Hrudayalaya has a full fledge functioning homograft heart valve bank for the benefit of the needy patients. The surgeons of the Narayana Hrudayalaya have a large experience in successful valve replacements using homografts and Ross operations. These operations are being done only in very few centres in our country. Surgeons at Narayana Hrudayalaya have performed about 100 of these procedures with excellent results. They are perhaps one of the most experienced surgeons in the World in performing operations like Bental Procedure for Aortic Aneurysm and Aortic Arch replacement surgery for dissecting Aneurysm of Aorta.
Mitral Valve Repair in New Born Babies and Infants
Mitral Valve leakage is a dreadful condition affecting small percentage of children suffering from congenital heart disease. Only option for these children is repair of the valve, which is done on a regular basis at Narayana Hrudayalaya.
Ross's Procedure for Aortic Stenosis
Best treatment option for Aortic Stenosis is Ross's Procedure in which the patient's own pulmonary valve is used to replace the aortic valve and in the place of pulmonary valve a homograft taken from a dead body is replaced.
Narayana Hrudayala uses economies of scale to keep the cost of treatment low.
- The procedure cost is arounf INR 110,000 for a fully paid heart surgery
- Narayana Hrudayala also offers free treatment for few who cannot afford the procedure
- It has tie-ups with health foundations and offer them discounted price of INR 60,000 to 70,000
Unlike other hospitals, the bulk of its profits come from the out- patients ward, where the cost to the patient is low but the margins are as high as 80 percent. The number of walk-in patients remains high because they know the cost of surgery will be subsidised should they need it.
Dr. Avery Mathew
Designation: Senior Consultant Cardiac Surgeon
Brief Profile: He has done M.Ch(Cardiothoracic) in Kasturba Medical College,Mangalore His forte lies in Aortic Aneurysms Surgery, besides Coronary Artery and Valve Surgery.
Dr. Binoy C, MCh
Designation: Consultant Cardiac Surgeon
Brief Profile: Dr Binoy completed his training in cardiac surgery at the prestigious Seth G.S Medical College and King Edward Memorial Hospital at Mumbai and The Royal Prince Alfred Hospital at Sydney, Australia. His fields of interest and expertise include Total Arterial Coronary Revascularization procedures using bilateral Internal Mammary Arteries, aortic surgeries and Pulmonary Thrombo Endarterectomy. He also leads the Extra Corporeal Membrane Oxygenation (ECMO) programme in the hospital.
Dr. Chinnaswamy Reddy H M, DNB(Gen. Sur.), DNB(CTS), FPCS
Designation: Senior Consultant Cardiac Surgeon
Brief Profile: He has done M.Ch(Cardiothoracic and Vascular Surgery) in Jayadeva Institute of Cardiology,Bangalore University. He specializes in Bex-Nikaidoh operation, REV operation, Double-switch Ross operation and the latest Cone Reconstruction of Tricuspid Valve in Ebstein\'s Anamoly.
Sri Jayadeva Institute of Cardiovascular Sciences and Research
Sri Jayadeva Institute of Cardiovascular Sciences & Research is a Government owned Autonomous Institute and is offering super specialty treatment to all Cardiac patients. It has got 600 bed strength with State of Art equipments in the form of 4 Cathlabs, 4 Operation Theaters, Non-Invasive Laboratories and 24 hours ICU facilities. Presently on an average 800-1000 patients are visiting this hospital every day and annually 21,500 In patients are treated. About 2500 Open Heart Surgeries, 8500 Coronary Angiograms, 3500 Procedures including Angioplasties and Valvuloplasties are done in this hospital. The prevalence of heart attach, which was 2% in 1960 has increased to 12% in 2008. Unfortunately heart attack and other related heart ailments steadily increasing among the poor people. 70% of the patients who comes to our hospital are well below the poverty line. The consumables used for various procedures like Open heart surgeries (Valve replacement), Angioplasty procedures, Pacemaker procedures are becoming very expensive, however quality treatment is given at affordable cost. Well equipped special ward facilities with round the clock angioplasty services are also provided.
- URL:Hospital Website
- Location: Bangalore
Heart Valve Replacement Procedures
Cost of Valve Replacement Procedure (MVR / AVR / DVR ) INR Rupees
- Any additional Devices / Implants/ Drugs used shall be charged extra.
- Wherever the procedure rates are not listed in the CGHS website, SJICR Category rates shall be applicable.
- Deluxe Ward Charges – Rs.2500/day
- Special Ward Charges – Rs. 975/day
- Procedure/Investigation charges vary for CGHS, ESI, Yeshasvini and other boards. please contact SJIC for more details.
- SJIC shall have sole discretionary powers to modify tariff without notice.
Dr. C.N. Manjunath
M.B.B.S, M.D (Gen.Medicine), D.M (Cardiology)
Director and Prof. & HOD of Cardiology
Degree College University Year of passing M.B.B.S M.M.C Mysore 1982
M.D. (Gen. Medicine) B.M.C Bangalore 1985
D.M. (Cardiology) K.M.C Mangalore 1988
Marital Status : Married
Nationality : Indian
Designation : Professor & Head of Cardiology;Director
Sri Jayadeva Institute of Cardiovascular
Sciences & Research, 9th Block Jayanagar
Bannerghatta Road, Bangalore – 560069.
Phone: 080-22977422, 22977433,
Direct -080 – 22977456 fax: 26534477
Cell Phone: 9844006699
Residence: 26692155, 26697558
Key Opinion Leaders
The surgeons are the decision makers regarding the kind and make of the valve. Almost in all hospitals surgeons recommend the type and brand of the device. Dolcera team performed a research exercise that involved interviewing doctors at top hospitals. During this exercise we found that the following factors are taken into account while taking a decision for selecting a heart valve:
- Indication of patient
- Paying capacity
- Quality of valve(durability)
- Supply/ Availability
- Need for anti-coagulation
- Haemodialysis dynamics
The Dolcera team found following insights from the discussions with surgeons:
- Patients are not aware of the brands available in the market.
- Sometimes patients ask for a foreign valve only.
- Patients usually have information about tissue or mechanical valve and want to know which one was used the procedure and why?
Here is a list of few of the key influencers in the industry:
Please click on the names to get biographies of Physicians
- Dr. Vivek Jawali
- Dr. Ashok Seth
- Dr. Naresh Trehan
- Dr. Ajay Kaul
- Dr. Surendra Nath Khanna
- Dr. Z. S. Meharwal
- Dr. Sunil K Kaushal
- Dr. Y. K. Mishra
- Dr. Sanjay Gupta
- Dr. Vijay Dikshit
Rising middle-class and ageing population
India has a population close to 1.1 billion people, making it the second most populated country behind China, and 5% of them are over 65 years of age. And unlike China, India does not impose restrictions such as ‘onechild’ policy upon its citizens. Over the next couple of decades, India is expected to surpass China as the world’s most populous country. During the forecast period to 2015, India is expected to reach 1.3 billion in total population. And as the ageing population grows, the demand for healthcare services and products will also rise. The most important driver for India however, is the rising middle-class population that will exceed 450 million by 2015. Although most of the population cannot afford premium healthcare, there are 100 million middle-class people with an annual income of over $5,000 who demand quality healthcare. While $,5000 may be a small amount in comparison to international standards, in terms of purchasing power parity (PPP), Indian citizens can enjoy premium health services within this income bracket on a par with people in developed nations.
Medical tourism has been gaining more attention resulting in an increased influx of foreign patients into India over the past seven to eight years. About 50% of specialized urban hospitals are actively focusing on tapping medical tourists to grow their business and gain international recognition.
Cost effectiveness against developed countries(Medical Tourism)
India is fast becoming a popular destinations for procedures like heart valve replacement surgeries primarily due to:
- Cost savings ranging from 70-80%
- Presence of highly educated, skilled and experienced surgeons to the same degree as United States.
- The patient may remain in hospital for a prolonged recovery period after the surgical procedure. A hospitalized recovery allows one to heal faster than if he/she were discharged to recover at home as is the practice in the United States.
The following table provides a snapshot of the comparative cost (in USD) for major heart procedures across 6 countries:
|Procedure||Country (cost in USD)|
|Heart Valve Replacement||18,000||21,500||11,500||15,500||12,000||170,000|
Transcatheter Aortic Valve Implantation (TAVI)
Description:This technique involves insertion of a miniaturized valve through a catheter from the groin. The deployed valve is later inflated at the site of the aortic valve
Procedures in India: TAVI is still in nascent trial stage in India. In mid of March 12, a team of doctors at Delhi's Fortis Hospital headed by Dr Ashok Seth operated three patients using TAVI.
Cost of surgery in India: The cost of procedure is $29,350 USD which includes the cost of valve $ 21,500 USD (approx).
Regulatory affairs: Sources at the health ministry revealed that a dialogue is on between Drug Controller General of India and the manufacturing company. Cost is said to be the bone of contention.
TAVI Procedure – Insights from Doctors in India
- Dr Ashok Seth one of the most reputed cardiologist of India is Chairman, Cardiac Sciences, Fortis Escorts Heart Institute
"The valve is known to last up to 15 years but its efficacy for the Indian population is still being assessed.“
- Dr Vivek Gupta a senior interventional cardiologist in Apollo Hospital, Delhi
“Mass availability of the valve is also an issue, as there is only one company manufacturing it”
- Prof RK Saran, Head of Lari Cardiology at Chhatrapati Shahuji Maharaj Medical University
“The disease of AVS is on the rise in Indian population affecting close to 1 million elders every year.”
- March,2012 - Medical devices: Budget unveils moves to drive growth
- March,2012 - New technique offers alternative to vulnerable heart patients
- March,2012 - India’s First Successful Percutaneous TAVI Performed at Fortis Escorts
- November,2011 - India Medtronic Launches Pulmonary Valve Replacement Therapy for Congenital Heart Disease Patients
- March,2012 - Edwards Sapien Safely Replaces Aortic Valves at Two Years
- November,2011 - Colibri Heart Valve Will Present at Upcoming 23rd Annual Transcatheter Cardiovascular Therapeutics Scientific Symposium
- November,2011 - Less Invasive Heart Valve Replacement Is Approved
- March,2011 - Medtronic Announces Global Launch of New Heart Valve Repair Ring Designed to Adapt to Heart’s Natural Valve
- Medtronic is present in India since 1979.It is headquartered out of Mumbai with offices all over the country and a total headcount of 318 employees spread across the country.
- Medtronic has sales offices in New Delhi, Kolkata, Bangaluru, Hyderabad, Chennai, Vadodara and Cochin.
- Medtronic is a leader in biological valves today.
- Medtronic have adapted to the Indian market well as regards the price is concerned.
- Medtronic recently has discontinued their blockbuster model Hall, thus sales have suffered drastically.
St Jude Medical
- St. Jude Medical company’s Indian operations have grown since it launched a wholly owned subsidiary in the local market six years ago.
- St. Jude Medical currently is headquartered and also has distribution center in Hyderabad
- Sales offices in New Delhi, Kolkata and Mumbai — three of the four major metros in India.
- Warehouses located in Delhi, Mumbai, Bangalore, Ahmedabad, Kolkata and Chennai.
- St Jude is today the undisputed winner in mechanical heart valve market revenue wise.
- St Jude have adapted to the Indian market well as regards the price is concerned .St Jude started with high prices but then compromised on prices and sales went up very high, now they are the market leaders. That’s because patients in emerging markets often contend with cost-related accessibility issues which they have addressed well.
Edwards Life sciences
- Edwards have been present in India since long but their presence has not been significant when compared to St. Jude or Medtronic.
- The company is headquartered in Mumbai & has a sales force of 30 employees
- Edwards products are good but priced very high as compared to competition.
- Edwards is concentrating on percutaneous valves costing 30000 $ in US (15 lakhs in India) which has very less takers.
- No presence in minimally invasive products (huge market), now have Port Access also, but have not introduced their products in India.
- TTK Healthcare's most significant contribution to healthcare is the manufacture and distribution of India's first indigenous heart valve prosthesis - the tilting-disc TTK Chitra Heart Valve.
- This is the only Indian-made heart valve and is the most price-friendly in the world. So far, over 50,000 TTK Chitra Heart Valves have been successfully implanted in patients.
- TTK Chitra Heart Valves has been tested in various International Laboratories and the findings were published in leading journals. The results indicate that the performance of TTK Chitra Heart Valves is comparable with any other valves available in the market.
- Manufacturing facility is located at Kazhakottom in Trivandrum India
- TTK Chitra Heart Valves are being used in over 250 major cardiac centers in the country with a total of over 55,000 implants.
- The Chitra TTK Valves are leader in sales number wise .In year 2011-12 10,425 Chitra TTK mechanical valves were sold (Source: annual report)
- Sorin & ATS have also been trying to venture into India market but do not have any significant presence in India as of now. Recently they have been focusing on India ,by poaching sales force of existing players.
|Comparative Analysis||Medtronic||St. Jude||Edwards||TTK|
|Product||Product line is good, but untimely product recalls have caused losses||Huge portfolio expanded very rapidly||High end products serves only niche segments||Single product (Best Seller)|
|Pricing||Adapted to Indian market competitively priced||Adapted to Indian market competitively priced||Priced very high as compared to peers||Unique selling proposition. Priced lowest in world deal in huge volumes|
|Marketing||Specialized in introducing new technology & key opinion leader management||Focus on aggressively reaching out to a large set of surgeons as they deal low margins & high volumes||Not much focus||Tie up with Insurance schemes has benefited in a huge way|
|Training||Conduct training||Not much focus||Conducts training||Not much focus|
Institutional challenges in India (a special case)
India strategic outlook - Phase wise approach
- By targeting at key hospital chains & few key opinion leaders in selected cities most of the Indian market can be covered as all the procedures are concentrated at few sites in exceptionally large volumes. We can have a look at India in a brief snap shot below.
Note:Fortis Healthcare has acquired Wockhardt Hospitals
Find hotspots - Attractive customer segments
- The customers can be segmented in India as follows. The local segment with global quality aspirations is growing at a very fast pace concentrated in most of the metropolitans.
- Bottom of the pyramid: TTK
- Local segment: St. Jude
- Local segment (global aspirations): Medtronic
- Global outlook: Medtronic, Edwards
Key opinion leader management
The Head of department, who is generally the most experienced surgeon would test a new heart valve initially with few surgeries, depending upon his experience he would recommend the use of new heart valves and generally .The surgeons need to be convinced about the safety & durability of the valve through training various online illustration and manuals that talk in detail about the procedure .Generally they are interested in relative comparison with competitors. Also the approval of new heart valve is given by the respective head of dept. at the state / government owned institutions .The list of approved valves are allowed to bid for the tender.
Especially the chain of hospitals, have key doctors that drive the most of business, they have a big team of cardiac surgeons under them. For example Dr Vivek Jawali at Fortis Bangalore has under him team of 17 cardiac surgeons which performs almost 25 procedures daily (not all heart valves) .His opinion is regarded very highly in the industry. In private hospital the procurement & pricing is generally handled by a separate committee hence the doctors in that committee are also very crucial.
- Training conducted in the form of videos, literature and workshops is highly appreciated and well received as suggested by surgeons.
- The module of the current training and interactive programs is a concern for the doctors. They feel it is more marketing oriented and less knowledge oriented. In its present form there is no value add; surgeons thus avoid them.
- Surgeons want to be technological partners; having them in the whole technological framework will prove fruitful
Sales & distribution plan
- Focus on selected cities: As the most of the procedures are carried out in capital city of the state or some prominent town .Mostly the traffic is routed from other cities to the capital city due to availability of proper infrastructure in big cities only. The sales person can be recruited state wise so that they can cater to both Tier 1 &Tier 2 cities with focus initially on Tier 1 city. For example in state of Karnataka has 3500- 5000 procedures are conducted every year out of which 90% of demand is from Bangalore, the capital of the state. Hence focusing on key institutions in metropolitans would cover most of the market.
- The sales person should be allocated to a particular state with major focus on key accounts & key opinion leaders in the capital city and reach out to surgeons based on the priority list. The doctors should be classified in three categories as mentioned below. The top most should have the highest priority.
- Key opinion leaders: Most important surgeons, monthly 4 visits should be conducted, there are doctors are instrumental in getting new technologies in vogue. Should be partnered with in various activities such as awareness programs, radio shows, new product launches. Their insight would help a lot in framing strategy for the company. Conducting special training programs can be of great help to both the surgeon and the firm
- Head of Departments & other key decision makers: They can drive immediate business hence need to follow up rigorously 3-4 times in a month.
- Other surgeons: These are not decision makers but as they also conduct a lot of procedures & are a part of the team they too require follow-ups twice in month.
- For Government organizations: These are tough to handle as the approval channel is complex as the gatekeeper are more and the organizations are highly bureaucratic in nature.
- Paucity of statistical data: There is no equivalent of MAT & OMR, which keep records of patients and prescriptions in the pharmaceutical industry, so no statistics are available to ascertain exactly how many patients do not undergo heart valve surgery. Interviewing experts in the field can therefore help identify unmet needs and the true potential of the market.
- Underserved market: According to our survey, doctors and surgeons say that almost 35-40% of patients do not get the surgery done. The main reason is scarcity of funds; some older patients also avoid the operation because they do not want to take the risk or see little utility in it.
- OT register: Exact demand can be found in the operating theatre (OT) register, which logs details such as the patient's name, the size of the heart valve used, the company name, and the model number. These records are generally not accessible, but some information can be obtained through good rapport with hospital staff. Building relationships with surgeons is the key to success in heart valve sales.
- The following steps should be taken on various fronts:
What would it be like if you didn't need your eyeglasses to clearly see your laptop screen or a text message on your smartphone?
Scientists at the University of California Berkeley are working on computer screens that would adjust their images to accommodate individual user's visual needs. Think of it as a display that wears the glasses so users don't have to.
"For people with just near sightedness or far sightedness, life isn't so bad," said Fu-Chung Huang, the lead author of the research paper on the display project at Berkeley. "But as you get older, your lenses lose elasticity and you cannot read things close to you, like a cell phone or tablet. You need another pair of reading glasses, which can be quite inconvenient.
Scientists at the University of California Berkeley are developing a vision-correcting display that would mean users wouldn't need their eyeglasses to see it clearly. (Video: UC Berkeley)
"With this technology, in the future, you just need to press a button and the display will accommodate to your vision," he said in an email to Computerworld.
Users would input their vision prescription into their individual desktop, laptop or mobile device. Then when the user logs on with a password, the computer recognizes the user and automatically adjusts its display.
Researchers at Berkeley, working with scientists at MIT, are developing algorithms that will compensate for a user's specific vision needs to adjust the image on a screen so the user can see it clearly without needing to wear corrective lenses. The software will create vision-correcting displays.
The researchers have been working on the technology for three years.
Researchers place a printed pinhole array mask, shown here, on top of an iPod touch as part of their prototype of a visually corrected display. (Image: Fu-Chung Huang)
A user who, for instance, needs reading glasses to see or read anything clearly on his laptop or tablet screens wouldn't need to wear the eyeglasses if the displays adjust themselves for his vision needs.
A user who needs one pair of glasses to see things at a distance and another pair for reading would not need to put on reading glasses to read her emails or Facebook posts if the display could adjust itself for her near-vision needs.
The displays, according to Berkeley, also could be used for people whose vision cannot be corrected with eyeglasses or contacts.
"This project started with the idea that Photoshop can do some image deblurring to the photo, so why can't I correct the visual blur on the display instead of installing a Photoshop in the brain?" asked Huang, who now is a software engineer at Microsoft. "The early stage is quite hard, as everyone said it is impossible. I found out that it is indeed impossible on a "conventional 2D display." I need to modify the optical components to make this happen."
The university said that the hardware setup adds a printed pinhole screen sandwiched between two layers of clear plastic to an iPod display to enhance image sharpness. The tiny pinholes are 75 micrometers each and spaced 390 micrometers apart.
The algorithm, which was developed at Berkeley, works by altering the intensity of each direction of light that emanates from a single pixel in an image based upon a user's specific visual impairment, the university reported. The light then passes through the pinhole array in a way that allows the user to see a sharp image.
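The actual Berkeley prototype pre-filters a light field and relies on the pinhole mask described above, which is well beyond a short snippet. Purely as an illustration of the general "inverse blur" idea Huang alludes to, here is a hedged sketch (my own simplification using NumPy, not the published algorithm): pre-distort an image with a regularized inverse of an assumed point-spread function so that subsequent blurring by the eye roughly cancels out.

```python
import numpy as np

def gaussian_psf(size=15, sigma=3.0):
    """Toy point-spread function standing in for an out-of-focus eye."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def prefilter(image, psf, reg=0.01):
    """Regularized (Wiener-style) inverse filter: pre-distort the image so
    that blurring by the eye's PSF approximately cancels out. Clipping to
    the displayable range is one reason real systems need extra optics
    such as the pinhole mask."""
    H = np.fft.fft2(psf, s=image.shape)
    F = np.fft.fft2(image)
    G = F * np.conj(H) / (np.abs(H) ** 2 + reg)
    out = np.real(np.fft.ifft2(G))
    return np.clip(out, 0.0, 1.0)

image = np.random.rand(64, 64)   # stand-in for screen content
psf = gaussian_psf()
pre = prefilter(image, psf)
# Simulate what the defocused eye would perceive when viewing the prefiltered image:
perceived = np.real(np.fft.ifft2(np.fft.fft2(pre) * np.fft.fft2(psf, s=pre.shape)))
```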
Huang, who has not yet talked with computer monitor or smartphone and tablet manufacturers about the research, noted that the display technology could be developed into a thin screen protector.
"The current version is still quite fragile," he added. "It requires precise calibration between the eye and the display and it took some time to find the sweet spot for my own eye. But remember that Amazon just announced the Fire Phone with the super fancy dynamic perspective to track your eye. This technology can solve my problem ... so I'm pretty optimistic about the overall progress."
However, he said that at this point in their work, the technology wouldn't work on a shared display such as a television screen.
"In the future, we also hope to extend this application to multi-way correction on a shared display, so users with different visual problems can view the same screen and see a sharp image," he said.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "Vision-correcting display nixes your need for eyeglasses" was originally published by Computerworld. | <urn:uuid:4a4ceed9-42c0-48f1-92d7-dcb2f3c2a017> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2460161/computers/vision-correcting-display-nixes-your-need-for-eyeglasses.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00455-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951148 | 992 | 3.375 | 3 |
Flash storage can be a big power consumer in mobile devices, but it's not the flash that sucks up all that energy, it's the software that goes with it, according to researchers from the University of California at San Diego and Microsoft.
Studying built-in storage in an Android smartphone and two Microsoft Surface RT tablets, graduate student Jing Li and his colleagues found that storage consumed more energy than anything else when the devices had their screens off. Though that may not sound important, screen-off time may account for much of the day as consumers carry their devices around. Functions keep running in the background as alerts and other data come down from the network.
Li, who presented the findings Tuesday at the Usenix FAST conference in Santa Clara, California, wasn't surprised at the power demands of native storage. But he was stunned to discover that almost all of that power was consumed by software rather than the underlying hardware.
The storage devices themselves, in this case eMMC (embedded multimedia card) flash chips, only took up about 1 percent of the energy devoted to storage, the UCSD team found. The other 99 percent was consumed by elements of the software stack, including the runtime system, the file system and encryption functions.
There are good reasons to include those processes. For example, encryption is vital in mobile devices because they're especially vulnerable to theft and loss, Li said. But the way encryption is performed in them places a heavy burden on the battery.
"Even though there are some application-specific components inside the mobile device that can help you to deal with encryption, the throughputs of those components are too low to meet the requirements of the storage system," Li said. "Because of that, the designers of the storage system still decide to use the general-purpose CPU to perform the encryption tasks."
In one set of tests, the team compared power consumption between devices with and without encryption. It showed that a storage subsystem with encryption claimed more than twice the share of the device's power consumption compared to one without, Li said.
One reason is that most devices use full-disk encryption even though some of the data, such as OS files, application binaries and some media purchased online, may not need it, he said. As an alternative, the study suggested using a partially encrypted file system. Tools such as Encrypting File System on Windows and GNU Privacy Guard could provide this capability, which would let app developers fine-tune which data gets encrypted and control the energy consumption of an app, Li said. But it would take additional components to fully secure a partially encrypted file system, he said.
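As a rough illustration of what a partially encrypted scheme could look like at the application level (the file names and policy below are hypothetical, and this is not how EFS or GNU Privacy Guard are actually wired in), the sketch encrypts only files an app marks as sensitive, leaving bulky non-sensitive data untouched so that fewer CPU cycles, and therefore less energy, are spent.

```python
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical policy: only paths the app marks as sensitive get encrypted.
SENSITIVE = {"contacts.db", "messages.db"}

def store(path: Path, data: bytes) -> None:
    """Encrypt only sensitive files; write everything else as-is."""
    if path.name in SENSITIVE:
        path.write_bytes(fernet.encrypt(data))   # CPU (and energy) spent here
    else:
        path.write_bytes(data)                   # cached media, binaries, etc.

store(Path("contacts.db"), b"private records")
store(Path("podcast.mp3"), b"already-public content")
```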
Most mobile devices also run apps in secure containers, using managed languages such as Java or the Common Language Runtime, to prevent unauthorized access to sensitive data and contain attacks by malicious apps. The team's testing showed that this technique increased power consumption by as much as 18 percent on Windows RT and 102 percent on Android.
Device and OS vendors use managed languages and other techniques to isolate data among different apps, Li said. Much of this storage virtualization can be moved into storage hardware by giving each application the illusion of a private file system, he said.
Another way to cut down on power consumption would be to shift the storage tasks now running on CPUs, including encryption and virtualization, onto an SoC (system on a chip) specifically for storage operations, Li said. A challenge there will be to make the SoC's encryption engine fast enough to keep up with applications' demands, he said.
The Japanese company says it can hold up to 148 gigabits per square inch.
Sony has developed a cassette tape that can store a whopping 185 terabytes (TB) of data, which is about 74 times the capacity of tapes being used today.
The tapes, which were developed in collaboration with IBM, are claimed to have an areal recording density of 148 gigabits per square inch; the resulting 185 TB capacity is equivalent to about 3,700 Blu-ray discs.
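A quick sanity check of those figures, assuming 50 GB dual-layer Blu-ray discs and roughly 2.5 TB of native capacity for a current-generation tape cartridge (both are my assumptions, not numbers from the announcement):

```python
tape_tb = 185                      # announced capacity, terabytes
bluray_gb = 50                     # dual-layer Blu-ray disc (assumed)
current_tape_tb = 2.5              # e.g. an LTO-6 cartridge, native (assumed)

print(tape_tb * 1000 / bluray_gb)  # ~3700 Blu-ray discs
print(tape_tb / current_tape_tb)   # ~74x the capacity of today's tapes
```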
The Japanese firm announced the technology at the INTERMAG Europe 2014 international magnetics conference being held this week in Dresden.
The storage is made possible via a vacuum forming technique called sputter deposition which generates multiple layers of crystals with a uniform orientation on a polymer film.
The sputter method, which produces a layer less than 5 micrometers thick, is a form of thin-film deposition in which argon (Ar) ions are made to collide with the target material.
Sony, in a statement, said: "Until now, when the sputter method was used to deposit a thin film of fine magnetic particles on a polymer film, roughness on the surface of the soft magnetic underlayer caused the orientation of the crystals in the underlayer above it to become non-uniform.
"This in turn caused non-uniform crystalline orientation and variations in the size of the magnetic particles (grain) in the nano-grained magnetic layer directly above the underlayer, and prevented increases in recording densities."
Sony and Panasonic earlier jointly announced a new optical disc standard that stores ten times as much data as a Blu-ray disc.
The new ‘Archival Disc’ can hold up to 300GB data, say the firms, which they hope will expand the market for long-term data storage.
The emergence of wireless technologies, smart products and software-defined businesses is having a staggering effect on the volume of the world’s data.
Driven by the Internet of Things, the digital universe is doubling in size every two years and will multiply 10-fold between 2013 and 2020 – from 4.4 trillion gigabytes to 44 trillion gigabytes.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
The collection of public and private wireline and wireless networks that make up the Internet represents 400 billion dollars of infrastructure and is responsible for 13 trillion dollars in commerce – but it faces some real challenges. It’s too complex, insecure and difficult to bring services to bear. When it comes to routing technology, the foundation of the Internet’s infrastructure, there has been no innovation in the past 20 years.
Early networks simply sent packets within private networks, and when private networks needed to be inter-connected, routers came along – creating the Internet as we know it today. And while speeds and feeds have improved with time, the only routing innovations since have focused on layering technology on top of or underneath the existing routing layer.
Nearly all of those innovations have session state embedded in them. At a high level, a session is “a temporary connection between two endpoints for the purposes of communicating information.” But ask five people what that means, and you’ll get five different answers. HTTP sessions, SIP sessions, virtual circuits, and more – the notion of a session exists up and down the network stack.
Regardless of protocol, though, all sessions do have some common characteristics – they all require some sort of signal to be established (and in most cases, ended), there is a two-way exchange of information, and each session is singularly unique. In the network layer, sessions have “biflow” (meaning two related unidirectional flows in opposite directions), directionality (reflecting which endpoint initiated the session), and state (sessions have a recognizable start and end along with other parameters specific to that session). These characteristics make it possible to associate packets and flows with a unique session, and manage that session.
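As a rough sketch of how those characteristics can be tracked in practice (the structure and names below are illustrative, not taken from any particular product), a session table can map both directions of a biflow to a single entry by normalizing the address/port tuple, while remembering which endpoint initiated the session and what state it is in:

```python
from dataclasses import dataclass

@dataclass
class Session:
    initiator: tuple          # (ip, port) of the endpoint that sent the first packet
    responder: tuple
    protocol: str
    state: str = "started"    # e.g. started / established / closed
    packets: int = 0

sessions = {}

def session_key(src, dst, proto):
    """Normalize the 5-tuple so both directions of a biflow share one key."""
    return (min(src, dst), max(src, dst), proto)

def on_packet(src, dst, proto):
    key = session_key(src, dst, proto)
    if key not in sessions:
        # The first packet seen defines directionality: src is the initiator.
        sessions[key] = Session(initiator=src, responder=dst, protocol=proto)
    sessions[key].packets += 1
    return sessions[key]

on_packet(("10.0.0.5", 51000), ("93.184.216.34", 443), "tcp")   # forward direction
on_packet(("93.184.216.34", 443), ("10.0.0.5", 51000), "tcp")   # reverse, same session
```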
Network elements that require session state include firewalls, carrier grade NATs, load balancers (ADCs), session border controllers, DPI devices, and many others, which together are often called “middle boxes” because they are located in the middle of a network. As a result, an entire industry has emerged around routers to deliver bolt-on middle box functionality that increases complexity and cost.
The answer to this predicament is session-based routing. By infusing technology developed for middle boxes into established routing technology, a new, simpler paradigm emerges that can route sessions instead of packets. This session-oriented approach is designed to build context-aware networks that can easily, dynamically and securely stretch across network boundaries. Session-oriented routers make dynamic routing decisions based on fully distributed knowledge of services topology and policy frameworks. The result is a much simpler, more secure and more agile network.
Although much has been said about the benefits of Software-Defined Networking (SDN), Network Function Virtualization (NFV) and Software-Defined Wide Area Networks (SD-WAN) as potential solutions to network complexity, they simply magnified existing complexities. By over-relying on middle boxes, albeit virtually, along with existing tunneling and overlay techniques, these solutions add to complexity.
With session-oriented routing, advanced network functions can be performed natively – without the need for additional boxes. Session-oriented routing can also use in-band signaling techniques to provide security and reliability, thereby removing the need for overlay approaches. The routers can understand and enforce policies (security or otherwise) and ultimately provide smart, dynamic path selection.
Session-oriented routers enable a much tighter alignment between the network and the applications it supports. And, because session-oriented routers are based on software, advanced, secure networking can be put anywhere and everywhere. Session-oriented routers also work with existing network infrastructure, so network operators do not need to rip and replace existing technology, saving time and money while delivering a network that is fundamentally simpler and smarter than ever before.
That brings up the question: Why are session-aware routers necessary to support the networks of tomorrow? The megatrends of cloud, user mobility, and Internet of Things (IoT) are changing networking in fundamental ways. The clients and servers are now in different networks. Even the concepts of a LAN and WAN are melting into a hodgepodge of private networks that require secure interconnections. Point-to-point tunnels (often called virtual networking) are proliferating to create these interconnections, but these are creating new layers of complexity, new middle boxes, and new layers of inefficiency. Bandwidth driven by video is continuing to increase steadily. The lack of IPv4 addresses and steady rollout of IPv6 means that the Internet really is two networks to manage. Connecting all these networks requires session state and innovative policies that function across all of these networks.
Why don’t more routers have session awareness today? The answer is part technological and part religious. Until recently, specialized hardware was the only game in town for router data planes, and the focus was on forwarding as many packets as possible, as fast as possible. Plus, the practical limitations of hardware product cycles and custom chip design made it far more onerous (and distracting) to incorporate advanced network functions into routers. It was just easier to build standalone network appliances and put them around/behind/in front of the router. With the advent of powerful x86-based hardware and software-based networking, the game has changed.
With trends like collaboration, increased use of video, virtual reality and IoT increasing the traffic volume, now is the time to focus on sessions instead of packets or flows. Session-aware, software-based routers drastically simplify the network, while improving security and reliability. With this type of technology, network operators will be able to create borderless networks with greater security, agility, insight and performance, without relying on legacy middle boxes or complicated tunneling and overlays – and without having to rip and replace existing technology. It’s evident that today’s flow-based networks are no longer cutting it and the migration to a secure session-based routing system will create simpler, more intelligent networks that are much more efficient.
This story, "Session-based routing holds the key to the Internet’s future " was originally published by Network World. | <urn:uuid:3035c0cf-3642-4f2e-b929-2152a34c09c7> | CC-MAIN-2017-09 | http://www.itnews.com/article/3142643/lan-wan/session-based-routing-holds-the-key-to-the-internets-future.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00020-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.933726 | 1,294 | 2.828125 | 3 |
Navy tests satellite connection in the Arctic
For the first time, military users have demonstrated they can transfer large, multi-megabyte data files over stable satellite connections in the Arctic.
During the Navy's 2014 Ice Exercise, the Mobile User Objective System (MUOS) satellites provided nearly 150 hours of secure data connections.
Satellite communications in the Arctic are becoming increasingly important as the polar ice sheet shrinks and shipping traffic increases. Most geosynchronous satellites can’t reach users in the Arctic.
MUOS is a next-generation narrowband tactical satellite communications system designed to significantly improve ground communications for U.S. forces on the move. MUOS gives military users more communications capability over existing systems, including simultaneous voice, video and data.
From March 17 to 27, MUOS provided over 8,800 minutes of service to Ice Camp Nautilus. Navy users at the camp could connect to both secure and classified communication systems and send data files.
"Last year we proved the constellation's reach, but this is the first time MUOS has been used for secure government exercises," said Paul Scearce, director of Military Space Advanced Programs at Lockheed Martin, the MUOS prime contractor and systems integrator. "This means users could traverse the globe using one radio, without needing to switch out because of different coverage areas. This goes far in increasing the value that MUOS provides mobile users, not just in traditional theaters of operation, but those at the furthest extents of the planet."
Lockheed Martin first demonstrated the MUOS constellation's ability to reach arctic users in tests during 2013. Those tests marked a significant gain in signal reach from the required latitude of 65 degrees north—roughly Fairbanks, Alaska.
This expansion in coverage, inherent to the system, comes at a time when governments are focusing on Arctic security.
"We downloaded multiple files—up to 20 megabytes—nearly at the top of the world," said Dr. Amy Sun, Narrowband Advanced Programs lead at Lockheed Martin. "We sent a steady stream of photos, maps and other large data pieces securely through the system, something that could never be done by legacy communication satellites."
Connect with the GCN staff on Twitter @GCNtech.
The causes of bad breath and how to eliminate it.
Many people have a problem with bad breath (halitosis), but they have no idea what causes it or how to eliminate it. In general, there are three possible reasons why a person might have bad breath:
1. Eating Smelly Foods
Aromatic foods such as onions, garlic, fish, peanut butter and others can leave a strong odor on your breath, but this type of bad breath is only temporary unless there are also other problems.
2. Poor Oral Hygiene
Properly brushing, flossing and using mouthwash is essential for keeping bad breath at bay. Maintaining good oral hygiene is also very important for preventing the third (and most serious) cause of bad breath...
3. A Dental Condition
Dental conditions such as gum disease, oral cancer, cavities, and/or bacteria on the surface of the tongue typically result in bad breath. Unfortunately, these conditions cannot be remedied without the help of a dentist.
In a nutshell, brushing (including the tongue), flossing and gargling with mouthwash after every meal coupled with regular professional cleanings will prevent the most pervasive causes of bad breath.
Chewing gum helps reduce the intensity of bad breath by keeping the mouth moist, and many people also find relief by using toothpaste, mouthwashes and other oral hygiene products that are formulated specifically to combat persistent halitosis.
Manufacturing Breakthrough Blog
Friday November 18, 2016
Limitations of the Standard Cost System
Srikanth and Umble tell us that many of the problems that plague American industries today are a direct result of the application of standard cost principles throughout the organization. They further explain that many manufacturing problems derive from the view that manufacturing management’s goal is to control and reduce the standard cost of each individual operation. To illustrate the problems of the standard cost approach, Srikanth and Umble demonstrate how this approach might be misapplied to a typical investment decision. The following is a case study on how Srikanth and Umble illustrate these points.
Suppose the plant manager of a manufacturing firm is considering a proposal to purchase a new and faster stamping machine. The basic information available to analyze the decision is as follows:
- The old machine is able to process material at a rate of 100 units per hour (one every 36 seconds). The new machine is three times as fast, producing material at a rate of 300 units per hour (12 seconds per unit). This saves 24 seconds, or 0.00666 hours, per unit processed.
- The stamping machine is operated by one machinist, who is available to work approximately 2,000 hours per year (40 hours per week for 50 weeks). This is true for either the new machine or the old machine.
- Approximately 150,000 units per year are processed at the work station where the stamping process is performed.
- The cost of direct labor is $15 per hour.
- The overhead factor for this process is 280% of direct labor.
- The net cost of the new machine is $27,000 (This includes the salvage value of the old machine).
The standard cost approach would be to determine whether or not the proposed investment has a sufficiently high return. In most cases, the return is measured in terms of cost savings, and the projected savings would be compared to the initial investment to calculate the payback period. The expected savings from the purchase of the new machine would typically be calculated in the following way:
- Annual direct labor cost savings = reduction in process time per unit x units produced per year x direct labor cost.
In most cases today, the standard cost procedures generally charge overhead to an area based on the amount of direct labor consumed in that area. Therefore, any savings in direct labor cost for an area will eventually result in less total overhead being charged to that area. Thus, any projected savings in direct labor cost can further be projected to reduce the overhead charged to that area. The projected amount of annual overhead cost savings can be calculated by applying the appropriate overhead factor:
Annual overhead cost savings = annual direct labor cost savings x overhead factor
And the total annual cost savings for the area are calculated as follows:
Total annual cost savings = annual direct labor cost savings + annual overhead cost savings
While the exact procedure for the above calculations may differ from firm to firm, the fundamental approach is the same. Continuing with the illustration of the stamping machine:
Annual direct labor cost savings = 0.00666 hours per unit x 150,000 units per year x $15 per hour = $15,000 per year
Annual overhead cost savings = $15,000 x 280% = $42,000 per year
Total annual cost savings = $15,000 + $42,000 = $57,000 per year
Since the net cost of the new machine is $27,000, the direct labor cost savings of $15,000 translates to a payback period of 1.8 years.
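The arithmetic above can be reproduced in a few lines, which also makes it easy to see how sensitive the projected savings are to the assumptions:

```python
old_rate, new_rate = 100, 300          # units processed per hour
units_per_year = 150_000
labor_rate = 15.0                      # dollars per direct labor hour
overhead_factor = 2.80                 # 280% of direct labor
machine_cost = 27_000.0                # net cost of the new machine

time_saved_per_unit = 1 / old_rate - 1 / new_rate          # hours (= 24 seconds)
labor_savings = time_saved_per_unit * units_per_year * labor_rate
overhead_savings = labor_savings * overhead_factor
total_savings = labor_savings + overhead_savings

print(labor_savings)                    # 15000.0 dollars per year
print(overhead_savings)                 # 42000.0 dollars per year
print(total_savings)                    # 57000.0 dollars per year
print(machine_cost / labor_savings)     # 1.8-year payback on direct labor alone
```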
In my next post, we will complete our decision-making analysis on whether or not to purchase the new machine. We will then look at a more realistic way to appraise the information to make a better decision. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond.
Until next time,
L. Srikanth and Michael Umble, Synchronous Management – Profit-Based Manufacturing for the 21st Century, Volume One, 1997, The Spectrum Publishing Company, Wallingford, CT.
I recently came across a blog in Emergency Management Magazine discussing the need to use multiple forms of emergency notifications. Lessons learned and recent studies reveal that the public won’t likely take action unless they receive their directions from at least two trusted sources. A study on evacuations during the San Diego wildfires found that residents generally wouldn’t leave their homes until they had received confirmation from a second source (like the news or a personal contact).
Thankfully, in today’s networked environment, people have information coming at them from all sides (friends, media, online news, social networking sites, etc.) and will most likely be able to verify a threat if they receive initial notification. However, there is always the risk that an employee, friend, neighbor, student, etc. was not notified, or that their source was not credible or trusted. How can you ensure all individuals have received and verified an emergency notification?
And, once an individual does understand there is a threat (violence, natural disaster, etc.), then what? Where should they go? What do they need to do? Should they notify others?
Emergency mass notification systems are only effective if each and every individual sending and receiving the alert is fully aware of specific policies, procedures, roles and responsibilities – people must understand what they HAVE TO DO and NEED TO DO if an incident occurs.
Lessons learned have shown that many safety and security programs do not put enough emphasis on the implementation of crisis management plans, emergency plans, code of conduct manuals, staff procedures manuals, SOPs and other processes after organizations have spent time and money performing assessments, performing general training, purchasing mass notification technologies and developing their plans, procedures and policies.
It is critical for organizations to implement Lessons Learned at the individual-level to prevent and prepare for future incidents. Organizations need to ensure that all procedures, plans, guidelines, etc. have been assigned to all appropriate personnel (faculty, students, employees, law enforcement, board members, vendors, contractors, third-parties, etc.) and that all personnel have acknowledged and understand their roles and responsibilities before, during and after an incident occurs.
Yesterday, the Federal Communications Commission issued a public notice on the use and allocation of wireless spectrum as it pertains to the National Broadband Plan.
The FCC is seeking comments through Oct. 23 “on the sufficiency of current spectrum allocations in spectrum bands, including but not limited to the prime spectrum bands below 3.7 GHz, for purposes of the Commission’s development of a National Broadband Plan.”
The FCC said responses received in its National Broadband Plan have indicated that some parties were concerned that there isn’t enough spectrum available to meet demands for wireless broadband in the future, which led to yesterday’s public notice.
The FCC cited the following examples as some of the reasons to take a closer look at wireless spectrum:
- Wireless association CTIA said that the wireless market in the United States now encompasses more than 270 million subscribers, and the vast number of mobile devices also place heavy burdens on networks.
- Motorola noted that more than 78 percent of U.S. wireless consumers have a wireless device that is capable of accessing the Internet, and approximately 40 million American consumers are active users of mobile Internet services – a 75 percent increase from two years ago.
- According to Wireless Communications Association International (WCAI), a traditional handheld device with average customer usage patterns will consume about 30 megabytes of data in a month; a single smartphone consumes 30 times that amount, and a single connected notebook or laptop computer is consuming 450 times that amount.
- Wireless devices are increasingly used to access bandwidth-intensive applications, such as video, Internet gaming and social networking. WCAI noted that these kinds of mobile data applications require bandwidth between 1 and 5 Mbps, compared with 6 to 12 kbps for a mobile voice call.
The FCC wants to gauge what the current spectrum is being used for and whether it’s being used efficiently, and the Commission seeks to find out what spectrum would be best for mobile wireless and fixed broadband services and applications.
Sensors get L.A. up to speed
Traffic monitoring system compiles data from thousands of sources to deliver a detailed picture of the area's highways
NETWORK ADMINISTRATORS are accustomed to monitoring traffic to reduce bandwidth bottlenecks. Allen Chen's task is a little different: speeding the flow of people, not electrical impulses. As senior transportation electrical engineer at the California Department of Transportation (Caltrans), Chen is responsible for managing traffic on more than 500 miles of freeways in the Los Angeles area.
"We are not going to build more freeways," he said. "But with information, we can manage them better and help drivers make better decisions."
In October 2007, the Los Angeles Regional Transportation Management Center (LARTMC), a facility that houses Caltrans and the California Highway Patrol (CHP), formally opened. There, data assembled from more than 10,000 sensors and cameras gives operators an overview of traffic in the area so they can act quickly to reroute traffic or remove bottlenecks.
Although LARTMC is new, its Advanced Traffic Management System (ATMS) represents the implementation of research that started about 40 years ago. In 1969, Caltrans began researching techniques for improving traffic flow. Its final report, issued in 1976, recommended four steps:
- Install a detection system to monitor traffic volume and speed.
- Meter freeway ramps to balance capacity and demand and improve traffic flow.
- Use changeable message signs that inform drivers of freeway conditions.
- Provide a fleet of tow trucks to immediately remove disabled vehicles from roadways.
"Ramp metering was very revolutionary," Chen said. "Freeways are designed for 1,800 vehicles per lane per hour, but once we slow to 35 miles per hour, our throughput drops to 1,200 per hour. By maintaining proper flow with the ramp meters, we can increase the flow up to 2,400."
Those 1976 recommendations have been put into place, with more than 10,000 sensors embedded in the roadways, 1,280 traffic monitoring stations, 450 closed-circuit cameras, 960 ramp metering systems, 109 changeable message signs, and 15 highway advisory radio stations to keep the public informed.
In 1998, the state legislature approved funds to build a new five-story facility to house Caltrans and CHP. Located about 10 miles from downtown, the site includes microwave and satellite communications to CHP and Caltrans employees in the field. The facility also is the base for a Sonet OC-12 connection to the freeway sensors and control system.
"Using artificial intelligence, we can build filters and algorithms that detect any anomaly," Chen said. "Then the system will alert our operators, and the operators can activate the closed-circuit TVs to investigate."
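The article does not describe those filters, but as a purely illustrative sketch of the kind of anomaly check such a system might run on loop-detector data (the thresholds and structure here are my assumptions, not Caltrans code):

```python
from collections import deque
from statistics import mean, stdev

class SpeedAnomalyFilter:
    """Flag a sensor reading that deviates sharply from its recent history."""
    def __init__(self, window=30, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def check(self, speed_mph):
        alert = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(speed_mph - mu) / sigma > self.threshold:
                alert = True   # e.g. cue an operator to pull up nearby CCTV
        self.history.append(speed_mph)
        return alert

f = SpeedAnomalyFilter()
readings = [62, 65, 63, 64, 61, 66, 63, 62, 64, 65, 63, 18]   # sudden slowdown
alerts = [f.check(r) for r in readings]                        # last reading trips the filter
```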
Although Caltrans uses commercial software wherever possible (the systems and workstations run on Microsoft Windows XP), there is no commercial product designed to manage such a traffic system. Therefore, Caltrans had to assemble software into an overall package that met its needs. It selected Science Applications International Corp. as the primary systems integrator and Delcan as the chief consultant for systems architecture.
ATMS runs on Hewlett-Packard HP-UX 11i servers. Data is stored in an Oracle 9i database, which is migrating to Oracle 10g, and a Gensym G2 business rules engine is used to run scenarios and present data and suggested actions to operators. The wall in the control room contains 12 84-inch digital light projection displays, 12 50-inch DLP displays and two electronic …
With ATMS, Caltrans is working to further reduce bottlenecks, and it is also expanding its information outreach activities to keep other agencies and the public informed of traffic conditions via the Internet, radio and TV news, and cell phone/personal digital assistant alerts.
"If the commuter knows whether it will take five minutes or an hour to reach their destination, they can make an informed decision," Chen said. "And if they decide to take a less congested route, it will also benefit others on the road."
Five human rights groups urged the United Nations (UN) General Assembly to adopt a new resolution against indiscriminate mass surveillance.
The draft resolution, titled "The right to privacy in the digital age" was introduced by Germany and Brazil in November in the wake of the NSA mass surveillance revelations. It calls upon states to respect and protect the right to privacy, including in the context of digital communication and to take measures to put an end to violations of those rights.
This is a critical moment for the protection of privacy around the world, said the Electronic Frontier Foundation, Privacy International, Human Rights Watch, Amnesty International and Access in a letter sent to all members of the UN General Assembly on Thursday. That letter linked to an earlier version of the resolution that, at the time of writing, contained minor drafting differences.
"Indiscriminate mass surveillance, which tramples individuals' right to privacy and undermines the social contract we all have with the State, must come to end immediately," they wrote, adding that if the resolution is adopted this would be the first major statement by the UN on privacy in 25 years.
The draft resolution says states should create conditions to prevent such violations and ensure that relevant national legislation complies with their obligations under international human rights law.
States should also review their procedures, practices and legislation regarding the surveillance of communications, their interception and collection of personal data, with a view to upholding the right to privacy by ensuring the full and effective implementation of all their obligations under international human rights law, according to the draft resolution.
Moreover, states should establish or maintain existing independent, effective domestic oversight mechanisms capable of ensuring transparency and accountability for state surveillance of communications, their interception and collection of personal data, the draft said.
In their letter to members of the General Assembly, the groups wrote: "A strong resolution would crucially reiterate the importance of protecting privacy and free expression in the face of technological advancements and encroaching State power."
By adopting the resolution, the groups said, the General Assembly can take a stand against indiscriminate practices such as mass surveillance, interception, and data collection, both at home and abroad. It will also help to uphold the right of all individuals to use information and communication technologies such as the internet without fear of unwarranted interference, they wrote.
Despite efforts by the "Five Eyes" surveillance alliance -- the U.S., Canada, New Zealand, Australia and the U.K. -- to weaken the resolution, it remains relatively undamaged, the groups said in a joint news release.
The resolution also requests the United Nations High Commissioner for Human Rights to present a report on the protection and promotion of the right to privacy in the digital age.
If adopted, the resolution will ensure that this issue stays on the front burner at the UN, the groups said, adding that a vote on the resolution is expected to take place in the next week.
Loek is Amsterdam Correspondent and covers online privacy, intellectual property, open-source and online payment issues for the IDG News Service. Follow him on Twitter at @loekessers or email tips and comments to email@example.com
What Is An All-Flash Array?
An All-Flash Array, also referred to as a SSA or Solid State Array, is data storage that contains multiple Flash memory drives. All-Flash storage contains no moving parts, which means significantly less heat generated, less power utilized, and less maintenance. From a functional standpoint, All-Flash technology powered by Intel® Xeon® processors provides vastly superior performance; fewer spikes in latency, better disaster recovery, support of real-time analytics, much faster data transfer rates, and the ability to free IT staff to focus on other tasks. All-Flash Arrays provide the foundation for next-generation business applications, and the All-Flash data center.
What Is Flash Storage?
Flash Storage is storage media designed to electronically secure data. The media is designed to be electronically erased and reprogrammed. Flash represents a transformational shift in computing; by eliminating rotational delay and seek time, Flash provides responses orders of magnitude faster than traditional spinning disk. Flash Storage represents a performance breakthrough for storage operations (IO) and provides the foundation for enabling the next technology wave.
What Is A Hybrid Flash Array?
A Hybrid Flash Array is a combination of data storage units that utilize at least two types of storage media, one being Flash storage, the other being one of several possible options. Hybrid Flash arrays are used by businesses that have a need to store both hot and cold data in a single storage platform and want to balance performance and economics. Hybrid Flash arrays allow customers to choose the ratio of performance and capacity that best suits their specific needs to achieve the optimal value for their investment. The ability to add more Flash or capacity as needs dictate is a powerful benefit.
What Is Flash Based Storage?
Flash Based Storage is data storage media that delivers tremendous performance increases over traditional spinning hard disk drives. Flash Based Storage provides a dramatically lower cost per operation on a $/IO basis. This can save significant capital costs for most business workloads as well as virtual data center deployments that are practically ubiquitous in modern data centers. Virtualization causes a significant increase in IO density per server, a workload pattern that can often be significantly impacted and measured with increased performance and lower costs, by leveraging Flash Based Storage technologies.
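As a rough, hypothetical illustration of the $/IO comparison (the prices and IOPS figures below are assumptions chosen only for the arithmetic, not vendor numbers):

```python
# Illustrative (assumed) figures, not vendor pricing:
hdd_price, hdd_iops = 150.0, 180        # spinning disk: price in dollars, random IOPS
ssd_price, ssd_iops = 600.0, 75_000     # flash SSD: price in dollars, random IOPS

print(hdd_price / hdd_iops)   # ~0.83 dollars per IOPS
print(ssd_price / ssd_iops)   # ~0.008 dollars per IOPS
```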
What Is Flash Technology?
Flash Technology is any storage repository that uses Flash memory. Flash Memory comes in many forms, and you probably use Flash Technology every day. From a single Flash chip on a simple circuit board, to circuit boards in your phone, to a fully integrated “enterprise Flash disk” where multiple chips are used in place of a spinning disk, Flash Technology is everywhere!
What Is Flash SSD?
Flash SSDs are data storage devices that offer high performance, low latency storage, which provides significant performance, power consumption and cooling benefits over traditional spinning media. The I/O capability of Flash SSDs is 10x faster than spinning drives. Customers use Flash SSDs for the best application response time to run virtual servers and virtual desktops and to unleash the power of analytics for real-time decision making for the business.
Ultrabook, Celeron, Celeron Inside, Core Inside, Intel, Intel Logo, Intel Atom, Intel Atom Inside, Intel Core, Intel Inside, Intel Inside Logo, Intel vPro, Itanium, Itanium Inside, Pentium, Pentium Inside, vPro Inside, Xeon, Xeon Phi, and Xeon Inside are trademarks of Intel Corporation in the U.S. and/or other countries.
IPv6 Neighbor Discovery Protocol
In IPv6 there is no ARP (Address Resolution Protocol) anymore. ARP is replaced by the ICMPv6-based NDP protocol. NDP (Neighbor Discovery, or ND) uses special ICMPv6 messages to discover neighbors on the local link and resolve their IPv6 addresses to link-layer (L2) addresses.
It is a simple way for a host to learn about the neighbors on its L2 subnet, including the other hosts and routers on the local network. This is one of the biggest differences between IPv4 and IPv6: there is no ARP, and ICMPv6 takes over that function.
NDP was originally defined in RFC 2461 (later updated by RFC 4861). This article introduces NDP's functions, its main features, and the related ICMPv6 message types.
The most precise description of NDP is that it belongs to the link layer of the Internet Protocol suite in the TCP/IP model. The link layer of the TCP/IP model is essentially a combination of the data link layer and the physical layer of the OSI (Open Systems Interconnection) protocol stack. Because this blog normally follows the OSI model, this article is filed under both the Data-link and Physical layer categories.
In IPv6 networks, NDP uses ICMPv6 messages and solicited-node multicast addresses for its core function: discovering and tracking the other IPv6 hosts reachable through connected interfaces. NDP is also used for address autoconfiguration.
Let’s discuss some major roles of IPv6 NDP:
- Stateless address autoconfiguration – SLAAC
- Duplicate address detection DAD
- Router discovery
- Prefix discovery
- Parameter discovery (link MTU, hop limits)
- Neighbor discovery
- Neighbor address resolution – replaces ARP in IPv6
- Neighbor and router reachability verification
To carry out this work, NDP uses five types of ICMPv6 messages. The following sections summarize each message type and its purpose.
NDP message types:
IPv6 nodes send Neighbor Advertisement (NA) messages, either in response to a solicitation or on their own, to announce their presence to other hosts on the same network and to provide their link-layer addresses.
IPv6 nodes send Neighbor Solicitation (NS) messages so that the link-layer address of a specific neighbor can be found. This message is used in three operations:
▪ For detecting duplicate address
▪ Verification of neighbor reachability
▪ Layer 3 to Layer 2 address resolution (the ARP replacement). ARP is not included in IPv6 as a separate protocol; the same functionality is integrated into ICMPv6 as part of neighbor discovery. An NA message is the response to an NS message, and this NS/NA exchange is how two IPv6 hosts resolve each other's link-layer addresses; a toy sketch of the neighbor cache that the exchange populates follows below.
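Here is that toy model of the neighbor cache that the NS/NA exchange fills in (the entry states INCOMPLETE and REACHABLE follow RFC 4861; the code structure itself is illustrative only):

```python
neighbor_cache = {}   # IPv6 address -> {"lladdr": MAC, "state": str}

def send_ns(target_ip):
    """Start resolution: a real stack multicasts an NS to the
    solicited-node group for target_ip and marks the entry INCOMPLETE."""
    neighbor_cache[target_ip] = {"lladdr": None, "state": "INCOMPLETE"}

def receive_na(target_ip, lladdr):
    """An NA answering our NS completes the mapping (the ARP-reply equivalent)."""
    neighbor_cache[target_ip] = {"lladdr": lladdr, "state": "REACHABLE"}

send_ns("2001:db8::1")
receive_na("2001:db8::1", "00:1a:2b:3c:4d:5e")
print(neighbor_cache["2001:db8::1"])   # {'lladdr': '00:1a:2b:3c:4d:5e', 'state': 'REACHABLE'}
```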
Router Advertisement and Router Solicitation
A Cisco IPv6 router starts sending RA messages for every configured interface prefix as soon as the ipv6 unicast-routing command is entered. The default RA interval (200 seconds) can be changed with the ipv6 nd ra-interval command. On a given interface, the router advertisements include all of the 64-bit IPv6 prefixes configured on that interface. This permits stateless address autoconfiguration (SLAAC) to function, with hosts generating EUI-64-based addresses. RAs also carry the link MTU and hop limit, as well as an indication of whether the router is a candidate default router.
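As an illustration of the EUI-64 step mentioned above (the helper below is my own sketch, and the example MAC address is inferred from the interface identifier used later in the DAD example): flip the universal/local bit in the MAC's first octet and insert FF:FE in the middle to form the 64-bit interface identifier.

```python
def eui64_interface_id(mac: str) -> str:
    """Build the 64-bit interface identifier used by SLAAC from a 48-bit MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local (U/L) bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert FF:FE in the middle
    groups = ["%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2)]
    return ":".join(groups)

# 00:07:85:80:71:b8 -> interface ID 0207:85ff:fe80:71b8,
# matching the low 64 bits of the address used in the DAD example below.
print(eui64_interface_id("00:07:85:80:71:b8"))
```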
IPv6 routers send periodic RA messages to inform hosts about the IPv6 prefixes used on the link and to announce that the router is available as a default gateway. A Cisco router that runs IPv6 on an interface advertises itself as a candidate default router by default. To prevent this, use the command ipv6 nd ra-lifetime 0: the router still announces its presence by sending RAs, but a lifetime of 0 tells connected hosts not to use it to reach destinations beyond the subnet.
It is also possible to hide the router completely from router advertisements by disabling them on the interface with the command ipv6 nd suppress-ra.
At startup, IPv6 hosts can send Router Solicitation (RS) messages to the all-routers multicast address. This helps hosts on a given link learn the routers' addresses immediately, without waiting for the next periodic RA. If no IPv6 address is configured on the host interface yet, the RS is sent from the unspecified source address (::); if the host already has a configured address, the RS is sourced from that address.
Duplicate Address Detection
IPv6 Duplicate Address Detection (DAD) is a function built on neighbor solicitations. When a host autoconfigures an address, it does not automatically assume the address is unique. The address probably is unique if the EUI-64 process generated it from a MAC address, which should itself be unique, but other interfaces on the same L2 subnet could have manually configured IPv6 addresses that happen to collide with the generated one. DAD is the extra check performed to be sure.
DAD works like this:
- The host first joins the all-nodes multicast group and the solicited-node multicast group that corresponds to the address being checked.
- The host then sends a few Neighbor Solicitation (NS) messages with that solicited-node address as the destination. The source address field carries the unspecified address, written as "::".
- The address being checked is placed in the Target Address field; it is referred to as the tentative address.
If the host sending such an NS receives an NA in response, the tentative address is not unique. IPv6 hosts use this process to verify the uniqueness of both statically configured and autoconfigured addresses.
For example, when a host has autoconfigured an interface with the address 2001:128:1F:633:207:85FF:FE80:71B8, an NS is sent to the corresponding solicited-node multicast address, FF02::1:FF80:71B8 (from the FF02::1:FF00:0/104 range). If no other host answers, the node knows it is fine to use the autoconfigured address.
The details of solicited-node multicast addresses, and the process of generating them from any IPv6 unicast or anycast address, are explained in more detail in the article Solicited-node multicast address from February 2015.
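A minimal sketch of that derivation: keep the low-order 24 bits of the unicast address and append them to the FF02::1:FF00:0/104 prefix. Using Python's standard ipaddress module:

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """FF02::1:FFxx:xxxx, where xx:xxxx are the low 24 bits of the address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    group = int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24
    return str(ipaddress.IPv6Address(group))

# Reproduces the mapping used in the DAD example above:
print(solicited_node("2001:128:1F:633:207:85FF:FE80:71B8"))   # ff02::1:ff80:71b8
```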
For a router this is an efficient way to perform DAD, because the same solicited-node address on the router can cover several of its autoconfigured addresses (see the earlier section on "IPv6 Address Autoconfiguration" for a discussion of solicited-node addresses).
Neighbor Unreachability Detection
IPv6 neighbors can track each other, ensuring that Layer 3 to Layer 2 address mappings stay current, using information learned by different means. Reachability is not established merely by receiving an advertisement from a neighbor or router; it requires confirmed, two-way reachability. However, a neighbor does not necessarily have to query another node directly and receive a reply. A node can confirm reachability in two ways:
- The host sends a query to the desired host's solicited-node multicast address and receives an NA or an RA in response.
- While communicating with the desired host, the node gets a clue from a higher-layer protocol that two-way communication is functioning properly; a TCP ACK is one such clue. Note that these clues only work for connection-oriented protocols. UDP does not acknowledge frames, so it cannot be used to verify neighbor reachability. When a host needs to confirm another node's reachability while only connectionless traffic (or no traffic) is passing between them, the originating host must send a query to the desired neighbor's solicited-node multicast address.
Functions of ND in IPv6
|Message Type||Information Sought or Sent||Source Address||Destination Address||ICMP Type, Code|
|Router Advertisement (RA)||Routers advertise their presence and link prefixes, MTU, and hop limits.||Router’s link-local address||FF02::1 for periodic broadcasts; address of querying host for responses to an RS||134, 0|
|Router Solicitation (RS)||Hosts query for the presence of routers on the link.||Address assigned to querying interface, if assigned, or :: if not assigned||FF02::2||133, 0|
|Neighbor Solicitation (NS)||Hosts query for other nodes’ link-layer addresses. Used for duplicate address detection and to verify neighbor reachability.||Address assigned to querying interface, if assigned, or :: if not assigned||Solicited-node multicast address or the target node’s address, if known||135, 0|
|Neighbor Advertisement (NA)||Sent in response to NS messages and periodically to provide information to neighbors.||Configured or automatically assigned address of originating interface||Address of node requesting the NA or FF02::1 for periodic advertisements||136, 0|
|Redirect||Sent by routers to inform nodes of better next-hop routers.||Link-local address of originating node||Source address of requesting node||137, 0|
What if computers had a “check engine” light that could indicate new, novel security problems? What if computers could go one step further and heal security problems before they happen?
To find out, the Defense Advanced Research Projects Agency (DARPA) intends to hold the Cyber Grand Challenge (CGC)—the first-ever tournament for fully automatic network defense systems. DARPA envisions teams creating automated systems that would compete against each other to evaluate software, test for vulnerabilities, generate security patches and apply them to protected computers on a network.
To succeed, competitors must bridge the expert gap between security software and cutting-edge program analysis research. The winning team would receive a cash prize of $2 million.
“DARPA’s series of vehicle Grand Challenges were the dawn of the self-driving car revolution,” said Mike Walker, DARPA program manager. “With the Cyber Grand Challenge, we intend a similar revolution for information security. Today, our time to patch a newly discovered security flaw is measured in days. Through automatic recognition and remediation of software flaws, the term for a new cyber attack may change from zero-day to zero-second.”
Highly trained experts capable of reasoning about software vulnerabilities, threats and malware power modern network defense. These experts compete regularly on a global “Capture the Flag” tournament circuit, improving their skills and measuring excellence through head-to-head competition. Drawing on the best traditions of expert computer security competitions, DARPA aims to challenge unmanned systems to compete against each other in a real-time tournament for the first time.
“The growth trends we’ve seen in cyber attacks and malware point to a future where automation must be developed to assist IT security analysts,” said Dan Kaufman, director of DARPA’s Information Innovation Office, which oversees the Challenge.
The competition is expected to draw teams of top experts from across a wide range of computer security disciplines including reverse engineering, formal methods, program analysis and computer security competition. To encourage widespread participation and teaming, DARPA plans to host teaming forums on the CGC website.
For the first time, a cyber competition would take place on a network framework purpose-built to interface with automatic systems. Competitors would navigate a series of challenges, starting with a qualifying event in which a collection of software must be automatically analyzed. Competitors would qualify by automatically identifying, analyzing and repairing software flaws.
DARPA intends to invite a select group of top competitors from the qualifying event to the Cyber Grand Challenge final event, slated for early to mid-2016. In that competition, each team’s system would automatically identify software flaws, scanning the network to identify affected hosts. Teams would be scored on how capably their systems could protect hosts, scan the network for vulnerabilities and maintain the correct function of software. The winning team from the CGC finals would receive a cash prize of $2 million, with second place earning $1 million and third place taking home $750,000.
A Broad Agency Announcement (BAA) with specific information for potential competitors is available here. Competitors can choose one of two routes: an unfunded track in which anyone able to field a capable system can participate, and a funded track in which DARPA awards contracts to organizations presenting the most compelling proposals.
DARPA also plans in the near future to issue a second BAA for proposals to develop technologies to support the competition. Support technologies will include accessible visualization of a real-time cyber competition event, as well as custom problem sets. | <urn:uuid:68949595-86b0-4d19-bf0c-6fc4b5b14bf9> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2013/10/24/darpa-announces-cyber-defense-tournament-with-a-2-million-cash-prize/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00468-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944345 | 734 | 2.734375 | 3 |
The phenomenon of the Internet of Things (IoT) is positively influencing our lives by augmenting our spaces with intelligent and connected devices. Examples of these devices include lightbulbs, motion sensors, door locks, video cameras, thermostats, and power outlets.
By 2022, the average household with two teenage children will own roughly 50 such Internet connected devices, according to estimates by the Organization for Economic Co-Operation and Development. Our society is starting to increasingly depend upon IoT devices to promote automation and increase our well being. As such, it is important that we begin a dialogue on how we can securely enable the upcoming technology.
Nitesh Dhanjani conducted research on the Philips hue lighting system. The hue personal wireless system is available for purchase from the Apple Store and other outlets. Out of the box, the system comprises wireless LED light bulbs and a wireless bridge. The light bulbs can be configured to any of 16 million colors.
He released a paper that discusses top threats associated with the product in addition to a detailed analysis of how the system works.
A vulnerability can be used by malware on an infected machine on the user’s internal network to cause a sustained blackout. A demonstration of this vulnerability can be seen in the video below:
The goals of the research:
- Lighting is critical to physical security. Smart lightbulb systems are likely to be deployed in current and new residential and corporate constructions. An abuse case such as the ability of an intruder to remotely shut off lighting in locations such as hospitals and other public venues can result in serious consequences.
- The system is easily available in the marketplace and is one of the more popular self installable wireless light bulb solutions.
- The architecture employs a mix of network protocols and application interfaces that is interesting to evaluate from a design perspective. It is likely that competing products will deploy similar interfaces thereby inheriting abuse cases.
The hue system is a wonderfully innovative product. It is therefore important to understand how it works and to ultimately push forward the secure enablement of similar IoT products. | <urn:uuid:b422d449-5214-4a43-9b0e-161f5a970c2f> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2013/08/13/hacking-a-smart-lightbulb-system/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00588-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925203 | 421 | 2.859375 | 3 |
Capturing packets from computer networks can sometimes sound like an activity reserved for hackers and geeks. Looking at a stream of raw packets is not for the fainthearted, but thankfully there are a number of technologies out there that can make the task easier.
Before you can consider packet capture you need to look at how you can implement it on your network. Almost all networks nowadays are based on network switches, no more hubs thankfully. One of the reasons we use network switches is that they are good at only sending packets where they are needed. If I copy some files from a server, my data won't be sent to every network port, which is what happens with hubs. Instead, the switch will send my data through a path that it figures out by communicating with other switches. There are some exceptions to this with broadcast and multicast traffic but normally levels of those types of traffic are low.
Hubs had an advantage over switches when it came to troubleshooting problems. If you connected to one port you saw all traffic on the network, and data was replicated across all ports. In order to keep this useful functionality, switch manufacturers include a feature called port mirroring. Port mirroring allows you to take a copy of the data going to and from one or more ports. This type of feature is sometimes called passive or out-of-band monitoring as it does not impact network operations.
Port mirroring on its own is of little use, though. You need to connect something to the mirror port that can take the data and present it in a human readable way.
Wireshark and handheld packet capture devices
One of the most basic forms of packet capture involves the installation of an application that can take packets from a network interface and display their contents. The most common example of this is Wireshark. This application allows you to see all traffic visible on a network interface, not just traffic addressed to one of the interface's configured addresses and broadcast/multicast traffic.
The output of these applications can still be very hard to read, but they can be very useful for troubleshooting application problems with a single system.
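If you just want a quick look at what is arriving on an interface without opening a full GUI, a few lines of Python can do the job. The sketch below is only an illustration: it assumes the third-party Scapy library is installed, that the script runs with capture privileges, and that the interface name is a placeholder you would swap for your own.

# Minimal sketch: capture a handful of packets and print a one-line summary of each.
from scapy.all import sniff

def show(pkt):
    # summary() returns a compact, human-readable description of a packet
    print(pkt.summary())

# Capture 20 packets on a (placeholder) interface attached to the mirror port.
sniff(iface="eth0", count=20, prn=show, store=False)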
There are also handheld packet capture devices on the market. These plug into a mirror port and provide real time visibility into what is happening. Most types of systems in this space don't store much information so you need to be watching the screen to spot the issue. If you have problems like a general network slow-down that are on-going, then they can be useful.
Application monitoring and IDS
Some application monitoring and intrusion detection systems (IDS) use a mirror port as a source of data. Most will look at the content of each packet (deep packet inspection, or DPI) and extract certain data. In the case of IDS it could be a suspicious text string that suggests the presence of a virus. For application monitoring, information such as filenames or usernames is captured and stored in a database.
By only storing the information you are interested in you can get a historical record of what's been happening on your network. Systems in this space are often used for data compliance reasons as they are a viable alternative to systems that use log files as a source of data.
Packet recorders are like the CCTV cameras for your network as they record everything. They take each packet from a network and store its entire content within a database. Because of this, they can be expensive and consume vast amounts of disk space. The output of these systems will mirror what users on the network experienced. For example, a complete web page can be rebuilt from the packet data so you can see exactly what the end user accessed.
This all sounds a bit big-brother but they do have their uses. If you have a problem on your network, you can replay what happened, just like you would replay CCTV footage if there was a problem on the premises you are monitoring. Because of this, I see them used a lot for monitoring applications used for financial transactions, online banking or ticket sales being examples.
There are more and more systems becoming available which use network packets as their source data. The main thing to watch out for is that you get something that extracts the right level of information for you and stores it for a time to meet your requirements.
Do you use packet capture on your network? If so I would be interested in hearing your comments on how useful you find it.
Darragh Delaney is head of technical services at NetFort. As Director of Technical Services and Customer Support, he interacts on a daily basis with NetFort customers and is responsible for the delivery of a high quality technical and customer support service. Follow Darragh on Twitter @darraghdelaney | <urn:uuid:2e2f788c-3605-46b6-b85a-5ae41c46a82b> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2472879/networking/packet-capture-made-simple.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00464-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953175 | 954 | 3.125 | 3 |
The STP (Spanning Tree Protocol) standard (IEEE 802.1D) was designed at a time when recovery after an outage could take a minute or so and still be considered acceptable performance. With Layer 3 switching in LANs, switching began to compete with routers running routing protocols, which are able to offer faster alternate paths.
Rapid Spanning Tree Protocol (RSTP, or IEEE 802.1w) brought the ability to take STP’s fifty seconds of convergence (twenty seconds waiting for the Max Age counter, plus fifteen seconds of Listening, plus fifteen seconds of Learning) down to less than one second for point-to-point connected and edge switches, and about six seconds for root switches.
The new 802.1D that includes RSTP uses terms that are close to those of STP. The biggest difference is that all switches send BPDUs and actively look for possible failures using a feedback function. While parameters can be set for backward compatibility, in that mode RSTP loses the “rapid” advantage.
The new 802.1D reduces port states from five (Disabled, Blocking, Listening, Learning, and Forwarding) to three (Discarding, Learning, and Forwarding).
RSTP replaced Non-Designated or Blocking ports with Alternate and Backup ports, both of which actively discard frames. The Alternate port has an alternate path to the root switch. The Backup port is a redundant path to a LAN segment where another switch already connects.
Managers may configure edge ports if those ports attach to a LAN segment without any other switch on it. These edge ports go to forwarding on configuration. RSTP continues to monitor an edge port for BPDUs in case another switch shows up on that LAN segment. Managers may configure RSTP to automatically detect edge ports. Either way, when a switch detects a BPDU coming to an edge port, that port becomes a non-edge port.
RSTP will respond to BPDUs that are sent from the root switch’s direction. An RSTP switch will “propose” its spanning tree information on its designated ports. If an RSTP switch receives the proposed information and decides it is better root information, it sets all of its other (non-Root) ports to discarding. The switch may then send an “Agreement” in response to the first bridge’s proposal, confirming that the proposal carries the better spanning tree information. The first bridge, upon receiving this agreement, can rapidly transition that port to the forwarding state, bypassing the older listening/learning states. This begins a cascade moving outward from the root switch, in which each designated switch “proposes” to its neighbors to determine if it can make a rapid transition. This is one of the major functions that lets RSTP achieve faster convergence than STP.
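To make the proposal/agreement handshake a bit more concrete, here is a deliberately simplified Python sketch of the decision a switch makes when a superior proposal arrives. It models only port roles and states, not real BPDU formats or timers, and every name in it is invented for illustration.

# Toy model of the RSTP proposal/agreement handshake (not a real implementation).
FORWARDING, DISCARDING = "forwarding", "discarding"

class Port:
    def __init__(self, name, role, edge=False):
        self.name, self.role, self.edge = name, role, edge
        self.state = FORWARDING if edge else DISCARDING

def handle_proposal(ports, root_port_name, proposal_is_superior):
    # React to a proposal received on the port that will become the root port.
    if not proposal_is_superior:
        return None                                   # keep our own root information, no agreement
    for p in ports:
        # "Sync": block every non-edge designated port before agreeing, so no loop
        # can form while downstream switches renegotiate their own ports.
        if p.name != root_port_name and not p.edge and p.role == "designated":
            p.state = DISCARDING
    root = next(p for p in ports if p.name == root_port_name)
    root.role, root.state = "root", FORWARDING        # rapid transition, no listening/learning timers
    return "agreement"                                # lets the upstream switch start forwarding at once

ports = [Port("p1", "designated"), Port("p2", "designated"), Port("p3", "designated", edge=True)]
print(handle_proposal(ports, "p1", proposal_is_superior=True))   # -> agreement
print([(p.name, p.state) for p in ports])                        # p2 now discards; the edge port stays up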
RSTP also maintains backup details about the discarding status of the switch ports. This can avoid timeouts with failure of the current forwarding ports or if the root port fails to receive BPDUs in a certain time interval. | <urn:uuid:7595bf51-bf95-43de-af27-b969adef1ab4> | CC-MAIN-2017-09 | http://blog.globalknowledge.com/2012/11/29/rapid-spanning-tree/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00336-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.896359 | 629 | 2.96875 | 3 |
iptables is the packet filtering technology that’s built into the 2.4 Linux kernel. It’s what allows one to do firewalling, NATing, and other cool stuff to packets from within Linux. Actually, that’s not quite right: iptables is just the command used to control netfilter, which is the real underlying technology. We’ll just call it iptables though, since that’s how the whole system is usually referred to.
Stateful Packet Inspection
First off, many have heard a number of different definitions of “stateful” firewalls and/or “SPI” protection, and I think it’s worth the time to take a stab at clearing up the ambiguity. “Stateful Inspection” actually gives its true definition away in its name; it’s nothing more and nothing less than attempting to ensure that traffic moving through the firewall is legitimate by determining whether or not it’s part of (or related to) an existing, accepted connection.
When you hear that this firewall or that firewall does “SPI”, it could really mean anything; it’s a big buzzword right now, so every company out there wants to add it to their sales pitch. Remember, the definition is broad, so there can be (and is) a big difference between so-called SPI protection on a $50 SOHO router as compared to what’s offered on something like Check Point FW-1. The former could do a couple TCP-flag checks and call it SPI, while the latter does a full battery of tests. Just keep that in mind; not all SPI is created equal.
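If it helps to see the idea stripped to its core, here is a tiny Python sketch of what “stateful” means in the narrowest sense: only let a packet in if it belongs to a flow we have already seen going out. Real connection tracking checks far more (sequence numbers, flags, timeouts), and the addresses below are documentation-range placeholders.

# Toy illustration of stateful filtering: admit a packet only if it is a reply
# to a connection this host initiated. Real conntrack is far more thorough.
established = set()

def outbound(src, sport, dst, dport):
    # Remember the flow we initiated, keyed the way the reply will arrive.
    established.add((dst, dport, src, sport))

def inbound_allowed(src, sport, dst, dport):
    return (src, sport, dst, dport) in established

outbound("192.0.2.10", 49152, "198.51.100.20", 80)
print(inbound_allowed("198.51.100.20", 80, "192.0.2.10", 49152))   # True: reply to our request
print(inbound_allowed("203.0.113.7", 80, "192.0.2.10", 49152))     # False: unsolicited traffic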
iptables is made up of some basic structures, as seen below:
TABLES are the major pieces of the packet processing system, and they consist of FILTER, NAT, and MANGLE. FILTER is used for the standard processing of packets, and it’s the default table if none other is specified. NAT is used to rewrite the source and/or destination of packets and/or track connections. MANGLE is used to otherwise modify packets, i.e. modifying various portions of a TCP header, etc.
CHAINS are then associated with each table. Chains are lists of rules within a table, and they are associated with “hook points” on the system, i.e. places where you can intercept traffic and take action. Here are the default table/chain combinations:
- FILTER: Input, Output, Forward
- NAT: Prerouting, Postrouting, Output
- MANGLE: Prerouting, Postrouting, Input, Output, Forward
And here’s when the different chains do their thing:
- PREROUTING: Immediately after being received by an interface.
- POSTROUTING: Right before leaving an interface.
- INPUT: Right before being handed to a local process.
- OUTPUT: Right after being created by a local process.
- FORWARD: For any packets coming in one interface and leaving out another.
In other words, if you want to process packets as they leave your system, but without doing any NAT or MANGLE(ing), you’ll look to the OUTPUT chain within the FILTER table. If you want to process packets coming from the outside destined for your local machine, you’ll want to use the same FILTER table, but the INPUT chain.
TARGETS determine what will happen to a packet within a chain if a match is found with one of its rules. The two most common ones are DROP and ACCEPT. So if you want to drop a packet on the floor, you write a rule that matches the particular traffic and then jump to the DROP target. Conversely, if you want to allow something, you jump to the ACCEPT target — simple enough.
How Packets Move
Packets move through netfilter by traversing chains. Each non-empty chain has a list of rules in it, which packets are checked against one after another. If a match is found, the packet is processed. If no match is found, the default action is taken. The default action for a chain is also called its policy. By default, chain policies are to jump to the ACCEPT target, but this can be set to DROP if so desired (I suggest it).
So, with that inadequate short intro out of the way, let’s dig into it with some diagrams and a couple of cookbook-style examples:
Allow Outgoing (Stateful) Web Browsing
iptables -A OUTPUT -o eth0 -p TCP --dport 80 -j ACCEPT
iptables -A INPUT -i eth0 -p TCP -m state --state ESTABLISHED,RELATED --sport 80 -j ACCEPT
In the first rule, we’re simply adding (appending) a rule to the OUTPUT chain for protocol TCP and destination port 80 to be allowed. We are also specifying that the outgoing packets will need to exit the machine over interface eth0 (-o is for “output”) in order to trigger this rule; this interface designation is important when you start dealing with machines with multiple interfaces. You can also add additional checks beyond those seen here, such as what source ports are allowed, etc., but we’ll keep it simple for the examples here. The second rule allows the web traffic to come back (an important part of browsing).
Notice the “state” stuff; that’s what makes netfilter a “stateful” firewalling technology. Packets are not able to move through this rule and get back to the client unless they were created via the rule above it, i.e. they have to be part of an established or related connection, and be coming from a source port of 80 — which is usually a web server. Again, you can add more checks here, but you get the point.
Allowing Outgoing Pings
iptables -A OUTPUT -o eth0 -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -i eth0 -p icmp --icmp-type echo-reply -j ACCEPT
Here, we’re appending (-A) to the output (OUTPUT) chain, using the icmp (-p) protocol, of type echo-request (--icmp-type echo-request), and jumping (-j) to the ACCEPT target (which means ACCEPT it, strangely enough). That’s for the outgoing piece. For the return packet, we append to the INPUT chain instead of OUTPUT, and allow echo-reply instead of echo-request. This, of course, means that incoming echo-requests to your box will be dropped, as will outgoing replies.
“Passing Ports” Into A NATd Network
One of the most commonly-used functions of firewall devices is “passing ports” inside to other private, hidden machines on your network running services such as web and mail. In corporate environments this is out of necessity, and at home it’s often for gaming or in a hobbyist context. Either way, here’s how you do it with Netfilter/IPTABLES:
iptables -t nat -A PREROUTING -i eth0 -p tcp -d 126.96.36.199 --dport 25 -j DNAT --to 192.168.0.2:25
If we break this down, we see that we’re actually using the nat table here rather than not specifying one. Remember, if nothing is mentioned as far as tables are concerned, you’re using the filter table by default. So in this case we’re using the nat table and appending a rule to the PREROUTING chain. If you recall from the diagram above, the PREROUTING chain takes effect right after being received by an interface from the outside.
This is where DNAT occurs. This means that destination translation happens before routing, which is good to know. So, we then see that the rule will apply to the TCP protocol for all packets destined for port 25 on the public IP. From there, we jump to the DNAT target (Destination NAT), and “jump to” (--to) our internal IP on port 25. Notice that the syntax of the internal destination is IP:PORT.
Ah, but this is only half of the work. If you have any experience with corporate-class firewalls such as Check Point or Astaro, you know there are two parts to enabling connectivity like this — the NAT portion, and the rules portion. Below is what we need to get the traffic through the firewall:
iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 25 -d 192.168.0.2 -j ACCEPT
In other words, if you just NAT the traffic, it’s not ever going to make it through your firewall; you have to pass it through the rulebase as well. Notice that we’re accepting the packets in from the first interface, and allowing them out the second. Finally, we’re specifying that only traffic to destination port (--dport) 25 (TCP) is allowed — which matches our NAT rule.
The key here is that you need two things in order to pass traffic inside to your hidden servers — NAT and rules.
Ok, so that about does it for now. I have obviously only scratched the surface here, but hopefully I’ve covered the very basics in a way that can help someone. I intend to keep adding to this as time goes on, both for the sake of being thorough and to clarify things as needed. If, however, I have missed something major, or made some sort of error, do feel free to contact me and set me straight. Also, be sure to check out the manpage as well as the links below.
The Netfilter/Iptables Project
Linux Firewalls, Second Edition | <urn:uuid:1b7f4eb1-1a83-4639-84f4-be50eafdf876> | CC-MAIN-2017-09 | https://danielmiessler.com/study/iptables/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00336-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.916065 | 2,162 | 2.96875 | 3 |
Note: This was originally posted on the Center for Democracy & Technology blog.
The Internet is running out of address space and it appears that the solution has narrowly avoided a technical issue that carried serious implications for consumer privacy.
The Internet’s inventors never imagined it would explode to become a global tool linking billions of computers and phones. As a result the addressing format they originally designed – known as IPv4 – has nearly maxed out the possible number of addresses it can supply. So a new system—IPv6—was designed to provide an almost endless number of addresses.
Yesterday began the World IPv6 Launch, with Internet service providers, web sites, and equipment vendors around the globe turning on IPv6. Some companies are further along in their adoption of the new addresses than others, but ultimately we will all need to transition to IPv6 to ensure that all Internet devices can talk to each other.
As part of World IPv6 launch, Internet companies large and small – including AT&T, Cisco, Comcast, Facebook, Google, Microsoft, Verizon Wireless, and Yahoo! – committed to turning on IPv6 and leaving it on (the launch’s tag line is “the future is forever”). A small fraction of Internet users and devices have started communicating via IPv6, with more and more making the transition over the coming months and years.
As our devices make the switch to IPv6, they will be assigned new IP addresses in the IPv6 format. IPv6 addresses can be generated in a number of different ways and the choice of how they are created has potentially wide reaching effects for privacy. One of the original methods for assigning new addresses involved using a unique device identifier (known as a MAC address) as the suffix of the IPv6 address. This method creates a permanent, unique address for a device, potentially allowing any server that the device communicates with to indefinitely track the user.
IPv6 designers soon realized the potential privacy problems of MAC-based addresses; as a result, they created an alternate method known as “privacy extensions” or “privacy addresses.” The privacy extensions use a randomly generated number instead of a MAC address. The random number is unrelated to any device identifier and in practice lasts no more than a week (and often much less time), ensuring that the user’s IP address cannot be used for long-term user tracking.
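To see why the choice of generation method matters, here is a small illustrative Python sketch contrasting the two approaches. It derives an EUI-64-style interface identifier from a made-up MAC address and, for comparison, a random identifier of the kind the privacy extensions use; it is a simplification, not how any particular operating system implements the relevant RFCs.

# Sketch: MAC-derived (EUI-64) versus randomized IPv6 interface identifiers.
import secrets

def eui64_interface_id(mac):
    # Old scheme: embed the MAC address (ff:fe inserted in the middle, the
    # universal/local bit flipped), so the identifier never changes.
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                    # flip the universal/local bit
    b = b[:3] + bytes([0xFF, 0xFE]) + b[3:]
    return ":".join(f"{(b[i] << 8) | b[i + 1]:x}" for i in range(0, 8, 2))

def random_interface_id():
    # Privacy-extensions style: 64 random bits, regenerated periodically, so the
    # address cannot serve as a long-term identifier.
    b = secrets.token_bytes(8)
    return ":".join(f"{(b[i] << 8) | b[i + 1]:x}" for i in range(0, 8, 2))

prefix = "2001:db8:1:2"                             # documentation prefix used as a placeholder
print(prefix + ":" + eui64_interface_id("00:11:22:33:44:55"))   # always the same suffix
print(prefix + ":" + random_interface_id())                     # different on every run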
It is up to operating system vendors to choose which IP address assignment method will be the default on their devices. Fortunately they have made good choices, particularly within the last year. Microsoft has long led the charge on IPv6 privacy, with privacy extensions on by default in all versions of Microsoft Windows since the release of Windows XP nearly a decade ago. Apple followed suit last year, with privacy extensions activated by default in all versions of Mac OS X since 10.7 (Lion) and with the release of iOS 4.3 for iPhone and iPad. Google did likewise in its Android 4.0 release last year.
As long as Internet users choose to upgrade their operating systems to the latest versions, they should be protected against perpetual tracking by IP address as the Internet moves into the IPv6 future. | <urn:uuid:20cbefe5-30c6-447d-868a-6e005b218fd7> | CC-MAIN-2017-09 | https://www.alissacooper.com/2012/06/06/privacy-in-a-future-that-is-forever/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00512-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942804 | 651 | 2.90625 | 3 |
Once each year we take time out to celebrate Earth, home to humankind, the animal and plant kingdom, and whatever aliens walk in our midst.
The rest of the year, of course, we try our best to make our beloved Blue Planet uninhabitable. We're doing a pretty good job, but there's always a chance that Earth will be destroyed by outside forces before we get around to finishing our work.
The video below explores 10 different ways that Earth could meet a sudden and violent demise. It's both fascinating and frightening.
Also, it did not escape our attention that some of the scientists interviewed seemed inappropriately excited about their pet doomsday theories. Scientists, is being right that important to you? You're talking about all of us dying here! Can't any of you act a bit somber? It's bad optics.
We're hard-pressed to pick a favorite way for Earth to be destroyed, if there can be such a thing. But perhaps the most ironic would be the scenario in which the planet we're so desperate to travel to and colonize -- Mars -- falls out of its orbit and collides with Earth. Talk about a cosmic joke.
This story, "Enjoy Earth Day while we still have Earth" was originally published by Fritterati. | <urn:uuid:f70ff703-0e52-46cb-9f36-4f15253a7409> | CC-MAIN-2017-09 | http://www.itnews.com/article/2913199/enjoy-earth-day-while-we-still-have-earth.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00456-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.956961 | 261 | 2.65625 | 3 |
Looking for an example of how open government data is being used by the private sector? Google the words “aspirin,” “ibuprofen” or “amoxicillin” and check out the upper right hand corner of the webpage.
You’ll find boxes with information about recommended uses, risks and possible side effects that rely on open data streams from the National Library of Medicine and the Food and Drug Administration.
“The best part about this is we didn’t even know it was happening,” U.S. Chief Technology Officer Todd Park told members of the President’s Council of Advisors on Science and Technology Friday. “It didn’t happen because of evangelism by us. It didn’t happen because we met them at a meet up. They just somehow learned about the API we had . . . and boom. This materialized and now that information is helping everyone on Google search and score aspirin and every other drug.”
Park has been working to make raw government data available online to developers and entrepreneurs since 2009, first as CTO of the Health and Human Services Department and later in his governmentwide role. The goal is that entrepreneurs will use that data to build applications and services that aid consumers, turn a profit or both.
Park listed nearly a dozen such applications during his hour-long presentation to the advisory council, including iTriage, an app that matches patients with hospitals based on their symptoms, and WeMakeItSafer, which alerts companies when a product in their inventory is recalled.
Applications built with government-gathered weather and Global Positioning System data have become lucrative businesses for companies such as Google and Apple.
The Obama Administration has released government data on healthcare, education, energy, oceans and numerous other “communities” through the site Data.gov. In the coming months, officials plan to update Data.gov to include feedback from entrepreneurs who have used the site about which datasets were most useful, how they used those datasets and how others might consider using them, Park told council members.
In the future, he suggested, a nongovernment organization, such as the magazine Consumer Reports, might grade government datasets using similar criteria to provide an objective assessment of what information is most valuable and where there are potentially lucrative gaps in data use.
The Office of Management and Budget plans to release policy soon describing best practices for making government data open and machine readable, Park said. Agencies were required to make open and machine readable data the default for government as part of the government digital strategy released in May.
“There are a lot of smart people that work for the U.S. government but even the U.S. government is vastly outnumbered by the rest of the planet Earth,” Park said. “If you take the data taxpayers have paid for and you give it back to them in machine readable, easily findable, easily usable form . . . new applications, services, companies and nonprofits will help deliver vast benefits to the American people, grow the economy, create jobs, make the workplace more efficient and result in general rejoicing.” | <urn:uuid:5c49bd62-c175-43c4-9313-dcbc353e5cf7> | CC-MAIN-2017-09 | http://www.nextgov.com/big-data/2013/01/what-pain-killers-can-teach-us-about-open-government/60478/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00384-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9572 | 652 | 2.609375 | 3 |
On Thursday, the world learned that attackers were breaking into computers using a previously undocumented security hole in Java, a program that is installed on hundreds of millions of computers worldwide. This post aims to answer some of the most frequently asked questions about the vulnerability, and to outline simple steps that users can take to protect themselves.
Update, Jan. 13, 8:14 p.m. ET: Oracle just released a patch to fix this vulnerability. Read more here.
Q: What is Java, anyway?
A: Java is a programming language and computing platform that powers programs including utilities, games, and business applications. According to Java maker Oracle Corp., Java runs on more than 850 million personal computers worldwide, and on billions of devices worldwide, including mobile and TV devices. It is required by some Web sites that use it to run interactive games and applications.
Q: So what is all the fuss about?
A: Researchers have discovered that cybercrooks are attacking a previously unknown security hole in Java 7 that can be used to seize control over a computer if a user visits a compromised or malicious Web site.
Q: Yikes. How do I protect my computer?
A: The version of Java that runs on most consumer PCs includes a browser plug-in. According to researchers at Carnegie Mellon University‘s CERT, unplugging the Java plugin from the browser essentially prevents exploitation of the vulnerability. Not long ago, disconnecting Java from the browser was not straightforward, but with the release of the latest version of Java 7 — Update 10 — Oracle included a very simple method for removing Java from the browser. You can find their instructions for doing this here.
Q: How do I know if I have Java installed, and if so, which version?
A: The simplest way is to visit this link and click the “Do I have Java” link, just below the big red “Download Java” button.
Q: I’m using Java 6. Does that mean I don’t have to worry about this?
A: There have been conflicting findings on this front. The description of this bug at the National Vulnerability Database (NVD), for example, states that the vulnerability is present in Java versions going back several years, including version 4 and 5. Analysts at vulnerability research firm Immunity say the bug could impact Java 6 and possibly earlier versions. But Will Dormann, a security expert who’s been examining this flaw closely for CERT, said the NVD’s advisory is incorrect: CERT maintains that this vulnerability stems from a component that Oracle introduced with Java 7. Dormann points to a detailed technical analysis of the Java flaw by Adam Gowdiak of Security Explorations, a security research team that has alerted Java maker Oracle about a large number of flaws in Java. Gowdiak says Oracle tried to fix this particular flaw in a previous update but failed to address it completely.
Either way, it’s important not to get too hung up on which versions are affected, as this could become a moving target. Also, a new zero-day flaw is discovered in Java several times a year. That’s why I’ve urged readers to either uninstall Java completely or unplug it from the browser no matter what version you’re using.
Q: A site I use often requires the Java plugin to be enabled. What should I do?
A: You could downgrade to Java 6, but that is not a very good solution. Oracle will stop supporting Java 6 at the end of February 2013, and will soon be transitioning Java 6 users to Java 7 anyway. If you need Java for specific Web sites, a better solution is to adopt a two-browser approach. If you normally browse the Web with Firefox, for example, consider disabling the Java plugin in Firefox, and then using an alternative browser (Chrome, IE9, Safari, etc.) with Java enabled to browse only the site(s) that require(s) it.
Q: I am using a Mac, so I should be okay, right?
A: Not exactly. Experts have found that this flaw in Java 7 can be exploited to foist malware on Mac and Linux systems, in addition to Microsoft Windows machines. Java is made to run programs across multiple platforms, which makes it especially dangerous when new flaws in it are discovered. For instance, the Flashback worm that infected more than 600,000 Macs wiggled into OS X systems via a Java flaw. Oracle’s instructions include advice on how to unplug Java from Safari. I should note that Apple has not provided a version of Java for OS X beyond 6, but users can still download and install Java 7 on Mac systems. However, it appears that in response to this threat, Apple has taken steps to block Java from running on OS X systems.
Q: I don’t browse random sites or visit dodgy porn sites, so I shouldn’t have to worry about this, correct?
A: Wrong. This vulnerability is mainly being exploited by exploit packs, which are crimeware tools made to be stitched into Web sites so that when visitors come to the site with vulnerable/outdated browser plugins (like this Java bug), the site can silently install malware on the visitor’s PC. Exploit packs can be just as easily stitched into porn sites as they can be inserted into legitimate, hacked Web sites. All it takes is for the attackers to be able to insert one line of code into a compromised Web site.
Q: I’ve read in several places that this is the first time that the U.S. government has urged computer users to remove or wholesale avoid using a particular piece of software because of a widespread threat. Is this true?
A: Not really. During previous high-alert situations, CERT has advised Windows users to avoid using Internet Explorer. In this case, CERT is not really recommending that users uninstall Java: just that users unplug Java from their Web browser.
| <urn:uuid:82e7b058-edd9-401e-aa0f-a1a4d2c80443> | CC-MAIN-2017-09 | https://krebsonsecurity.com/tag/will-dormann/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00380-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.93286 | 1,247 | 2.84375 | 3 |
The Nobel Prize has been awarded to Serge Haroche, from the Collège de France and the École Normale Supérieure, and David Wineland, from the University of Colorado and also the National Institute of Standards and Technology (NIST). While technically the award is to honor research into the strange effects between light and matter, the quantum effects observed have implications for quantum computing and ultimately security.
According to the New York Times "Such techniques are crucial for the dream of quantum computers, which manipulate so-called qubits that are 1 and 0 simultaneously to solve some problems like factoring gigantic numbers to break codes beyond the capacity of ordinary computers. Such computers depend on the ability to isolate their “qubits” from the environment to preserve their magical computing powers, but at the same time there must be a way to measure the qubits to read out their answer." | <urn:uuid:d1a96bff-a078-468f-9d04-8a1f7acf6111> | CC-MAIN-2017-09 | https://www.mocana.com/blog/topic/serge-haroche | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00556-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.934015 | 183 | 2.515625 | 3 |
Robots of the World, Unite
By Tim Moran | Posted 2011-04-06
Knowledge sharing goes global for robotic learners.
As if we don’t have enough to worry about, now comes news that robots are getting their own information-packed network. A recent squib on FastCompany.com reports that a European Union-funded project called RoboEarth is building a database and network that will let robots share information about the world—and us. The robots will both contribute to and curate the network, which will allow them to access and share everything they have learned and everything they do.
For instance: If one robot learns how to set a table for dinner, that information can be made available on the robot net so future robots can learn how to do it too. What one robot has learned is available for all time for other robots. At the speed with which Wikipedia and Facebook have taken off with humans, the mind reels at how rapidly robots will know how to do everything humans can do—except program a remote control. Nobody can do that.
Tech-Savvy Kids Can’t Tie Their Shoes
When I was a kid, toys were simple and technology was rudimentary. Consumer technology was just beginning to take off with products like the tape cassette, the portable calculator and the transistor radio. We rode bikes, played stickball and collected baseball cards.
Flash forward 50 years: According to a recent study by AVG, an Internet security company, children today are more likely to know how to navigate with a mouse, play a computer game and operate a smartphone than swim, tie their own shoelaces or make their own breakfast. Said AVG’s CEO, J.R. Smith: “Technology has changed what it means to be a parent raising children today—these children are growing up in an environment that would be unrecognizable to their parents.”
Ya think? Unless their mothers were computers, of course.
When Computers Wore Skirts
Once upon a time, a “computer” was a person, not a thing. In most cases, this person was female, and therein lies the tale of the women who, in 1942, were part of a secret U.S. military program at the University of Pennsylvania, where they worked as “computers” calculating weapons trajectories for soldiers fighting in World War II. A cohort of these women was turned up by LeAnn Erickson, an associate professor at Temple University, whose research led to a video documentary, Top Secret Rosies: The Female Computers of World War II.
After the war ended, some of these women were hired to work on that new technology marvel, ENIAC, the Electronic Numerical Integrator and Computer. Though the men who created it, John Mauchly and J. Presper Eckert Jr., received all the kudos, it was the nascent “programming” and vacuum-tube debugging by women “computers” that actually made ENIAC work. But other than a shared certificate of commendation from the military, “the programmers and their hand-calculating counterparts got no recognition.” Until now. | <urn:uuid:1d1a0e6b-262f-4237-8e78-495e81127fa0> | CC-MAIN-2017-09 | http://www.baselinemag.com/c/a/Intelligence/Robots-of-the-World-Unite-206373 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00024-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961803 | 658 | 2.8125 | 3 |
SFP transceivers (small form-factor pluggable) are modules that connect a device's electrical circuitry with the optical or copper network, widely used for both telecommunication and data communications applications. The SFP transceiver is not standardized by any official standards body, but rather is specified by a multi-source agreement (MSA) between competing manufacturers.
The SFP was designed after the GBIC interface, and allows greater port density (number of transceivers per cm along the edge of a mother board) than the GBIC, which is why SFP is also known as mini-GBIC.
SFP transceivers are expected to perform at data speeds of up to five gigabits per second (5 Gbps), and possibly higher. Because SFP modules can be easily interchanged, electro-optical or fiber optic networks can be upgraded and maintained more conveniently than has been the case with traditional soldered-in modules. Rather than replacing an entire circuit board containing several soldered-in modules, a single module can be removed and replaced for repair or upgrading. This can result in a substantial cost savings, both in maintenance and in upgrading efforts.
SFP transceivers have a wide range of detachable interfaces to multimode/single-mode fiber optics, which allows users to select the appropriate transceiver according to the required optical range for the network. SFP transceivers come in different types, working at different wavelengths over various distances. Signal transmission rates of these modules range from 100Mbps up to 4Gbps or more, working distances range from 500 meters to 100 kilometers, and typical working wavelengths are 850nm, 1310nm and 1550nm; CWDM SFP transceivers are also available. For example, SX SFPs use 850nm for a maximum of 550 meters, LX SFPs use 1310nm for a maximum of 10km, and ZX SFPs can reach 80km.
There are also copper SFP transceivers with copper cable interfaces, which allow a host device designed primarily for optical fiber communications to also communicate over unshielded twisted pair networking cables. Modern optical SFP transceivers support digital diagnostics monitoring (DDM) functions, also known as digital optical monitoring (DOM). This feature gives users the ability to monitor the real-time parameters of the SFP, such as optical output power, optical input power, temperature, laser-bias current and transceiver supply voltage. The DOM function is optional; it helps users monitor the real-time working status of the SFP.
An SFP interfaces a network device motherboard (for a switch, router, media converter or similar device) to a fiber optic or copper networking cable. SFP transceivers support communications standards including SONET, Gigabit Ethernet, Fibre Channel and others. They also allow the transport of Fast Ethernet and Gigabit Ethernet LAN packets over time-division-multiplexing-based WANs, as well as the transmission of E1/T1 streams over packet-switched networks.
Fiberstore provides various kinds of SFP fiber optic transceivers, including single mode SFP, multimode SFP and Copper SFP, with various working wavelength and functions. We also supply CWDM SFP, DWDM SFP and cisco sfp 10g fiber transceivers. The compatible brand transceivers we offer include cisco sfp transceiver, hp sfp transceiver, 3Com sfp, Juniper sfp, Foundry sfp, Extreme sfp, Netgear sfp, Force10 sfp, etc. | <urn:uuid:e133a0a1-8885-44cb-b32b-63e2c67fa38b> | CC-MAIN-2017-09 | http://www.fs.com/blog/why-sfp-transceivers-are-widely-used-in-communication.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00200-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.915241 | 751 | 3.0625 | 3 |
The conference was aimed at providing a bird's eye view of cloud computing terminology, what it is and what it is not. I would follow the same pattern used during the training to report my learning. The theory was complemented by a few practical case studies of companies who have used the Cloud to run successful businesses.
The conference was held in 4 cities, Bangalore being one of them, and saw several companies participating. There were delegates from IBM, HP, Cisco, Akamai, 3i Infotech, Mphasis, Mindtree, Samsung and Fidelity to name a few.
Module 1 : Cloud Introduction and Definitions
So what is this whole Cloud thing? What are cloud characteristics?
Cloud Computing is first of all not a computing model. It is essentially a delivery model. It has existed for decades now, so this whole cloud computing thing is, so to speak, Old Wine in New Bottles. What this essentially means is that Cloud Computing is another term for services delivered over the Internet or the World Wide Web. This has long existed in SaaS (Software as a Service) applications. The email and web apps which you have been using for quite some time now are variants of this. Where Cloud goes further is in delivering an OS or even an entire hardware configuration as a service. This means that small startups can enjoy the advantage of huge computing power and custom platforms without actually having to buy hardware and set up their own datacentre.
- A pay-as-you-go financial model
- Rapid Elasticity to scale up to your needs
- Multiple users (tenants) can take advantage of the same physical resource
- On-Demand self-service via Automatic Provisioning of required services - No Manual Intervention
For the more technical definition-savvy, here is the Cloud Computing definition from NIST (National Institute of Standards and Technology):
"Cloud Computing is a model
for enabling convenient, on-demand network access
to a shared pool of configurable computing resources (e.g. networks, storage, applications and services)
that can be rapidly provisioned and released with minimal management effort or service provider interaction"
What are the Deployment Models for the Cloud?
Deployment model refers to how you want to use the cloud to your organization's advantage, more specifically where the data resides and who operates the cloud. The various Cloud Deployment Models are:
- Private Cloud - A private cloud is owned and operated solely for an organization.
- Public Cloud - This is the other extreme where cloud services are made available to the general public by a large industry group and is owned by an organization selling the cloud services. e.g. Amazon provides its EC2 (Elastic Compute Cloud) instances to other organizations or individuals on a pay-as-you-go financial model.
- Community Cloud - Shared by several organizations and supports a specific community that has shared concerns. e.g. several educational institutes may own a community cloud to share/host resources and knowledge for a particular domain.
- Hybrid Cloud - This is a composition of two or more clouds (private, community or public) that remain unique entities but are bound by technologies which enable data and application portability between them. e.g. IBM's Cast Iron helps you integrate your on-premise data (within the enterprise) with data on the cloud.
In Cloud Computing anything which can be delivered to end-users over the Internet becomes a Service Model. The most commonly referred-to Service Models are SaaS, PaaS and IaaS.
- Software-as-a-Service - The primary users are business end users who do not deal with any code or configuration complexity. Use these to complete end-user tasks. e.g. Email, Office Automation, CRM, Website Testing, Wiki, Blog and Virtual Desktop.
- Platform-as-a-Service - Developers and Deployers use PaaS to create and deploy applications and services for users. e.g. Middleware execution stack, service and application test, development, integration and deployment.
- Infrastructure-as-a-Service - IaaS is used mostly by operations teams (IT infrastructure personnel, system managers/administrators) to create platforms for service and application test, development, integration and deployment. e.g. Virtual machines, operating systems, message queues, networks, storage, CPU, memory, backup services.
So when we say cloud, who are the actors involved?
A Cloud like any other service has a provider and a consumer. These are the primary actors.A Cloud Consumer is a person or organization which uses services from a Cloud Provider whereas a Cloud Provider is a person, organization or entity responsible for making a service available to the interested parties.
Besides these there may be a Cloud Auditor, which maintains regulatory compliance with government guidelines, and a Cloud Broker, which helps a consumer select and route to an appropriate provider, much like an ESB (Enterprise Service Bus).
The backbone of all these is a Cloud Carrier which refers to the ISPs which provide Internet connectivity. No matter how fast your Cloud might be, if your network is slow, everything is slow in effect.
Actually No. SOA and Cloud are two different things, although both are related to services. SOA is an architectural approach which guides how you should design your enterprise applications as a set of loosely-coupled reusable services, whereas Cloud is a deployment and operational model. The Cloud can host services which have been developed as per the SOA philosophy.
Does Cloud provide me infinite resources?
Because of the vast computing and physical resources on the side of the cloud provider, the cloud gives an "illusion of infinite capacity". So as a matter of fact the Cloud does not literally provide infinite resources. The Cloud characteristic which gives this illusion is elasticity. | <urn:uuid:778e1ce5-2480-42b0-aa9e-76b0e567cea3> | CC-MAIN-2017-09 | https://www.ibm.com/developerworks/community/blogs/706a5f30-025f-4dc3-b162-01e918fe489e/entry/cloud_clearing_the_haze_solving_the_maze5?lang=en | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00200-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.932341 | 1,199 | 2.796875 | 3 |
The definition of big data holds the key to understanding big data analytics. According to the Gartner IT Glossary, Big Data is high-volume, high-velocity and high-variety information assets that demand cost effective, innovative forms of information processing for enhanced insight and decision making.
- Volume refers to the amount of data. Many factors are contributing to high volume: sensor and machine-generated data, networks, social media, and much more. Enterprises are awash with terabytes and, increasingly, petabytes of big data.
- Variety refers to the number of types of data. Big data extends beyond structured data such as numbers, dates and strings to include unstructured data such as text, video, audio, click streams, 3D data and log files.
- Velocity refers to the speed of data processing. The pace at which data streams in from sources such as mobile devices, clickstreams, high-frequency stock trading, and machine-to-machine processes is massive and continuously fast moving.
Learn more at https://intellipaat.com/hadoop-online-training/ | <urn:uuid:063b5037-9493-4020-9584-8cd9ba039bdc> | CC-MAIN-2017-09 | http://www.informationweek.com/big-data/big-data-analytics/big-data-analytics-masters-degrees-20-top-programs/d/d-id/1108042?cid=sbx_bigdata_related_news_business_intelligence_big_data&itc=sbx_bigdata_related_news_business_intelligence_big_data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00252-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.914963 | 231 | 3.3125 | 3 |
Yesterday I attended an event called Big Data Forum 2012 held in London.
Big data still seems to be a buzzing term with many definitions. Anyway, surely it is about datasets that are bigger (and more complex) than before.
The Olympics is Going to be Bigger
One session on the big data forum was about how BBC will use big data in covering the upcoming London Olympics on the BBC website.
James Howard, who I know as speckled_jim on Twitter, told us that the bulk of the content on the BBC Sports website is not produced by the BBC. The data is sourced from external data providers, and the structure of the content is actually also based on the external sources.
So for the Olympics there will be rich content about all the 10,000 athletes coming from all over the world. The BBC's own editorial content will of course be linked to this content, with the emphasis on the British athletes.
I guess that other broadcasting bodies and sports websites from all over the world will base the bulk of their content on the same sources and then more or less link their own targeted content in the same way and with their look and feel.
There are some data quality issues related to sourcing such data, Jim told us. For example you may have your own guideline for how to spell names in other script systems.
I have noticed exactly that issue in the news from major broadcasters. For example BBC spells the new Egyptian president Mursi while CNN says his name is Morsi.
Bigger Data in Party Master Data Management
The postal validation firm Postcode Anywhere recently had a blog post called Big Data – What’s the Big Deal?
The post has the well known sentiment that you may use your resources better by addressing data quality in “small data” rather than fighting with big data and that getting valid addresses in your party master data is a very good place to start.
I can’t agree more about getting valid addresses.
However I also see some opportunities in sharing bigger datasets for valid addresses. For example:
- The reference dataset for UK addresses typically based on the Royal Mail Postal Address File (PAF) is not that big. But the reference dataset for addresses from all over the world is bigger and more complex. And along with increasing globalization we need valid addresses from all over the world.
- Rich address reference data will be more and more available. The UK PAF file is not that big. The AddressBase from Ordnance Survey in the UK is bigger and more complex. So is similar location reference data with more information than basic postal attributes from all over the world, not least when addressed together.
- A valid address based on address reference data only tells you if the address is valid, not if the addressee is (still) on the address. Therefore you often need to combine address reference data with business directories and consumer/citizen reference sources. That means bigger and more complex data as well.
Similar to how the BBC is covering the Olympics, my guess is that organizations will increasingly share bigger public address, business entity and consumer/citizen reference data and link private master data that they find more accurate (like the spelling example), along with essential data elements that better support their way of doing business and make them more competitive.
My recent post Mashing Up Big Reference Data and Internal Master Data describes a solution for linking bigger data within business processes in order to get a valid address and beyond. | <urn:uuid:a29ef78c-12ef-421a-b2c1-688d69c7992c> | CC-MAIN-2017-09 | https://liliendahl.com/category/sport/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00252-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.936771 | 692 | 2.515625 | 3 |
Recent data breaches show what private and public sector victims are dealing with: disruption, reputational damage, and significant financial repercussions. Victims can also find themselves attracting the unwelcome attention of regulators. Breaches like those recently suffered by the IRS and Ashley Madison have ignited the discussion about the role that federal regulators should play in holding organizations accountable.
The US Congress has not yet adopted sweeping legislation governing data security. Even in cases of large-scale, headline-grabbing data breaches with massive financial settlements, there has not been a clear path by which the federal government can pursue cases of wrongdoing. This may now be changing.
Over the past few months, many state and federal regulators have stepped up their focus on data security, conducting their own examinations and investigations, and ultimately levying fines for non-compliance, or lack of adequate security measures to protect consumer information.
Perhaps most significant was a ruling in August 2015 from a federal US appellate court confirming that the Federal Trade Commission (FTC) has the authority to take legal action against an organization for not adequately safeguarding customer data. The ruling broadly confirms the FTC’s authority to regulate companies that are negligent in allowing consumer data to be lost to hackers.
So what does this ruling mean? The court’s decision demonstrates that information security must be treated like any other protective measure, and that having inadequate cybersecurity measures in place will not be excused.
In many cases, organizations have acted recklessly by storing sensitive information without encryption or by keeping passwords on sticky notes. In such cases, government bodies like the FTC will be able to make a clear argument that this lack of security equates to insufficient protection, and the organization can therefore be held accountable for security claims it cannot support.
One of the challenges both the FTC and organizations will face is making a clear case that the proper safeguards were in place. As we’ve seen, cyberattacks come in many shapes and sizes, so there is no definitive checklist for protecting corporate or customer data. Defining a fair standard to which every organization must adhere will remain an arena of debate for some time.
Navigating data compliance
It is challenging for organizations to understand and comply with the many well-meaning regulatory requirements, particularly if such requirements are veiled as suggestions.
It’s critical for businesses to protect themselves and their customers by implementing and adhering to formal security procedures. In the coming year, the European Union is poised to introduce its General Data Protection Regulation (GDPR), which would impose new privacy rules on any organization that processes personal data in the course of offering goods or services to citizens in the European Union. While no such blanket regulation exists in the US, several industries have been issued increasingly large regulatory fines for not complying with existing industry-specific legislation. The introduction of new legislation in Europe could be a catalyst for similar legislation in the U.S.
There is no panacea when it comes to ensuring the integrity of your corporate network and the security of customer data. Organizations need to adopt a layered approach that includes encryption, anti-malware, and endpoint security. It is also important to conduct frequent and comprehensive audits of your data security posture.
Education and staff awareness are also critical. Having a formal procedure for what is expected in the event of a breach can often help expedite the containment process to mitigate potential risks. Internal awareness training should be conducted regularly across the organization.
With greater regulatory oversight than ever before, organizations must ensure they are investing in and prioritizing the protection of their sensitive data, across all levels of the organization.
Sweeping legislation like the EU GDPR may be inevitable, but time will tell if this form of governance will encourage organizations to prioritize security.
This article is published as part of the IDG Contributor Network. Want to Join? | <urn:uuid:cb442ebc-2591-4986-9be4-565ce53220a3> | CC-MAIN-2017-09 | http://www.csoonline.com/article/3005595/data-protection/ftc-ruling-suggests-upcoming-changes-for-data-compliance-regulation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00021-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95569 | 782 | 2.515625 | 3 |
Back to Basics With Unix: What's in a System?
One thing is for certain: Unix is complicated. Linux does it one way, Solaris another, and all the BSDs, yet another. Fortunately there is some logic behind the differences. Some have to do with where the OS came from, and some were design choices, intended to improve usability. In this article we'll talk about a few major differences between the Unix variants, and tell you what you need to know about various differences in command-line utilities.
First, recall that Unix started off in research labs, where two main flavors arose: System V (SysV) and BSD. SysV (five, not "vee") is the AT&T commercial line, which culminated in System V Release 4 (SVR4). BSD, from Berkeley, is the competing Unix variant. Both derived from the same Unix from Bell Labs, but they quickly diverged. Despite POSIX efforts, there are still BSD and SysV systems today, and their functionality still diverges.
Most operating systems are pretty clearly associated with one or the other, and generalized comparisons between BSD vs. SysV usually prove correct.
FreeBSD is the most prominent branch from traditional BSD, followed by NetBSD and OpenBSD. Mac OS X descends from another BSD variant, NEXTSTEP, and remains very BSD-like while borrowing heavily from the FreeBSD userspace. On the SysV side of the house, AIX, IRIX and HP-UX were the main variants. In short: commercial entities focused on SysV, academics focused on BSD.
Linux, however, is an oddball. Linux certainly adopted many SysV methodologies, but these days it is also very BSD-like. Sun's Solaris is also confusing. SunOS started off as BSD, but SunOS 4 was the last BSD version. SunOS 5.x (aka Solaris) is now SysV. The details are much crazier than I've alluded to here, and we probably don't want yet another Unix history lesson. A fun place to start for further reading is the Wikipedia page on Unix Wars.
It has been said that one can tell which system one is using based on two indicators: whether or not the system boots with inittab, and the format of the accounting file. Process accounting isn't really used any longer, and most people don't even know what it's for, so that's mostly moot. The boot system, however, is still critical to understand.
SysV booting means you use inittab. The init program, when run by the kernel, checks /etc/inittab for the initdefault entry, and then boots to the runlevel defined there. Entering a runlevel means that each startup script in that runlevel's directory (for example, /etc/rc3.d for runlevel 3) is run in order. Sequentially, and slowly. Sun was so annoyed with this that it implemented a mechanism to fire up services in parallel, among other things, with the Service Management Facility (SMF). Ubuntu Linux implemented Upstart, which basically works around the sequential nature of init scripts too.
BSD booting means that init simply runs /etc/rc, and that's all. Well, it used to. Soon BSD systems implemented rc.local, so that software and sysadmins alike could implement changes without fear of harming the critical system startup routines. Then /etc/rc.d/ was implemented, so that each script could live separately, just like SysV init scripts. Traditionally, BSD-style scripts didn't take arguments, because there are no runlevels, and they only run once: on startup. There are still no runlevels in BSD, but the startup scripts generally take "start" and "stop" arguments, to allow sysadmins and package management tools to restart services easily.
The most frustrating, and quickest to surface, differences between SysV and BSD are in the traditional utilities. Some common commands take very different arguments and even have some very different functionality. This isn't so important if you're on Linux now, as it generally supports both, but once you find yourself in BSD-land, you're in for some confusion.
The first command people usually run into is 'ps.' The arguments differ:
- SysV: ps -elf
- BSD: ps aux
Linux supports both, BSD does not. Often we may want to list all processes owned by a particular user. In BSD, you must run, "ps aux |grep username" but in SysV you can run, "ps -u username." Just plain 'ps' will list your own processes in both flavors.
Another commonly noticed difference is with the 'df' command. Not because some older systems don't support the -h argument to provide human-readable output, but because the two flavors display different things.
- SysV: shows the amount available in 512-byte blocks
- BSD: nice output showing size in bytes and percentage used
Printing in BSD is confusing for SysV users, and vice-versa. It isn't as common an issue, since newer OSes support both, but it's noteworthy nonetheless. BSD systems traditionally used lpr, lpq, and lprm to administer print jobs, whereas SysV had lp, lpstat, and cancel. Most systems adopted the BSD style, since LPRng (lpr next generation) provided these commands, and CUPS subsequently adopted the BSD variants.
Other programs, such as du, who, ln, tr and more will have slight differences between SysV and BSD. Heck, the differences between the various Unix standards are confusing enough that a single Unix variant may have multiple directories of utilities. Take a look at Solaris's /usr/ucb, /usr/xpg4, and /usr/xpg6 directories. Each standard they support, which has differences from POSIX, is documented and implemented in a separate location. Too bad Linux, as represented by its many distributions, doesn't comply with any standards.
In the end, the differences outlined here are probably the only ones anyone would ever notice. The nuances between du, for example, may be applicable for people writing shell scripts for systems administration procedures. The differences do turn up often enough to be mentionable, but in reality this level of work requires reading manual pages so often that they'd figure it out quickly. User-level utilities are "similar enough" with the exception of ps.
There are so many other differences in system maintenance procedures that those are more frequently focused on. Once the 'ps' hurdle is out of the way, and you understand how the system boots, the main problems are more conceptual, as in "how do I add a user." These vary by OS, and also by distribution of Linux.
Come back next week to learn about the different ways Unix-like operating systems facilitate systems administration tasks. | <urn:uuid:130442ff-c803-47e7-849c-c1be1d2120ab> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/3736811/Back-to-Basics-With-Unix-Whats-in-a-System.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00021-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945859 | 1,441 | 3.25 | 3 |
Apply motion to help users see changes in information.
Animation communicates the relationships between two or more elements. Use motion to make state changes explicit and enhance or add information to a data vis. When elements change, their relationships are affected. Use animation to clarify how this change happens and its influence on the environment or other data sets.
Move data with precision
Start from the initial inspiration of the machine to evoke a mechanical feeling of standard movements and systematic behaviors. Whether your concept comes from a spacecraft or a self-driving car, use the essence of the machine in order to convey a feeling, a common identity, and the object’s sense of belonging in its environment.
Make elements appear from a common starting point (from the bottom, from the center, etc.). Add consequentiality through the use of slight delays. Be consistent in your system of ordering elements (from left to right, from bigger to smaller).
Use the same motion to make elements point the same direction. Horizontal movements draw the shapes into a stack, giving the user the perception of the elements’ coming together. When appropriate, use color shades to give visual feedback of the previous state.
Make elements come out and grow from a common shape. Clearly show the common shape where smaller shapes come from. Rely on color variations to emphasize different levels of data detail and depth.
Motion can be applied to show the user what is updating and how. Animate shapes in order to make clear what is growing and what is declining, avoiding sharp switches from one state to another. When users apply changes, do not reload the animation.
Animate grids as individual layers of a visualization which follow the same rules of the other elements. The draw-in effect of the lines should be updated consistently, but only when necessary. Avoid redundant and excessive grid animation when, for instance, the axis scales does not change. | <urn:uuid:a0b0731c-0a84-4eec-805a-3d4872247e75> | CC-MAIN-2017-09 | https://www.ibm.com/design/language/experience/data-visualization/animation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00373-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.869244 | 379 | 3.875 | 4 |
Correlation rules are written to match specific events or sequences of events by using field references, comparison and match operators on the field contents, and operations on sets of events.
The Correlation Engine loads the rule definition and uses the rule to evaluate and filter incoming events, keeping in memory those that meet the criteria specified by the rule. Depending on the rule definition, a correlation rule might fire according to several different criteria:
The value of one field or multiple fields
The comparison of an incoming event to past events
The number of occurrences of similar events within a defined time period
One or more subrules firing
One or more subrules firing in a particular order
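The Sentinel rule language itself is not reproduced here, but the evaluation model behind the third criterion is easy to picture. The sketch below is a generic, hypothetical Python illustration (not the NetIQ implementation) of a rule that fires when enough similar events arrive within a defined time period.

from collections import deque

class ThresholdRule:
    # Fire when 'threshold' events matching 'predicate' arrive within 'window' seconds.
    def __init__(self, predicate, threshold, window):
        self.predicate = predicate
        self.threshold = threshold
        self.window = window
        self.matches = deque()   # timestamps of matching events held in memory

    def process(self, event):
        if not self.predicate(event):
            return False
        ts = event["timestamp"]
        self.matches.append(ts)
        # Discard matches that have fallen outside the time window.
        while self.matches and ts - self.matches[0] > self.window:
            self.matches.popleft()
        return len(self.matches) >= self.threshold

# Example: fire on three failed logins from the same source within 60 seconds.
rule = ThresholdRule(lambda e: e["name"] == "LoginFailed" and e["src"] == "10.0.0.5",
                     threshold=3, window=60)
events = [{"name": "LoginFailed", "src": "10.0.0.5", "timestamp": t} for t in (0, 20, 45)]
print([rule.process(e) for e in events])   # [False, False, True]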
This section provides a basic overview of how to build Correlation rules and the various parameters required to build a rule. | <urn:uuid:f02d8089-db5f-436b-ac2c-d058cec4d4c9> | CC-MAIN-2017-09 | https://www.netiq.com/documentation/sentinel70/s701_user/data/bgrabb4.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00373-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.895563 | 157 | 2.921875 | 3 |
After a journey of nine years and 3 billion miles, the New Horizons spacecraft re-established contact with NASA and was furiously collecting data and what are expected to be "dazzling" images of the dwarf planet Pluto.
Starting at 5:50 a.m. ET on Wednesday, the spacecraft is expected to begin sending the most detailed images of the dwarf planet that scientists have ever seen, along with mapping and measurements about the planet, its atmosphere and moons.
The new images are expected to be 10 times higher in resolution than previous images the probe has sent back as it approached Pluto.
"It's going to be gorgeous data," said Glen Fountain, New Horizons' project manager. "Dazzle us. It will dazzle us. We've seen just a hint of that data and there is more to come."
On Tuesday, New Horizons, a piano-sized spacecraft that was launched in January of 2006, became the first space probe to make a close flyby of Pluto. Using its seven instruments, the probe is expected to deliver the first detailed scientific information about the rocky and icy dwarf planet.
In a press conference Tuesday night – amid cheering, tears and raised arms -- NASA administrator Charles Bolden called the flyby a historic accomplishment for NASA and for mankind.
"With this mission, we have visited every single planet in our solar system," Bolden said. "Every mission… and exploration is expanding our knowledge of our solar system and the universe, and paving the way for future missions. This is one more step in our journey to Mars because it gives us one more piece of the puzzle to our solar system."
NASA received a signal from the spacecraft around 9 p.m. Tuesday, alerting scientists that it had made its closest approach to Pluto and was functioning properly.
Alan Stern, New Horizons' principal investigator, said the message from the spacecraft was set up to be short and succinct so it could spend as much time as possible making scientific measurements and capturing images.
The first real dump of scientific data is expected this morning and will be sent to NASA and Johns Hopkins University, which designed, built and operates New Horizons.
"If you think it was big today, wait until tomorrow and the next day," said John Grunsfeld, a former NASA astronaut and associate administrator for the Science Mission Directorate at NASA. "You haven't seen anything yet. This is just the beginning. It's the beginning of the mission."
Alice Bowman, New Horizons' mission operations manager, received excited applause and cheers when she announced that the spacecraft's main computer system reported that the expected number of memory segments have been used. That information tells scientists that data about Pluto already has been collected.
"The spacecraft did exactly what it was supposed to do and the signal was there. It was great," said Bowman, who guided New Horizons across the solar system. "Right now, it's outbound from Pluto. It will be turning around and taking more images and spectra as it looks back at the planet."
This story, "NASA spacecraft expects 'dazzling' data after Pluto flyby" was originally published by Computerworld. | <urn:uuid:66b66ebf-be75-4070-b72f-1b6868feff6e> | CC-MAIN-2017-09 | http://www.itnews.com/article/2948203/space-technology/nasa-spacecraft-to-be-full-of-dazzling-data-after-pluto-flyby.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00549-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.970239 | 639 | 3.3125 | 3 |
If you’re over 30 you probably remember the term “melting pot” being used to describe America’s model of benefitting from other cultures. Growing up in the SF Bay Area I heard it all the time.
I was thinking about multiculturalism the other day and trying to figure out what was wrong with the “melting pot” approach when I realized that NOTHING was wrong with it.
The problem isn’t the melting pot; the problem is NOT doing the melting pot.
The melting pot and e pluribus unum are very similar concepts. They speak of taking difference and turning it into unity. They speak of building a stronger, united whole by incorporating the strengths of diverse components.
Abusing the Metaphor
Fine. I like that.
The problem is that we’re not doing that. We’re not following the metaphor. The metaphor has a key word and concept within it: melting. Melting implies a couple of key things:
When you have two metal bars next to each other and you throw them into a true “melting pot” they are exposed to the extreme temperatures and they melt. The lines between them fade away. They cease, for all intents and purposes, to be different metals. And the properties of each are given to the new, single alloy.
This is what we want — a united America that’s stronger due to its diverse population.
But we don’t have that. We’re missing the heat. The heat in the case of the metals is actual increased temperature, but the heat in the cultural context is the social pressure on immigrants to downplay their previous culture and adopt the American way of life. In both cases the “heat” is essential because it enables many to become one.
Without it you have a single location with a collection of different people with different cultures. Cultures which exert such extreme and opposing pressures on the whole that they can destroy it in a very short period.
That's our country today. A lack of whole. A lack of unity. A lack of identity. We are pulling ourselves in too many directions.
Some cultures look different but are fundamentally the same. Take the Eastern Indian culture and the Upper Middle-Class “White” culture 1, for example. They may have different native languages and like different foods, but the similarities are far more significant. Their goals are as follows:
- Be disciplined with your money.
- Be relatively conservative with your values.
- Financial security is your measure of success.
- Education is your path to financial security.
- Every child goes to a good university. Period.
- The purpose of every parent is to get your kids into that great school.
- Make those kids successful so that they can have kids and they can do the same.
The primary source of accomplishment in the lives of parents within these cultures is captured by the following statement:
Yes, my son is doing really well. He was top of his class in high school and he’s on track to be top 25 at Harvard. He’ll get hired immediately and make a lot of money.
That’s it. That’s life for most of these people. And it’s not just Indian and White people. It’s a good portion of the Chinese and Koreans too. Oh, and the Jews.
Education –> Money –> Kids –> Education –> Money.
Notice that almost half (48%) of Hindus in the United States have a Masters degree 2.
So these are the types of approaches to life that we should be standardizing on as an American ideal. It doesn’t have anything to do with foods or accents or favorite dances; it’s about shared values.
And it doesn’t mean that there’s no place for art or philosophy. I’m not saying everyone has to try and be a doctor, engineer or lawyer; that’s un-American as well. The point is simply that people who come here and want to establish Sharia law and abolish the American way of life, from America, should not be welcome.
This is to say that there is such a thing as being so different as to become incompatible with America. And that's America's primary challenge today — figuring out what our identity is and determining how to go about actualizing it.
1 Yes, I’m aware that “White” culture is all but pinned down. | <urn:uuid:360e901b-4774-477a-bc6f-17d79de45bef> | CC-MAIN-2017-09 | https://danielmiessler.com/blog/thoughts-on-the-melting-pot-metaphor/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00425-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95416 | 948 | 2.765625 | 3 |
A widely accepted definition of information security risk is the potential of a specific threat exploiting the vulnerabilities of an information asset, with the following formula used to represent information security risks: Risk = Likelihood x Impact.
The potential impact on information, processes and people is typically estimated during a business impact analysis as part of corporate business continuity planning. Estimating the likelihood of information security risks, however, is often guesswork based on combined vulnerability assessments and threat assessments. While assessing the likelihood of risks, many IT security teams categorise risk using a traffic-light system of high, medium or low levels. Those responsible for information security in a company should estimate risk levels for all corporate information systems and apply control measures accordingly. Estimating risk levels is a continuous process, and it requires the use of tools such as vulnerability assessment scanners and/or contracting the services of companies specialized in ethical hacking.
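As a rough illustration of that categorisation, the traffic-light mapping can be expressed as a simple lookup. The 1-3 rating scale and the thresholds below are invented for illustration and are not taken from any particular standard.

def risk_level(likelihood, impact):
    # Map likelihood and impact ratings (1-3 each) to a traffic-light level.
    score = likelihood * impact          # Risk = Likelihood x Impact, 1..9
    if score >= 6:
        return "high"     # red
    if score >= 3:
        return "medium"   # amber
    return "low"          # green

for likelihood in (1, 2, 3):
    print([risk_level(likelihood, impact) for impact in (1, 2, 3)])
# ['low', 'low', 'medium']
# ['low', 'medium', 'high']
# ['medium', 'high', 'high']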
In May this year, the Financial Times was hacked via the exploit of one of its many blogging systems. The system in question was running a vulnerable version of a content management system. This case illustrates how the principle of the weakest link in the security chain applies to complex information systems with many interconnected components. To maintain a high level of protection of vital corporate information, it is necessary to assess vulnerabilities across all information systems, since those that are less critical could be exploited to provide access to other, more critical systems.
The likelihood of successfully exploiting a vulnerability is determined by the degree of difficulty in its implementation, skills of the attacker, availability of software tools, capacity of processing power and data connectivity, and publicly available information on the vulnerability.
A vulnerability that is known to be popular among hackers carries a higher likelihood of exploitation. Standard tools for vulnerability assessments are software based vulnerability scanners. These automated tools compare detected application, operating systems and other components on target hosts against proprietary or public databases of known vulnerabilities. They provide reports on detected gaps and recommend implementation of security patches, if they are available.
In assessment reports, automated scanners typically provide links to vendor-provided security patches or knowledge base articles with recommended fixes. After testing the top five tools, I have been using Retina CS from BeyondTrust for the last two years. However, automated tools lack human intelligence and cannot recognize relationships among interconnected information systems. A determined hacker is likely to exploit even a low-priority vulnerability on one system if it has the potential to lead to a high-value business asset.
In today's dynamic business environment, where boundaries of responsibility blur in cloud computing, it is difficult to dedicate resources to the continuous audit of all IT assets. Moreover, certified and skilled manual ethical hacking is costly and time consuming. Nevertheless, there are new assessment solutions for information security managers and IT auditors — hybrid solutions that combine automated vulnerability scanners with manual ethical hacking.
These solutions are less costly than ethical hacking projects, comparable in cost to automated scanners and could be used for regular periodic security assessments of all corporate information systems exposed to the Internet. Their look and feel is similar to automated vulnerability assessment scanners. They do not require administration overhead of ethical hacking projects. These solutions are available on-line and assessments can be scheduled through web portals with very little manual interaction and no expertise required by the customer. Frost and Sullivan recently published an overview of these hybrid solutions.
Hybrid vulnerability assessment solutions are particularly accurate when analysing web based information systems, which are often ranked as high-risk in annual information security reports. The competitive advantage of hybrid vulnerability scanners over traditional automated scanners is in human skills to adapt attack strategies and related tools to particular components of a target. The concept mimics the approach of attackers.
Attackers usually begin with reconnaissance, with the objective of collecting intelligence about the target. These techniques are also used by automated vulnerability scanners. Hackers perform web searches for details about the target company, its employees, and its web identity. They search Internet forums and social networks to identify weak links for possible phishing attacks. These methods are also used by ethical hackers, and are available in their reports with recommended protection practices.
When hackers collect enough information and identify the weakest links in the security chain, they begin manual attacks. The weakest link, as illustrated in the above-mentioned Financial Times case, is typically an information system component that is not updated regularly with security patches therefore vulnerable to published exploits.
Other weak links could be those components that are misconfigured, for example disclosing unnecessary information about software versions in error messages displayed to every user. To be efficient, attacks have to be optimized and adapted to bypass security controls. Automated tools cannot adapt their attack scripts for sophisticated evasion techniques. Undoubtedly hackers can. Ethical hackers, working in the "back office" of hybrid vulnerability scanners, apply the same evasion techniques when assessing the level of exploitability of target systems. This increases the accuracy of exploit level estimates in reports from hybrid vulnerability scanners over automated scanners.
I have tested hybrid vulnerability solutions, such as ImmuniWeb; they offer custom-built scripts in their assessment reports in the form of exploit proof of concept. These scripts are useful for information security teams to verify the likelihood of a risk materializing and to adapt mitigation controls. They could be applied after mitigation controls have been implemented to verify their effectiveness. These target specific scripts were traditionally available only in dedicated ethical hacking and penetration testing projects.
Hybrid vulnerability assessment solutions enrich the arsenal of protection available to information security practitioners in this increasingly insecure cyberworld. With hybrid vulnerability scanners already available on the market, even those information systems identified as being low risk could be included in regular vulnerability assessments. Consequently organizational risk exposure would be more accurately measured and potential business impact further reduced. Indeed, the Financial Times intrusion would probably have been avoided if all their blog systems were systematically tested for security vulnerabilities.
Viktor Polic is adjunct faculty in information security and telecommunications at Webster University in Geneva and CISO at one of the UN's specialized agencies. | <urn:uuid:59e1dc49-3832-499f-87ba-4c0ed01bdd26> | CC-MAIN-2017-09 | http://www.csoonline.com/article/2134159/strategic-planning-erm/the-quest-for-weak-links-in-information-security.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00069-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941215 | 1,191 | 2.75 | 3 |
The internet protocol (IP) is the primary communications protocol for determining how data packets are routed around the internet and is responsible for the addressing system that ensures traffic is routed to the intended destination. The current version is IPv4 and has worked well for years, running in the background without anyone really worrying about it.
But IPv4 was developed when the internet was a smaller place. Ten years ago, there were slightly over 360 million internet users worldwide; by mid-2010, that had grown to around two billion. However, those numbers do not tell the whole story. Many people use more than one device to connect to the internet, often a mobile device in addition to a PC. As well as this, any manner of devices are becoming internet-enabled—from home appliances to medical equipment, networked cameras to intelligent transport systems, online gaming consoles to cars. It is estimated that there are currently five billion devices connected to the internet and that by 2020 that number will grow to some 50 billion. Each device needs an IP address to identify it on the network and there are simply not enough addresses available with IPv4.
Because of this, IPv6 was developed some years ago, offering a vastly expanded pool of available IP addresses. The transition to IPv6 is not optional as the internet and the number of devices connected to it continues to expand. There are many reasons for switching over to IPv6 beyond the fact that the number of available IP addresses is at exhaustion point—it offers security improvements over IPv4, such as mandatory use of IPSec for encryption and authentication, it offers auto-configuration for new devices connected to the network, it offers superior connections for mobile devices and improves peer-to-peer collaboration capabilities. However, there are also new security issues that it introduces that will need to be addressed, including an increased risk of distributed denial of service and buffer overflow attacks.
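To make the difference in scale concrete, Python's standard ipaddress module can be used to compare the two address spaces. This is a toy illustration only; the addresses shown are documentation-range examples, not real hosts.

import ipaddress

# Total addresses in each protocol's address space.
print(2 ** 32)    # IPv4: 4294967296
print(2 ** 128)   # IPv6: 340282366920938463463374607431768211456

# The same kind of host identifier written in both notations.
v4 = ipaddress.ip_address("192.0.2.10")
v6 = ipaddress.ip_address("2001:db8::a")
print(v4.version, v6.version)   # 4 6

# A single /64 IPv6 subnet already holds 2**64 addresses, i.e. roughly
# four billion times as many as the entire IPv4 address space.
print(ipaddress.ip_network("2001:db8::/64").num_addresses)   # 18446744073709551616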
According to network equipment and services vendors, those security risks can be mitigated. Of more concern are security issues that are not inherent in IPv6 per se, but rather concern the way that it is used and implemented. Misconfigurations are considered to be among the most important security issues since IPv6 is new, is considered to be complex, and there is a lack of implementation and policy guidance, training and available tools.
In an effort to test drive IPv6 implementations, 8th June 2011 was designated as IPv6 Day by the Internet Society. A wide variety of organisations participated in IPv6 Day, ranging from web content providers such as Facebook, Yahoo and Google, to service providers and telcos. The purpose of the day was to gather information about how IPv6 functions in a production environment, with a view to accelerating the momentum of its deployment worldwide and to working out how to iron out problems that are already known about, such as IPv6 brokenness, which is primarily related to misconfigured network equipment and faulty firewall settings.
IPv6 Day was not a flag day for worldwide implementation of IPv6, which will probably take a number of years. However, it was an important milestone in terms of uncovering the issues that will be involved in its deployment so that any problems can be solved. The results of IPv6 Day will be reported on in further articles on this blog. | <urn:uuid:8cf71152-56e8-447e-9540-cdd5cd45d29c> | CC-MAIN-2017-09 | http://www.bloorresearch.com/blog/security-blog/ipv6-day/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00421-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.968123 | 659 | 3.65625 | 4 |
Art of Storytelling
From the earliest cave dwellers to modern corporate executives, people have told stories to make a point. Stories are powerful because they speak to both reason and emotion. Plus, stories are memorable.
By telling stories in the workplace, you'll be better able to engage, convince, and influence others. You'll be more successful at getting your message across in conversations or presentations.
Storytelling is actually an art you already use every day. This course will help you discover your natural ability, refine it, and apply it to your business and workplace goals.
Prework for this course should be completed one week in advance and brought to class.
Anyone who wants to use storytelling to engage, convince, and influence others in conversations or presentations | <urn:uuid:8be55a3c-2110-4284-a735-2caf3876f700> | CC-MAIN-2017-09 | https://www.globalknowledge.com/ca-en/course/116600/art-of-storytelling/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00297-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957466 | 154 | 3.015625 | 3 |
Emerging reports on the two recent train crashes in Europe suggest, ever so sadly, that they could have been avoided. In Spain, the engineer appears to have been talking on the phone and looking at documents while speeding. In Switzerland, one driver seems to have failed to stop at a signal and let another train pass, leading to their collision. Human errors are human nature, but these particular mistakes carried tragic consequences.
In the aftermath of a major train crash, let alone a pair in close succession, it's imperative to remember that train accidents are exceedingly rare. That's more or less true around the world, Yonah Freemark reminds us at CNN.com, with high-speed rail in particular performing incredibly well. Writing here earlier this week, Emily Badger pointed out that Americans are less likely to die in a train crash than in a boating accident.
Of course the sad exceptions are what stick most in our collective minds. Perhaps the worst during this period was the September 2008 collision in Los Angeles between a Metrolink commuter train and a freight train that killed 25 people and injured more than a hundred. That accident was the result of human error, too, with startling similarities to what occurred recently in Europe: an engineer who was texting at the controls and, likely as a result, missed a stop signal.
If there's anything good about these incidents it's that they often spark a conversation about safety that's easy to defer in brighter times. Less than a month after the Metrolink crash, Congress adopted the Rail Safety Improvement Act of 2008. The law's biggest measure was the mandatory implementation by intercity and commuter trains (as well as some large freight lines) of a billion-dollar technology called "positive train control."
The promise of PTC, according to the legislation, was to eliminate precisely the types of human errors responsible for the Metrolink and similar crashes: "train-to-train collisions, over-speed derailments, incursions into established work zone limits, and the movement of a train through a switch left in the wrong position." | <urn:uuid:fecda9fb-7058-43b5-bfee-eb7e5c36365b> | CC-MAIN-2017-09 | http://www.nextgov.com/emerging-tech/2013/07/billion-dollar-technology-may-or-may-not-prevent-next-big-train-crash/67808/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00065-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.973625 | 419 | 2.71875 | 3 |
Taking pi to new frontiers
American, Japanese compute pi to 5 trillion digits on one powerful desktop PC
- By Kevin McCaney
- Aug 12, 2010
PI IN THE SKY. Pi, the number with an infinite decimal expansion that represents the ratio of a circle’s circumference to its diameter, has long held a fascination for people, even outside the sphere of mathematics.
It’s turned up in a poem by a Nobel laureate, a song by the singer Kate Bush in which she sings pi to 150 decimal places (and, apparently, gets some of them wrong) and sporting-event cheers at the Massachusetts Institute of Technology.
And seeing how far one could go in computing pi’s digits has been something of a sport itself among mathematicians, dating at least as far back as Archimedes.
After centuries of computing pi by hand, John Wrench and Levi Smith achieved a breakthrough in 1948, calculating pi to 1,120 decimal digits using a gear-driven calculator. When electronic computers came along, the numbers got longer, and earlier this month, the bar was raised again.
Alexander Yee and Shigeru Kondo, an American computer science student and a Japanese systems engineer, reported on the site Numberworld that they had achieved a new world record, computing pi to 5 trillion digits on a single desktop PC.
And this was no ordinary desktop PC. It was a machine built by Kondo with two 3.33 GHz Xeon X5680 processors (12 physical cores, 24 hyperthreaded), 96GB of DDR3 RAM and three hard drives. It was running Windows Server 2008 R2 Enterprise x64. Yee wrote that their goal was to test the limits of hardware as much as it was to compute pi. Even with all that power, the computation took 90 days. When they were finished, the compressed output of decimal and hexadecimal digits took up 3.8TB, they reported.
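Yee’s y-cruncher program reportedly relies on the Chudnovsky series plus heavy parallelism and disk-based arithmetic; none of that is shown here. The toy sketch below just illustrates the flavor of digit hunting with a much older tool, Machin’s arctangent formula, using plain Python integer arithmetic (the ten guard digits are an arbitrary safety margin).

def arctan_inv(x, digits):
    # arctan(1/x) scaled by 10**(digits + 10), via the alternating Taylor series.
    scale = 10 ** (digits + 10)
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def machin_pi(digits):
    # Machin (1706): pi = 16*arctan(1/5) - 4*arctan(1/239); then drop the guard digits.
    scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return scaled // 10 ** 10

print(machin_pi(30))   # 3141592653589793238462643383279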
Computing to that many digits might not have a lot of practical applications — you’d need a mere 39 digits of pi to make a circle the size of the observable universe accurate to within one hydrogen atom, according to the Wikipedia entry on pi – but it’s the sport that counts. How often does anyone get to chase the infinite?
Kevin McCaney is a former editor of Defense Systems and GCN. | <urn:uuid:b533db7a-c397-422c-8406-95cc19aab35c> | CC-MAIN-2017-09 | https://gcn.com/articles/2010/08/16/technicalities-computing-pi.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00241-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958523 | 483 | 3.15625 | 3 |
The VoIP Peering Puzzle, Part 3: The IETF SPEERMINT Architecture
Our previous tutorial investigated the work that the IETF has undertaken to support the transmission of real time information over Internet Protocol (IP)-based internetworks, including the worldwide Internet. We discovered that this work falls within the Real-Time Applications and Infrastructure research Area. A working group within that Area, called SPEERMINT (which stands for Session PEERing for Multimedia INTerconnect), is investigating the concepts of IP network peering in general, and peering of networks that use the Session Initiation Protocol (SIP) in particular (see http://www.ietf.org/html.charters/speermint-charter.html). This working group has produced a number of Internet Draft documents that describe their work, including one that defines the SPEERMINT Peering Architecture, which we will examine in this tutorial (see http://www.ietf.org/internet-drafts/draft-ietf-speermint-architecture-02.txt). (As before, it should be noted that Internet Drafts are works in progress, and are subject to revision, until such time that they are approved by the Working Group and other IETF authorities as Request for Comments or RFC documents.)
To begin, let's look at what this architecture is designed to accomplish. To quote from the SPEERMINT working group charter:
SPEERMINT focuses on architectures to identify, signal, and route delay-sensitive (real-time) communication sessions. These sessions use the SIP signaling protocol to enable peering between two or more administrative domains over IP networks. Where these domains peer, or meet, the establishment of trust, security, and a resistance to abuse and attack are all important considerations. Note that the term "peering" is used here to refer to the interconnection between application layer entities such as SIP servers, as opposed to interconnection at the IP network layer.
In other words, the SPEERMINT work goes beyond the mere interconnection of networks, and extends into higher layer (i.e., OSI Layers 5-7) functions, including signaling, trust (or authentication), and security. Note that these issues are perhaps not so great a concern for users of the Public Switched Telephone Network (PSTN), as signaling, authentication, and security are functions that the network has dealt with for many decades. But for those enterprises that are moving out from under the PSTN "security blanket," new ways must be found to provide these necessary functions.
In addition, the SPEERMINT architecture identifies a number of possible peering scenarios:
- Enterprise to enterprise across the public Internet
- Enterprise to service provider across the public Internet
- Service provider to service provider across the public Internet
- Enterprise to enterprise across a private Layer 3 network
- Enterprise to service provider across a private Layer 3 network
- Service provider to service provider across a private Layer 3 network
The architecture itself defines three layers, a Location Function (LF), a Signaling Function (SF) and a Media Function (MF). Recall from our previous tutorial that the SPEERMINT work is focused on what is called Layer 5 (Session) peering, in which the SIP methods are employed. This being the case, we can make the assumption that the lower layer functions (e.g., OSI Layers 1-4) are handled by other network processes, and that the focus of the SPEERMINT architecture and functionality is at OSI Layers 5-7, as also noted above. These architectural functions are described as follows:
The Location Function (LF) develops call routing data (CRD) by discovering the Signaling Function (SF), and end user's reachable host (IP address and port). Before this process occurs, however, the provider that is initiating the call determines if the target (destination) address is one that can be completed without going outside that provider's network (or the provider's federation, if a multilateral peering relationship has been established). If that provider determines that an extra-network call is required, then the LF function is employed to determine the call routing data. Examples of this function include ENUM (Electronic Numbers), routing tables, SIP Domain Name Service (SIP DNS), and the SIP Redirect Server. We will explore some of these functions in greater detail in future tutorials.
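As an aside, the ENUM piece of the Location Function is easy to illustrate: an E.164 telephone number is reduced to digits, reversed, dotted, and suffixed with e164.arpa, and the resulting domain name is then queried for NAPTR records that point at a SIP URI. A minimal sketch of that mapping follows; the phone number is made up, and the DNS query itself is omitted.

def e164_to_enum_domain(number):
    # Strip to digits, reverse them, join with dots, and append the ENUM suffix.
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(e164_to_enum_domain("+1-202-555-0100"))
# 0.0.1.0.5.5.5.2.0.2.1.e164.arpa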
The Signaling Function (SF) performs routing of SIP messages, optionally performs termination and re-initiation of calls, also optionally implements security and policies on SIP messages, and assists in discovery/exchange of parameters to be used by the Media Function (MF). The routing of SIP messages is performed by SIP proxies. The optional termination and re-initiation of calls is performed by Back to Back User Agents (B2BUAs), that are described in RFC 3261, section 6 (see ftp://ftp.rfc-editor.org/in-notes/rfc3261.txt). The SF may also perform additional functions such as Session Admission Control, SIP denial of service (DoS) protection, SIP topology hiding, SIP header normalization, and SIP security, privacy, and encryption.
The Media Function (MF) performs media related functions such as media transcoding and media security implementation between two SIP providers. An example of this function would be the transformation of a voice payload from one encoding, such as G.711, to another, such as EvRC (Enhanced variable rate codec, which is used with CDMA wireless networks).
With this foundation insight into a peering architecture, our next tutorial will dig deeper, and examine the actual messages that are exchanged between these layers in order to complete the end-to-end connection.
Copyright Acknowledgement: © 2006 DigiNet ® Corporation, All Rights Reserved
Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons. | <urn:uuid:f72a2dec-8045-42bf-917c-c829310c75fb> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/unified_communications/The-VoIP-Peering-Puzzle151Part-3-The-IETF-SPEERMINT-Architecture-3646146.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00593-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.918101 | 1,280 | 3.15625 | 3 |
The Pros and Cons of Using Digital Rights Management With Video Games
Digital rights management, more commonly shortened to DRM, is defined as "a class of technologies that are used by hardware manufacturers, publishers, copyright holders, and individuals with the intent to control the use of digital content and devices after sale." This is accomplished through the use of code inserted into the media, restricting access to only the person who has purchased it.
Online and cloud gaming currently utilize digital rights management in a big way, when games are purchased over the internet. This comes in the form of persistent online authorization, a limited number of game installs and even protection against tampering with the game’s code.
So is digital rights management the best way to control video games? Is it something that will last into the foreseeable future? Let’s take a look at some of the pros and cons to using digital rights management in the video gaming industry.
Content Protection: This reason is the main driving force behind using digital rights management. Video games using digital rights management are subjected to the aforementioned restrictions, making the possibility to sell or give unauthorized copies much more difficult.
Content Availability: With digital rights management protection in place, more companies are willing to put high-end content up for sale on the internet. This is due to a much decreased worry of having the content illegally copied and sold.
Trials: One of the most effective marketing strategies when it comes to video games is allowing the player to try before they buy. In the past, this was only able to be accomplished by releasing two separate versions, one a trial and one full. Using digital rights management, a time limit can be set on how long the full version of the game can be played before payment is required.
Restrictive: Video games that use digital rights management contain a number of restrictions that can adversely affect a player’s experience. These can include the need for a constant internet connection and the inability to make backup copies of games that you own.
Privacy: Digital rights management safeguards are linked to the specific person who purchased the video game, usually by way of a signup or registration process. This provides an easy way for companies to track which content a person purchases, potentially trampling on individual privacy rights.
Server Problems: While cloud gaming is defined by how it is conducted solely over the internet, traditional gaming is not always restricted by these parameters. However, with digital rights management, no matter what game you are playing, a connection with the internet is mandatory. Thus, if there is any problems on the server side, you would be unable to play the game you purchased.
The argument over whether or not digital rights management should be used in the coming future is a hot one. What is your opinion on the usage of digital rights management? Let us know in the Comments section below!
By Joe Pellicone | <urn:uuid:871ee726-4f11-48dc-99ad-956489ef60ab> | CC-MAIN-2017-09 | https://cloudtweaks.com/2014/04/pros-cons-digital-rights-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00593-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938721 | 585 | 2.53125 | 3 |
NOAA aquatic bots break the ice on climate research
- By John Breeden II
- Nov 06, 2012
The Apollo 17 crew reported that from space, Earth looks like a big blue marble. But ironically, we know very little about the vast and diverse oceans that make our planet blue.
“The ocean is a tremendous resource, yet we don’t understand it enough, how to get more food from it, the effects of it absorbing more carbon, or what will happen as it becomes more acidic,” said Christian Meinig, the director of NOAA’s Pacific Marie Environmental Laboratory. “In some ways we know more about deep space than our own oceans.”
Nowhere is that more true than in the Arctic Ocean. It’s a harsh place by any standard, and any explorer faces many obstacles. Much of the year it’s snowed over. The seas are unusually calm, but clouds block out the sky more than half the time. Its sea floor is ringed by pingos, mountains of sunken rock and ice that form and move over time. Like hidden underwater teeth, they can rip out the bottom of unprepared and unfortified vessels. Temperatures are also below freezing most of the time.
Over the years, NOAA has sent various data-taking probes into this netherworld because the Arctic Ocean has become invaluable when looking for answers about climate change. Recording temperatures, especially in columns of water going down six meters below the surface, could help scientists predict future key climate changes. Data could be shared with NOAA’s Fisheries Service, for example, to create smarter seafood management plans.
NOAA’s Meinig thought robots might handle these data collection tasks capably -- and relatively inexpensively, so he enlisted Liquid Robotics Inc., of Sunnyvale Calif., to help. A set of the company’s Wave Glider robots are currently swimming off the coasts of California and Hawaii.
According to Liquid Robotics senior vice president for product management Graham Hine, Wave Gliders are perfect for unmanned exploration because they are propelled by wave power. As a consequence, “they can operate for months at a time, never need to stop for refueling and are robust enough to survive in tough conditions,” he said.
Wave Gliders have proven to be surprisingly robust in the field for day-to-day operations, but for the Arctic mission, extreme cold was the enemy. Batteries don’t perform as well in cold environments, and the 90 percent cloud cover could limit charging the solar panels that power the instruments. “We added a supplemental lithium-ion battery pack with a 1,200-watt-hour capacity into one of the bays,” Meinig explained. “If the Wave Glider couldn’t charge, the extra batteries had enough power to run the instruments for 15 days.”
For scientific readings, six ThermArray thermistors from RTS Instruments of Maple Ridge, Canada, were molded into the tether that connects the float with the submarine, to record temperature at various depths.
Meinig explained that this first mission with the robots was half science and half proof of concept. “Our focus is how we best provide high-quality, low-cost ocean observing systems for research and operations,” Meinig said. “We’re constantly working towards that goal.”
On the low-cost side of the equation, NOAA rented a small boat and sent only two employees to launch the Wave Gliders at Prudhoe Bay, Alaska. The same team was tasked to retrieve the boats at the end of their two-month journey.
For the scientific part of the mission, the robots followed each other at 12-hour intervals. So if one robot took readings at noon, the second would take the same sample at midnight. This allowed study of diurnal heating effects. “We held one on station to see if it could stay in place, while the second swam away,” Meinig said. “They both performed well overall.” They stayed in the Arctic for two months. Between them, they took over 900,000 temperature readings, the most ever in a survey of that part of the world.
Even though this mission only recorded temperature and telemetry, Meinig said there were many other factors that could be recorded by different instruments using the Wave Gliders. “We certainly plan on doing more with them now that we know we have an unmanned vehicle that can be launched and later recovered inexpensively by two people in a small boat,” he said.
That’s a big achievement for a robot that began as a tiny model floating in a fish tank at Liquid Robotics headquarters in California. Today it is swimming the world’s oceans from the Pacific to the Arctic, operating at lower costs to the public and taking hard-to-get readings that one day might help preserve the planet.
John Breeden II is a freelance technology writer for GCN. | <urn:uuid:a5065b77-3122-4fa4-a9a3-4c0d8a7ffcf7> | CC-MAIN-2017-09 | https://gcn.com/articles/2012/11/06/noaa-aquatic-bots-climate-research.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00593-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.955276 | 1,043 | 2.96875 | 3 |
Big data—the kind that statisticians and computer scientists scour for insights on human beings and our societies—is cooked up using a recipe that’s been used a thousand times. Here’s how it goes: Acquire a trove of people’s highly personal data—say, medical records or shopping history. Run that huge set through a “de-identification” process to anonymize the data. And voila—individuals become anonymous, chartable, and unencumbered by personal privacy concerns.
So what’s the problem? It turns out that all that de-identified data may not be so anonymous after all.
So argues Arvind Narayanan, a Princeton computer scientist who first made waves in the privacy community by co-authoring a 2006 paper showing that Netflix users and their entire rental histories could be identified by cross-referencing supposedly anonymous Netflix ratings with the Internet Movie Database. Narayanan and fellow Princeton professor Edward Felten delivered the latest blow to the case of de-identification proponents (those who maintain that de-identification is viable) with a July 9 paper that makes a serious case for data paranoia.
They argue that de-identification doesn’t work—in theory or in practice—and that those who say it does are promoting a “false sense of security” by naively underestimating the attackers who might try to deduce personal information from big data. Here are Narayanan and Felten’s main points:
Personal location data isn’t really anonymous
A 2013 study showed that given a large dataset of human mobility data collected from smartphones, 95 percent of individuals were uniquely identifiable from as few as four points—think check-ins or shared photos with geo-location metadata. Even the most devout de-identificationists admit there’s no robust way to anonymize location data.
Experts don’t know how vulnerable data is
In a case study of the meticulously de-identified Heritage Health Prize dataset, which contains the medical records of 113,000 patients, the University of Ottawa professor and de-identification expert Khaled El Emam estimated that less than 1 percent of patients could be re-identified. Narayanan, on the other hand, estimated that over 12 percent of patients in the data were identifiable. If an attack is informed by additional, specific information—for example, in an attempt to defame a known figure by exposing private information—it could be orders of magnitude easier to finger an individual within a dataset.
De-identification is hard, and re-identification is forever
De-identifying data is challenging and error-prone. In a recently released dataset of 173 million taxi rides in New York City, it turned out that individual taxis, and even their drivers, could be identified because the hashing (a mathematical function that disguises numbers) of license plate numbers in the data was shoddy.
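A rough sketch of why that hashing failed: because the space of valid medallion numbers is tiny, anyone can hash every candidate and build a reverse-lookup table. The medallion format assumed below is a simplification for illustration, not the actual NYC scheme.

```python
import hashlib
import string
from itertools import product

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

# Assume (for illustration) medallions look like "5X55": digit, letter, digit, digit.
digits, letters = string.digits, string.ascii_uppercase
rainbow = {
    md5_hex(f"{d1}{l}{d2}{d3}"): f"{d1}{l}{d2}{d3}"
    for d1, l, d2, d3 in product(digits, letters, digits, digits)
}  # about 26,000 entries -- built in well under a second

anonymized_record = md5_hex("7B82")   # what the "de-identified" dataset contained
print(rainbow[anonymized_record])     # "7B82" -- the supposedly anonymous medallion
```

Hashing only protects an identifier when the space of possible inputs is too large to enumerate; license plates and medallion numbers are not.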
The thing is, when a person's anonymity is publicly compromised, it's immortalized online. That can be an even worse problem than a data breach at a company or web app. When a company's security is breached, cleanup is messy but doable: the flaw is patched, users are alerted, and life goes on. Abandoning a compromised account is feasible; abandoning an entire identity is not.
So should we smash our smartphones, swear off health care, and head for the hills? Not according to the de-identification defender El Emam. He points out that Narayanan did not actually manage to re-identify a single patient in the Heritage Health Prize dataset. “If he is one of the leading re-identification people around,” El Emam says, “then that is pretty strong evidence that de-identification, when done properly, is viable and works well.”
That's good news for all us human beings who make up big data. But just because the anonymity of big data hasn't been definitively broken yet doesn't mean it's unbreakable.
As the real and figurative dust begins to settle in Japan following the massive earthquake and tsunami, the grim evaluation of damage is just beginning in terms of life, property and increasingly, business.
Today Japanese markets went live and back to work, and to some extent, so did some of the country's largest companies. Honda, Sony and others were forced to shut down for an extended period, but otherwise Japan has been trying to push forward, if with a sense of sad defiance in the face of the mounting tragedies—human, environmental, structural and otherwise.
The assessments extend far beyond Japan’s borders, at least on the human front as millions look to cloud-based platforms to share and receive important news and information from a broad spectrum of worldwide sources.
As Dr. Jose Luis Vazquez-Poletti discussed this morning, “following the first hints of news about the tragedy in Japan, people around the world turned to the Internet to find different formats for information—not just mass media coverage, but also firsthand impressions left on personal websites, blogs and social media outlets…a combination social networks and the principles of cloud computing became the primary source for information gathering and sharing.”
Indeed, the convergence of cloud computing and the incredible breadth of tools to harness it for massive, real-time communication and collaboration shows the power of ICT developments like cloud-based services to aid during times of national emergency.
This communications side of the cloud story is striking in its scope; families and agencies sharing updates in near real-time, distributed coordination of search and rescue operations across any number of hosted platforms. However, there is another angle of cloud computing that emerges during major crises.
Reliance on clouds as the main artery for communications and even business continuity following mobile phone and related disruptions is advantageous, but what if those networks or data storehouses are obliterated or, at the very least, temporarily knocked out following exhaustion of backup power?
Just as with many other critical elements of infrastructure, a few of Japan’s datacenters have been affected by the tragedy. Rather than being due to direct damage to structures, however, the failures appear to be due to rolling blackouts and extended power outages. While they are not as widespread as one might imagine given the scope and magnitude of the damage, this is nonetheless causing issues for those who rely on cloud-based services in the country.
ZDnet Japan has been maintaining an updated list of affected datacenters with short descriptions of current challenges showing that some datacenters are faring better than others. Overall, despite some serious breakdowns in ICT infrastructure, the country’s clouds have been protected by a variety of power and data backup methods.
According to reports, among the hardest hit in the data market was NTT Communications—one of Japan’s largest providers of data and communication services. On Friday they lost their IP-VPN connection and were closely monitoring the exterior of the building holding one of its datacenters. In a statement issued on Friday the company noted that “due to earthquakes in the Tohoku region NTT has failed in some of our services.” NTT apologized to its customers but claimed that backup power supplies for its other datacenters have extended capabilities.
Announcements from the Japanese Ministry of Internal Affairs have emerged about severed communication networks, including KDDI’s undersea cables.
Despite these and other major ICT infrastructure failures, there are a number of companies reassuring customers that even in the face of power loss their data is still safe.
Earlier this month Amazon Web Services announced the availability of its cloud computing services to the Tokyo area with the launch of a new datacenter. While the exact location of the data storehouse was withheld, in a statement about its new Japanese reach, one of Amazon’s spokespeople behind the move stated that “developers in Japan told me that latency and in-country data storage are of great importance to them.”
It is quite likely that, based on these specific concerns and the fact that they were highlighted in a relatively sparse release, the datacenter is located somewhere in the heart of Tokyo, which suffered a great deal of damage although not as much as other coastal cities touched by the massive tsunami.
According to Amazon, however, the datacenter has emerged unscathed and for all intents and purposes, it's business as usual—at least in terms of its cloud offering in the region. Furthermore, as one might imagine, AWS has some exhaustive backup and recovery plans, including stores off-site and off-continent.
On its status page, which shows real-time outage or interruption events by region, Amazon's services all seem to have the green light. However, it notes that while it does not believe there will be an interruption in service, it is a possibility. As the company's message to Asia-Pacific AWS users states:
“There are planned Tokyo Electronic outages scheduled over the next few weeks, starting Monday morning (Japan time). We have been re-validating our back-up power capability so that customers have the least interruption possible.”
A number of U.S.-based companies are jumping into the fray to offer assistance to businesses, non-profits and government agencies via cloud-based software. For instance, yesterday IBM Japan announced that it would be providing free LotusLive services until the end of July to ensure the necessary “means of information sharing and email targeted at local governments and nonprofit organization for supporting browser-based activities.”
Japan's leading internet provider IIJ has stated that it is providing free access to cloud-based resources from its unaffected datacenter location, via a rapidly deployed server setup in the Kansai area it claims will be unaffected by power outages and rolling blackouts. Although the translation is approximate, the company notes that "traffic information and safety confirmation as well as railways operation are supported in this infrastructure for delivering information as quickly as needed—IIJ is doing all it can to support various server engineers."
Microsoft had an office in one of the worst affected areas, Sendai. In addition to offering words of concern and condolences, the company announced that it would be providing monetary and software donations to Japan.
According to a report, this assistance includes free incident support for those with damaged facilities and “free temporary software licenses for customers, non-profits and relief agencies.”
Microsoft has also opened a cloud-based disaster recovery portal on its Windows Azure for officials to use for collaboration and communications.
Similar efforts were underway, although on a smaller scale, following New Zealand’s earthquake, which rocked Christchurch and put data backup worries on center stage.
In fact, now that the tidal wave of shock is turning slowly into recognition of the gravity of the situation, today has sparked a number of conversations around the web highlighting the value of having a contingency plan and reliable backup and recovery options. These have saved many of the datacenters, both in terms of backup power and datastores, but some companies that had been reliant on on-site systems might not have fared well.
Many of these same backup and recovery-related conversations emerged immediately following the Christchurch earthquake not long ago. ISC Research community manager Ullrich Loeffler predicted that many companies that were displaced after the tragedy were unlikely to reinvest in their own IT infrastructure. He stated that many of the companies that were forced to line up in queues to try to salvage hard drives and other physical information stores would begin considering the cloud option. Still, Loeffler made it clear that firms would turn to the cloud as a precautionary measure, explaining that “companies only tend to turn to cloud-based or hosted solutions when they need to refresh their systems.”
While Loeffler’s statement that the cloud is not a precautionary measure might ring true in the abstract, there were a number of tales of cloud-based backup and recovery solutions being deployed directly as precautionary measures. This was especially the case in Christchurch where businesses were given a wake-up call in the form of an initial, less severe quake that rocked the town—and swayed the confidence of a number of businesses with mission-critical data stores at the heart of their operations.
The New Zealand Herald reported on a number of companies that found that their decision to deploy cloud-based solutions saved their businesses following the destruction of their offices. Software company EMDA, which supplies software for supply chain and manufacturing businesses, had just reevaluated its backup and recovery plan to include both on- and off-site backups following the first earthquake.
Although the tragedy could have sparked a much more serious data problem, especially if the epicenter had been closer to Tokyo where a number of datacenters and communications hubs are centered, it does serve as a reminder about the value and risks associated with cloud-based business models. Chances are any organization that has decided to put all or some of its data in the cloud, especially public clouds, has granted significant attention to the issue of reliability and backup. Still, for smaller companies this might be a secondary consideration.
It is difficult to focus on this one element of a tragedy that is so broad in scope that it is almost impossible for the mind to process. We can take our cues from the strong decision to move forward with markets on this Monday following such dramatic loss of life and property, however, and look ahead to see how the challenges from this event can help other countries better prepare for disaster on the cloud and communications level.
Just as the earthquake and tsunami in Japan has caused a massive look inward for countries reliant on nuclear power, this should also be a living example of considering contingency planning options for data protection and loss prevention.
A year ago it seemed Sony couldn't even get a laptop battery right. A massive recall of lithium-ion cells tainted its image and had the company scrambling, but on Thursday it reported a sweet breakthrough in bio battery technology.
Sony, one of the world's largest battery makers, said it had succeeded in creating a battery that produces electricity by breaking down sugar. The bio cell, a cube measuring 39 millimeters on each side, delivers 50 milliwatts (mW)—a world record for such a cell, according to the company.
A video provided by Sony shows four of the cells connected in series delivering enough energy to power a Walkman music player. The battery uses glucose solution as a fuel. A second video shows a small fan being powered by the cell with a glucose-based sports drink used as the fuel.
As in other cells, power is produced through a flow of electrons between a cathode and anode.
In the bio cell, sugar-digesting enzymes at the anode extract electrons and hydrogen ions from the glucose. The hydrogen ions pass through a membrane separator to the cathode where they absorb oxygen from the air to produce water as a byproduct. The electrons flow around the circuit outside the device, producing the electricity needed to power it.
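The half-reactions at work correspond to the textbook glucose/oxygen fuel cell, sketched below. The article does not name Sony's specific enzymes, so this is the generic two-electron version rather than the company's exact chemistry.

```latex
\text{Anode: } \mathrm{C_6H_{12}O_6 \;\longrightarrow\; C_6H_{10}O_6 + 2H^+ + 2e^-}
\qquad
\text{Cathode: } \mathrm{\tfrac{1}{2}O_2 + 2H^+ + 2e^- \;\longrightarrow\; H_2O}
```

The net result is glucose and oxygen in, gluconolactone and water out, with the electrons routed through the external circuit as usable current.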
Details of the bio battery were accepted as a paper at the 234th American Chemical Society National Meeting and Exposition that is taking place this week in Boston.
Sugar is naturally occurring, so the technology could be the basis for an ecologically friendly energy source. Companies like Sony are researching numerous technologies that could replace the dominant lithium ion cells as a clean power source for portable electronics.
One of the most talked about is fuel cell technology. While hydrogen-based cells have taken off for home or automobile use, versions based on methanol for use in electronics products have yet to be commercialized. Toshiba and NEC are among the companies that promised methanol fuel cell-based laptops in previous years, but each time technology launches have been delayed.
For the first time since Stuxnet was discovered in 2010, researchers have publicly named the worm's original victims: five Iranian companies involved in industrial automation.
Stuxnet is considered to be the first known cyberweapon. It is believed to have been created by the U.S. and Israel in order to attack and slow down Iran's nuclear program.
The worm, which has both espionage and sabotage functionality, is estimated to have destroyed up to 1,000 uranium enrichment centrifuges at a nuclear plant near the city of Natanz in Iran. It eventually spread out of control and infected hundreds of thousands of systems worldwide, leading to its discovery in June 2010.
Security researchers from Kaspersky Lab and Symantec reported Tuesday that while the nuclear facility at Natanz might have been the ultimate target of Stuxnet's creators, the initial victims were five Iranian companies with likely ties to the country's nuclear program. Their reports coincided with the release of "Countdown to Zero Day", a book about Stuxnet by journalist Kim Zetter, that is partially based on interviews with researchers who investigated the threat.
Every time Stuxnet executes on a computer it saves information about that computer inside its executable file. This information includes the computer's name, its IP address and the workgroup or domain it's part of. When the worm spreads to a new computer it adds information about the new system to its main file as well, creating a trail of digital breadcrumbs.
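Reconstructing the infection path from those breadcrumbs is conceptually simple: merge the trails embedded in every recovered sample and sort the hops by time. The sketch below shows the idea with a made-up record layout; the real format inside the Stuxnet executable is more involved, and the host and domain names here are illustrative only.

```python
from datetime import datetime

# Each recovered sample carries the trail of machines it passed through.
# Field names and values are illustrative, not the actual on-disk format.
samples = [
    [
        {"host": "ENG-PC1", "domain": "DOMAIN-A", "time": "2009-06-23T08:10"},
        {"host": "ENG-PC7", "domain": "DOMAIN-A", "time": "2009-06-25T14:02"},
    ],
    [
        {"host": "SRV-03", "domain": "DOMAIN-B", "time": "2009-07-07T09:30"},
        {"host": "WS-12", "domain": "DOMAIN-C", "time": "2010-04-24T11:45"},
    ],
]

# Merge every trail and order the hops chronologically to see where each
# chain of infections started and how it spread.
hops = sorted(
    (rec for trail in samples for rec in trail),
    key=lambda r: datetime.fromisoformat(r["time"]),
)
for rec in hops:
    print(rec["time"], rec["domain"], rec["host"])
```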
"Based on the analysis of the breadcrumb log files, every Stuxnet sample we have ever seen originated outside of Natanz," Symantec researcher Liam O Murchu said in a blog post. "In fact, as Kim Zetter states, every sample can be traced back to specific companies involved in industrial control systems-type work. This technical proof shows that Stuxnet did not escape from Natanz to infect outside companies but instead spread into Natanz."
The Kaspersky Lab researchers reached the same conclusion and they even named the companies they believe might have served as "patient zero."
The 2009 version of Stuxnet, dubbed Stuxnet.a, was compiled on June 22, 2009, based on a date found in the collected samples. A day later it infected a computer that, according to the Kaspersky researchers, belonged to a company called Foolad Technic Engineering Co. that's based in Isfahan, Iran.
This company creates automated systems for Iranian industrial facilities and is directly involved with industrial control systems, the Kaspersky researchers said. "Clearly, the company has data, drawings and plans for many of Iran's largest industrial enterprises on its network. It should be kept in mind that, in addition to affecting motors, Stuxnet included espionage functionality and collected information on STEP 7 projects found on infected systems."
On July 7, 2009, Stuxnet infected computers at another Iranian company called Neda Industrial Group, which, according to the Iran Watch website, was put on the sanctions list by the U.S. Department of Justice for illegally manufacturing and exporting commodities with potential military applications.
On the same day, Stuxnet infected computers on a domain name called CGJ. The Kaspersky researchers are confident that those systems belonged to Control-Gostar Jahed, another Iranian company operating in industrial automation.
Another Iranian industrial automation vendor infected in 2009 with Stuxnet.a was Behpajooh Co. Elec & Comp. Engineering. This company was infected again in 2010 with Stuxnet.b and is considered patient zero for the 2010 Stuxnet global epidemic, the Kaspersky researchers said.
"On April 24, 2010 Stuxnet spread from the corporate network of Behpajooh to another network, which had the domain name MSCCO," the researchers said. "A search for all possible options led us to the conclusion that the most likely the victim is Mobarakeh Steel Company (MSC), Iran's largest steel maker and one of the largest industrial complexes operating in Iran, which is located not far from Isfahan, where the two victims mentioned above -- Behpajooh and Foolad Technic -- are based."
"Stuxnet infecting the industrial complex, which is clearly connected to dozens of other enterprises in Iran and uses an enormous number of computers in its production facilities, caused a chain reaction, resulting in the worm spreading across thousands of systems in two or three months," the Kaspersky researchers said.
Another company infected in 2010 with Stuxnet.b was Kalaye Electric Co., based on a domain name called KALA that was recorded in malware samples. This was the ideal target for Stuxnet, because it is the main manufacturer of the Iranian uranium enrichment centrifuges IR-1.
"Thus, it appears quite reasonable that this organization of all others was chosen as the first link in the infections chain intended to bring the worm to its ultimate target," the Kaspersky researchers said. "It is in fact surprising that this organization was not among the targets of the 2009 attacks."
The attackers behind Stuxnet had one problem to solve -- how to infect computers in a facility like the one at Natanz that had no direct Internet connections, the Kaspersky researchers said. "The targeting of certain 'high profile' companies was the solution and it was probably successful."
Many people and businesses unknowingly leave their private information readily available to hackers because they subscribe to some common myths about computer and network security. But knowing of the facts will help you to keep your systems secure. Here are some answers to these myths.
MYTH: “I have virus protection software so I am already secure.”
FACT: Viruses and security threats are two completely different things. Your anti-virus software will not tell you about any of the more than 10,000 security threats for which a good vulnerability assessment will test your network. These include whether your financial or customer records are exposed to the Internet or whether your computer is vulnerable to various hacker attacks.
MYTH: “I have a firewall so I don’t need to worry about security threats.”
FACT: Firewalls are great and typically provide a good layer of security. However, firewalls commonly perform services such as port forwarding or network address translation (NAT). It is also surprisingly common for firewalls to be accidentally misconfigured (after all, to err is human). The only way to be sure your network is really secure is to test it. Among the thousands of security threats a good analysis tests for, there is an entire category specifically for firewall vulnerabilities.
MYTH: “I have nothing to worry about; there are too many computers on the Internet.”
FACT: People understand the need to lock their homes, roll up their car windows, and guard their purses and wallets. Why? Because if you don’t then sooner or later you will be a victim. But people are just starting to be aware that the same is true with their computers and networks. A single hacker can scan thousands of computers looking for ways to access your private information in the time it takes you to eat lunch.
MYTH: “I know the security of my network and information is important, but all the solutions are too expensive and/or time consuming.”
FACT: While it is true that some network security products and services are very expensive and time consuming, you can find good network analysis tools that are very robust, efficient and effective, yet still affordable.
MYTH: “I can’t do anything about my network’s security because I’m not a technical wizard.”
FACT: While network security is a technical problem, a sound remote analysis report should provide a solution that is comprehensible to non-technical people and geeks alike. If it’s a true remote automated system you won’t have to download, install or configure anything. A good report will include a business analysis that explains technical issues in plain English with plenty of charts, graphs, and overviews to illustrate it. It must be easily comprehensible by non-technical business people and home users.
MYTH: “I know what is running on my computer and I am sure that it is secure.”
FACT: Only 2% of networks receive a perfect score on our security scans. That means 98% of them have one or more possible security threats or vulnerabilities. These threats could exist in your operating system, the software you run, your router/firewall or files.
MYTH: “I tested my network a few months ago, so I know it is secure.”
FACT: New security threats and vulnerabilities are discovered daily. Telspace has a database of security threats that grows by 5-10 new vulnerabilities every week. Sometimes we have even seen more than 80 new security threats crop up in a single month! Just because your network tested well this month, does not mean it will still be secure next month – even if you don’t change anything. You should frequently update your anti-virus software and analyse your security regularly.
MYTH: “Network and computer security is only important for large businesses.”
FACT: In reality, nothing could be further from the truth. Whether you are a casual home user or a large enterprise, your computer contains valuable and sensitive information. This could be financial records, passwords, business plans, confidential files and any other private data. In addition to your private information, it is also important to protect your network from being used in denial of service attacks, as a relay to exploit other systems, as a repository for illegal software or files, and much more.
MYTH: “A ‘port scan’ is the same thing as a security analysis scan and some web sites already give me that for nothing.”
FACT: Actually a port scan and a security analysis scan are two very different things. In general terms your computer’s Internet connection has 65,535 unique service ports. These ports are used both by software running on your computer and by remote servers sending data to your computer (when you view a web page or check your email). A port scan will simply tell you which service ports are being used on your computer. It does not test any of these ports for security threats nor does it tell you where your network is vulnerable to possible hackers or attacks.
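To see the distinction, here is a bare-bones TCP connect scan. All it can report is whether a port accepts connections -- it says nothing about whether the service behind that port is misconfigured or vulnerable, which is what a real security analysis has to determine. The host and port list are placeholders.

```python
import socket

def scan(host, ports, timeout=1.0):
    """Report which TCP ports accept a connection -- nothing more."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable -- a port scan cannot tell you why
    return open_ports

if __name__ == "__main__":
    print(scan("127.0.0.1", [22, 80, 443, 3389]))
```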
MYTH: “The best time to deal with network security is when a problem arises.”
FACT: The best time to deal with network security is right now, before a problem arises and to prevent you from ever becoming a victim. Think about it – the best time to lock the doors in your home is before a robbery occurs. Afterwards it is already too late, the damage has been done. This is why it is critical to analyse your network’s security now, to find and fix the vulnerabilities before a break-in happens.
The Crepate virus was found in Italy in August 1993. It has some quite advanced methods, such as variable encryption, multipartite infection and semi-stealth capabilities. Crepate infects hard disk master boot records, floppy DOS boot records and COM, EXE and Overlay files.
Depending on the settings of your F-Secure security product, it will either automatically delete, quarantine or rename the suspect file, or ask you for a desired action.
More scanning & removal options
More information on the scanning and removal options available in your F-Secure product can be found in the Help Center.
You may also refer to the Knowledge Base on the F-Secure Community site for more information.
The virus has several phases of execution. When an infected file is executed on an uninfected system, it will install its own modified boot sector on the primary hard disk. The original boot sector is stored along with the virus code, which all in all occupies 7 sectors. The virus makes no attempt to mark these sectors as bad; the contents of the selected sectors are simply overwritten. After this stage has been completed, the virus will check the date by reading the Real Time Clock (INT 1Ah, function 4). If the day is the 22nd, the virus will completely format the primary hard disk.
The second phase of execution begins when the computer is rebooted. The boot sector code that the virus installed in the first phase will then load the main body of virus code into memory. The main virus code takes control of INT 1Ch, installing a routine which checks if COMMAND.COM has been loaded by the system boot code. After installing this routine, the virus executes the original boot code and the boot process continues as normal. Since INT 1Ch is called 18.2 times a second, the virus can continually monitor whether COMMAND.COM has been loaded yet. Once COMMAND.COM is loaded, the virus takes control of INT 21h, effectively bypassing many anti-virus programs which are loaded after COMMAND.COM.
Once it has taken control of INT 21h, the virus becomes a Stealth COM/EXE infector. The virus traps the following subfunctions of INT 21h in order to infect files:
- 3Dh (Open)
- 3Eh (Close)
- 43h (Lseek)
- 41h (Delete)
- 4Bh (Load and execute program)
- 6C00h (Extended open/create)
The following functions are trapped to give the virus its stealth ability:
- 11h (Find first/FCB)
- 12h (Find next/FCB)
Also, when an infected program is executed, the DOS boot record of current disk is infected.
The virus considers a file to be infected if the word before the last byte at the end of a file is equal to 6373h ("cs"). All infected files will also have an invalid time stamp; the seconds field contains 62. The stealth routines in the virus use this technique to identify infected files.
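Based only on the markers described above, a scanner could flag suspect files by inspecting the two bytes just before the final byte. The sketch below does exactly that; since the description does not spell out the on-disk byte order of the 6373h word, it accepts either ordering, and the 62-second timestamp check is left as a comment because ordinary file APIs normalize DOS timestamps.

```python
def has_crepate_marker(path):
    """Flag files whose word before the last byte matches 6373h ('cs')."""
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < 3:
        return False
    marker = data[-3:-1]
    # Accept either byte order, since the write-up only gives the word value.
    # A DOS-level scanner would also check the directory entry's time stamp,
    # where the seconds field of infected files holds the invalid value 62.
    return marker in (b"cs", b"sc")

# Example (path is a placeholder):
# print(has_crepate_marker("SOMEFILE.COM"))
```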
The virus creates a "garbage" header for every file that is infected. The virus also employs techniques to confuse Heuristic scanners.
Once the damage routine is activated, the virus is effectively able to bypass many programs monitoring INT 13h, because the original address of INT 13h is captured during the boot process.
When the virus is active in memory, CHKDSK will give allocation errors. This is due to the stealth method used by the virus.
The virus contains two strings: "Crepate (c)1992/93-Italy-(Pisa)" and "Crepa(c) by R.T." The second string is located right at the end of infected files and, unlike the rest of the code, is unencrypted.
Technical Details: Jeremy Gumbley, Symbolic, Parma
The phrase “hot water cooling” seems like an oxymoron. How can hot water possibly help cool servers in high-density data centers?
Although the data center community has become conditioned to think of temperatures between 60 and 75 degrees as the proper climate for a server room, there are many ways to keep equipment running smoothly with cooling technologies featuring significantly higher temperatures.
There are three recent examples of this trend. Last week’s release of the Top 500 list of the world’s most powerful supercomputers highlighted the new IBM SuperMUC system at Leibniz Supercomputing Centre (LRZ) in Germany, which placed fourth on the list.
Another example is the eBay Project Mercury data center in Phoenix, which houses high-density racks of servers in rooftop containers, which have continued to operate at high efficiency at exterior temperatures of up to 119 degrees. A micro-modular data center enclosure from Elliptical Mobile Systems has also shown the ability to cool components using warm water.
Benefits of Hot Water Cooling
The projects employ different approaches to hot water cooling, all of which benefit from tightly designed and controlled environments that focus the cooling as close as possible to the heat-generating components.
Using a higher water temperature in a cooling system provides two benefits – it allows you either to use your chiller less or to eliminate it entirely. Higher inlet water temperature maximizes the number of hours in which “free cooling” is possible through the use of water-side economizers. Many data center cooling systems set the chilled water temperature in a range between 45 and 55 degrees. Here’s a look at three projects that have pushed the boundaries on water temperature.
IBM’s SuperMUC Supercomputer
The new LRZ SuperMUC system was built with IBM System x iDataPlex Direct Water Cooled dx360 M4 servers. IBM’s hot-water cooling technology directly cools active components in the system such as processors and memory modules with coolant temperatures that can reach as high as 113 degrees Fahrenheit.
By bringing the cooling directly to components, SuperMUC allows an increased inlet temperature. “It is easily possible to provide water having up to 40 degrees Celsius using simple ‘free-cooling’ equipment, as outside temperatures in Germany hardly ever exceed 35 degrees Celsius,” LRZ says. “At the same time the outlet water can be made quite hot (up to 70 degrees Celsius) and re-used in other technical processes – for example to heat buildings or in other technical processes.”
SuperMUC is based on the liquid cooling system developed for the Aquasar supercomputer at the Swiss Federal Institute of Technology Zurich (ETH) in 2010. The cooling system features a network of capillary-like pipes that bring coolant to the components and remove the heat; the warmed water is then returned to a passive cooling system that uses fresh air to cool it. IBM has a video providing additional information on the SuperMUC cooling system.
eBay’s Project Mercury
In his vision for Project Mercury, eBay’s Dean Nelson sought a design that could run without chillers in even the most brutal climates – such as Phoenix, where daytime temperatures regularly exceed 100 degrees. Nelson, the Director of Global Foundation Services for eBay, wanted to test the limits of using air to cool servers.
In designing for year-round use of free cooling, eBay deployed data center containers from Dell that were able to use a water loop as warm as 87 degrees F and still keep servers running within their safe operating range. Dell warranties that its servers designed for fresh-air cooling can run at 104 degrees Fahrenheit for up to 900 hours per year and at 113 degrees Fahrenheit for up to 90 hours per year.
To make the system work at the higher water temperature, the system was designed with an unusually tight “Delta T” – the difference between the temperature of air at the server inlet and the temperature as it exits the back of the rack. Nelson says eBay’s servers were designed to maintain a Delta T of 6 to 12 degrees. This allows eBay to raise the inlet temperature and still maintain the exhaust heat at a manageable level.
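The arithmetic behind that choice is the standard sensible-heat relation; the sketch below uses the generic form rather than eBay's published figures, so treat it as illustrative only.

```latex
Q = \dot{m}\, c_p\, \Delta T
\qquad \text{and} \qquad
T_{\text{exhaust}} = T_{\text{inlet}} + \Delta T
```

For a fixed heat load Q, holding Delta T to 6 to 12 degrees means moving more air per kilowatt, but it also keeps the exhaust within roughly 6 to 12 degrees of whatever the inlet temperature is -- which is why a warmer water loop and warmer inlet air can still produce exhaust the cooling plant can handle.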
Nelson discusses the approach to Project Mercury in this video from Dell DCS.
Elliptical Mobile Systems
Recent testing found that Elliptical Mobile’s newest enclosure can cool high-density loads using water in a range of 65 degrees all the way up to 85 degrees. The R.A.S.E.R. HD is a 42U enclosure designed to handle IT loads from 20 kW to 80 kW.
The testing was conducted at the United Metal Products facility in Tempe, Arizona, with the enclosures placed outdoors on a 100-degree day. The testing used a 23kW load bank to simulate IT loads, and found the unit was able to maintain a server inlet temperature around 85 degrees after the water temperature was raised to 85 degrees.
The cooling system for R.A.S.E.R. HD consists of an air loop and a water loop. The fans of the cooling unit draw warm air from the rear section of the cabinet and into an air/water heat exchanger. The air is cooled and then blown into the front area of the cabinet. Inside the air/water heat exchanger, the heat energy of the warm air is transferred to the medium of water. The heat exchanger is connected to an external reciprocal chiller unit, where the water is cooled again.
In this video, Scott Good of gkworks provides an overview of the testing and a closer look at the enclosures in action.
DCK’s John Rath contributed to this story.
Throughout most of human history, humans knew Venus as "the brightest star in the sky." Fifty years ago, however -- December 14, 1962 -- humanity got to know our neighbor planet in a wonderfully new way: up close and personal. On that day, NASA's Mariner 2 spacecraft sailed by Venus, at a range of 21,600 miles, scanning its atmosphere and surface for a full 42 minutes. It was the first time any spacecraft had ever successfully made a close-up study of another planet.
The result, here on Earth, was data. Much, much data. Data that disproved, among other things, a popular theory: that Venus was, as a planet, very much like Earth. Mariner 2's readings showed instead that the surface temperature on Venus was a very un-balmy 797°F on both the day and night sides -- hot enough to melt lead. They also demonstrated that Venus rotates in the opposite direction from most other planets in our solar system; that it has an atmosphere composed mostly of carbon dioxide, with very high pressure at its surface; that it features a continuous cloud cover; and that it has no detectable magnetic field. Mariner 2 also discovered new information about interplanetary space -- learning that, among other things, the solar wind streams continuously, and that the density of cosmic dust between planets is much lower than it is near Earth.
So what did all that information look like, actually? There's the billboard of data, for one thing -- painted on, apparently and tellingly, wood -- in the image above. But there's also that unfurled scroll of paper -- a paper towel roll, essentially, full of information about our neighbor planet. Which brings us, finally, to this -- the larger image of the shot above:
As I discussed in Collecting The SSD Garbage, when data is written to a flash cell, any old data in that cell must be erased first. After a few months of use there is almost always data in every flash cell, so this erase-then-write cycle occurs on every write. A flash cell can be erased and written only so many times before it fails or wears out. As long as that wear out occurs several years after you purchased it and you have some warning that it is getting ready to fail, that is not a major problem to manage.
What you don't want is to have just a few cells in the environment receive the majority of the writes and wear out before the other flash cells. This might cause the storage to fail when most of the memory on it is still usable. Wear leveling fixes this. It makes sure that the write load is spread out as evenly as possible across all the cells in the environment. Different controllers will have different success rates in that distribution and it is something to test.
Testing how effectively a drive utilizes wear leveling is fairly easy. As my colleagues and I discussed in our recent webinar that guides you through understanding solid state "specsmanship," there is a self-monitoring, analysis, and reporting technology (SMART) statistic that will report how much life a drive has left. Using this information when comparing solid state technology will go a long way toward understanding which drives will last the longest.
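For example, on a system with smartmontools installed, something like the following pulls the vendor's wear attributes. The exact attribute names (Media_Wearout_Indicator, Wear_Leveling_Count, and so on) vary by drive maker, so treat the filter list as an assumption to adjust for your hardware.

```python
import subprocess

# Attribute names differ by vendor; these are common examples, not a complete list.
WEAR_HINTS = ("Wear_Leveling_Count", "Media_Wearout_Indicator",
              "Percent_Lifetime_Remain", "Percentage Used")

def wear_report(device="/dev/sda"):
    """Return the SMART attribute lines that hint at remaining flash life."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    return [line for line in out.splitlines()
            if any(hint in line for hint in WEAR_HINTS)]

if __name__ == "__main__":
    for line in wear_report():
        print(line)
```

Trending those values over time, rather than reading them once, is what tells you how quickly a given workload is consuming the drive's rated endurance.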
It is important to note that memory wear-out concerns are unique to flash-based memory. As I discussed in a recent article, The Advantages of DRAM SSD, DRAM-based systems do not have to worry about memory wear and--under the right circumstances--that may be a compelling reason to consider DRAM-based storage. For many applications, though, the price/performance reality of flash-based storage is simply too attractive, which is why we need to know what wear leveling is and how the flash-based vendors are handling it.
The environment in which you use solid state storage will also directly impact how long it will last. Certain environments may wear through the drives much sooner than the five years that is often touted, while others may see a significantly longer life. In an upcoming entry, I will discuss how the various environments and use-cases can impact solid state.
Follow Storage Switzerland on Twitter.
A new threat called the Heartbleed Bug has just been reported by some researchers at Codenomicon and Google. Heartbleed attacks the heartbeat extension (RFC 6520) implemented in OpenSSL. The official reference to the Heartbleed bug is CVE-2014-0160.
Heartbleed allows an attacker to read the memory of a system over the Internet and compromise the private keys, names, passwords and content. An attack is not logged and would not be detectable. The attack can be from client to server or server to client.
Heartbleed is not a flaw with the SSL/TLS protocol specification, nor is it a flaw with the certificate authority (CA) or certificate management system. Heartbleed is an implementation bug.
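The bug itself is easy to caricature: the heartbeat handler trusted the length field in the attacker's request instead of the actual payload size. The toy model below imitates that logic in Python with a fake memory buffer -- it is a simplification of the affected C code, not a reproduction of it.

```python
# Toy model of the heartbeat over-read. "memory" stands in for the process
# heap, where the request payload sits next to unrelated secrets.
memory = bytearray(b"PING" + b"...session key: 0xDEADBEEF...password: hunter2...")

def heartbeat_response(payload_offset, claimed_length):
    # Buggy behavior: echo back 'claimed_length' bytes without checking
    # how long the payload actually was.
    return bytes(memory[payload_offset:payload_offset + claimed_length])

# An honest client asks for its 4-byte payload back:
print(heartbeat_response(0, 4))    # b'PING'
# An attacker claims the payload was 60 bytes long:
print(heartbeat_response(0, 60))   # b'PING...session key: ...' -- leaked memory
```

The fix in OpenSSL 1.0.1g is the obvious one: discard heartbeat requests whose claimed length exceeds the payload actually received.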
The bug impacts OpenSSL versions 1.0.1 through 1.0.1f. The fix is in OpenSSL version 1.0.1g. The 0.9.8 and 1.0.0 version lines are not impacted. OpenSSL 1.0.1 was introduced in March 2012, so the vulnerability is 2 years old.
The impacted systems are widespread. OpenSSL is used in Apache and NGINX, which Netcraft reports account for 66 percent of the web server market.
OpenSSL is also used in operating systems such as Debian Wheezy, Ubuntu 12.04.4 LTS, CentOS 6.5, Fedora 18, OpenBSD 5.3 and 5.4, FreeBSD 8.4 and 9.1, NetBSD 5.0.2 and OpenSUSE 12.2.
If you are using an impacted version of OpenSSL, you need to consider the following:
- Upgrade your system to a software version that uses OpenSSL 1.0.1g or higher. You may have to wait until your software vendor publishes a new release
- Renew your SSL certificates with a new private key
- Ask your users to change their passwords
- As content may have been compromised, you will need to consider whether you need to notify users
Updated April 9, 2014: Qualys SSL Labs has added a Heartbleed test to their SSL Server Test.
Nestled between the crashing waves of the Pacific and the sprawling city of Los Angeles, the Hyperion wastewater treatment plant has served residents for more than a century. In operation since 1894, the plant has evolved to become a modern infrastructure marvel. Originally the plant collected sewage from the city and deposited it in the Santa Monica Bay, destroying most of the marine life in the area. These practices led L.A. to start a program in 1980 to heal the bay, in which the Hyperion plant also played a role. Today, thanks to cutting-edge water treatment technology, water returned to the ocean is 95 percent free of biosolids. The biosolids extracted also help power the facility.
Photo courtesy of California State University, Long Beach
- By Carlos A. Soto
- Jul 19, 2005
Fact: Having your name and number in Paris Hilton's cell phone directory is like openly publishing them on the Web.
Fiction:
The miscreants who posted the heiress' contacts online last winter got them by hacking into her smart phone through a Bluetooth radio.
Is Bluetooth technology, the underutilized short-range wireless communications you might currently have in your cell phone, PDA or notebook PC, vulnerable to attack? In short, yes -- but then again, everything is vulnerable to attack. Despite erroneous reports that Paris' smart phone was leaking info like a sieve (turns out it didn't even have a Bluetooth radio), the good news is that current Bluetooth wireless products are -- for the most part -- safe and secure under most conditions. After weeks of trying to break into Bluetooth devices, the GCN Lab knows. Here's what we found out.
True blue
Bluetooth, like its equally scrutinized wireless cousin WiFi, uses radio frequencies to move data. But that's where the similarities end. WiFi establishes a fixed connection between a node and a network that relies on an exchange of IP addresses. Bluetooth was developed to create a simpler, smaller connection between two peripherals. As such, Bluetooth connections bypass several network protocols and don't require an exchange of IP addresses. This characteristic of Bluetooth alone makes it more secure than WiFi because the connection is ephemeral and independent of IP addresses.
Bluetooth is like a sonar connection between two peripherals. Data hops to and from devices during each periodic ping. WiFi, on the other hand, represents a constant stream of data between an access point and a wireless client. Such a steady stream could be intercepted by a third party.
Bluetooth exploits are well known [see sidebar], but as with other networking communications, as long as Bluetooth users keep their devices up-to-date with the latest technologies, including patches and fixes, and follow up with a good dose of common sense, they can be kept fairly secure.
Admittedly, it hasn't always been that way. Early versions of Bluetooth, like early versions of WiFi, had significant vulnerabilities. But that's changing. Prior to our Bluetooth hacking binge, we sat down with an expert to understand the current state of Bluetooth and the nature of attacks.
Spencer Parker, European technical director for AirDefense Inc. of Alpharetta, Ga., said threats to Bluetooth devices have decreased over the last two years thanks to firmware redesigns and upgrades. That's no small admission from an expert whose company benefits from more Bluetooth security vulnerabilities, not fewer.
Other reasons for the drop in Bluetooth attacks are that the software needed to mount an attack is often difficult to obtain, the hardware and programs designed to attack devices are expensive, and sophisticated Bluetooth hacks normally require advanced knowledge of Linux and command prompt code, Parker said. And when vast quantities of personal data seem increasingly vulnerable through other means (ChoicePoint, Bank of America, etc.), why would a hacker bother breaking into a PDA that might or might not yield useful information?
Of course, no one should take a laissez faire stance on Bluetooth. Many experts still warn the technology is insecure, and the Bluetooth Special Interest Group (www.bluetooth.com), the standard's leading trade association, continues to refine its security models to stay ahead of the bad guys. At last summer's most prominent hacker conference, DefCon in Las Vegas, security experts demonstrated how they could take over Bluetooth-enabled devices, sending vendors scrambling to update their products.
Last month, two Israeli experts explained how Bluetooth is vulnerable to eavesdropping. No one has yet exploited the vulnerability, and to do so, they said, would require $2,000 worth of equipment.
Parker said the greatest danger to Bluetooth users today lies in out-of-date software. In addition, agencies often don't know what Bluetooth-enabled devices they have. The list could include not only PDAs and cell phones, but also notebooks, desktops and peripherals that the IT staff either doesn't know have Bluetooth radios or doesn't know are broadcasting a signal.
AirDefense makes software called BlueWatch ($320 for government buyers) that scans offices for rogue Bluetooth signals and reports information back to the network administrator.
BlueWatch can also identify what services, such as network access, are available on Bluetooth devices so agencies can identify devices that pose a security risk and shut them down.
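A first-pass inventory of nearby radios doesn't require commercial tooling. Assuming the PyBluez library is installed and the machine has a Bluetooth adapter, a sketch like this lists discoverable devices and the services they advertise -- it will only ever see devices left in discoverable mode, which is part of why hiding your device matters.

```python
import bluetooth  # PyBluez; assumed to be installed on the scanning machine

def bluetooth_inventory(scan_seconds=8):
    """List discoverable devices nearby and the services each one advertises."""
    devices = bluetooth.discover_devices(duration=scan_seconds, lookup_names=True)
    inventory = {}
    for addr, name in devices:
        services = bluetooth.find_service(address=addr)  # SDP service discovery
        inventory[(addr, name)] = [svc.get("name") for svc in services]
    return inventory

if __name__ == "__main__":
    for (addr, name), services in bluetooth_inventory().items():
        print(addr, name, services)
```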
Parker and others offer a list of recommendations for securing Bluetooth connections [see sidebar], chief among which is to make sure your mobile device is running the vendor's latest firmware. Even more basic advice: Know if the mobile device you plan to use has a Bluetooth radio and then immediately learn how to deactivate it. If you must turn it on, pair up only with other trusted devices.
The National Institute of Standards and Technology also issued Bluetooth guidance. To read it, go to www.gcn.com and enter 457 in the GCN.com/box.
Breaking in
In the early 1990s, network security expert Dan Farmer and a colleague wrote an important paper that essentially laid out how hackers could break into networks. Shortly after, they created the Security Administrator's Tool for Analyzing Networks. The underlying premise was that you couldn't protect your networks unless you thought like a hacker.
In that spirit, the GCN Lab set about hacking Bluetooth devices. We pulled together a variety of systems, including a Hewlett-Packard iPaq, an MPC TransPort X3100 notebook, an old Nokia 3650 cell phone and a much newer Sony Ericsson P910a smart phone. As Parker and others suggested, we found that hacking a Bluetooth device grows harder the newer the product we tried to crack, although launching denial-of-service attacks was fairly easy. A Bluetooth DOS attack means simply sending requests to another Bluetooth device (provided you can locate it) until you wear it down, but it doesn't involve stealing information.
We also found that you don't need to hack a device to gain access to its contents. By merely requesting a hook-up from our iPaq to the Nokia 3650 (and simulating the 3650 user accepting the link) we were able to access all the contents of the Nokia device. Newer handhelds offer security measures to restrict access.
We ended up using the iPaq and the TransPort to launch most of our attacks. Both come with Bluetooth locators for finding nearby devices. The iPaq uses AirDefense's BlueWatch; the TransPort comes with BlueSoleil from IVT Corp. We also downloaded a pair of Bluetooth hacking tools, namely BlueSniffer and RedFang.
We first trained our sights on the Nokia 3650. Legitimately pairing with the phone was easy, but we wanted to launch a bluesnarf (in which we access contents) or bluebug (in which we gain control) attack. No luck. None of the software we tried got us access to the device, which isn't to say there isn't software out there that could.
Even when we established a legitimate connection to the 3650, we couldn't manipulate its controls.
Bluejacking kills batteries
So we turned to bluejacking (basically sending unwanted data to a target device). It took us seconds to bluejack the 3650. Once we sent the unwanted message, the hypothetical recipient could accept or deny it. When we simulated a rational person denying the bluejack message, we then easily launched a DOS attack by repeatedly sending the same message. We were able to run down the 3650's battery in just 15 minutes.
Overall, it was much harder to attack the Sony Ericsson P910a. The BlueSoleil program on the TransPort notebook was unable to determine the maker of the P910a, and the BlueWatch software on the iPaq successfully ID'd the smart phone but could not turn up the list of services running on it. Still, we encountered one interesting vulnerability.
It turned out the P910a did not require personal identification number (PIN) pairing to set up certain services, such as dial-up networking and file transfer. PIN pairing is a fairly basic security precaution in Bluetooth devices designed to ensure connections only between trusted devices.
Using the TransPort notebook, we were able simply to request file transfer services from the P910a. The hypothetical P910a users still had to tap 'accept' on the smart phone screen, but did not have to use a PIN. If the user didn't know what he was doing or accidentally tapped the accept button, a crude bluesnarf attack could ensue.
We were unable to bluebug the P910a. However, as with the 3650, we could bluejack it and launch a DOS attack that ran down its battery and emptied its memory in 25 minutes. Keep in mind, though, that bluejacking requires the hacker be within 10 meters of his target. If you ever think you're being bluejacked, the best security measure is to walk away.
Yes, Bluetooth is imperfect. Yes, it can be attacked. But in our experience, hacking Bluetooth is more trouble than it's worth. If you keep your devices updated and take fairly simple precautions, you're unlikely to become a target.
The Cloud Privacy Illusion
Privacy in the cloud may be an illusion, given the known cybersecurity risks, not to mention the laws in the U.S. and around the world that permit government agencies relatively easy access to remote data including data stored in the cloud.
Of course, businesses have relied on storing data remotely for more than 50 years. While many companies take great pains to protect cloud data from cyberthreats, they have no way to prevent governments from freely accessing their cloud data. Companies using the cloud may not realize that cloud data is more vulnerable than other remotely stored data, including data held in disaster recovery locations.
Generally, IT security experts are alarmed that most businesses that use the cloud do not consider how vulnerable their data is from a cybersecurity standpoint. Oftentimes, cloud solutions are chosen by businesses to reduce IT infrastructure costs, with little regard for the actual security of cloud data from cybercriminal or government access.
Most will remember that in the aftermath of 9/11, the U.S. Patriot Act became law. The Patriot Act permits the U.S. government, without court orders, to have simplified access to telephone, email, and electronic records to gather intelligence in the name of national security.
The official name of the Patriot Act says a great deal about its purpose: "Uniting and Strengthening America by Providing Appropriate Tools Required To Intercept and Obstruct Terrorism Act of 2001."
Of course before there was a Patriot Act, law enforcement agencies had access to many types of data, including cloud data, by conventional means, such as obtaining court-issued search warrants. Another example is the Foreign Intelligence Surveillance Act (FISA), passed in 1978 and amended by the Patriot Act, which addresses other approaches to electronic surveillance and collection of foreign intelligence information.
Conclusions About Government Access
We are not alone. Laws around the world allow governments free access to data in the cloud. What may come as a surprise is that Mutual Legal Assistance Treaties (MLATs) facilitate cooperation across international boundaries. Under these MLATs, the U.S. and EU member states allow law enforcement authorities to request data on servers of cloud providers located in any countries that are part of the MLATs.
On May 23, 2012, international law firm Hogan Lovells published a white paper entitled " A Global Reality: Government Access to Data in the Cloud." Some of the white paper's conclusions:
On the fundamental question of governmental access to data in the Cloud, we conclude, based on the research underlying this White Paper, that it is not possible to isolate data in the Cloud from governmental access based on the physical location of the Cloud service provider or its facilities. Government's ability to access data in the Cloud extends across borders. And it is incorrect to assume that the United States government's access to data in the Cloud is greater than that of other advanced economies.
The White Paper makes this additional observation when comparing the U.S. Patriot Act to comparable European laws:
... our survey finds that even European countries with strict privacy laws also have anti-terrorism laws that allow expedited government access to Cloud data. As one observer put it, France's anti-terrorism laws make the Patriot Act look "namby-pamby" by comparison.

The analysis of the MLATs in the Hogan Lovells white paper continues with details about the following countries: U.S., Australia, Canada, Denmark, France, Germany, Ireland, Japan, Spain and the United Kingdom. If your company does business in any of those countries, you may want to become more aware of the data privacy risks.
When Does the US Government Need a Warrant?
Prior to the enactment of the Patriot Act, search and seizure of Internet data was subject primarily to the protections afforded by the 4th Amendment of the Constitution:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

The decision by a judge to issue a warrant to permit a search and seizure includes balancing the need for the search against the protected interests of liberty, property and privacy.
In the context of electronic data, the diversity of data types and sources has led to a variety of approaches as to what constitutes a reasonable search and seizure. Depending on the jurisdiction, as well as the device or data sought or investigation pending, a court may require different levels of detail before issuing a warrant.
For this discussion, the term "devices" will broadly refer to desktops, laptops, cellphones, tablets, external hard drives or memory storage, or any other computer-related technologies that could store or transmit data.
One of the first distinctions to make is whether the data sought is "inside" or "outside" a device. This distinction helps to establish who possesses the data and what laws may regulate it. Another distinction is between personal and non-personal use. Further, the "expectation of privacy" is important to the evaluation of a reasonable search, and that expectation is affected by the location of the data.
When data resides on a computer used strictly for personal matters, there is a greater expectation of privacy than if the data is stored on a device used for a business or government purpose. Similarly, where the data may be available for some public access, there is less or no expectation of privacy.
In a criminal matter, if the data is "inside" the device, there are issues of verifying who was using the device when the crime occurred, locating the device, obtaining a search warrant or consent to search, and performing forensic analysis of the device.
If the data is "outside" the device, then collecting the data probably invokes the 1986 Stored Communications Act, which governs data posted by users on Internet hosts such as Facebook, Google, LinkedIn and other social media sites.
Relying on the Stored Communications Act and their terms of service (which very few people read), Internet hosts rarely provide any information in a civil lawsuit unless the owner of the data agrees in writing -- but governments can obtain that same data in a criminal proceeding without the owner's permission.
Privacy Groups and Government Access to the Internet
Among the many privacy issues the Electronic Privacy Information Center (EPIC) focuses on are those implicated in the Patriot Act and relating to personal data stored on the cloud and remote Internet sites. EPIC's overview of the Patriot Act includes these statements:
The implications for online privacy are considerable. ... The Act also extends the government's ability to gain access to personal financial information and student information without any suspicion of wrongdoing, simply by certifying that the information likely to be obtained is relevant to an ongoing criminal investigation.

The impact of the MLATs between the U.S. and EU is not discussed by EPIC, but EPIC does devote a great deal of resources to monitoring privacy in the EU. There, Directive 95/46 of the European Parliament and the Council of 24 October 1995 was established
... to provide a regulatory framework to guarantee secure and free movement of personal data across the national borders of the EU member countries, in addition to setting a baseline of security around personal information wherever it is stored, transmitted or processed.

The Electronic Frontier Foundation (EFF) also dedicates a great deal of resources to protecting privacy and specifically focuses on the Patriot Act. The EFF produced a white paper entitled "Patterns of Misconduct: FBI Intelligence Violations from 2001-2008" based on a review of about 2,500 pages of FBI documents secured through Freedom of Information Act requests. The EFF White Paper states the following:
The documents suggest that FBI intelligence investigations have compromised the civil liberties of American citizens far more frequently, and to a greater extent, than was previously assumed. ... From 2001 to 2008, the FBI engaged in a number of flagrant legal violations, including:
- submitting false or inaccurate declarations to courts.
- using improper evidence to obtain federal grand jury subpoenas.
- accessing password protected documents without a warrant.
Assuming the EFF's findings about the FBI's access to personal data on the Internet are accurate, the expectation of privacy for Internet data in the U.S. should be of concern to the business community.
In Conclusion

Security of data in the cloud should be of concern to all businesses, whether that concern is due to cybercriminals or governments.
In particular, businesses relying on the cloud should be mindful of these privacy risks of cloud data being captured by governments, foreign and domestic. | <urn:uuid:9b7d09c9-2860-4772-8b1c-052884daf33b> | CC-MAIN-2017-09 | http://www.linuxinsider.com/story/tech-blog/75848.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00077-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937314 | 1,773 | 2.984375 | 3 |
Secure C/C++ Development (SCD) will guide you through the main memory corruption vulnerabilities that affect C/C++ programs, such as buffer overflows and use-after-frees. The course is packed with hands-on exercise scenarios based around sample vulnerable programs. These are used to demonstrate how attackers exploit flaws in the real world and how code can be written defensively to prevent or mitigate the impact of security vulnerabilities.
- How to identify security flaws that affect C/C++ code through code review and manual testing.
- How to evaluate the impact of flaws by learning offensive techniques used to exploit these flaws in real-world attacks.
- How mitigation techniques (such as canaries, ASLR and DEP) can be applied both at the compiler and at the operating system level to reduce the impact of vulnerabilities, together with an assessment of their effectiveness against determined attackers.
- OS Exploit Mitigation
- Input Validation
- Time and State
- Code Quality
- Integrating Security
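To make the vulnerability classes this course centers on (buffer overflows and use-after-frees) concrete, here is a minimal, self-contained C sketch, illustrative only and not taken from the course material, contrasting each flaw with a safer alternative:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* UNSAFE: copies caller-controlled input into a fixed-size stack buffer
   with no bounds check -- the classic stack buffer overflow. */
void greet_unsafe(const char *name) {
    char buf[16];
    strcpy(buf, name);                        /* writes past buf if name is 16 bytes or longer */
    printf("hello, %s\n", buf);
}

/* Safer: bound the copy to the destination size and guarantee termination. */
void greet_safe(const char *name) {
    char buf[16];
    snprintf(buf, sizeof(buf), "%s", name);   /* truncates instead of overflowing */
    printf("hello, %s\n", buf);
}

/* UNSAFE: use-after-free -- memory is returned to the allocator and then
   dereferenced through a dangling pointer. */
void log_unsafe(void) {
    char *msg = malloc(32);
    if (!msg) return;
    snprintf(msg, 32, "session open");
    free(msg);
    printf("%s\n", msg);                      /* undefined behavior: msg is dangling */
}

/* Safer: use the allocation while it is valid, free it exactly once, then
   null the pointer so any later use fails loudly instead of silently. */
void log_safe(void) {
    char *msg = malloc(32);
    if (!msg) return;
    snprintf(msg, 32, "session open");
    printf("%s\n", msg);
    free(msg);
    msg = NULL;                               /* defensive habit against reuse */
}

int main(void) {
    greet_safe("reader");
    log_safe();
    /* greet_unsafe() and log_unsafe() are deliberately never called. */
    return 0;
}
```

Mitigations such as stack canaries, ASLR and DEP (typically enabled through compiler and linker options such as -fstack-protector-strong and position-independent executables) limit what an attacker can do with the unsafe versions, but they do not make the code correct, which is why defensive coding and platform mitigations are best treated as complementary.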
Benefits to your organization
- Helps to ensure that your software is resilient to attack, even against the most advanced threats.
- Reduces the number and severity of the vulnerabilities that are introduced into software.
- Increases your organization’s overall understanding of security, reducing the time and cost of remediating vulnerabilities.
- Stimulates a positive attitude and an understanding of the importance of security within the development team.
Who should attend?
This workshop is aimed at developers with a working knowledge of C/C++. Although the workshop uses an x86 Linux distribution as a base platform, the concepts explained can easily be transferred to other operating systems (e.g. Windows) and platforms (e.g. ARM embedded devices).
Given the highly specialized content of this course, it is recommended that delegates know how to write programs in C/C++, are familiar with the use of debuggers, and can read and understand basic x86 assembly code (no actual assembly programming experience is required).
Download the Secure C/C++ Brochure below for the full syllabus | <urn:uuid:e88e0edb-5867-4bb3-9e4c-c6186613184f> | CC-MAIN-2017-09 | https://www.mwrinfosecurity.com/training/secure-c-training/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00253-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.909304 | 431 | 2.953125 | 3 |
NASA on Tuesday said it has canceled the March 2016 launch of a space probe that was designed to give scientists a deeper look inside Mars.
The space agency said project managers decided to cancel the launch of their Interior Exploration using Seismic Investigations Geodesy and Heat Transport (InSight) mission after repeated attempts failed to fix a leak in a critical scientific instrument onboard the spacecraft.
"Learning about the interior structure of Mars has been a high priority objective for planetary scientists [for decades]," said John Grunsfeld, associate administrator for NASA's Science Mission Directorate. "We push the boundaries of space technology with our missions to enable science, but space exploration is unforgiving, and the bottom line is that we're not ready to launch in the 2016 window."
It's not clear when the mission will be rescheduled, but it is not expected to launch until 2017 at the earliest.
"A decision on a path forward will be made in the coming months, but one thing is clear: NASA remains fully committed to the scientific discovery and exploration of Mars," Grunsfeld added.
NASA first announced the planned InSight mission in August 2012. The probe was to be one of the scientific tools that researchers would use to determine why Mars evolved so differently than Earth.
InSight is made up of a lander that carries a robotic arm, two cameras, and a thermal probe that will dig into the Martian surface to calculate the planet's temperature.
Scientists have high hopes for the probe, expecting it to tell them how Mars is cooling, whether the core of Mars is solid or liquid like Earth's, and why Mars' crust is not divided into tectonic plates that drift like they do on Earth.
The instrument causing the trouble is the Seismic Experiment for Interior Structure (SEIS), a seismometer built by France's Centre National d'Études Spatiales (CNES). The seismometer measures movements in the ground, some as small as the diameter of an atom, according to NASA.
The sensitive instrument requires a vacuum seal around its three main sensors to withstand the harsh conditions of the Mars environment, NASA said.
Scientists thought they had repaired the leak around the seal, but a test on Monday showed that the instrument could not hold its vacuum seal in extreme cold temperatures.
"It's the first time ever that such a sensitive instrument has been built," said Marc Pircher, director of CNES's Toulouse Space Centre. "Our teams will find a solution to fix it, but it won't be solved in time for a launch in 2016." | <urn:uuid:31aa54ee-fb41-4c28-ab68-ea90388fdd5c> | CC-MAIN-2017-09 | http://www.computerworld.com/article/3017926/space-technology/nasa-suspends-2016-launch-of-mars-probe.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00374-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944592 | 529 | 2.96875 | 3 |
What is a CDN?
CDNs are widely used in today’s Internet landscape, improving the delivery of a significant percentage of all Internet traffic worldwide. But what does that mean, and what really is a CDN? A content delivery network is a highly-distributed platform of servers that responds directly to end user requests for web content. It acts as an intermediary between a content server, also known as the origin, and its end users or clients. To learn more, check out our video and article: “What is a CDN?”
What are the benefits of a CDN?
Content Delivery Networks, also known as CDNs, carry nearly half of the world’s Internet traffic. They are ubiquitous and mitigate the challenges of delivering content over the Internet. But why are CDNs so pervasive? Why is it that small and medium content providers, as well as large corporations, have come to rely on CDNs to provide a seamless web experience to their end users? To learn more, check out our video and article: “What are the benefits of a CDN?”
How do I maximize the benefits of a CDN?
Content Delivery Networks, or CDNs, are like many strategic tools: easy to learn and benefit from, but difficult to master and get the most out of. Like doing business online, using a CDN involves layers of complexity; and CDN providers spend every moment of every day dealing with some of the most complex technical challenges of doing business online. The challenges met by deploying a global CDN include delivery of enterprise web applications, media and software delivery, and cloud security solutions.
So how can you, whether you have a small technical team or a massive world-class staff, get the most out of your CDN? To learn more, check out our video and article: “How do I maximize the benefits of a CDN?”
Learn more about the Next Generation of Content Delivery Networks (CDN)
- Content Delivery for an Evolving Internet: Choosing the Right CDN for Today and Tomorrow
In this whitepaper, we define the core requirements for such a CDN – a highly distributed architecture, cutting-edge software services, sophisticated security capabilities, and support for agile businesses – and establish why these particular requirements are critical for helping businesses succeed in today’s fast-changing marketplace.
- CDN Buyer's Guide
In this authoritative guide, CDN buyers can get up to speed on the latest developments in the Next Generation CDN. Learn about CDN provider’s capabilities that are critical to delivering greater online experiences, including advanced web performance optimization, high quality video streaming delivery, cloud security, and web application acceleration.
- Next Generation CDN Infographic
A graphical overview of how CDNs are evolving in response to new digital requirements. Learn how digital performance can affect your bottom line, how CDNs are optimizing their networks, and what's driving enterprise buyers to the Next Generation CDN.
- The State of CDN Services
This white paper from Unisphere Research, “The State of CDN Services: Reaching Global Scale Using Content Delivery Networks“, covers the latest content consumption trends and offers insights on how you can use a content delivery network to profit from them. Discover how to optimize your CDN to deliver media to various devices, how different CDN services enhance user experiences, and how to apply CDN practices to meet the expectations of online video viewers.
- CDN Resource Page
An introduction to Akamai's CDN solutions and the benefits they provide to CDN operators, including advanced web performance optimization, improved network security and compliance, application delivery acceleration, additional self-service options, higher quality video delivery and a cloud architecture built for a global audience.
- Content Delivery Network
An overview of the evolution and latest developments of the content delivery network. See how Next Generation CDNs are addressing issues arising from the growth of non-cacheable content, the prevalence of the cloud, new security requirements and the demand for better CDN analytics.
- CDN Services
Meet Akamai Aura Managed CDN, an innovative Software-as-a-Service (SaaS) solution for managed CDN services. This turnkey solution lets CDN providers and operators launch their own video streaming services and optimize their network for content delivery, while reducing deployment time and upfront costs.
- CDN Platforms
CDN platforms are changing, and web experience enhancements are evolving to help enterprises deliver an optimized web experience. Evolving CDN platforms and capabilities include web experience optimization services that increase website speed and improve media and content delivery.
- CDN Glossary
An invaluable reference tool providing definitions for many of the terms that are part of the quickly evolving CDN landscape. | <urn:uuid:9dc10351-75a1-4151-8f1c-52e3a791f5c8> | CC-MAIN-2017-09 | https://www.akamai.com/uk/en/cdn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00374-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.910465 | 987 | 2.5625 | 3 |
States nationwide are developing safety guidelines for self-driving cars, but the National Highway Traffic Safety Administration hasn’t even developed safety guidelines for the insecure electronics that come standard in today’s cars.
In response to questions about the status of automotive cybersecurity research and regulations, agency officials said in a statement that “NHTSA is aware of the potential for ‘hackers’ and other cybersecurity issues whenever technology is involved, however, the agency is not aware of any real-world cybersecurity issues in vehicles.” When asked whether NHTSA is developing voluntary recommendations for manufacturers, agency officials referred back to the statement.
Security problems are real, however. They present risks ranging from car theft to crashes. In 2010, a disgruntled former employee of an auto dealership allegedly remotely deactivated the starters of customers’ vehicles. University researchers have shown that intruders can infiltrate the computers tied to virtually every aspect of automobile mechanics, including brakes, speedometers and entertainment consoles. More sophisticated cars present additional threat vectors that also can be exploitable, such as navigation systems and Bluetooth for hands-free calling.
But, practically speaking, regulating car cybersecurity would be a feat for many reasons, according to the researchers and privacy advocates. For one thing, the rule-making process would constantly lag behind quick-morphing cyber threats. Also, NHTSA might not even know what to say, judging by a recent National Academy of Science study that found the agency remains in the early phases of understanding vehicular network security. Some experts reasoned that NHTSA is not acting because the agency typically does not dictate guidelines until a safety issue is pervasive on the road.
“There’s no clear evidence or no clear strict need for regulation at this point,” said John Maddox, who served as NHTSA associate administrator for vehicle safety research until August. “What we do need is to conduct the research to study the problem very carefully.”
Whether or not car cyber defenses should be mandatory is debatable, but most experts agree that regulators, manufacturers and consumers need a better handle on the matter.
At least four institutions and two automobile associations are developing or have developed recommended best practices. In 2011, the Transportation Department’s John A. Volpe National Transportation Systems Center presented NHTSA with industry guidelines. Just last week, an agency official involved in cyber research planning spoke about safety and dependability at a vehicle cybersecurity workshop the University of Maryland hosted.
$10 million for vehicle electronics safety
NHTSA’s 2013 budget request suggests that the agency may be weighing regulations. The document reveals plans to “conduct rule-making ready research to establish electronic requirements for vehicle control systems” in everyday cars. The budget proposes establishing a $10 million program to study cyber risks, starting in 2013.
Under the strategy, new agency personnel would pinpoint problems that could arise in up-and-coming vehicle electronics before they go into production. “We will identify and evaluate potential solutions and countermeasures and evaluate the need for additional standards,” the budget papers state.
The National Academy of Science’s study, which was released in January -- and famously dispelled allegations that Toyota electronics caused unintended acceleration -- urged NHTSA to get up to speed in cyber. And the report criticized the agency for lacking the technical competency to probe the Toyota issue without help. NHTSA’s Office of Vehicle Safety Research does not study cybersecurity, according to the review.
The proposed 2013 cyber plan aligns with the academy’s advice and also would engage other cyber-related federal agencies. The Defense Department’s Cyber Crime Center, the Pentagon’s computer forensics hub, already is examining Ford’s SYNC in-car voice-recognition system to flag potential cyber threats, according to DC3 contractor Lockheed Martin Corp., which is supporting the research.
Sen. Jay Rockefeller, D-W.Va., chairman of the Commerce, Science and Transportation Committee, is watching NHTSA’s movement on cyber concerns, committee aides said. “The chairman is aware of the potential issues revolving around in-car computers,” Rockefeller spokesman Kevin McAlister said. The committee “will work to ensure that NHTSA performs the necessary actions to protect drivers and passengers.”
In the lab and during live road tests, researchers from the University of California, San Diego and the University of Washington completely overrode an assortment of safety-critical systems to, for example, stop a vehicle’s engine.
“The kinds of things you worry about is either that your car is leaking information that you wish to be private,” such as your driving habits or what your passengers are saying, “or that an adversary can control features of your car,” said Stefan Savage, a UCSD computer science professor and principal investigator on the project.
During one expedition, the team was able to access a car’s internal network to disengage the brakes, making it difficult for the driver to stop. The investigators also succeeded in forcing the brakes to deploy, lurching the driver forward. Another demonstration showed how various entry points allow these sorts of attacks, such as specially crafted CDs, mechanics’ diagnostic tools, FM radios and wireless tire pressure sensors.
An actual car hack
The academy cited the team’s work and pointed to an actual cyber incident that highlights these looming dangers. The dealership ex-employee apparently manipulated systems in customer vehicles to disable the engine. By exploiting the program, he deactivated the starters and Global Positioning System units on about 100 vehicles, leaving the owners stranded. “Obviously, had such an attack compromised a vehicle’s power train, braking and other operating systems while being driven, the consequences could have been much more severe,” the academy report stated.
Volpe experts told NHTSA that sector-specific cyber guidelines require strong federal leadership. “Get involved in the rule-making process early,” their recommendations stated. The Federal Aviation Administration, for instance, took part in vulnerability assessments and collaborated with industry to identify incident response techniques.
Some former NHTSA officials say that until there is clear evidence of real-life threats, mandatory standards would be superfluous and costly for manufacturers and the government.
“I’m not ruling out the need for regulation,” but the need has not presented itself yet, said Maddox, now director of collaborative program studies at Texas A&M Transportation Institute.
If the auto industry develops voluntary standards, NHTSA then should consider whether to release its own guidelines, he said. Right now, the U.S. Council for Automotive Research, comprising engineers from Chrysler Group, Ford and General Motors, has a cyber-physical systems task force that is working on cybersecurity controls. The Society for Automotive Engineers also is examining the issue.
Ford officials rolled off a list of cybersecurity precautions they take in designing all their vehicles, including SYNC-enabled cars. The manufacturer “fuzz” tests key interfaces -- a technique that discharges random information at software while security specialists monitor for signs of failure. Ford spokesman Alan Hall said designers simulate possible vulnerabilities during conception by looking at the people, parts, data flows and other functional elements “to determine where we may have issues with things like data integrity, information disclosure, denial of service, escalation of privilege, tampering or spoofing, etc., and then determine one or more mitigation strategies.”
SYNC has a built-in firewall and application white-listing functions that dictate where downloads are permitted to launch in the system. Also, the vehicle control system network is separate from SYNC’s infotainment features, according to Hall. Software updates must be “code-signed,” or validated as Ford-authored in order to execute “thus preventing unauthorized software installation and access to private information,” he said.
Manufacturers are more up to speed
Maddox said a voluntary regime of cybersecurity safeguards, such as the frameworks the manufacturers are establishing, might be more appropriate for the constantly evolving field of hacking. “The industry would be more knowledgeable and more nimble than government can be in this area,” he said.
Some privacy groups agree that manufacturers should take the lead in creating cyber standards.
“The car manufacturers have a lot of incentive to not put cars on the road that are inherently vulnerable,” said Joseph Lorenzo Hall, senior staff technologist with the Center for Democracy and Technology, a civil liberties organization. If drivers start complaining to NHTSA of “someone messing with you on their OnStar,” the popular support system, that’s where NHTSA might have a role to play, he said. Such a gaping privacy and safety hole might force a recall and ex post facto regulations for cyber safety testing. A car security weakness “probably doesn’t reach their radar until there is big potential for something very bad happening on the road,” he said.
Other civil rights groups, however, back regulations because they believe cyber protections are both necessary and within the agency's authority.
“The potential for drivers in the United States to have their cars tracked or compromised by security flaws in vehicles' embedded computers is a matter of both driver safety and security,” said Amie Stepanovich, associate litigation counsel for the Electronic Privacy Information Center. “Regulations would provide guidance for vehicle manufacturers and baseline protections for all drivers in the United States.”
She added that existing state data breach laws might offer citizens some protections, but such legislation is inconsistent and, in some states, nonexistent.
The UCSD and University of Washington researchers were reluctant to press for regulations and admitted standards development will take years, but they said they are encouraged by NHTSA’s apparent attention to their findings. “We’ve talked with them many times, we’ve been at workshops with them on the topic . . . From my standpoint there certainly appears to be interest and activity related to better understanding the cybersecurity problem and what to do about it,” Savage said. He said he is not familiar with regulatory politics or NHTSA’s thinking.
“It would be very easy to dictate a set of requirements that would either do little good or would be unworkable in practice,” Savage said. Today’s global marketplace means many hands from many part-makers in many facilities touch U.S. cars. “There are complex supply chain issues here because automotive manufacturers are really integrators. There may be no single person who has access to all the source code that goes into a modern vehicle,” so demanding that manufacturers evaluate the whole vehicle may be unfeasible, he said.
Savage’s research stated that Americans should not be overly afraid of cyber intrusions because of the sophistication required to pull off the hacks demonstrated.
Future cars, however, are at risk because they are expected to offer more wireless connectivity and computer controls, the team found.
“The standards process is going to take a while,” Savage said.
Discuss the future of Federal IT with experts, innovators and your peers on Dec. 3 in Washington at Nextgov Prime, the defining event in the federal technology landscape. Learn more at nextgov.com/prime. | <urn:uuid:cde3ebd1-ed8e-496f-9a39-862bc408ab23> | CC-MAIN-2017-09 | http://www.nextgov.com/cybersecurity/2012/11/do-we-need-cyber-cops-cars/59477/?oref=ng-dropdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00126-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94911 | 2,343 | 2.546875 | 3 |
You do not have a cell phone, you have a computer in your pocket that can make phone calls; you no longer drive a car, you are driving in a computer whose computer case is shaped like a car and is designed for transportation. The same can be said about thermostats, X-ray machines and yes – security camera systems. We now know these internet-connected devices as the ‘The Internet of Things’ (IoT) and we in IT have a joke about it.
“What does the “S” in IoT stand for?”
“There isn’t an “s” in IoT.”
“S” is for Security
With the quality varying across the vast array of IoT devices, there is plenty of low-hanging fruit for bad guys to take advantage of. Usually, this means hackers taking control of a device so they can add it to their ‘zombie’ network of infected, remote-controlled Internet devices. You may have heard of this before: it is called a botnet. Remember when Twitter, Netflix, CNN, and Reddit went down late last October? That was a botnet.
“P” is for Protected
To address the security of your network, answer these two questions:
- How do we ensure that none of your equipment becomes part of a botnet?
- How do we protect your office network from the botnets that already exist?
I previously mentioned that the bad guys go for the low hanging fruit. What does this look like? Take network-based, wireless security cameras for example – aka “nanny cams”. You can find these devices ranging in price and quality from $35 to well over $2000. Parallel to this large span of pricing lies a huge difference in features, and quality.
When we focus on quality there are certainly the physical tangibles: ability to withstand abuse, quality control of the product before it leaves the manufacturer, quality of the lenses and the image the camera takes. However, what is frequently overlooked by the consumer is the quality of the intangibles: quality code (code that is secure, stable, and frequently updated).
You Get What You Pay For
The equipment on the lower end of the scale, like a nanny cam, is most likely rushed through production, with the only requirement being that the equipment functions as advertised; little to no effort is spent on reliability and security (despite the claims on the box).
If you happen to purchase such equipment – you alone would be responsible for visiting the manufacturer’s website to download and install firmware updates for security – and that is assuming the manufacturer even releases security updates. This kind of sloppy security can lead to hackers watching your nanny cam feed – even if it is password protected. Effective security and automatic updates are what companies pay for when they purchase more expensive network equipment.
The Cost of Security
A deep investment in network security means your servers are protected with patch management; your routers are the only equipment directly connected to the internet, are monitored, and have their firmware upgraded regularly; and every device is monitored remotely. The rest of your equipment, access points and switches, is commercial grade, also monitored, and not directly connected to the internet, so if it were to get infected, it would have to be an ‘inside job’.
Now that we know what is necessary to prevent your equipment from being infected, let’s focus on the second half – how do we protect your already healthy network from the millions of infected devices on the internet? By putting all your equipment behind a firewall so the outside world cannot directly communicate with them.
Long story short, when it comes to your IoT, YOU are responsible for adding the “S”.
Josh Erdman of TekTegrity has been in IT since 1997 and never leaves behind an opportunity to learn something new. He is a true ‘Jack of all trades’, a skill he taps into with his consulting, as he is always on the lookout for new ways to merge technology with business processes. In his spare time, Josh jumps into any opportunity to present technology and science to kids and loves public speaking. | <urn:uuid:27166b17-0d2d-42ab-909a-82b3bf72a0e4> | CC-MAIN-2017-09 | https://techdecisions.co/network-security/no-s-security-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00370-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.963078 | 867 | 2.578125 | 3 |
Using the Amazon cloud is a challenge, partly due to the overwhelming number of terms that must be understood just to get your servers up and running. Below is a taxonomy breakdown that you can use as a reference for getting started with the Amazon cloud:
- Cloud Computing: A self-service environment for the creation of highly-scalable applications, with the immediate availability of compute power and granular levels of billing.
- Amazon Web Services (AWS): A set of services delivered by Amazon that can be used to meet your needs for a cloud-based application.
- Elastic Compute Cloud (EC2): A service, accessible through either a console or an API, that allows you to launch, stop, start, terminate and otherwise manage the servers leased from Amazon’s datacenter.
- Elastic Block Storage (EBS): Network-attached storage that behaves like a hard disk; an EBS volume typically stores your server image and its data.
- Simple Storage Service (S3): An HTTP-based service for the storage and retrieval of data, typically used as a scalable file hosting solution that does not need to run off servers that you own and operate (see the retrieval sketch after this list).
- CloudFront: A Content Delivery Network (CDN) that is associated with S3, and allows you to distribute your data to physically distinct datacenters around the world, thereby placing files closer to your users and improving their ability to retrieve files quickly.
- SimpleDB: A non-relational data store that is highly-scalable and tuned to manage large volumes of abstract data attributes (key/value pairs), made accessible to developers via a basic API.
- Relational Database Service (RDS): A relational database (MySQL) that is hosted and managed by Amazon, and made available to developers that do not want to manage their own database platform.
- Elastic Load Balancer (ELB): A load-balancer is a solution that distributes traffic evenly to the cloud servers that you own, with intelligence to avoid dead and overworked nodes.
- Regions: Compute power you use from Amazon (EC2 instances and EBS volumes) runs in a physical datacenter; at the time of writing there are 5 datacenter regions you can use: Northern Virginia, Northern California, Ireland, Singapore and Tokyo.
- Availability Zones: Each physical region is further broken down into zones; a zone is an independent section of a datacenter that adds redundancy and fault tolerance to a given region.
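Because S3 (above) exposes stored objects over plain HTTP(S), a publicly readable object can be fetched with any HTTP client, no Amazon-specific tooling required. Here is a minimal sketch using libcurl; the bucket and object names are hypothetical, and real deployments would normally use signed requests or an official SDK rather than anonymous access:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Hypothetical, publicly readable object -- substitute your own bucket and key. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://example-bucket.s3.amazonaws.com/readme.txt");

    FILE *out = fopen("readme.txt", "wb");
    if (!out) { perror("fopen"); curl_easy_cleanup(curl); return 1; }
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);   /* default callback writes to this FILE* */

    CURLcode res = curl_easy_perform(curl);           /* issues the HTTP GET */
    if (res != CURLE_OK)
        fprintf(stderr, "download failed: %s\n", curl_easy_strerror(res));

    fclose(out);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```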
By Simon Ellis
LabSlice now offers consulting services for EC2 migration: http://LabSlice.com/Contact. | <urn:uuid:99950d11-2d69-40a2-9609-e1c302757613> | CC-MAIN-2017-09 | https://cloudtweaks.com/2011/03/a-taxonomy-of-the-amazon-cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00474-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.903872 | 530 | 2.578125 | 3 |
In February 2011, the global Internet Assigned Numbers Authority (IANA) allocated the last blocks of IPv4 address space to the five regional Internet registries. At the time, experts warned that within months all available IPv4 addresses in the world would be distributed to ISPs.
Soon after that, unless everyone upgraded to IPv6, the world would be facing a crisis that would hamper Internet connectivity for everyone. That crisis would be exacerbated by the skyrocketing demand for IP addresses due to a variety of factors: the Internet of Things (refrigerators needing their own IP address); wearables (watches and glasses demanding connectivity); BYOD (the explosion of mobile devices allowed to connect to the corporate network); and the increase in smartphone use in developing countries.
So, here we are three years later and the American Registry for Internet Numbers (ARIN) is still doling out IPv4 addresses in the United States and Canada.
Whatever happened to the IPv4 address crisis?
The day of reckoning still looms – it’s just been pushed out as the major Internet players have developed ingenious ways to stretch those available numbers. But these conservation efforts can only work for so long.
ARIN currently has “approximately 24 million IPv4 addresses in the available pool for the region,” according to President and CEO John Curran. They’re available to ISPs large and small, but Curran predicts they will all likely be handed out by “sometime in 2014.”
Even then, addresses will still be available to be assigned to the operators' clients for a while longer. And not all operators are likely to experience shortages at the same time. "It's more of a problem for networks that are growing. For networks that are stable, they can reuse addresses" as some customers drop their service and new ones sign up.
Phil Roberts, technology program manager for the Internet Society, adds, "There's some anticipation in using addresses. Network operators get a block and parcel them out – you don't get them right when you need them."
How did we get here?
The problem took no one by surprise. The Internet Engineering Task Force (IETF) foresaw the global growth of network-connected devices 20 years ago, and in response drafted a new version of the Internet Protocol to address the looming shortage.
IPv6 uses a 128-bit address space – that is, 2^128, or about 3.4 x 10^38, possible addresses – yielding far more potential addresses than IPv4’s 32-bit scheme, and in fact more addresses than there are grains of sand in the Earth’s crust.
So, why hasn’t everyone just switched over to IPv6?
Well, IPv6 is not backward compatible with IPv4, meaning network operators need to run a dual stack IPv4/IPv6 network for years to come. And for IPv6 to work, it needs to be implemented end to end, meaning IPv6 has to be enabled by network hardware vendors, transit providers, access providers, content providers, and endpoint hardware makers.
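On the application side, running dual stack is often as simple as listening on an IPv6 socket that also accepts IPv4 clients as IPv4-mapped addresses. A minimal POSIX sketch follows; the port number is arbitrary, and because the default for V6ONLY varies by operating system, it is set explicitly here:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* 0 = also accept IPv4 clients (seen as ::ffff:a.b.c.d mapped addresses);
       1 would restrict the socket to IPv6 traffic only. */
    int v6only = 0;
    if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &v6only, sizeof(v6only)) < 0)
        perror("setsockopt");

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;   /* listen on all interfaces, v4 and v6 */
    addr.sin6_port   = htons(8080);   /* arbitrary example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
    if (listen(fd, 16) < 0) { perror("listen"); return 1; }

    printf("listening on [::]:8080 for both IPv6 and IPv4 clients\n");
    close(fd);                        /* a real server would accept() connections here */
    return 0;
}
```

Application-level changes like this are only one layer, though; routers, carrier equipment and content platforms all have to make the equivalent move before traffic flows end to end over IPv6.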
Since there’s no economic incentive to being the first to invest in revamping your protocol support, many hardware and service providers stood on the sidelines and waited for momentum to build.
For enterprises, it made no sense to upgrade to IPv6 if their ISPs were still running IPv4. As John Brzozowski, fellow and chief architect for IPv6 at Comcast Cable, puts it: We had a chicken-and-egg problem. "Service providers didn't want to implement IPv6 because the content providers weren't there, and content providers didn't want to implement it because the service providers weren't there."
Plus, there were ways to avoid having to face the IPv6 music. One common technique is carrier-grade network address translation (CGN), which translates private IP addresses within a carrier's network to a smaller number of public IP addresses in much the same way that ordinary NAT lets individuals and organizations use multiple internal IP addresses.
However, CGN brings with it a number of issues that limit its appeal. For one thing, it's expensive for carriers, and the money they spend on it could be more productively applied to IPv6-ready hardware. For another, a great deal of Internet infrastructure relies on the premise that a single public IP address uniquely identifies a carrier subscriber. CGN breaks that assumption, which means that it breaks geolocation services and impedes law enforcement organizations’ ability to identify users.
Carriers can also purchase surplus IP addresses from other carriers. ARIN has a well-defined process that lets organizations transfer IPv4 addresses. Some organizations have also transferred addresses without ARIN approval – what some have called a black market in IPv4 addresses.
ARIN is also helping to ease the pain by reclaiming unused addresses from, say, ISPs that have gone out of business, although that number is relatively small and won’t materially affect the date upon which all IPv4 addresses are gone. ARIN is also now parceling out smaller and smaller blocks of IPv4 numbers and tightening the criteria for approval of new addresses.
But IPv4 workarounds will only last for so long and most organizations are recognizing that fact and moving, if grudgingly, to IPv6. Roberts says, "There's a light at the end of the tunnel."
Where are we headed?
Comcast recently announced that it now has the world's largest IPv6 deployment. In a post on Comcast’s site, Brzozowski said, “Today, over 25 percent (and growing) of Comcast’s Xfinity Internet customers are actively provisioned with native dual-stack broadband Internet service. Native IPv6 support has been deployed to over 75 percent of our broadband network, and our goal is 100 percent in early 2014.”
Not all service providers have been as proactive, however. According to Internet Society measurements, Verizon shows no IPv6 presence.
All the major enterprise router vendors, and most vendors of small office routers, offer products with IPv6 support. A growing ISP or an expanding business should have no trouble finding hardware that supports IPv6.
As with IPv6 deployment among access providers, deployment among content providers is growing. Among websites, according to Roberts, the five top sites as measured by Alexa all support IPv6, and they account for a substantial portion of total IP traffic. One of those sites, Google, continually collects statistics about IPv6 adoption and shares them in a graph whose curve shows a steady upward trend.
However, while the shape of the curve is encouraging, in absolute terms the number of users accessing Google via IPv6 is barely above 3% of all users. Still, "that's more than double what it was a year ago," Roberts says, and IPv6 traffic is growing at a faster rate than IPv4, which Roberts sees as a promising sign.
The Internet Society also makes ongoing measurements of IPv6 deployment on its World IPv6 Launch site. It shows that 13 percent of the Alexa Top 1,000 websites are currently reachable over IPv6. "That number was 10 percent a year ago," Roberts said. In addition, the Internet Society checks the number of network operators who are turning on IPv6. "The first time [we reported on the statistics] we had about 70 networks," Roberts says. "Now we're up to 226."
With endpoint hardware providers, IPv6 readiness is a mixed bag. "A lot of devices in the home don't use it yet," Roberts says. However, the fast-growing cell phone market is a different story. Cell carriers are making progress supporting IPv6-enabled devices. For instance, Roberts points to Verizon Wireless. "All of its new smartphones have IPv6 enabled," he says, and T-Mobile recently announced that its Android 4.4 phones will default to IPv6 only for connecting to its mobile network.
Some gaming console manufacturers too are jumping on the bandwagon. In October, Microsoft's Chris Palmer announced at NANOG 59 that the Xbox One gaming console will use IPv6 with IPsec for peer-to-peer communication between gamers, and said that performance will be best when end-to-end communication is over IPv6.
That end-to-end, IPv6 connection may be elusive when content delivery networks are involved. Some CDNs, such as Limelight, turn on IPv6 by default for their customers, but others, such as Akamai, do not. Akamai's Erik Nygren says, "Most of our customers have very rich environments that still require end-to-end testing prior to dual-stacking."
One problem is that customer-premises equipment (CPE) has to be capable of supporting IPv6 and properly configured to do so, and not all CPE currently in production can claim that. Nevertheless, Akamai reported in June that roughly 1.5 percent of the content requests it sees come in over IPv6 – a rate that is about double what it saw a year previously.
Over the entire network ecosystem, including carrier hardware and networks, CDNs, corporate networks, home electronics, mobile devices, and content providers, there is steady progress in IPv6 implementation.
Unallocated IPv4 address blocks are gone forever. However, carriers still have IPv4 addresses available for allocation, so IPv4 addresses will remain in use for some time to come. And though there may be no immediate crisis for service providers, businesses, or customers, there is steady pressure to enable IPv6 in every segment of the network ecosystem as the best way to address IPv4 address scarcity.
No one seems willing to predict a date by which the last IPv4 packet will traverse the Internet backbone, but we are seeing clear progress toward IPv6 critical mass in the form of dual-stack implementations in enterprise, mobile, and home-based devices and operating systems.
Once it becomes clear we’ve reached an inflection point, when service and content providers can count on dual-stack users, and users can count on the availability of IPv6-enabled content, the pace of adoption should quicken. Just as no one needs to be the first to support IPv6, no one wants to be last either.
The reality is, Roberts says, "It takes a while to transition. After all this is done it would be a great graduate thesis for someone to see why it has taken so long."
Lee Schlesinger is Network World’s former test center director. You can follow him on Twitter @leeschlesinger. | <urn:uuid:fdc00866-4fa2-437f-8519-8bca0afafc4f> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2174297/lan-wan/whatever-happened-to-the-ipv4-address-crisis-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00294-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94558 | 2,165 | 2.765625 | 3 |
There’s an interesting use case for GPU computing written up over at EE Times this week. Uri Tal, founder of Rocketick, an Israeli company that provides GPU-accelerated software for electronic design automation (EDA), authored the article and went into some detail about why GPUs are such great computational engines for doing chip simulations.
First, though, he described why EDA applications are not so great when running on CPUs. Most of it centers around the lack of data locality these applications exhibit. CPUs rely heavily on large caches to avoid the considerable latency costs of accessing main memory. But since EDA data sets are too large to fit into cache and the application’s access pattern is somewhat random, memory bandwidth becomes a bottleneck. And since chip designs are getting larger and more complex, a cache-based architecture probably won’t be able to catch up.
Not so for GPUs, which are built with data parallelism in mind and are hooked to graphics memory (GDDR5) that provides higher bandwidth than CPU-grade memory. Writes Tal:
GPU’s are perfectly suited for data-parallel algorithms with huge datasets. In the most recently developed GPUs there are more than a thousand processing cores, organized in SIMD groups. All that is required is that you launch several million short-lived independent threads that need not communicate with each other. The memory latency can be perfectly hidden by switching between “waiting” threads to “ready” threads very efficiently. Instead of optimizing for the latency of the single thread, optimization is for throughput – the number of threads that can be processed in specific time duration.
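To make the quoted idea concrete, the workloads that map well onto a GPU look like the loop below: every iteration is independent, touches only its own data and never communicates with its neighbors. On a GPU, each iteration would become one of those millions of short-lived threads; this plain C version is only a sketch of the access pattern, not GPU code.

```c
#include <stdlib.h>

/* Evaluate a simple gate for every element of a large netlist-like array.
   Each iteration is independent: no shared state, no ordering requirement.
   This is the shape of work a GPU can spread across thousands of cores. */
void eval_and_gates(const unsigned char *a, const unsigned char *b,
                    unsigned char *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        out[i] = a[i] & b[i];   /* one independent "thread" of work per element */
    }
}

int main(void) {
    size_t n = 1u << 20;        /* about a million independent work items */
    unsigned char *a = malloc(n), *b = malloc(n), *out = malloc(n);
    if (!a || !b || !out) return 1;
    for (size_t i = 0; i < n; i++) { a[i] = i & 1; b[i] = (i >> 1) & 1; }
    eval_and_gates(a, b, out, n);
    free(a); free(b); free(out);
    return 0;
}
```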
But getting EDA software to take advantage of GPUs is not a slam dunk. It rests on being able to parallelize the application so that dependencies between threads are minimized. Tal said they had to redesign both the EDA software structure and the underlying algorithms to make that happen.
According to him, the redesign paid off, resulting in chip simulations that ran 10 to 30 times faster. Better yet, the Rocketick software can run on multiple GPUs and will automatically deliver more performance as newer, bigger, and quicker GPUs are rolled out.
Although not mentioned in Tal’s writeup, it’s worth mentioning that NVIDIA uses GPU-accelerated tools to design and verify its own hardware. Back in 2010, at least, NVIDIA was using Agilent software as part of their chip design workflow, employing a small in-house GPU cluster. At the time, the GPU maker was evaluating Rocketick’s offering and the early results looked “promising.” | <urn:uuid:2c13bd04-4e4e-4659-b22e-c0935ac9f8a5> | CC-MAIN-2017-09 | https://www.hpcwire.com/2012/05/03/designing_faster_chips_with-_faster_chips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00294-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958634 | 536 | 2.96875 | 3 |
castAR Makes the World Your 3D Playground
Do you remember the scene in an early Star Wars movie where R2-D2 and Chewbacca played a holographically generated, Jedi 3D board game while killing time in their light freighter? Full-color, three-dimensional game pieces were projected onto the board. Chewbacca wasn't a particularly good loser, and R2-D2 blipped and beeped a lot.
Well, a few decades later, we may be about to see a projected augmented reality system -- for real. Kickstarter project castAR is a 3D holographic projection system that lets you position objects in virtual space. Applications include gaming.
How it Works
A pair of glasses is equipped with two projectors, one over each eye. The projectors cast a 3D view onto a highly reflective surface.
A tracking camera, installed in the glasses, picks up identification markers embedded on the reflective surface.
The reflective surface is designed to reduce scattering of light, enabling multiple players to see the projections, and the camera lets software track the player's head in relation to the physical scene. Software then portrays the projected scene.
Key add-ons to castAR include a magic wand, which acts as a controller and is also tracked; a radio-frequency identification, or RFID, grid that underlies the reflective surface, along with RFID bases that can be attached to existing miniature game pieces; and a non-projection virtual reality, or VR, and non-projection augmented reality, or AR, clip-on glasses attachment.
The difference between AR and VR is that AR includes a view of the real world too, not just a totally fabricated virtual world, as is the case with VR.
The creator reckons that the glasses will ultimately weigh less than 100 grams and will fit over prescription glasses if necessary. The in-glasses camera detects movements to the sub-millimeter and only processes the image and analyzes it, sending results to the PC, thus reducing processor requirements for the PC.
One version of the optional RFID existing miniature bases can track and provide two-way communication for miniature electronics, like future-developed motors.
Technical Illusions currently has roughly 1,500 backers for castAR who are contributing more than US$350,000 of a $400,000 goal. The funding period ends on Nov. 14.
A contribution of $189 gets you the starter package, including the glasses and a one-meter by one-meter reflective surface. A $395 contribution gets you a two-player gaming set-up, with two magic wands and the larger one-meter by two-meter surface.
The estimated shipping date is September 2014.
From a crowdfunding, jump-in perspective, we like the fact that this potential product is self-contained and not dependent on other technology becoming ready. As a counter-example, the Oculus Rift virtual reality headset -- thus far only available to developers -- is also spurring crowdsourced add-ons, like the Transporter3D telepresence add-on.
castAR could be a more tangible project to get involved in.
Undoubtedly this genre of gaming device -- the immersive virtual and augmented environment -- is just waiting to explode onto the gaming market.
We think that it's going to be a question of who can best manage latency, processing power requirements, physical size and weight, and VR-induced nausea.
AR, which includes the physical world, has the advantage that it's less likely to cause seasickness-like nausea, a common side-effect with all-immersing VR that's created by the body and brain getting discombobulated.
castAR, in its native form, is a projected form of AR that may well get the combination of real-world and virtual correctly mixed.
This is a rapidly developing area and a number of devices may come to market at the roughly same time. They include VR goggles a la Oculus Rift; retina projectors that don't use screens at all; elaborate telepresence 2D and 3D processors for VR goggles; and this, the castAR holographic projector.
It will be the gamers who decide who wins this battle for the next generation of game interfaces. | <urn:uuid:0e9845ec-fc4d-41c2-a40c-83ba2c7d6673> | CC-MAIN-2017-09 | http://www.linuxinsider.com/story/linux-community/79194.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00414-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929035 | 876 | 2.515625 | 3 |
Molinari-Jobin A., KORA
Kery M., Swiss Ornithological Institute
Marboutin E., ONCFS
Molinari P., Italian Lynx Project
And 8 more authors.
Animal Conservation | Year: 2012
Inferring the distribution and abundance of a species from field records must deal with false-negative and false-positive errors. False-negative errors occur if a species present goes undetected, while false-positive errors are typically a consequence of species misidentification. False-positive observations in studies of rare species may cause an overestimation of the distribution or abundance of the species and distort trend indices. We illustrate this issue with the monitoring of the Eurasian lynx in the Alps. We developed a three-level classification of field records according to their reliability as inferred from whether they were validated or not. The first category (C1) represents 'hard fact' data (e.g. dead lynx); the second category (C2) includes confirmed data (e.g. tracks verified by an expert); and the third category (C3) are unconfirmed data (e.g. any kind of direct visual observation). For lynx, which is a comparatively well-known species in the Alps, we use site-occupancy modelling to estimate its distribution and show that the inferred lynx distribution is highly sensitive to presence sign category: it is larger if based on C3 records compared with the more reliable C1 and C2 records. We believe that the reason for this is a fairly high frequency of false-positive errors among C3 records. This suggests that distribution records for many lesser-known species may be similarly unreliable, because they are mostly or exclusively based on unconfirmed and thus soft data. Nevertheless, such soft data form a considerable part of species assessments as presented, for example in the International Union for Conservation of Nature Red List. However, C3 records can often not be discarded because they may be the only information available. When inferring the distribution of rare carnivores, especially for species with an expanding or shrinking range, we recommend a rigorous discrimination between fully reliable and un- or only partly reliable data, in order to identify possible methodological problems in the distribution maps related to false-positive records. © 2011 The Authors. Animal Conservation © 2011 The Zoological Society of London. Source
Chapron G., Swedish University of Agricultural Sciences
Kaczensky P., University of Veterinary Medicine Vienna
Linnell J.D.C., Norwegian Institute for Nature Research
Von Arx M., KORA
And 76 more authors.
Science | Year: 2014
The conservation of large carnivores is a formidable challenge for biodiversity conservation. Using a data set on the past and current status of brown bears (Ursus arctos), Eurasian lynx (Lynx lynx), gray wolves (Canis lupus), and wolverines (Gulo gulo) in European countries, we show that roughly one-third of mainland Europe hosts at least one large carnivore species, with stable or increasing abundance in most cases in 21st-century records. The reasons for this overall conservation success include protective legislation, supportive public opinion, and a variety of practices making coexistence between large carnivores and people possible. The European situation reveals that large carnivores and people can share the same landscape. Source | <urn:uuid:0b274e52-f7cd-41ad-9ff7-d38100fd96f3> | CC-MAIN-2017-09 | https://www.linknovate.com/affiliation/bavarian-environment-agency-46713/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00414-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.892906 | 702 | 2.78125 | 3 |
U.S. researchers have developed a camera chip that could give smartphones the ability to take 3D scans of everyday objects, a sought-after feature in the 3D-printing world.
Engineers at the California Institute of Technology (Caltech) said their device is based on a cheap silicon chip less than 1 millimeter square and it can produce 3D scans with extremely fine resolution.
The chips could be incorporated into phones and the data could be sent to 3D printers to duplicate scanned objects, eliminating the need to use large desktop devices.
The device works by shining beams of light, which are perfectly aligned, on a targeted object. It then detects subtle differences in the light that is reflected back from that object. The differences help it build a digital 3D image of the target.
To shine the light, the device uses an array of tiny LIDAR (light detection and ranging) laser beam scanners. Useful for measuring distance, LIDAR elements have been used for years in applications such as navigation for driverless cars and robots.
The light that is reflected off the object is picked up by a small 4 x 4 grid of detectors, as the researchers describe in a study published in the journal Optics Express.
The detectors act like pixels in that they measure the phase, frequency and intensity of the incoming light and assign a distance value to each pixel in the 3D image of the object that has been scanned.
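The study describes the chip's signal processing only at a high level, but the basic step of turning a measured phase shift into a per-pixel distance can be sketched with the standard continuous-wave ranging relation. The modulation frequency and phase value below are illustrative assumptions, not parameters of the Caltech device:

    import math

    C = 299_792_458.0  # speed of light, m/s

    def distance_from_phase(phase_shift_rad: float, mod_freq_hz: float) -> float:
        """Round-trip phase shift of a modulated beam -> target distance.

        Classic continuous-wave phase-shift ranging: the beam travels to the
        target and back (2 * d), accumulating phase 2*pi*f*(2d/c), so
        d = c * phase / (4 * pi * f). The result is unambiguous only within
        half a modulation wavelength.
        """
        return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

    # Illustrative numbers: a 100 MHz modulation tone and a 60-degree shift
    print(distance_from_phase(math.radians(60), 100e6))  # ~0.25 m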
The researchers used the proof of concept camera chip to create a 3D scan of a U.S. penny from half a meter away. The scan features micron-level resolution as well as the larger undulations on the penny’s surface that are nearly invisible to the naked eye.
The 16-pixel array could be increased to hundreds of thousands to create larger, more powerful arrays for applications such as helping driverless cars avoid obstacles, according to Caltech.
“The small size and high quality of this new chip-based imager will result in significant cost reductions, which will enable thousands of new uses for such systems by incorporating them into personal devices such as smartphones,” Caltech electrical engineering professor Ali Hajimiri said in a release last week. | <urn:uuid:e1e63d9b-20f9-45be-9a28-49258b072dfc> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2906234/camera-chip-could-turn-phones-into-3d-scanners.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00166-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94368 | 449 | 3.5 | 4 |
NASA, with an eye toward Earth-based projects, is calling on software and hardware developers to create new technologies for addressing issues around coastal flooding.
The space agency on Wednesday launched its third global "codeathon," this time featuring a challenge focused on coastal flooding. The call to arms, in addition to three other climate-themed challenges, shows NASA's interest in amassing knowledge and solving problems revolving not just around space exploration, but around social needs as well.
The agency hopes participants will leverage federal data to create simulations and other technology that could help people understand their exposure to coastal-inundation hazards and other dangers, NASA said.
The initiative was announced alongside a new effort by the White House to make climate change data more accessible to the public and researchers alike, as part of President Obama's open data project. Other groups like Google and the World Bank will contribute data of their own to the project, the White House said, which is designed to help Americans plan for climate impacts.
NASA has some interesting ideas about what applications might come out of the two-day challenge, which will be hosted at nearly 100 locations across the world next month. New technologies could help coastal businesses understand their level of exposure to flooding risks, or the extent to which they might be affected by sea level rise and coastal erosion in the future.
"Solutions developed through this challenge could have many potential impacts," said NASA chief scientist Ellen Stofan, in the agency's announcement.
NASA's coastal flooding challenge is one of four climate-related challenges using data provided by NASA, the National Oceanic and Atmospheric Administration and the Environmental Protection Agency. But beyond coastal flooding, the codeathon includes more than 40 new challenges related to other areas including robotics, human spaceflight and asteroids. Half of the challenges are focused here on Earth.
In total, NASA hopes participants will make use of more than 200 data sources to build their applications for the event.
The codeathon could be seen as the agency's version of the obligatory hackathon now hosted by many Internet companies. The events usually have developers break from their usual routine and work feverishly to knock out code aimed at new applications or products.
NASA's effort seeks to capitalize on President Obama's "open government initiative," which aims to make data from U.S. government agencies, like from NASA, more easily accessible online. The codeathon will be held April 12-13. | <urn:uuid:3a5c7846-1953-452e-9c37-e906fa469794> | CC-MAIN-2017-09 | http://www.cio.com/article/2377781/internet/nasa--codeathon--challenge-seeks-apps-for-coastal-flooding.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00342-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951911 | 487 | 3.015625 | 3 |
If you ask the average consumer what their first order of business would be to learn about a new restaurant or research hotel reservations, they'd most certainly head to the Internet. Your company's online presence is exceptionally important, and plays a key role in your potential guests' purchasing decisions. Yet despite the importance of online image, many hotels and restaurants fall short in a key area of website design: general accessibility for all populations.
A website's 'accessibility' specifically refers to web content that is available to all individuals, regardless of any disabilities or environmental constraints. Users may be operating in situations under which they cannot see, hear, or move, or they may have difficulty processing some types of information, reading or understanding text, or may be unable to use a keyboard or a mouse.
According to the U. S. Census Bureau, there are about 51.2 million Americans with some level of disability and 32.5 million people with a severe disability. Furthermore, the proportion of people with disabilities grows as the baby boomer generation ages. People between the ages of 45 and 54 have an 11.5 percent chance of developing a disability, and those chances increase dramatically between the ages of 55-64. Almost 54.5 percent of the population over 65 years of age has a disability.
The Internet can offer stimulating opportunities to people with disabilities, while providing independence and freedom. But if a website offers low accessibility or provides vague information, then technology will be of little help in communicating with users who have visual impairments.
Weak access discovered
To evaluate website accessibility, I worked with a graduate student from the University of Delaware, Lina Xiong, to conduct a study of 100 randomly selected hotel and restaurant websites. More than half of all those evaluated could not be viewed successfully by people with disabilities. Most of the hotel websites we analyzed failed the majority of our evaluation parameters; the single largest cause of failure was a lack of alternative text for non-text materials. This result is consistent with previous research, and such failure is relatively easy to rectify. Restaurant websites fared slightly better than hotel websites, though this could be largely attributable to their general simplicity by comparison.
Tips for improvement
Consider following these guidelines for web design to provide universal access to all guests:
- Tap best practices. Become familiar with Section 508 standards, an amendment to the Rehabilitation Act that works to eliminate barriers in information technology in Federal environments. Although Section 508 only applies to Federal agencies, it offers a comprehensive picture of how to improve information technology accessibility.
- Test current sites. Run an online testing tool such as Cynthia Says (www.cynthiasays.com) to determine what accessibility measures are missing, based on Section 508 standards or Web Content Accessibility Guidelines (WCAG 1.0), established by the World Wide Web Consortium.
- Look for easy areas to improve. In some cases, simply adding alternative text for non-text materials can greatly enhance accessibility; a quick automated check for missing alternative text is sketched just after this list.
- Focus on content. Reconsider the balance between 'presentation' and 'usability' of a website; content is ultimately king. Decrease flash-based content or animation elements that may present difficulties in compatibility for assistive technologies or other users with lower versions of necessary viewing software.
- Create a second site. Redesigning large, complex websites to be more accessible can be costly and labor intensive. Consider instead creating a 'mirror website' that includes all of the necessary content from the original website, without any elements that may hinder accessibility. Offer a prominent link on the nav bar directing users to the mirror website.
- Understand user disabilities. Only through an outside-in approach can web designers develop an accessible website which can better reach business revenue potential and offer enhanced customer interaction. | <urn:uuid:aa862117-514b-4203-91fe-fa54de5372e9> | CC-MAIN-2017-09 | http://hospitalitytechnology.edgl.com/magazine/September-2008/Improving-Web-Accessibility55301 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00110-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.911999 | 763 | 2.515625 | 3 |
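As a concrete illustration of the "easy areas" point above, here is a minimal sketch of an automated check for images that lack alternative text, using only the Python standard library. The URL is a placeholder, and a real audit would also cover form labels, headings and the other Section 508/WCAG checkpoints:

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class AltTextAudit(HTMLParser):
        """Collects <img> tags that have no alt attribute at all, one of the
        cheapest accessibility gaps to find and fix."""
        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                self.missing.append(dict(attrs).get("src", "<no src>"))

    url = "https://www.example.com"  # placeholder: page to audit
    audit = AltTextAudit()
    audit.feed(urlopen(url).read().decode("utf-8", errors="ignore"))
    print(len(audit.missing), "image(s) without alt text:", audit.missing)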
Primer: Ajax
By Baselinemag | Posted 2005-11-08
Ajax, a collection of programming technologies, delivers online content to users without reloading an entire page.
What is it? Ajax is a buzzword that describes a collection of Web-oriented programming technologies, all of them several years old, for creating Web applications that behave more like traditional computer programs.
What does it do? A Web application built with Ajax efficiently delivers additional information to a browser when someone clicks on a button or moves the mouse cursor over a part of the page, without refreshing the entire page. That, ideally, results in Web pages that respond almost as if they were a locally installed program. For example, video rental company Netflix uses Ajax to automatically pop up a movie's synopsis, complete with a thumbnail image of the movie poster, when a customer moves a cursor over titles in a list of search results. Previously, the site required loading to a brand-new page if a user wanted to find out more about a movie.
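The browser side of an interaction like Netflix's is JavaScript, but the pattern only works because the server exposes small fragments of data rather than whole pages. A hypothetical sketch of that server half, written in Python with Flask (the route and catalog are invented for illustration):

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Stand-in for a real movie database
    MOVIES = {"m101": {"title": "Example Film", "synopsis": "A short summary."}}

    @app.route("/api/movies/<movie_id>")
    def movie_details(movie_id):
        # Return only the small fragment the page needs; the browser-side
        # script merges it into the page that is already displayed.
        return jsonify(MOVIES.get(movie_id, {"error": "not found"}))

    if __name__ == "__main__":
        app.run()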
Why is this stuff getting attention now? Because some big Web sites have recently provided examples of useful (and fun) Ajax-based applications. Google Maps (maps.google.com), introduced in February, shows street addresses on a map and then lets you scroll in different directions without having to wait for the page to reload. "Sites built using the Ajax approach are easy to use and very cool," says Brian Goldfarb, a product manager in Microsoft's Web Platform and Tools group. "It gets you emotionally connected."
Why else is it interesting? Ajax works with most standard Web browsers and any Web server, unlike proprietary technologies for creating interactive Web applications that require additional software (such as Macromedia's Flash). Although technically the Microsoft-developed XML code that is part of Ajax isn't an industry standard, major browsers, including Microsoft's Internet Explorer and the open-source Firefox, work with Ajax-based pages.
What's the downside? It's very hard to do. Creating an Ajax application from scratch is like having to build a brick wall but first having to figure out how to create the bricks. "Sexy Web pages are great," says Forrester Research analyst Mike Gilpin, "but the dark side to Ajax is that it's really, really labor intensive." That's why Ajax-like applications haven't achieved widespread popularity.
Will it get easier? Yes. Web development tools vendors are delivering better building blocks for Ajax. In September, Microsoft demonstrated Atlas, a set of prebuilt programming "libraries" that wrap Ajax technologies into discrete, functional pieces of code. Tibco, an application-integration software company, last year bought General Interface, a six-person startup that developed a tool for creating Web interfaces with Ajax. For Tibco, Ajax is no mere decorative trifle: "It's for people who want to create rich applications," says Kevin Hakman, a marketing director at the company, "and eliminate the installation of software on the desktop." | <urn:uuid:17aa09b9-bf4c-47df-9a1b-6e2705c98698> | CC-MAIN-2017-09 | http://www.baselinemag.com/it-management/Primer-Ajax | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00162-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.904734 | 611 | 2.734375 | 3 |
Lighter-than-air craft were the wave of the future until the Hindenburg burned up in full view of movie cameras in 1937. Now with few exceptions — like aerial coverage of football games — they seem like toys or novelties. But something is stirring in the realm of lighter-than-air. NASA and the Defense Advanced Research Projects Agency recently gave California-based Aeros $35 million to develop a 500-foot-long helium-filled airship that’s purported to carry more than 60 tons of cargo. And Google is floating Project Loon, the idea that Internet connectivity can be established on a network of balloons 12 miles up in the air.
Balloons also are being tested to lift wireless communications platforms high above areas stricken by disaster. Although balloons aren’t the only method for doing so, they have some unique characteristics that might help round out disaster response at the local level.
In a major disaster, ground-based communications systems are most likely knocked out, and in 12 hours or so, generators and backup batteries will grind to a halt. It may be days before organized disaster relief arrives, and so an emergency communications system is needed that local authorities can deploy within 12 hours and continue operating for 72 to 96 hours.
To fill that critical gap, the FCC, FEMA and others are looking at Deployable Aerial Communications Architecture (DACA) for emergency communications, hoisted above the Earth on aircraft, drones, helicopters, satellites or balloons.
But getting the equipment airborne is only the first challenge. Using DACA in real-world emergencies will require multiple coordination points to avoid a range of concerns.
One worry is potential interference with ground-based wireless systems that may still be operating after a disaster, or that are brought back into service following the disruption. Gregg Riddle, former president of the Association of Public-Safety Communications Officials (APCO International), for example, last year applauded an FCC inquiry into the technology as a first step toward “identifying whether and how DACA can be used … without creating interference to other emergency radio communications.” Avoiding interference will require coordination with the National Telecommunications and Information Administration (NTIA), which regulates radio spectrum.
The FAA will be involved to prevent collisions between balloons or drones carrying communications gear and other aircraft. In addition, areas near Canada or Mexico need coordination with the U.S. State Department, if operations, aerial equipment or spectrum use might impact a neighboring country.
So just how well would a balloon-based communications system work, and could local authorities launch and manage it? To find out, Reston, Va.-based Oceus Networks and two of its partners — Space Data Corp. of Chandler, Ariz., and NTIA Public Safety Communications Research of Boulder, Colo. — conducted a test in July in Adams County, Colo., one of the first areas to pilot FirstNet, a nationwide public safety wireless network. The test temporarily used FirstNet bandwidth to avoid interference issues and a special “steerable” balloon package.
Balloons have their limitations. Weather balloons fly at relatively low altitudes and drift out of range, requiring periodic launches to sustain communications with the target area. Balloons can also be tethered, but at such low altitudes, area coverage is restricted. For the Adams County test, a series of hydrogen-filled high-altitude balloons provided a lift into the stratosphere, which at moderate latitudes ranges between 30,000 and 160,000 feet high. The “steerable” part includes technology that lets the balloon drop sand to ascend and vent gas to descend. Up and down is good enough to catch a ride with the wind. For the test, Space Data provided meteorological support.
“I was really impressed at how well the guys could pilot the balloon,” said Jim Patterson, vice president of Oceus Networks Public Federal Solutions. The day of the test, the jetstream was blowing about 120 mph toward Denver, which caused the balloon to move toward that city, he said. But once the balloon reached 50,000 feet, a 6 mph breeze pulled it back toward Boulder, near Adams County.
“They targeted the recovery point very early on in our launch, and they hit it exactly,” Patterson said. “Potentially you can kind of orbit over an area.” The higher the altitude, the greater the coverage area. At 75,000 feet, the area targeted for coverage was about 38 square miles.
According to Doug Sharp, Oceus Networks’ director of engineering, the communications gear carried by the balloon is developmental. “At this point, there are not a lot of commercial payloads rated for that altitude,” he said. The test payload weighed about 50 pounds and consisted of a full 4G long term evolution (LTE) network in a box, or “network on wheels” minus the wheels.
“It was a full, self-contained deployable network that we flew on the balloon,” Sharp said. (Specifically it was a 20-watt LTE 10 megahertz Y frequency division duplex carrier in Band 14, according to Sharp.)
For the test, the team wanted to use traditional on-the-ground equipment, which included Motorola Solutions’ dongle for access to the public safety LTE network, a modem that’s designed for use in vehicles and an LTE-enabled smartphone. Tests were conducted using the different devices. “All communicated to the balloon, but it’s not limited to those,” Sharp said. “Any Band 14-certified device could be utilized.”
Do first responders think public safety communications via a balloon is more than hot air? Bill Schrier, who was involved in the development of FirstNet and now works for the Washington state CIO, articulated some concerns about balloon-based communications, including vulnerability to wind currents that accompany hurricanes, tornadoes and windstorms. He added that covering such a large area on the ground would mean hundreds or thousands of responders would all share the same bandwidth and signal. In addition, connecting the transmission equipment in the balloon to the Internet — known as backhaul — would rule out fiber and make microwave transmission difficult.
Sharp acknowledged that there are still issues to hash out before real-world deployment, but he said the test confirmed that communications from a high-altitude platform could cover extended distances.
Sharp said that focusing the LTE coverage area would be important to avoid saturating the bandwidth and there are some backhaul options that weren’t included in the first test phases, such as a dedicated microwave backhaul using steerable high-gain antennas or even Wi-Fi or WiMAX technology.
“A second possibility is to utilize the newly standardized feature of LTE for in-band backhaul,” said Sharp. “This would result in reduced user-to-user throughput, but would facilitate simpler backhaul configurations.”
One thing the team learned was not to launch a high-altitude balloon on July 2. “We launched this on World UFO Day,” Sharp said. “And the switchboard lit up, as I’m sure it did at the police station. Our balloon payload was 75,000 feet above Denver and people could see it from the ground.”
Other lessons learned included the effects of cold, thin air and low pressure on electronics, which are being applied to future platforms, Sharp said.
“We were able to close the link from the ground to the air and at various distances run data rates of anywhere from 5 megabits per second to 20 megabits on the downlink to the local stations, and we were able to close the link and communicate with the payload. We are still looking at do we have enough data to tell us about the overlap to public safety communications.”
Although the test used a balloon, emergency communications gear can just as easily be lifted by helicopter or other means. The real value, Patterson said, is what you can do with the bandwidth. “People can wear biobelts so you can keep track of where they are, beam video back and forth, give real-time situational awareness with whiteboarding and exit routes. It’s a very flexible, dynamic technology. We just tried one of the hardest-use cases for our experiment.” | <urn:uuid:3dc7987d-a4cf-4e7e-a24c-9a1b6e776964> | CC-MAIN-2017-09 | http://www.govtech.com/data/Balloons-for-Emergency-Communications-Not-Just-Hot-Air.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00038-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952161 | 1,721 | 3.265625 | 3 |
Researchers at MIT’s Lincoln Lab have developed a radar system that can “see” through concrete walls up to eight inches thick. Developed with an eye toward military applications, the device, it’s claimed, can operate from 60 feet away. Using wavelengths similar to those employed by Wi-Fi, the device yields real-time video of moving objects behind walls. Currently, the system displays moving things – such as people – as blobs that researcher Gregory Charvat said in an MIT news release “requires a lot of extra training” to understand. But Charvat and his colleagues are working on enhancements to improve the images.
Dynamic Multipoint Virtual Private Network (DMVPN)
A Dynamic Multipoint Virtual Private Network (DMVPN) can be used with other networks like Multiprotocol Label Switching (MPLS), but streaming multicast is accomplished quite well using "Default" and "Data" Multicast Distribution Trees (MDTs) with MPLS.
Before implementing a Dynamic Multipoint Virtual Private Network (DMVPN) as a hub and spoke solution, or streaming multicast with a DMVPN, an explanation of DMVPN may be in order for many of us trying to implement this solution. All examples of VPNs in this paper cross the public Internet. DMVPNs could be used with other networks like Multiprotocol Label Switching (MPLS), but streaming multicast is accomplished quite well using "Default" and "Data" Multicast Distribution Trees (MDTs) with MPLS.
A DMVPN is not a protocol so there are no configuration commands that trigger it like "ip dmvpn xxxx." A DMVPN is instead a network design. This design allows remote sites/spokes in a "Hub and Spoke" or "Star" VPN router topology to connect to each other directly without sending the traffic/data packets through the Hub. In other words, it is one hop rather than two hops which is sometimes called a hairpin turn. Most of this paper will describe Phase 2 DMVPN design. Phase 3 is also available and the differences are explained at the end of this paper.
The DMVPN design is made up of the following technologies, which will be explained separately:
1. Multipoint Generic Routing Encapsulation (mGRE)
2. Next Hop Resolution Protocol (NHRP)
3. Routing protocol (EIGRP is often mentioned as a good choice)
4. IPsec encryption
Next Hop Resolution Protocol (NHRP) was originally used in non-broadcast multi-access networks (NBMA), like Frame-Relay and Asynchronous Transfer Mode (ATM). Devices/routers connected to an NBMA network typically are all on the same IPv4 subnet. Broadcasts and multicasts do not reach all devices like they do on an Ethernet network because legacy NBMA networks are usually Layer 2 WAN implementations with no routing inside the WAN. Without routing inside the NBMA network, spoke routers have to go through the hub router to get to another spoke as described above. This limitation could cause a bandwidth bottleneck at the hub router. One solution is to put the routers in a full mesh topology, but spoke routers would need an extensive configuration and the expense of extra virtual circuits to reach each other in one hop.
Like any NBMA configuration, there needs to be a mapping, either static or dynamic, of the IPv4 next-hop address to the NBMA next-hop address; in Frame Relay the NBMA next-hop address is a DLCI number. An example with only four locations is still manageable, but a full-mesh topology of twenty routers would need a total of 190 virtual circuits and 380 map statements. A more scalable, less expensive solution is needed that also does not cause a bandwidth bottleneck at the hub router.
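Those figures follow directly from the full-mesh pair count; a quick sketch of the arithmetic:

    def full_mesh_cost(routers: int) -> tuple[int, int]:
        """Virtual circuits and total map statements for a full mesh."""
        circuits = routers * (routers - 1) // 2   # each pair shares one PVC
        map_statements = routers * (routers - 1)  # each router maps every peer
        return circuits, map_statements

    for n in (4, 20, 50):
        vcs, maps = full_mesh_cost(n)
        print(f"{n:>3} routers: {vcs:>5} virtual circuits, {maps:>5} map statements")
    # 20 routers -> 190 virtual circuits and 380 map statements, as noted above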
The solution is to use an inexpensive Layer 3 WAN like the public Internet. The Internet has routers inside the backbone that can make routing decisions, so the 190-virtual-circuit problem goes away: Internet routers already have a path to every destination. Also, Internet routers are fully meshed, or at least heavily partially meshed, so two remote spoke routers likely have a better path to each other than going through a hub router. The problem is that the routers connected to the Internet are likely not all on the same subnet, so they do not form an NBMA network, and NHRP registration and discovery would not take place. Therefore, for spoke routers to connect directly to other spoke routers and provide the security necessary on the public Internet, some form of VPN tunnel configuration is needed. The 380-map-statement problem would still exist as 380 VPN configurations.
"Open source" has come a long way and with the new administration adopting the open source content management system Drupal to power the recovery.gov Web site, open source's visibility will likely get another big boost. Speaking from the standpoint of a developer, the number of tools, utilities and programs available under open source licensing continues to be very exciting. But it is also true that confusions still persist about what it is and, in particular, about its costs. "Open source" and "free" are not synonymous -- though there is a relationship between the two terms.
As with any engineering product, using software requires more than just having access to the application. To take a more concrete example, let's consider the task of building a bridge over a stream -- it involves more than just having a crew pull up to the river and start building. The environmental impact, the needs and concerns of the surrounding community, how to make a connection to the electric grid and even connecting to the existing roads are all factors that need to be taken into account. And that all occurs before the bridge is built. Once construction is done, it requires ongoing maintenance, inspection, repairs and a means of controlling the traffic on it.
But let's get a bit more precise about the analogy. Before the bridge is built, someone needs to have done the engineering work to figure out how the bridge is put together, the size of the beams, etc. A fabrication operation then makes the beams and other pieces needed to do the construction. If the bridge is small, it might be assembled in a shop and transported to the target site. If it is larger, then the fabricated pieces will shipped to the site and assembled in place.
Open source projects are similar: the architectural work has been done and has been made available for general use. Many of the pieces have been fabricated and often those pieces have been assembled and the "bridge" is just waiting to be transported to the installation location.
And while that means a lot of work has already been done and made available without cost, it doesn't mean that the new bridge will be "free." The fact that a general blueprint exists is nice but it may need some tweaking to make it fit for the specific use. This requires a resource that can read and update blueprints. When working on open source projects, you can't necessarily depend on a vendor to supply that resource -- you may have to supply it yourself.
To keep things in perspective, the following is a quick list of the items that commonly need to be taken into account when deciding on open-source software alternatives:
Many open source applications, like commercial software, require configuration that can demand expert-level knowledge. For example, the Apache Web server requires administration that is done primarily by editing one or more configuration files. Configuring Apache is not difficult, but if your staff doesn't have existing expertise in it, then the total cost of ownership will need to include either hiring experts or getting staff up to speed.
By contrast, Microsoft's Web server also requires a great deal of customization, but it is all done through a graphical user interface. Both require expertise. This issue also shows itself when selecting an application that requires ongoing configuration or changes as part of routine use. For example, many open-source content management systems exist and are quite popular. But when evaluating options, it is important to look into the technologies on which they are built. If your staff's expertise is in .NET or ColdFusion, the no-cost license for a PHP-based system may be appealing until you need to get something changed and find that the skills don't exist within your organization to make it happen.
It is safe to assume that any software will require support at some time or another. Commercial software vendors usually provide support for their products. With open source, support options can be less clear. Some open source projects have spawned companies which specialize in providing support but if a company doesn't exist to specifically support a specific open source application, it is important to factor in the true support costs.
Large open source projects often have large communities of users who work to answer questions and address issues. But, in most cases, interacting with the community is most efficient when someone with technical knowledge is asking the questions. So, when making decisions about how to handle support, your planning should include having someone on staff or under contract who understands the product and who can, as needed, interact intelligently with the online community.
Particularly for government agencies, training is an important issue to consider when selecting software. Again, training on open source software is often available from traditional training companies -- at least for popular or large open source applications. But for smaller applications, no formal training may be available. This problem can be compounded by the tendency for some open source applications to focus more on functionality and performance than on user interface. For technologists, the functionality and performance is key and the user interface is something that can be adjusted or "lived with." But for end-users, the user interface is the application.
The intelligence and usability of the navigation, the quantity and quality of online help and access to solid training materials or classes can have a greater impact on application adoption than the application's features. As with support, open source projects often have online training and frequently there are community members who dedicate themselves to helping with documentation and training. But, the safest route for an agency is to have on staff someone with technical knowledge of the application and an ability to train others to fill in the training gaps when needed.
With commercial software products, it is possible to examine the financial health of a company to make a determination about whether the company might be around in five years. With open source, the same evaluation needs to be done but there's often no company to examine, just an online community that is supporting and developing the application. The size and activity of that community may be part of the evaluation, the number of installations of the software may figure into it and the general media buzz about it can also be a factor. The point is the evaluation of viability and longevity is at least as important to do for open source software as for commercial, but the ways to evaluate the software are different.
None of this is to say that commercial software should necessarily be preferred over open source. Open source software provides great possibilities and in some cases is the preferred solution. But it does mean that agencies shouldn't make the mistake of equating "open source" with "free." Total cost of ownership may be less with open source in the long run, but it is not "free," and each case needs to be evaluated against the business purpose, the availability of ancillary services and, above all, the level of in-house expertise available.
"Open source" in its strictest sense refers to the availability of the original work done by an application's developers. But, there are other aspects of what "open source" means in terms of application licensing. Applications are written by developers in a language that is readable by humans. It may require
specialized knowledge to read, but it is readable. In order for the application to actually run on a machine, it must be turned into a language that is understandable to the machine. The process by which the human-understandable form is translated into the computer-understandable form is called "compiling." Once the program has been compiled, it will run on a computer but is no longer readable by humans.
Most of the desktop applications we're familiar with, such as Word and Excel, are only available in compiled form -- Microsoft does not make the uncompiled version available. By contrast, open source software does include the uncompiled form so anyone can make additions or changes to it and compile it themselves. But, according to the Open Source Initiative, "open source" refers to more than just the handling of the application's code -- it also relates to the terms covering the way the application is distributed. Full details are available on the OSI site but the general concept is that open source software must remain open source and freely available to anyone for any purpose. If it is used as the basis for other products, those derivative products must, in their turn, abide by the open source distribution rules. This could have implications for agencies that use open source software as the basis for their own application development projects. In most cases it will not be a problem, but it is certainly an issue that needs to be considered.
So, though "open source" strictly speaking refers to the widespread availability of original developer work-product, it has come to mean much more as regards the ownership of software and the restrictions (or mandated lack of restrictions) on its distribution. | <urn:uuid:dba65c33-5212-4a7f-8cd8-a253b2554f6c> | CC-MAIN-2017-09 | http://www.govtech.com/pcio/Open-Source----Is-it-Free.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00334-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957205 | 1,780 | 2.671875 | 3 |
Computers and mobile devices store, process and transfer highly valuable information. As a result, your organization most likely invests a great deal in protecting them. Protect the end point and you protect the information. Humans also store, process and transfer information — people are in many ways nothing more than another operating system, the Human OS.
Yet if you compare how much organizations invest in securing their computers versus how much effort they put into teaching employees how to safeguard information, you would be stunned at the difference. For example, organizations typically invest in the following resources to protect an end device:
- Antivirus software
- Patch management
- Virtual private networks
- Host-based prevention systems
- Two-factor authentication
- Vulnerability scanning
- End-point encryption
- Log monitoring
Now go down that list and add up the cost for securing each computer. Then add support contracts, help desk phone calls, and how many full-time employees it takes to maintain all of this technology. You probably end up spending $100 or $200 a device.
Now, let's go through the exact same process for people. How much to secure each employee? Hear those crickets chirping? Your organization is most likely spending 20 to 50 times more on securing computers than on securing the Human OS, if it's working with those employees at all.
If finding the dollar amount for each computer is too complex, try a simpler metric. Count how many people you have on your information security team. Now, out of all those people, how many focus on securing technology and how many on securing the Human OS? You probably will end up with a very similar metric, something like 20-1 or 50-1. And organizations still wonder why the human is the weakest link.
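That back-of-the-envelope exercise is easy to script. Every figure below is a placeholder to be replaced with your own tool inventory and budgets; the point is the ratio, not the absolute numbers:

    # Assumed per-device annual costs (USD); substitute your own contracts.
    endpoint_tools = {
        "antivirus": 30, "patch_mgmt": 25, "vpn": 15, "hips": 35,
        "two_factor": 20, "vuln_scanning": 20, "encryption": 25, "log_monitoring": 20,
    }
    employees = 5_000
    awareness_budget = 25_000  # assumed annual spend on securing the Human OS

    per_device = sum(endpoint_tools.values())
    per_person = awareness_budget / employees
    print(f"~${per_device} per device vs ~${per_person:.2f} per employee "
          f"(roughly {per_device / per_person:.0f}:1)")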
Technology is important, and we must continue to invest in and protect it. However, eventually you hit a point of diminishing returns. We have to invest in securing the Human OS as well, or bad guys will continue to bypass all of our controls by simply compromising the human end-point.
Think of it in these terms: Fifteen years ago was the wild, wild West of hacking, the golden age of worms. Cyberattackers could easily compromise millions of systems by randomly scanning every system on the Internet and break into anything that was vulnerable, which was most systems in those days. We in the security community felt a great deal of pain and invested heavily in securing computers. Nowadays, computers come out of the box with firewalls, minimized services, automated patching and memory randomization. Fifteen years later, it has become much harder to compromise a computer.
But in those same fifteen years, what have we done for the Human OS? Nothing. As a result, the Human OS is still stuck in the days of Windows95, WinNT or Solaris 2.5. There is no firewall on by default, all the services are enabled, and this operating system is happy to share data with anyone that asks.
Until we begin to address the human problem, the bad guys will continue to have it easy.
Lance Spitzner is the training director for the SANS Institute's Securing the Human program. | <urn:uuid:46ab876f-9003-4e11-92fa-2ca0e04218b6> | CC-MAIN-2017-09 | http://www.csoonline.com/article/2132691/metrics-budgets/it-s-time-to-start-patching-the-human-os.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00510-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95242 | 652 | 2.5625 | 3 |
CoPP – Control Plane Policing (sometimes expanded as Control Plane Protection). It is the only option for applying flood protection or QoS to traffic destined for the control plane.
In a router's normal operation, the most important traffic is control plane traffic. Control plane traffic originates on the router itself, generated by protocol services running on it and destined for other routers on the network. In order to run properly, routers need to speak with each other. They speak according to rules defined in protocols, and those protocols run as services on the router.
Examples of this kind of protocol are routing protocols like BGP, EIGRP and OSPF, as well as non-routing protocols like CDP.
When a router forms a BGP neighbour adjacency with a neighbouring router, both routers are running the BGP protocol service. The BGP service generates control plane traffic, sends that traffic to the BGP neighbour and receives control plane traffic back from the neighbour.
Control plane policing is important on routers receiving heavy traffic in which too many packets are forwarded to the control plane. In that case, we can filter traffic based on predefined priority classes that we are free to define to match our specific traffic pattern.
Sometimes the best design idea is to borrow someone else’s.
The National Oceanic and Atmospheric Administration did just that to illustrate Great Lakes currents.
The visualization shows current flow and speed by drawing white lines across the blue background of the Great Lakes at six different speeds—on the day of this writing, the strong currents flowing across Lake Superior show clearly why the freighter Edmund Fitzgerald wanted to make Whitefish Bay by the Sault Ste. Marie Locks leading into Lake Huron before it sank in 1975.
The visualization can also be switched from surface currents to depth-averaged currents, which will show how pollution would be moved.
Developers at NOAA adapted code borrowed from the Wind Map visualization introduced in late March 2012 that depicts wind flow across the U.S. It’s also interactive—it lets you zoom into an area and scroll over it to find wind speed and direction. (“It’s beautiful to look at,” wrote Nathan Yau at FlowingData.com.) The Wind Map was developed by Martin Wattenberg and Fernanda Viegas, who created IBM’s Many Eyes visualization project and are now co-leaders of Google’s “Big Picture” data visualization project.
Collaboration and Code Sharing
In fact, there were two stages of borrowing to get to the Great Lakes map. An oceanographer named Rich Signell saw the Wind Map and got permission to use the code for a map of coastal currents in the U.S.
Signell then told a colleague. “He said ‘hey, look what I did’”, says David J. Schwab, an oceanographer at NOAA’s Great Lakes Environmental and Research Laboratory in Ann Arbor, Mich. “And we looked at the Great Lakes and decided there was enough interest in Great Lakes currents, and we went ahead with it,” Schwab says. (His lab has not yet received formal permission to use the code, so it is piggy-backing on Signell’s permission to use it).
Signell had written a script in Python to pull data on coastal currents from NOAA’s databases. In the Great Lakes lab, a research scientist, Gregory A. Lang, tweaked the Python script to work with its databases on Great Lakes currents (among other things, it changes wind speed to current speed). Lang also made it dynamic, because the Great Lakes data updates every six hours.
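The lab has not published that script, so the sketch below is purely illustrative: the feed URL, field names and JSON layout are assumptions standing in for the lab's own forecast databases. The general shape is the interesting part: pull the latest model output, then convert each node's velocity components into the speed and direction the animation expects.

    import json
    import math
    from urllib.request import urlopen

    FEED = "https://example.invalid/glcfs/latest.json"  # hypothetical endpoint

    def fetch_currents(feed_url):
        with urlopen(feed_url) as resp:
            records = json.load(resp)
        converted = []
        for rec in records:
            u, v = rec["u"], rec["v"]  # assumed east/north components, m/s
            converted.append({
                "lat": rec["lat"],
                "lon": rec["lon"],
                "speed": math.hypot(u, v),
                "direction_deg": math.degrees(math.atan2(u, v)) % 360,
            })
        return converted

    if __name__ == "__main__":
        print(fetch_currents(FEED)[:3])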
Lang said it took him about three weeks, working occasionally on it, to make the modifications he needed. He tweaked the code to change the graphic’s legends, to plot depth average current versus surface current, and to do monthly averages.
The hardest part of creating the visualization was learning Python, a scripting language Lang said he didn’t know. The visualization has required no maintenance since it was posted in early July, before the annual Port Huron to Mackinac (July 14) and Chicago to Mackinac (July 21) sailing races. Schwab says it received about 5,000 views a day when it was first posted.
Lang has received emails from, among others, Tom Skilling, the weatherman at Chicago’s WGN, who wrote “How cool is this! It’s fascinating!”
Other prospective users are less sanguine. A charter fisherman told the Great Lakes Echo that “for what we do, day to day fishing, I don’t see an application for it,” adding that he intends to continue using a current probe that he tosses over the side of his boat. But another charter captain posted in the comments on the story “I think it will help” explain movements of fish.
Schwab says the lab had worked on other ways to represent the Great Lakes currents, using vector maps and pseudo colors. Vectors represent how forces move things. “But vector fields are notoriously hard to visualize,” he says. “That’s why we were impressed by this technology.”
Schwab says the Great Lakes lab had come close to visualizing how currents circulated in the Great Lakes, but the Wind Map folks “did it in a very elegant way, and a way you can display on the Web very easily.”
Schwab, who studied computer science in the 1960s, called the current map “the kind of thing that you dreamed about” back then. “Now the day has come. It’s kind of cool.” | <urn:uuid:31971373-a1f2-4330-aa7f-8a64056dd5a5> | CC-MAIN-2017-09 | http://data-informed.com/an-interactive-map-visualizes-great-lakes-water-currents/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00030-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953188 | 943 | 3.265625 | 3 |
FBI Warns Of Mobile Cyber Threats
Criminals will target holiday shoppers with SMS text and voice mail scams, or smishing and vishing, said the agency.
As if online phishing scams aren't enough to worry about, people also should be wary of criminal efforts targeting their cell phones, the FBI is warning.
The agency's Internet Crime Complaint Center (IC3) said that creative criminals will be using scams called "smishing" or "vishing" to steal people's personal information, such as bank account numbers, personal identification number (PIN) codes, or credit card numbers.
Smishing is a combination of SMS texting and the common online practice of phishing, which uses e-mails to direct people to websites where they are asked to give up personal information.
In a smishing scam, people receive a text message on their phone telling them there's a problem with their bank account. The message will contain a phone number to call or a website to log into.
To pull off these crimes, people set up an automated dialing system to text or call mobile phone subscribers in a particular region or area code. They also steal phone numbers from banks and credit companies and target people on these lists, according to the FBI.
If a person follows through and follows directions, it's likely there's a criminal on the other end stealing personal information.
Vishing is similar to smishing except instead of an SMS, a person will receive a voicemail giving them the same information.
People who fall victim to mobile device scams could be in danger even if they stop short of giving up the information requested, the FBI warned. If they only log onto the fake website via their mobile device, they could end up downloading malicious software giving criminals access to anything on their phone, the agency said.
To protect themselves from these new types of scams, the FBI's IC3 is recommending that people refrain from responding to text messages or automated voice messages from unknown or blocked numbers.
With the proliferation of using the web on mobile phones, people also should treat their phones like they would their PCs, and avoid downloading any applications or files to a mobile device unless it comes from a trusted source, the agency said.
People also should be more cautious when making purchases on mobile devices and only use legitimate payment services and credit cards -- not bank accounts or debit cards -- to do so. The FBI recommends credit cards for all online purchases because people can work with a credit card company to refute any unauthorized charges on a bill, while this process is trickier when dealing with funds from a bank account.
As usual, people also should continue to protect themselves while surfing the web on their computers. The FBI, as it did last holiday season, recommended that people ignore unsolicited e-mails requesting personal information, and refrain from clicking on links or attachments contained within those e-mails. | <urn:uuid:3ae5b104-7ac3-4649-bf70-7e70dea78155> | CC-MAIN-2017-09 | http://www.darkreading.com/risk-management/fbi-warns-of-mobile-cyber-threats/d/d-id/1094445?piddl_msgorder=asc | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00082-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.935377 | 608 | 2.65625 | 3 |
Adding server power only improves site server performance to a point: Once server power is maxed out, the physical distance between a site’s host server and the visitor becomes a key component in how long a page takes to load. Content Delivery Networks (CDNs) are a widely used solution to improve load times and performance on websites by decreasing the physical distance data travels.
Not every CDN is created equal, however. Your business may opt to measure latency and acceleration to determine which CDN, if any, best suits the audience.
A server’s “megabits per second” performance only tells part of the tale when it comes to load speed. Bandwidth measures how much data moves at once, while latency measures how long that data takes to move from the source to the destination. Ookla Speedtest explains the situation with a pipe metaphor: latency measures how much time it takes for water to enter a pipe and reach the end of that pipe, while bandwidth measures the pipe’s diameter.
Moving large amounts of data, such as app updates, isn’t time-sensitive (in terms of when it starts)—so latency is not an issue here. However, latency is an extremely important performance metric for things like loading web pages, which should at most take only a few seconds.
How CDNs Work
CDNs utilize a network of servers across multiple geographical locations that mirror website content from the original source. When website visitors access a web page, their devices can receive the information from a physically closer server, reducing the time it takes for site data to reach the viewer. It’s a lot like buying milk from the corner store down the street from your house instead of driving out to a rural farm to get it.
However, CDNs don’t improve performance for everyone. For example: If the person trying to buy the metaphorical milk lived closer to the dairy farm, going to the store would take longer. Additionally, if the dairy farm and corner store were equidistant from the milk shopper, they would not see a performance boost. CDNs can also help with capacity and bandwidth management.
Calculating just how much a CDN improves a site’s performance in one location is straightforward: Measure how long a page takes to load before and after utilizing the CDN. In practice, however, testing how well a site and CDN are performing across different regions is tricky without designated testing stations. The acceleration test is often accomplished by measuring how long it takes to download files of various sizes in each tested geographical location from the mirror server and the host server.
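A single-location version of that acceleration test takes only a few lines; a real evaluation would run the same probe from nodes in each region being compared and repeat each fetch to average out noise. The hostnames and file paths below are placeholders:

    import time
    from urllib.request import urlopen

    ORIGIN = "https://origin.example.com"   # host server (placeholder)
    CDN = "https://cdn.example.com"         # CDN edge (placeholder)
    FILES = ["/assets/10kb.bin", "/assets/100kb.bin", "/assets/1mb.bin"]

    def timed_fetch(url):
        start = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()
        return time.perf_counter() - start

    for path in FILES:
        t_origin = timed_fetch(ORIGIN + path)
        t_cdn = timed_fetch(CDN + path)
        print(f"{path}: origin {t_origin:.3f}s, edge {t_cdn:.3f}s, "
              f"speedup x{t_origin / t_cdn:.1f}")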
According to cloud services provider Radware, picking the most effective CDN often requires some market research to identify where a site’s users are located. For example, a site that’s hosted out of Boston and does most of its traffic in the Northeastern and Western United States would benefit more from a CDN that improves load times in Los Angeles, Portland, and Seattle than one that boosts load times in Boston, Beijing, and London.
Acceleration testing data can help businesses make smart decisions when it comes to CDNs. Find out more about how Apica can support your CDN testing process from more than 83 countries and 2,600 monitoring nodes across the globe on our website. | <urn:uuid:6a3b90eb-b07b-4ebe-9022-fe730caefa50> | CC-MAIN-2017-09 | https://www.apicasystem.com/blog/cdn-companies-measure-latency-acceleration/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00502-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.934052 | 679 | 2.734375 | 3 |
Sniffing Out Packet Sniffers
One of the oldest methods of stealing information off of a network is through packet sniffing. In case you aren't familiar with the term, packet sniffing refers to the technique of copying each packet as it flows across the network. While this may prove a boon for network managers for traffic analysis, it also allows access to malevolent hackers. Today, protocols such as IPSec are designed to prevent packet sniffing by encrypting packets. However, many networks have not yet employed this encryption technology, or are only encrypting a portion of their data. Because of this, packet sniffing is still a viable method for stealing information.
The reason that packet sniffing works is due to the way Ethernet networks send their packets. Any time that a PC sends out a packet, it is sent out as a broadcast. This means that every PC on the network sees the packet. However, every PC is supposed to ignore the packet, except for its intended destination.
As mentioned, packet sniffing works by making a copy of each packet as it flows across the network. In the past, it has been difficult to tell if anyone on your network is engaging in packet sniffing. After all, no one is hacking into a server or anything, so the audit logs wouldn't indicate any sort of unusual activity. A person who's packet sniffing is merely reading information as it comes to them.
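What a sniffer actually does is not complicated, which is part of the problem. A minimal passive capture, assuming the scapy library, root privileges and an interface name that will differ from system to system:

    from scapy.all import sniff  # assumes scapy is installed; run as root

    def show(pkt):
        # Print a one-line summary of every frame the interface hands us
        print(pkt.summary())

    # Capture the first 100 frames seen on the wire, without storing them
    sniff(iface="eth0", prn=show, store=False, count=100)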
Fortunately, there are some tell-tale signs that may signal unauthorized interception. If the suspected hacker has limited resources, they may try to use the Network Monitor utility for packet sniffing. (A limited version of Network Monitor comes with Windows NT and Windows 2000, and a full-featured version comes with SMS Server.) Network Monitor is a good choice for the small time hacker because it's easy to come by and relatively easy to use, compared to some of the other packet sniffers that are available. Happily, it's really easy to tell if someone is using the Network Monitor utility. To do so, simply select the Identify Network Monitor Users command from Network Monitor's Tools menu.
What if the hacker is using one of the dozens of other available sniffing utilities? While there's no foolproof way to spot someone who's packet sniffing, there are some good indicators. Perhaps the best is your DNS database. Any time that a system needs to resolve a host's IP address, it sends a query that is based on the host name to a DNS server. The DNS server then looks up the host name in its database and returns the host's IP address. If a hacker were running a packet sniffing program that displayed host names (most of them do), then the machine doing the packet sniffing would generate an extremely large volume of DNS queries.
Try watching for machines that are performing lots of DNS lookups. Although a high volume of DNS lookups alone doesn't necessarily indicate packet sniffing, it's a good indicator. If you suspect that a particular machine might be packet sniffing, try setting up a bait machine. A bait machine would be a PC that no one knows exists. Plug it up to the network and generate a small amount of network traffic. As you do, keep an eye on the DNS queries to see if the suspected machine ran a DNS query on the bait machine. If it did, then it's almost certainly sniffing packets.
Another popular method for spotting packet sniffing is to measure the response time of the suspected machine. This technique is tricky and fairly unreliable, but it will at least let you know if you're on the right track. The idea is to ping the suspected machine in order to measure the response time. After doing so, generate some network traffic that a suspected malevolent hacker might be interested in. Remember that someone who's sniffing packets probably wouldn't want to copy every packet because of the sheer volume of information. Instead, they would probably set up a packet filter and only copy the packets that they're interested in, such as those used for authentication. Therefore, have several of your co-workers log in and out repetitively while you re-measure the suspected PC's response time. If the response time hasn't changed much, then the PC probably isn't sniffing packets, but if you get a really slow response then there's a good chance that the PC is sniffing packets.
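A rough version of that response-time test, again assuming scapy and root privileges; the address is an example, and a slower reply under load is only a hint, never proof, that the host is capturing packets:

    import statistics
    import time
    from scapy.all import IP, ICMP, sr1  # assumes scapy is installed; run as root

    def median_rtt(host, samples=10):
        rtts = []
        for _ in range(samples):
            start = time.perf_counter()
            if sr1(IP(dst=host) / ICMP(), timeout=1, verbose=0) is not None:
                rtts.append(time.perf_counter() - start)
        return statistics.median(rtts) if rtts else float("inf")

    suspect = "192.0.2.45"                 # example address of the suspect PC
    baseline = median_rtt(suspect)
    # ... now have colleagues generate the decoy logins described above ...
    under_load = median_rtt(suspect)
    print(f"baseline {baseline * 1000:.1f} ms, under load {under_load * 1000:.1f} ms")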
Utilities exist that use the methods I've discussed, plus a few others, to track down packet sniffers. One of the better tools is a program called AntiSniff. You can download a free 15-day trial of the Windows version of AntiSniff, or a free version for UNIX, from www.securitysoftwaretech.com/antisniff/download.html.
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all. | <urn:uuid:403ea770-d181-43cb-a9d0-faf059182e16> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/766671/Sniffing-Out-Packet-Sniffers.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00026-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957239 | 1,046 | 2.796875 | 3 |
Data.gov, the federal government's clearinghouse of downloadable information, plans to release new gadgets that will enable the public to easily create mashups of maps and statistics, according to officials working on the enhancements.
Mashups are a fusion of information and images that can illustrate relationships or patterns and, in this case, provide transparency into the business of Washington. Data.gov is the brainchild of federal Chief Information Officer Vivek Kundra, who has said he envisions the website becoming an online marketplace where people worldwide can exchange entire databases and reuse content in ways the federal government could never imagine.
Within the next month, the site will offer the public a chance to preview a so-called viewer that will let them combine many of the 270,000 data sets posted on Data.gov with maps, said Jerry Johnston, geospatial information officer at the Environmental Protection Agency. For the past couple of months, representatives from various agencies, including EPA, the General Services Administration, U.S. Geological Survey, Health and Human Services Department, and NASA, have assisted in the effort to add more interactive features to the site.
"Vivek Kundra wanted to make sure there was agency involvement in the project," Johnston said. "When we first stood up [Data.gov], he said what was in my mind at the time: 'It's great that you have geospatial data in the catalog, but it doesn't mean anything to me if I can't see it.' "
With the new tools, anyone will be able to diagram in one place official statistics from across the federal government -- on everything from mortality rates to houses with substandard plumbing. Individuals won't need special technical skills to create the mashups.
The feature is made possible by Geodata.gov, a separate catalog of geographic data that USGS operates. The website will power part of Data.gov through a connection that is invisible to the user, Johnston explained. Internet users can permanently download federal maps to their own computers through Data.gov, or view them with the new mashup tool for as long as they are on the site.
He said the next goal is to make the maps available as services, which are Web applications users access through the network of the agency that provides the map.
At present, "Data.gov focuses on storing data for downloads in files," and federal officials "want to move to the next step of visualizing data," said Jack Dangermond, president of ESRI, which supplies nearly every federal agency with geographic information system software. The company is providing the viewer and linking Data.gov to the maps through Geodata.gov. The work is part of a competitive contract to build Geodata.gov, which USGS awarded to ESRI in 2004.
The new mapping capabilities will allow third parties, including nonprofit government watchdogs, the press, private software providers and citizens to discover interesting or suspicious trends and correlations such as, perhaps, a high death rate in a region where a large proportion of the population is employed by mining companies.
GIS companies, including ESRI and its competitor FortiusOne, will be able to combine the maps with their products to create custom applications that they can sell to clients. In addition, open government organizations such as the Sunlight Foundation in Washington will be able to use the services to distribute free apps. School children also will be able to create and print maps for class projects.
With its viewer, Data.gov has the potential to fulfill the three objectives of President Obama's open government initiative, said Andrew Turner, chief technology officer of FortiusOne, a mapping firm that helps federal agencies and companies visualize their business data to aid in decision-making. Obama has committed his administration to achieving greater transparency, more citizen participation in government, and increased collaboration between the public and private sectors.
"Transparency is about opening the data, and Data.gov did a really good job of that at first," when the site launched in May 2009, Turner said. "Participation -- that's pulling things off for my social networking group. Collaboration -- how do I feed this back to the government? The success from the data to the tools, along the entire way, will be dependent upon making sure that entire chain stays open."
FortiusOne and ESRI offer free consumer sites, respectively called GeoCommons and ArcGIS.com, that let users create mashups with publicly available geographic data. They work similarly to the way Data.gov will function with the viewer. Johnston said he encourages this kind of repurposing of government information, but also noted that, unlike commercial or nonprofit sites, Data.gov "gets the authoritative stamp of being a .gov site and a high-profile site."
Turner said the Data.gov tool sounds like it could turn maps into social objects -- items that instigate conversation -- in the same way the photo-sharing site Flickr has turned photos into social objects.
The initiative also could become a performance management tool by enabling agencies to push out studies and asking the public to review them on easy-to-read maps, said T. Jeff Vining, a research vice president for Gartner Research. The challenge will be ensuring agencies are reporting accurate and timely data, he said.
"I think it's knowledge management concepts meeting geospatial concepts," Vining added. In the future, he expects people will be able to download the mashup maps to smart phones and broadcast their creations using the mass text-messaging service Twitter.
"Looking at how we can add mobile applications to the mix is certainly a logical next step for the team," Johnston said. | <urn:uuid:62900e4b-353d-4ba8-8dac-07b7b44acfaa> | CC-MAIN-2017-09 | http://www.nextgov.com/mobile/2010/06/datagovs-next-big-thing-mashing-up-federal-stats-with-maps/46973/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00026-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947418 | 1,153 | 2.609375 | 3 |
Google, working in partnership with the Israel Antiquities Authority, posted about 5,000 images of the ancient Dead Sea Scrolls online Tuesday.
Pieces of the text going online include one of the earliest known copies of the Book of Deuteronomy, which holds the Ten Commandments. The images also include part of the Book of Genesis, which describes the creation of the world.
A portion of the Dead Sea Scrolls which were digitized and posted online as part of the Dead Sea Scrolls Digital Project. (Photo: Baz Ratner/Reuters)
For these latest images, Google worked with the Israel Antiquities Authority to launch the Leon Levy Dead Sea Scrolls Digital Library.
These images join five Dead Sea Scroll manuscripts that Google has already put online.
"Today, we're helping put more of these ancient treasures online," wrote Eyal Miller, a technology manager with Google Israel, in a blog post. "The texts include ... 2,000-year-old texts, shedding light on the time when Jesus lived and preached, and on the history of Judaism."
The Dead Sea Scrolls, which are considered to be of great historical and religious importance, are Biblical manuscripts written more than 2,000 years ago on parchment and papyrus. They include the earliest known surviving copies of biblical documents.
There are about 972 texts that were discovered on the northwest shore of the Dead Sea between 1946 and 1956.
"Millions of users and scholars can discover and decipher details invisible to the naked eye, at 1215 dpi resolution," wrote Miller. "The site displays infrared and color images that are equal in quality to the Scrolls themselves. There's a database containing information for about 900 of the manuscripts, as well as interactive content pages."
To get the images online, the company used Google Storage and App Engine, as well as Google Maps and YouTube.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.
Read more about internet search in Computerworld's Internet Search Topic Center.
This story, "Google adds to Dead Sea Scroll images online" was originally published by Computerworld. | <urn:uuid:cc941973-66ed-43ac-aae7-99645ac1756a> | CC-MAIN-2017-09 | http://www.itworld.com/article/2717055/networking/google-adds-to-dead-sea-scroll-images-online.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00202-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940176 | 473 | 2.765625 | 3 |
2013 is progressively becoming the year of the hacker. The main target of these hackings? Smartphones.
Consumers constantly voice their desire for smaller, thinner devices with more storage space to make their lives a little easier. In reality, the shrunken screens and huge storage space are making hackers' lives easier, too!
Since smartphones, tablets and other mobile devices have become more prominent than PCs, hackers have realized their old-school tactics are still relevant.
Compared to desktop computers, the tiny screens of mobile devices present an opportunity to create legitimate looking sites without users noticing red flags as they would on a larger screen. The illegitimate websites can trick smartphone users into entering personal information a hacker needs to drain an account.
Smartphone users want instant access to information, including bank accounts, email and more. Applications make this possible, but not without risking data security. With sensitive information built into the apps, hackers can use "Trojan horse" malware that extracts information and silently sends messages through premium short message services (SMS) that charge per text.
According to CNN, “the amount of malware detected by McAfee on the devices in 2012 was 44 times what it was the previous year.”
Consumers have been known to keep their whole lives on their smartphones, from bank account information to sensitive company data, and hackers have taken advantage of this. Especially in BYOD programs, storing personal and company data on one device brings those two worlds together in a convenient package for hackers.
A change is definitely in the air for BYOD programs and data security, especially after this year's Mobile World Congress. Companies are adopting mobile devices with BYOD-friendly operating systems, which allow IT departments to control data security and the erasure of sensitive information without touching an employee's personal files.
Click here to read more about BYOD at the Mobile World Congress.
A common misconception by consumers is the factory reset when disposing of their phones. More than likely, personal and company data remnants will still be stored in the device, susceptible to mobile hackers. Following through with proper data erasure of the phone is necessary to prevent data leaks.
Mobile asset disposition is becoming an industry-wide term, with data erasure as a top priority. HOBI International Inc. and similar organizations are embracing this term with open arms by implementing successful data erasure procedures before disposing of or refurbishing mobile devices. The growing market for smartphones with large data storage makes these mobile asset disposition plans essential for data security.
Click here to read about the benefits of mobile asset disposition by a certified ITAD company.
Small devices with big security threats, smartphones have become the perfect target for hackers to steal any information they desire. Don’t become the next victim of a data hacking. | <urn:uuid:353411cb-fdf0-46e1-9154-3e81e6fc5e97> | CC-MAIN-2017-09 | https://hobi.com/smartphone-target-security-threat/smartphone-target-security-threat/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00078-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925598 | 567 | 2.59375 | 3 |
How smart are social bots?
- By William Jackson
- Nov 28, 2011
In 1950, computer pioneer Alan Turing proposed the Imitation Game in which a person would question two unseen subjects, one a machine and the other a human, in an effort to distinguish them. This has become known as the Turing Test, as he wrote in his paper, "Computing Machinery and Intelligence."
“I believe that in about fifty years' time it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”
It now is 61 years after Turing’s prediction. He set the bar pretty low, giving the machine only three chances in 10 of winning. How are we doing?
With the creation of the Internet, our increasing use of remote transactions and interactions, and especially with the advent of social networking, the question has taken on more than academic importance. It is at the heart of the problem of how we know who or what we are dealing with online and how to ensure our security and our privacy.
In an interesting experiment, researchers at the University of British Columbia recently were able to infiltrate Facebook with a herd of automated “social bots” that went largely undetected by the network’s defenses for eight weeks, friending thousands of users and harvesting their personal information. Over a six-week period, the bots sent out 3,517 friend requests to human Facebook users, 2,079 of which — 59 percent — were accepted. At first glance, this looks as if the social bots won the Imitation Game and passed the Turing Test with flying colors.
We need to take those results with a grain of salt, however. The bots were good at defeating defenses such as CAPTCHA codes used to identify and block spamming bots and at gathering and posting appropriate information to create the impression that there were real people behind the accounts. But the bots were not really speaking with the other Facebook users. The bots didn’t pass the Turing Test because the Facebook users never really questioned them.
It turns out that the automated software bots really aren’t that smart but that the Facebook users were acting dumb. When it comes to social networking, we are our own worst enemies.
Social networking creates online communities that can be used for socializing and for collaboration and increasingly it is being used in the workplace. This has led to a lot of questions about the security and privacy controls of the systems. But the first question we need to ask about these networks is how we are behaving on them. Are “friends” being collected indiscriminately as status symbols, and is personal information being posted inappropriately? It is very difficult — if not impossible — to protect a person who is determined to be his own worst enemy by palling around with semi-sentient social bots.
Alan Turing might be disappointed in the performance of our 21st-century computers in the Imitation Game if he were around today, but he might be even more disappointed in the performance of the people in the game. Artificial intelligence won’t be very impressive if it is measured against people who have lowered themselves to the level of machines.
William Jackson is a Maryland-based freelance writer. | <urn:uuid:54c9ea7a-bd5f-44f2-908b-28fa680abe5d> | CC-MAIN-2017-09 | https://gcn.com/articles/2011/12/05/cybereye-how-smart-are-social-bots.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00254-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.968947 | 703 | 2.96875 | 3 |
Solar Catamaran Finishes Trip Around the World
/ May 22, 2012
In September 2010, the world's largest solar powered boat -- the TÛRANOR PlanetSolar -- set sail from Monaco to become the first boat to circumnavigate the globe using only the sun's power. Last month, it finished its journey where it began.
According to gizmag.com, a crew of five piloted the 102-foot-long, 49-foot-wide vessel, which is covered in 5,780 square feet of solar panels. These provide power to four electric motors (two located in each hull) that have a maximum output of 120 kW and can propel the boat to a speed of 14 knots. It is constructed mainly of a light yet durable carbon-fiber sandwich material.
Photos courtesy of planetsolar.org | <urn:uuid:456e7f8d-499c-422a-a9b7-0acd118e5c74> | CC-MAIN-2017-09 | http://www.govtech.com/photos/Photo-of-the-Week-Solar-Catamaran-Finishes-Trip-Around-the-World-05222012.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00199-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.935538 | 171 | 2.546875 | 3 |
Advances in data-intensive supercomputing have set the stage for tremendous strides in the study of genetics. In a recent blog post, Cray’s Marketing Director Maria McLaughlin reveals that sophisticated supercomputing capabilities enabled scientists to pinpoint the genetic patterns underlying autism-spectrum disorders, schizophrenia and similar brain conditions.
With funding from the National Science Foundation, scientists from the San Diego Supercomputer Center (SDSC) and the Institut Pasteur identified a time-dependent gene-expression process that could one day help medical professionals eradicate these types of disorders.
According to a report in the Genes, Brain and Behavior journal, these life-changing breakthroughs are the result of a confluence in computational and life sciences.
Igor Tsigelny, a research scientist with SDSC and UC San Diego’s Moores Cancer Center, highlighted the role that data plays in the research:
“We live in the unique time when huge amounts of data related to genes, DNA, RNA, proteins, and other biological objects have been extracted and stored,” said Tsigelny.
“I can compare this time to a situation when the iron ore would be extracted from the soil and stored as piles on the ground. All we need is to transform the data to knowledge, as ore to steel. Only the supercomputers and people who know what to do with them will make such a transformation possible,” he added.
The project relied on the innovative flash-based Gordon supercomputer, a Cray CS300-AC Cluster system (né Appro HPC cluster), installed at the San Diego Supercomputer Center.
“Gordon’s I/O nodes are specifically designed to handle large, complex data-intensive workloads that address I/O bottlenecks,” writes McLaughlin.
The Gordon supercomputer employs a massive amount of flash memory (Flash Gordon, get it?) which is how it powers through solutions that would be bogged down by slower spinning disk memory. Its innovative architecture allows Gordon to process large data-intensive problems about 10 times faster than other supercomputers, according to SDSC, and it can hold as many as 100,000 entire human genomes in its flash memory system.
Gordon has been engaged in cutting-edge research since January 2012 and is a key resource for the NSF’s Extreme Science and Engineering Discovery Environment (XSEDE) program, a nationwide partnership that includes 16 high-performance computers and high-end visualization and data analysis resources. | <urn:uuid:646890cb-91b0-42ab-beec-ee9e416b551c> | CC-MAIN-2017-09 | https://www.hpcwire.com/2013/04/17/supercomputing_transforms_data_into_knowledge/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00075-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.909887 | 525 | 3.390625 | 3 |
In Windows it is possible to configure two different methods that determine whether an application should be allowed to run. The first method, known as blacklisting, is when you allow all applications to run by default except for those you specifically do not allow. The other, and more secure, method is called whitelisting, which blocks every application from running by default, except for those you explicitly allow.
With the wide distribution of ransomware and other malware infections, and the high cost of recovering from them, a very strong protection method is whitelisting. This allows you to block all programs by default and then set up rules that specifically allow only certain programs to run.
Though easy to set up initially, whitelisting can be burdensome, as you will need to add a new rule every time you install a program or want to allow one to run. Personally, I feel that if you are willing to put in the time and effort, the chances of an infection damaging your computer become minimal.
This tutorial will walk you through setting up whitelisting using Software Restriction Policies so that only specified applications are able to run on your computer. Though this guide will be geared towards individual users, this same approach can be used in the enterprise by pushing these policies to a Windows domain.
To get started white listing your applications you need to open the Security Policy Editor, which configures the Local Security Policies for the machine. To do this, click on the Start button and then type secpol.msc into the search field as shown below.
When secpol.msc appears in the search list, click on it to start the Local Security Policy editor.
You should now see the Local Security Policy editor as shown below.
To begin creating our application whitelist, click on the Software Restriction Policies category. If you have never created a software restriction policy in the past, you will see a screen similar to the one below.
To create the new policy, right click on the Software Restriction Policies category and select the New Software Restriction Policies option as shown below.
A new Software Restriction Policy will now be created as shown below.
The first thing you need to do is configure the Enforcement section. This section allows us to specify general settings on how these restriction policies will be configured. To get started, click on the Enforcement object type as indicated by the blue arrow above.
I suggest that you leave the settings as they are for now. This allows you to create a strong policy without the issues that may be caused by blocking DLLs. When you are done configuring these settings, click on the OK button.
You will now be back at the main Software Restriction Policies window as shown in Figure 5. We now want to configure what file types will be considered an executable and thus blocked. To do this click on the Designated File Types object.
This will open the properties window listing the designated file types that will be treated as executables and therefore blocked by the software restriction policy that you are creating.
Unfortunately, the above list is not as exhaustive as you would like, and it includes an extension that should be removed. First, scroll through the list of file extensions and remove the LNK extension. To remove the extension, left-click on it once and then click on the Remove button. If you do not remove this extension, all shortcuts will fail to work after you create your whitelist.
Now you want to add some extra extensions that are known to be used to install malware and ransomware. To add an extension, simply add it to the File Extension field and click on the Add button. When adding an extension, do not include the period. For example, to exclude powershell scripts, you would enter PS1 into the field and click on the Add button.
Please add the following extensions to the designated file types:
Extensions to add to the File Type List
When you are done adding the above extensions, click on the Apply button and then the OK button.
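If you later want to confirm which designated file types are actually in force without reopening the editor, you can read them back from the registry. The sketch below is a hypothetical helper, not part of this tutorial: it assumes the standard Software Restriction Policies key and the ExecutableTypes value name, and it should be run from an elevated prompt after the policy has been applied.

    import winreg

    SRP = r"SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SRP) as key:
        ext_list, _ = winreg.QueryValueEx(key, "ExecutableTypes")   # REG_MULTI_SZ list of extensions

    print("LNK removed:", "LNK" not in ext_list)
    print("PS1 present:", "PS1" in ext_list)
    print("Designated file types:", ", ".join(sorted(ext_list)))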
We will now be back at the main Software Restriction Policies section, as shown in Figure 8 below. At this point, you need to configure the default policy that decides whether the file types configured in Figure 7 will be automatically blocked or allowed to run. To do this, click on the Security Levels option as indicated by the blue arrow below.
When you double-click on the Security Levels category, you will be brought to the screen below that has three security levels you can apply to your software restriction policies.
In order to select which level should be used, you need to double-click on the particular level and set it as the default. Below are the descriptions for each type of security level.
Disallowed: All programs, other than those you allow by the rules you will configure, will not be allowed to run regardless of the access rights of the user.
Basic User: All programs execute with the rights of a normal user rather than as an Administrator.
Unrestricted: All programs can be run as normal.
Since you want to block all applications except those that you white list, you want to double-click on the Disallowed button to enter its properties screen as shown below.
In the above properties screen, to make it so all applications will now be blocked by default, please click on the Set as Default button. Then click on the Apply and OK buttons to exit the properties screen.
We will now be back at the Security Levels list and almost every program will now be blocked from executing. For example, if you try to run Internet Explorer, you will receive a message stating that "This program is blocked by group policy." as shown below.
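To double-check that Disallowed really is the default level, you can also read the DefaultLevel value back from the registry. The following sketch makes the same assumptions as the earlier one about where Software Restriction Policies live; the numeric constants shown are the commonly documented level values, so verify them on your own build.

    import winreg

    SRP = r"SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers"
    LEVELS = {0x00000: "Disallowed", 0x20000: "Basic User", 0x40000: "Unrestricted"}

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SRP) as key:
        level, _ = winreg.QueryValueEx(key, "DefaultLevel")

    print("Default security level:", LEVELS.get(level, hex(level)))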
Now that you have configured Windows to block all applications from running, you need to configure rules that allow your legitimate applications to run. The next section will explain how to create path rules so that the applications you wish to allow to run are whitelisted.
If you followed the previous steps, Software Restriction Policies are now enabled and blocking all executables except those located under C:\Program Files and C:\Windows. Those two directories are automatically whitelisted by two default rules that are created when you setup Software Restriction Policies.
Obviously, in order to have a properly working machine you need to now allow, or whitelist, other applications. To do this, you need to create additional rules for each folder or application you wish to allow to run. In this tutorial, we are going to add a new Path Rule for the C:\Program Files (x86) folder as that needs to also be whitelisted for 64-bit versions of Windows.
While in the Local Security Policy editor, click on the Additional Rules category under Software Restriction Policies as shown below.
As you can see from above, there are already two default rules configured to allow programs running under C:\Windows and C:\Program Files to run. If you are running a 64-bit version of Windows, you now want to add a further rule that allows programs under the C:\Program Files (x86) folder to run as well.
To do this, right-click on an empty portion of the right pane and click on New Path Rule... as shown below.
This will open up the New Path Rule Properties dialog as shown below.
As you want to create a path rule for C:\Program Files (x86), you should enter that path into the Path: field. Then make sure the Security Level is set to Unrestricted, which means the programs in it are allowed to run. If you wish, you can enter a short description explaining what this rule is for in the Description field. When you are finished, the new rule should look like the one below.
When you are ready to add this rule, click on the Apply and then OK button to make that rule active.
You will now be back at the Rules page and the new C:\Program Files (x86) rule will be listed and programs located in that folder will now be allowed to run.
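As your rule set grows, it can be handy to list every path rule in one place. The sketch below enumerates them from the registry, assuming the conventional layout in which each rule is a GUID-named subkey holding an ItemData value under its security level; treat the key and value names as assumptions and verify them on your own system before relying on it.

    import winreg

    SRP = r"SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers"
    LEVELS = {"0": "Disallowed", "131072": "Basic User", "262144": "Unrestricted"}

    for level, name in LEVELS.items():
        try:
            paths = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SRP + "\\" + level + "\\Paths")
        except FileNotFoundError:
            continue   # no rules defined at this level
        with paths:
            for i in range(winreg.QueryInfoKey(paths)[0]):   # number of GUID subkeys
                guid = winreg.EnumKey(paths, i)
                with winreg.OpenKey(paths, guid) as rule:
                    item, _ = winreg.QueryValueEx(rule, "ItemData")
                print(f"{name:12} {item}")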
You now need to make new rules for other programs that you wish to allow to run in Windows. For example, if you play games with Steam, you should follow the steps above to add an unrestricted rule for the C:\Program Files\Steam\ folder.
In the next two sections, I have provided tips and described other types of rules that can be created to whitelist programs. I suggest you read them to take advantage of the full power of Software Restriction Policies.
As always, if you need help with this process, please do not hesitate to ask in our tech support forums.
When adding a path rule for a folder, it is important to note that every subfolder is also included in the rule. That means if you have applications stored in C:\MyApps and create a path rule specifying that the folder is unrestricted, then programs in all of its subfolders will be allowed to run as well. So not only will C:\MyApps\myapp.exe be allowed to run, but C:\MyApps\games\gameapp.exe will be allowed to execute as well.
To make it easier when creating rules, it is also possible to use wildcards to help you specify which programs should be allowed to run. When using wildcards, you can use a question mark (?) to denote a single wildcard character and an asterisk (*) to denote a series of wildcard characters.
For example, if you have a folder of executables that you wish to whitelist, you can do so by using a wildcard path rule like this: C:\MyApps\*.exe. This rule would allow all files that end with .exe to execute, but not allow executables in subfolders to run. You can also use a path rule that specifies a single wildcard character like C:\MyApps\app?.exe. This rule would allow C:\MyApps\app6.exe to run, but not C:\MyApps\app7a.exe to run.
It is also possible to use environment variables when creating path rules. For example, if you wish to allow a folder under all the user profiles, you can specify a rule like %UserProfile%\myfolder\*.exe. This would only allow executables under that particular folder to execute, but would expand %UserProfile% to the correct folder for whoever is logged into the computer.
Last but not least, if you wish to run executables from a network share, then you need to specify the full UNC path in the rule, for example \\Dev-server\Files.
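When you are deciding how to write a rule, it can help to prototype the matching behavior described above. The sketch below is only a rough approximation of how a path rule is interpreted (environment variables expanded, folder rules covering subfolders, ? and * as wildcards); it is not the real Windows matcher, and the example paths are the hypothetical ones used earlier in this section.

    import fnmatch
    import ntpath

    def rule_matches(rule, candidate):
        """Very rough approximation of how a path rule is matched."""
        rule = ntpath.expandvars(rule).lower()
        candidate = ntpath.normpath(candidate).lower()
        if "*" in rule or "?" in rule:
            return fnmatch.fnmatchcase(candidate, rule)
        # a plain folder rule covers the folder itself and everything beneath it
        return candidate.startswith(ntpath.normpath(rule) + "\\")

    print(rule_matches(r"C:\MyApps", r"C:\MyApps\games\gameapp.exe"))    # True
    print(rule_matches(r"C:\MyApps\*.exe", r"C:\MyApps\app6.exe"))       # True
    print(rule_matches(r"C:\MyApps\app?.exe", r"C:\MyApps\app7a.exe"))   # False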
When creating rules, it is also possible to create other rules called Certificate Rules and Hash Rules. These rules are described below.
Certificate Rule: A certificate rule is used to allow any executable to run that is signed by a specific security certificate.
Hash Rule: A hash rule allows you to specify a file that can be run regardless of where it is located. This is done by selecting an executable when creating the rule and certain information will be retrieved by SRP and saved as part of the rule. If any other executables on the computer match the stored file hashed and information, it will be allowed to run.
Note: Microsoft has stated that Certificate Rules could cause performance issues if used, so only use them if absolutely necessary.
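To get a feel for what a hash rule actually records, the fragment below computes a simple content fingerprint of a file. The exact algorithms and metadata SRP stores vary by Windows version, so treat this purely as an illustration that a hash rule identifies a file by its bytes rather than by its location; the path shown is the same hypothetical example used earlier.

    import hashlib
    from pathlib import Path

    def file_fingerprint(path):
        data = Path(path).read_bytes()
        return {
            "size": len(data),
            "md5": hashlib.md5(data).hexdigest(),
            "sha1": hashlib.sha1(data).hexdigest(),
        }

    print(file_fingerprint(r"C:\MyApps\myapp.exe"))   # example path from the wildcard section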
Many Spyware, Hijackers, and Dialers are installed in Internet Explorer through a Microsoft program called ActiveX. These activex programs are downloaded when you go to certain web sites and then they are run on your computer. These programs can do a variety of things such as provide legitimate services likes games or file viewers, but they can also be used to install Hijackers and Spyware on to ...
This tutorial will walk you through recovering deleted, modified, or encrypted files using Shadow Volume Copies. This guide will outline using Windows Previous Versions and the program Shadow Explorer to restore files and folders as necessary.
Notepad++ is a very powerful text and source code editor with a lot of features. Unfortunately, those features tend to require a lot of settings. This means that common settings, such as the displaying of line numbers, may not always be so easy to find. This tutorial will walk you through showing and hiding line numbers in the Notepad++ editor.
When you install Windows, you are shown the Windows license agreement that provides all the legal language about what you can and cannot do with Windows and the responsibilities of Microsoft. Finding this license agreement, afterwards, is not as easy. This tutorial will explain how to find the license agreement for the edition of Windows installed on your computer.
If you use a computer, read the newspaper, or watch the news, you will know about computer viruses or other malware. These are those malicious programs that once they infect your machine will start causing havoc on your computer. What many people do not know is that there are many different types of infections that are categorized in the general category of Malware. | <urn:uuid:3a97bf52-33e9-490c-b9c9-4c845fd593f4> | CC-MAIN-2017-09 | https://www.bleepingcomputer.com/tutorials/create-an-application-whitelist-policy-in-windows/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00603-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.921225 | 2,661 | 2.53125 | 3 |