Megharaj M. (University of South Australia; Cooperative Research Center for Contamination), Ramakrishnan B. (University of South Australia; Indian Agricultural Research Institute), and 7 more authors. Environment International, 2011.
Owing largely to human activities and to some extent to natural processes, a large number of organic chemical substances, such as petroleum hydrocarbons, halogenated and nitroaromatic compounds, phthalate esters, solvents and pesticides, pollute soil and aquatic environments. Remediating these polluted sites with conventional engineering approaches based on physicochemical methods is both technically and economically challenging. Bioremediation, which harnesses the capabilities of microorganisms to remove pollutants, is the most promising, relatively efficient and cost-effective technology. However, current bioremediation approaches suffer from a number of limitations, including the poor capabilities of microbial communities in the field, low bioavailability of contaminants on spatial and temporal scales, and the absence of benchmark values for testing the efficacy of bioremediation, all of which hinder widespread field application. Restoring all natural functions of some polluted soils remains impractical; hence, applying the principle of function-directed remediation may be sufficient to minimize the risks of persistence and spreading of pollutants. This review selectively examines and provides a critical view of the knowledge gaps and limitations in field application strategies; approaches such as composting, electrobioremediation and microbe-assisted phytoremediation; and the use of probes and assays for monitoring and testing the efficacy of bioremediation of polluted sites. © 2011 Elsevier Ltd.
Using high performance computing to help modernize US manufacturing is one of those good ideas that seems inevitable but always just out of reach. A recent study confirms this, and provides a framework for strengthening the HPC landscape in this sector.
Of course, some might ask what’s the point of trying to boost manufacturing in the US when the sector employs only about 10 percent of the workforce, a figure that is projected to decline further in the coming years. Also, the use of HPC to make manufacturing more efficient is not likely to help the downward employment trend. Employing virtual product design and development and automating other manufacturing processes will probably eliminate more jobs than it creates.
By world standards, the US manufacturing market is already fairly efficient. Despite the relatively few workers employed in the segment, because of its sheer size, US manufacturing dominates world production. Output in 2009 was $2.15 trillion (expressed in 2005 dollars), besting China’s contribution of $1.48 trillion and representing about 20 percent of the world’s manufacturing output.
But the real value of the US manufacturing sector is that it’s at the heart of much of the science and engineering innovation on which the remainder of the economy rests. Today US manufacturers employ more than a third of the country’s engineers and account for 60 percent of all private sector R&D. As such, it creates products that are used by the more lucrative service industries. Think, for example, of all the myriad services that are dependent on the production of computer chips and other electronic devices. Manufacturing, like agriculture before it, is a foundational activity that acts as a catalyst to other business sectors.
Furthermore, according to a recent article in The Atlantic, there is no realistic way to balance US foreign trade that relies exclusively on the service sector. Nor is there a feasible way to employ existing (and future) blue-collar workers without a healthy manufacturing sector.
And healthy it is not — at least from a global perspective. Based on a survey of CEOs conducted by Deloitte and the Council on Competitiveness released in June 2010, the US is ranked fourth in manufacturing competitiveness, behind China, India, and South Korea, and is expected to drop to fifth place, behind Brazil, by 2015. A National Institute of Standards and Technology factsheet recounts the need for the industry to focus on developing technologically-advanced products that can compete in the global marketplace. “There is widespread agreement that rather than engage in a ‘race to the bottom’ for low-wage production facilities, the United States should aim for high-value-added manufacturing opportunities,” says the factsheet.
Moving up the manufacturing food chain often leads to a much better bottom line and, in some cases, extra jobs. For example, Frank van Mierlo, CEO of 1366 Technologies, claims that the US is in a good position to build a silicon chip industry for solar cells. According to van Mierlo, the nation produces around 40 percent of the world’s high-grade silicon for both chips and solar cells, which is worth about $1.7 billion. He says that if US-based companies turned that silicon into wafers, it would become a $7 billion business and add 50,000 jobs.
That kind of thinking is being embraced by government and non-profit groups alike. US government agencies, the Council on Competitiveness, and the National Center for Manufacturing Sciences (NCMS) are all big proponents of high-tech solutions. HPC, in particular, is seen as a key driver in upgrading the nation’s manufacturing capabilities. The use of such technology allows engineers and designers to perform prototyping, product design and analysis, product lifecycle management, and product optimization/validation, with much less reliance on physical mockups and testing.
But despite better access to HPC than is generally available in other countries, fewer than 10 percent of US manufacturers use this technology, according to a recent study conducted by InterSect360 Research in conjunction with NCMS. The report surveyed 323 respondents across industry, academic, government and trade organizations in July 2010 to gather a snapshot of digital manufacturing practices and attitudes in the US.
Not surprisingly it found that top manufacturers were already major users of high performance computing. Based on the survey, 61 percent of companies with over 10,000 employees are using HPC today to model everything from engine parts to product packaging. The numerous case studies of digitally-engineered products at companies like Boeing, Procter & Gamble, and General Motors attest to the acceptance of HPC at these large firms.
Meanwhile, small manufacturers, which by number represent the vast majority of the companies in this sector, have barely scratched the surface of high performance computing. Here only 8 percent of businesses with under 100 employees are using such technology. Where modeling and simulation tools are being employed, they’re mostly restricted to desktop systems, representing a sort of poor man’s HPC.
The study found the most significant barriers to adoption were the lack of internal expertise, the cost of software, and to a lesser extent, the cost of hardware. To some degree, though, cost concerns may be a misconception. Over 80 percent of companies that currently use HPC report they spent less than one-third of their IT budgets on HPC — not an insignificant amount, but not an overwhelming expense either.
Importantly, 72 percent of desktop-bound CAE users did see a competitive advantage in adopting more advanced computational technology. In such environments, long simulation times and other software issues (compatibility, robustness, data management) were cited as major limitations.
When asked about the importance of different business drivers — production efficiency, time to market, product novelty, product quality, industry leadership, etc. — the survey takers said all were important, but it was product quality that garnered the most intense response. Since HPC enables iterative product refinement in a virtual design and test environment, that could turn out to be a big selling point for the technology.
In manufacturing, as in most verticals, smaller companies tend to be at a disadvantage when it comes to adopting HPC, and this is certainly reflected by the InterSect360 study. But costs, at least of hardware, are coming down. And software costs, while more worrisome, would likely be no more expensive (or at least not substantially more) on an eight-node cluster than on eight standalone workstations.
What most of these manufacturers require is a low-risk path that allows them to segue into high performance computing. Whether that turns out to be partnerships with HPC-savvy organizations, system vendors who can understand and cater to low-end HPC users, or something else remains to be seen. What seems much more certain is the need for manufacturers in the US to be able to compete at the high end of the market with superior quality products. To do that, companies will need to accept HPC as a foundational technology for their businesses.
Walk into any public-sector data center today, and you'll likely see the same thing: rows upon rows of racks that hold servers, servers and more servers (though these days, there are far fewer racks and servers thanks to virtualization).
And to help maximize energy efficiency and keep the room cooler, governments often utilize the hot aisle/cold aisle layout, in which racks are lined up so that cold air intakes all face one way while hot air exhausts face the other. The rows of rack fronts are the cold aisles, which typically face air conditioner output ducts; the rows into which the heated exhausts pour are the hot aisles, which typically face air conditioner return ducts.
In September 2012, The New York Times reported that a yearlong examination revealed that "most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner."
While the public sector aims to control these costs by using such methods as the hot aisle/cold aisle layout, cooling this equipment is still a concern, and there’s now a new solution for doing so: housing that equipment in vats of mineral oil.
By submerging system components in oil, heat can be dispersed far more efficiently than through air, says Andy Price, director of business development for Green Revolution Cooling, an Austin, Texas-based company that's dedicated to changing the way data centers are cooled. And oil cooling, he said, is particularly effective when it comes to high-density data centers, which is why the company’s technology is gaining interest from both private and public industry.
Environmental problems such as dust and extreme temperatures can be solved with oil cooling, while power consumption can be reduced by 40 to 45 percent, Price said. And a server room on a forward-operating base run by the military offers extreme conditions that illustrate the benefits oil cooling has to offer. Though the military offers a good example of where the technology is useful, Price notes that everyone can benefit from oil cooling, particularly in an era of budget constraints.
Reduced energy consumption means lower operating costs, but the initial investment to build a data center can be reduced, too. “They have to manage their costs,” he said. “If they’re tasked with building a new data center, or even retrofitting, they have to look at the equipment. And our solution, from a capital standpoint, is less expensive than building out a traditional air-cooled data center.”
Cooling computer equipment with air requires airflow management systems, specialized rooms, raised floors, as well as additional generators and uninterruptible power supplies (UPS) to support the air cooling systems. But when using oil cooling, Price said, “those things can typically be cut in half, and generators and UPS’s are a significant expense.”
By making simple modifications to traditional computing equipment, old servers and equipment can be used in an oil-cooled system. Cooling fans are removed, and thermal paste is replaced with indium foil.
And there are several solutions for managing storage devices. Hard disk drives (HDDs), for instance, previously had to be sealed before submersion, Price said, but now there are drives sold pre-sealed, like the helium-filled drives sold by Hitachi, and solid state drives (SSDs) can also be used. Alternatively, HDDs can be mounted outside of the fluid, attached to heat sinks that are submerged in the oil.
As with most new technologies, oil cooling has its detractors: Some are understandably hesitant to believe that submerging computer parts in liquid is a good idea. But Price said the technology is now beyond the testing period. “The technology works,” he said. “We’re beyond the point where we have to demonstrate that servers can survive in a dielectric fluid and that they’re actually more reliable.”
The oil isn’t just safe for components, but it’s safe for people too, Price said. “It’s not a harmful solution, it’s very, very safe for humans to be exposed to,” he said. “It’s baby oil without the fragrance. It’s safe for human exposure, even safe for human consumption.”
At 104 degrees Fahrenheit, it might even be good for the skin if someone were to, say, take a bath with the servers, he said.
At the end of 2012, Intel completed a yearlong test to measure the benefits of Green Revolution’s oil cooling system -- and the semiconductor giant endorsed the technology.
“We can reduce cooling energy use by 90 to 95 percent while also reducing server power by 10 to 20 percent," Intel reported upon completion of the pilot. (And the company is reportedly continuing to evaluate the long-term viability of the technology to see how data center costs might be reduced.)
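Intel's figures are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a hypothetical 100 kW of IT load and an air-cooled baseline in which cooling draws half the IT power (roughly a PUE of 1.5, a common figure for ordinary data centers); the baseline and the use of midpoints for Intel's reported ranges are illustrative assumptions, not figures from the pilot.

```python
# Back-of-the-envelope estimate of total facility power before and
# after oil cooling, using Intel's reported reduction ranges.
# Assumptions (not from the pilot): 100 kW of IT load, and an
# air-cooled baseline where cooling draws 50% of IT power (PUE ~1.5).

it_load_kw = 100.0             # server (IT) power, assumed
cooling_kw = 0.5 * it_load_kw  # assumed air-cooled overhead

baseline_total = it_load_kw + cooling_kw

# Intel reported 90-95% less cooling energy and 10-20% less server
# power; take the midpoints for a single-point estimate.
new_cooling = cooling_kw * (1 - 0.925)
new_it_load = it_load_kw * (1 - 0.15)
new_total = new_it_load + new_cooling

print(f"baseline: {baseline_total:.1f} kW, oil-cooled: {new_total:.1f} kW")
print(f"overall facility savings: {1 - new_total / baseline_total:.0%}")
# -> roughly 150 kW down to ~89 kW, about 41% overall under these assumptions
```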
While the technology is best suited for such places as research facilities, national labs, military bases, and weather-modeling and weapons-research centers, Price said scale is not the main factor driving cost savings. The savings, he says, come from power density: the denser a data center, the more benefit oil cooling confers.
Though no public-sector entities are known to have deployed this system of cooling just yet, some are going to keep their eyes on it.
Officials at the city of Sacramento, for instance, said they recently learned about the technology, and though they’re not eager to become an early adopter, CIO Gary Cook said it looks like it has promise, though there are some perceived drawbacks. “I’d hate to work on the machine after you pull it out of the mineral oil,” he said. “It’s going to be a mess.”
Despite that, he admitted that even a 5 percent savings on cooling overhead could make the technology an attractive investment. The city manages an ever-shrinking server room of about 40 to 50 server racks. The equipment footprint has been shrinking thanks to server virtualization. “We’re about 65 percent virtualized right now,” said Darin Arcolino, IT manager of technical infrastructure.
While virtualization offers many of the same benefits as oil cooling, such as savings on power and a decreased equipment footprint, the technology could someday become the norm, Cook estimated. “Five or 10 years ago, people weren’t virtualizing servers and now it’s the norm,” he said. “So this could be the next generation of the norm for cooling systems.”
Photo via Green Revolution Cooling
Following the open source model for collaborative software development can cut costs while providing a basis to create other innovative networks to develop technology specific to your company. While few have tried to develop industry-specific or vertically oriented open-source solutions up to now, it could become the future of software development.
Open-source software (OSS) development is a process that relies on collaboration between individuals and organizations, and it isn't necessarily driven by a traditional hierarchy of command and control. At the same time, IT departments are under pressure to be more efficient while creating innovative new solutions to meet their business needs, and more and more companies are turning to external sources for ideas that drive innovation. A series of books by Henry Chesbrough has coined the term innovation networks to describe R&D organizations that treat their systems as open: how do you include your partners, customers and even competitors as part of an extended R&D team?
The question is, can OSS be used to establish innovation networks for IT departments? What steps are required to establish a successful software innovation network, and what are the resulting benefits for organizations?
The Drive for Collaboration
Determining the scope of collaboration is often the most challenging aspect of starting an open-source project. The key challenge is to understand the areas of technology that are a core business value of the organization. Based on previous experiences in the software industry, OSS tends to lead to two logical strategies for collaboration:
- Collaborating on the implementation of industry standards or protocols, and
- Establishing an industry platform to grow a market.
Collaborating on Open Standards
Globalization and government regulation have increased the importance of industry standards and protocols. There are many examples of consortiums that define standards and protocols for specific technologies or specific industries. However, the implementation of these standards is often left to ISVs or individual IT organizations.
Software vendors were expected to implement technology standards such as HTTP, XML, Java, etc., in their products, but the implementations provided very little differentiation or added customer value. Open-source software provides an effective mechanism for creating a common implementation that drives the adoption of these standards; the Apache HTTP Web server is a great example of an implementation driving the adoption of an open standard.
A similar case can be made for IT organizations that need to implement specific industry standards and protocols. The actual implementation of these standards provides very little benefit to the core business of an organization. Today, IT organizations typically rely on ISVs or internal development groups to implement these standards and thus incur the costs of sourcing the implementation.
The drive for collaboration is propelled by the need for IT organizations to quickly and efficiently implement new regulations or standards for their business. Organizations within the same industry can join together as a software-innovation network to create a shared implementation of a standard. A common implementation would mean that the cost is shared and the common deployments would result in greater interoperability.
Collaborating on a Common Platform
Creating a common industry platform can address the IT challenge of integrating solutions from different vendors and help accelerate the growth of a fragmented market.
A consistent requirement of IT organizations is the need to integrate solutions from different vendors. For instance, CRM systems often need to be integrated with e-mail systems; financial institutions need to integrate data feeds from many providers; and large-scale manufacturers, such as automotive or aerospace OEMs, have extensive supply chains that need to integrate across the product lifecycle. Typically, the integration is a cost of doing business, not a core value, so creating a common platform that is adopted by a number of industry players effectively streamlines the integration requirements.
Establishing a common platform in a fragmented market of providers can help grow the entire industry. In fragmented markets, significant investment is often duplicated across solution providers but provides no real customer value. In addition, a valuable market ecosystem cannot develop because the market share of each provider is not big enough to sustain investment on one particular platform. Therefore, if multiple players agree to collaborate on a common platform, it can reduce the barriers for increasing the size of the overall market.
Factors to Consider When Establishing a Software-Innovation Network
Open-source software development provides a proven model for creating shared implementations; however, the ultimate goal of a software innovation network is to increase business value. Therefore, we need to consider several aspects of OSS that allow for value creation and value capture when establishing a collaboration among equal partners.
The success of OSS in facilitating collaborative development rests on an open development process. Most major open-source communities, such as Apache, Eclipse and Linux, work on the following principles:
- Openness: being open to participation by any individual or organization, including competing organizations.
- Meritocracy: Openness does not mean democracy; in fact, successful open-source projects work on the principle of a meritocracy. Therefore, newcomers are invited to participate based on their proven merit and ability.
- Transparency: having important project discussions, plans and meeting minutes available in a transparent manner so anyone can view them.
Enabling a Governance Model for Collaboration
All successful long-term organizations require a set of rules that establish a governance model for setting policies and strategies. Governance becomes even more important if the organization is a collaboration among competitors. It is, therefore, critical that the governance model not allow a single player to control or influence the organization. The perception or reality that a single participant controls the overall community can inhibit the participation of others.
Intellectual-property management is a critical consideration when you are creating a shared technology base. Effective IP management includes the selection of an appropriate software license, legal agreements for participants that cover the contribution of IP, and scanning of source code to ensure pedigree and license compatibility.
For instance, the Eclipse Foundation has a well-established IP management system. All participants in the Eclipse community sign the same exact agreement and follow the same IP processes. All Eclipse open-source project committers sign a "committer agreement" that specifies that their contribution is licensed under the Eclipse Public License (EPL). All source code that is contributed to Eclipse projects is automatically scanned to ensure that all of the code is licensed under the EPL or a compatible open-source license. The result is that the technology created in the open-source projects has clear software license and IP pedigree.
Creating a Community
Tim O'Reilly coined the term architecture of participation to describe how open-source projects are able to build and engage a community. The idea is that an open-source community forms around the ability of an individual, regardless of his or her affiliation, to participate. An architecture of participation is created by:
- Making it easy to extend the technology, and
- Having an open development process that is transparent to all.
Participation then occurs when those individuals contribute directly back to the project or build new technology on top of the base technology. The end result is an ecosystem that adds the needed components for quick adoption of new technology.
The network effect of smaller communities within the larger communities has also proven very beneficial for starting new projects. A significant challenge for any new community is generating awareness and participation. Organizations such as Apache and Eclipse allow new projects to leverage the larger community to raise their profile with potential community members.
Establishing the IT Infrastructure
The IT infrastructure to host a community-oriented software-innovation network is nontrivial. Typically, open-source collaborations will require a website, source-code repository, bug-tracking database, wikis, mailing lists and newsgroups. Consideration needs to be given into the ongoing administration and management of the infrastructure.
Open Business Models
A goal of a software innovation network is to create an ecosystem of organizations, commercial and not-for-profit, that benefit from a common platform. These organizations will employ a variety of business models and strategies. Therefore, it is important to ensure that the choice of license and governance model allow for maximum flexibility.
Where Do We Go from Here?
Most IT organizations have reduced software-licensing costs by being users of OSS. The next step to additional IT efficiencies will be their participation in OSS projects. Open-source communities like Apache, Eclipse and Linux have demonstrated a model for collaborative software development that can be the basis for any software-innovation network. Visionary IT departments have already begun to leverage this model to collaborate on the development of technology specific to their domain. Over the next few years, open software-innovation networks could very well be the future of software development.
Ian Skerrett is the director of marketing at the Eclipse Foundation, a not-for-profit corporation supporting the Eclipse open-source community and commercial ecosystem. In this role, he is responsible for implementing programs that raise awareness of the Eclipse open source project and grow the overall Eclipse community.
There’s an important effort underway among health care data experts to enable clinicians and medical researchers to share the same data for analytics to improve patient outcomes.
At issue is the structure of electronic health records (EHR) that were originally designed to be used in day-to-day patient care and are not set up to handle much bulkier data types such as X-ray images and genomic tests.
As a recent editorial in the Journal of the American Medical Association notes, a critical shortcoming of today’s EHRs is that, despite their usefulness, they cannot hold and analyze in a timely fashion much of the ancillary data that health care experts need, such as laboratory and imaging test results.
This condition persists even though available technology is already able to gather some of this information.
“EHRs were never designed to develop insights on large-scale sets of data. They help to collect information that can address inefficiencies of paper records and provide basic error-checking when you saw patients,” says Dr. Graham Hughes, chief medical officer at SAS for the SAS Center for Health Analytics and Insights. Hughes is a developmental neurobiologist and a leader in health informatics.
Addressing this problem is the focus of the Electronic Medical Records and Genomics (eMERGE) network, funded by the National Human Genome Research Institute, a division of the National Institutes of Health. It is a bioinformatics program established in 2007 with seven facilities to develop, disseminate, and apply approaches to research that result from the mapping of the human genome. The program’s coordinating center is located at Vanderbilt University.
This national consortium of scientists and organizations, using supercomputer systems, so far “has captured data sets from 56,000 individuals,” says Dr. Rongling Li, a genetic epidemiologist at the National Human Genome Research Institute and eMERGE’s program director. Multiply roughly 3 billion base pairs per genome by each of those 56,000 people and, she notes, “You can see what we mean by really big data.”
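To make that scale concrete, here is a rough, hypothetical calculation. The encoding and coverage assumptions below are illustrative, not eMERGE's actual storage figures:

```python
# Rough scale of the eMERGE data set described above.
# Assumption: one whole genome = ~3 billion base pairs, and each base
# can be stored in 2 bits (4 possible letters: A, C, G, T). Real
# sequencing pipelines store far more (quality scores, overlapping
# reads), so this is a floor, not an estimate of actual disk usage.

base_pairs_per_genome = 3_000_000_000
individuals = 56_000

total_bases = base_pairs_per_genome * individuals
bytes_total = total_bases * 2 / 8  # 2 bits per base

print(f"{total_bases:.2e} base pairs total")           # ~1.68e+14
print(f"~{bytes_total / 1e12:.0f} TB at 2 bits/base")  # ~42 TB, raw floor
```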
Experts say such a program is a way to move beyond the limitations of medical records.
“Even when EHRs advance, unless there are fundamental changes, they will not be able to handle large volumes of genetic data. We need to build a more fluid system,” says Dr. Justin Starren, chief of the Division of Health and Biomedical Informatics at Chicago’s Northwestern University, adding: “We could wait for the mainstream EHR vendors to solve the issue in the near term, or simply try to stuff the genomic data into the current system.”
Neither of those options seems likely, however. Hughes says he doesn’t think EHRs are the answer.
“You don’t need all [the genetic information] stored in the patient’s health record,” he says. “What you do need are new algorithms that will teach a system to say, ‘I know that I need to look at this particular gene…I know that’s a variant.’ Then signals in the EHR would provide some guidance to the doctor as to the implication of what impact these variants could have on that patient’s care.”
The Potential for Data-Driven Benefits
Discussions about data-driven health care improvements have been going on for years in political and public policy circles, not just the medical field. And they continue among experts working to come up with new data models for patient records.
Crunching vast repositories of genomic data has enormous potential for saving lives. Starren offers this example of a maternity patient:
“There was a woman who was on codeine after her delivery and, unfortunately, turned out to be among the approximately 6 percent of the population that doesn’t metabolize codeine efficiently. She ended up retaining so much of it in her breast milk that her baby’s respirations were depressed and the child died.”
If there had been an easier way to analyze her gene sequence during her pregnancy to show that this woman was one of these atypical metabolizers, and a process in place to flag the doctor about the variant, either the mother wouldn’t have received codeine, or she wouldn’t have initially breastfed her baby.
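A minimal sketch of what such a flag could look like is shown below. This is illustrative only: the gene, drug, and phenotype names are stand-ins for whatever a real pharmacogenomic decision-support system would encode, and none of it comes from the eMERGE program itself.

```python
# Hypothetical pharmacogenomic flag: check a patient's stored metabolizer
# phenotype before a drug order is signed. Gene, drug and phenotype names
# are illustrative stand-ins, not a real clinical rule set.

ALERTS = {
    ("codeine", "CYP2D6", "atypical"):
        "Atypical CYP2D6 metabolizer: codeine dosing/breastfeeding risk. "
        "Consider an alternative analgesic.",
}

def check_order(drug: str, patient_genotypes: dict) -> str | None:
    """Return an alert string if the order matches a known risk rule."""
    for gene, phenotype in patient_genotypes.items():
        alert = ALERTS.get((drug, gene, phenotype))
        if alert:
            return alert
    return None

patient = {"CYP2D6": "atypical"}  # derived earlier from sequencing data
print(check_order("codeine", patient))
```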
Hughes says this kind of preventive scenario is not far-fetched. “We can already find data that allows us to suggest very specialized patterns of treatment (for example, surgery, a specific drug, exercise), determining first what’s best for a specific group overall—like 64-year-old black women—and then eventually for individuals within that group,” says Hughes. “The technology is here today, [just] not used widely.”
Once these analytics provide more easily read data, health care economics will also benefit, Li says: “When we get the right diagnosis, and provide the right dose of the right medicine, we’ll save money.”
Such data analysis might eventually help avoid malpractice suits. “It would act as smart surveillance that can troll through this information 24/7 looking for warnings, information that your care team is too harried to look for,” says Hughes.
Analytics could also lead to personalized medicine. “Think about the number of drugs people over 65 take, and how many are necessitated by a genetic influence, like cholesterol,” says Starren. “Where we’re going over the next 10 years is not just checking your blood pressure at a pharmacy. You’ll have your entire genome sequenced and your risks will be sent on to your doctor to guide your individual treatment,” Hughes says.
On the downside, algorithms allowing this kind of genetic sifting raise other issues, such as privacy and ethics. “We haven’t figured out all the unexpected consequences, in areas like insurance or employment, when each individual can be flagged as carrying ‘dangerous’ genes,” says Starren.
In the meantime, though, Starren says, “I think one of the lessons behind this is that we traditionally think of research and clinical care as two separate worlds that have nothing to do with each other. But as medicine becomes recognized as a big data problem, the researchers and the clinical IT people will see the need to work much more closely together.”
He adds, “If you’re going to be a scientist in this century, you’ll have to follow algorithms.”
Wendy Meyeroff, of WM Medical Communications, is an experienced freelance writer based in Baltimore who specializes in health care and IT topics.
Home page illustration of chromosomes of the human genome, via National Human Genome Research Institute.
The US Air Force this week said it will base the first Space Fence radar post on Kwajalein Island in the Republic of the Marshall Islands with the site planned to be operational by 2017.
The Space Fence is part of the Department of Defense's effort to better detect and track space objects, which include thousands of pieces of space debris as well as commercial and military satellite parts. Approximately 19,000 objects larger than 10 cm are known to exist, according to NASA. The Space Fence will replace the current VHF Air Force Space Surveillance System, built in 1961.
The Space Fence will use multiple S-band ground-based radars (the exact number will depend on operational performance and design considerations) to permit detection, tracking and accurate measurement of orbiting space objects. The Space Fence is intended to be the most precise radar in the space surveillance network; its S-band capability will provide the highest accuracy in detecting even the smallest space objects, the Air Force stated. The Fence will have greater sensitivity, allowing it to detect, track and measure an object the size of a softball orbiting more than 1,200 miles up. Because it is an uncued tracking system, it will provide evidence of satellite break-ups, collisions or unexpected maneuvers of satellites, the Air Force said.
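The jump from VHF to S-band is what makes softball-sized targets plausible: a radar has trouble seeing objects much smaller than its wavelength. A quick check of the numbers follows; the band edges are the standard IEEE definitions, the old fence's operating frequency is approximate, and treating wavelength as a rough detectability floor is a simplification.

```python
# Wavelengths of the old VHF fence vs. the new S-band fence.
# Objects much smaller than the wavelength scatter poorly (Rayleigh
# regime), so shorter wavelengths roughly track smaller detectable sizes.

C = 299_792_458  # speed of light, m/s

bands = {
    "VHF (old fence, ~217 MHz)": 217e6,  # approximate operating frequency
    "S-band low edge (2 GHz)":   2e9,
    "S-band high edge (4 GHz)":  4e9,
}

for name, freq_hz in bands.items():
    print(f"{name}: wavelength = {C / freq_hz * 100:.1f} cm")

# VHF: ~138 cm -- far larger than a 10 cm object.
# S-band: 7.5-15 cm -- comparable to the softball-sized debris the
# Air Force says the new fence will track.
```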
The Space Fence program, which will ultimately cost more than $3.5 billion, will be made up of a system of geographically dispersed ground-based sensors to provide timely assessment of space events.
"The Space Fence will provide precise positional data on orbiting objects and will be the most accurate radar in the Space Surveillance Network. Space Fence data will be fed to the Joint Space Operations Center at Vandenberg Air Force Base, Calif. Data from the Space Fence radar will be integrated with other Space Surveillance Network data to provide a comprehensive space situational awareness and integrated space picture," the Air Force said.
Construction is expected to begin in September 2013, with construction and testing planned to take 48 months, the Air Force said.
Lockheed Martin reported earlier this year that a prototype system it is developing to track all manner of space debris is now tracking actual orbiting space objects. Raytheon and others are involved in that Space Fence development process.
Intel researchers have built a 48-core microprocessor that the chip giant is pitching as a "single-chip cloud computer," Intel's chief technology officer said Tuesday in San Francisco.
"With a chip like this, you could imagine a cloud data center of the future which will be an order of magnitude more energy efficient than what exists today," said Justin Rattner, Intel's CTO and head of Intel Labs.
The prototype single-chip cloud computer, which Santa Clara, Calif.-based Intel has dubbed an SCC, is the second generation of Polaris, a many-core research chip that Intel introduced at the International Solid-State Circuits Conference (ISSCC) two years ago. The experimental 48-core chip shares some attributes of Intel's future-generation GPU microarchitecture, code-named Larrabee, Rattner said.
As with Larrabee and unlike the first Polaris chip, the cores that make up Intel's new SCC are compatible with the x86 instruction set, or as Intel prefers it to be known, the Intel Architecture (IA).
The experimental 48-core computer chip "rethinks many of the approaches used in today's designs for laptops, PCs and servers," according to Intel. One key piece of such rethinking is the use of software to manage page-level memory coherency, rather than baking that functionality into the silicon as with previous architectures, Rattner said.
Removing such hardware functionality is a silicon space-saver, allowing room for a new, high-speed on-chip information sharing network built onto the processor die. The SCC team also developed new power management techniques that Rattner said allow all 48 cores to operate while drawing as little as 25 watts in power. At peak performance, the prototype chip draws 125 watts, putting it within the power band of Intel's Core 2 and Nehalem-based processors currently on the market.
Intel will be sharing about 100 of the experimental chips with industry and academic partners in 2010, Rattner said. Research teams at Microsoft, ETH Zurich, University of California at Berkeley and the University of Illinois already have such chips to play with, he added.
"This is not a product. It never will be a product. But it provides a very good platform for conducting research," Rattner said.
Intel sees a strong play for future many-core chips in cloud computing installations, where energy efficiency and the ability to build extremely dense computing are at a premium. The "many-core era" will also mark a shift to computing that is more "immersive, social and perceptive," Rattner said.
"Computers will see and hear and they will probably speak, and do a number of other things that resemble what humans do," he said.
The experimental SCC was produced by 40 Intel Labs researchers in the U.S., Europe and India, Rattner said. The 1.3-billion transistor chip features Intel's current-generation 45-nanometer, high-k metal gate process technology. Bringing more cores, better power management and x86 compatibility to the first Polaris design was a largely glitch-free exercise, Rattner added.
"There was only one significant bug" during the design process, he said.
The chip's 48 x86-compatible cores are "the most ever built on a single chip," according to Intel. Those cores are laid out on the processor in a two-dimensional grid of 24 tiles, each holding two cores.
Rattner described the power management capabilities as "fine-grain," though not so fine-grain as to allow for power to be throttled up or down at the core level. Instead, it's possible to run each two-core tile at a different frequency, while the chip's regions -- six banks of four tiles -- can each be run at different voltages, Rattner said.
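Those domains are easy to picture in code. The sketch below models the layout the article describes (24 two-core tiles; six four-tile voltage banks; per-tile frequency). The grouping of tiles into banks and the sample frequency and voltage values are illustrative assumptions; Intel's published die layout may arrange the regions differently.

```python
# Model of the SCC's power domains as described above: frequency is set
# per two-core tile, voltage per four-tile region (bank). The grouping
# and the sample values are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class Tile:
    cores: int = 2
    freq_mhz: int = 1000          # each tile can run at its own frequency

@dataclass
class VoltageRegion:
    volts: float = 1.1            # each 4-tile bank has its own voltage
    tiles: list = field(default_factory=lambda: [Tile() for _ in range(4)])

chip = [VoltageRegion() for _ in range(6)]  # 6 regions * 4 tiles * 2 cores = 48

# Example: drop one region to a low-power state while boosting one tile.
chip[5].volts = 0.8
for tile in chip[5].tiles:
    tile.freq_mhz = 300
chip[0].tiles[0].freq_mhz = 1300

total_cores = sum(t.cores for r in chip for t in r.tiles)
print(total_cores)  # 48
```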
The second-generation 2D mesh network on the SCC features 24 routers, one per tile, and consumes just a third of the power of the previous Polaris network, Rattner said. Each core has its own dedicated L2 cache, with 256 Gbps bisection bandwidth and 64 Gbps duplex link bandwidth. The chip has four integrated DDR3 memory controllers that can address up to 64GB of memory.
University of California scientists have created bendable electronic skin that uses lights to react to stimuli like touch and pressure, according to a report in the Daily Mail. Researchers, including UC Berkeley associate professor of electrical engineering and computer sciences Ali Javey, say one potential use could be to help restore feeling to people with prosthetic limbs.
The "e-skin" is made from a thin layer of plastic melted onto a silicon strip, which is then adhered to flexible electronic circuits. Once the plastic hardens, it is removed. Each pixel in the sample prototype contains a semiconductor carbon nanotube transistor, an organic light emitting diode and a pressure sensor.
Javey and his team also think that e-skin could be used to give realistic skin and touch capabilities to robots.
Technology in the Political Arena
A few weeks ago, Barack Obama was sworn in as president of the United States. But that’s not the only news that has tongues wagging. People are talking about the crucial medium that helped put him there.
In today’s increasingly interconnected world, in which geographic boundaries matter less and less thanks to the ubiquity of the World Wide Web, even some of the most highly ranked government officials turn to familiar, everyday technology to get their messages across.
Take, for instance, the president himself. While he was running for office, President Obama had about 3 million Facebook supporters and managed to muster up four times as many MySpace “friends” as former Republican nominee John McCain, according to a CNN article.
Further, the president revealed the identity of his running mate, Joe Biden, to his loyal and tech-savvy supporters via text message.
After his election in November, Obama wasted no time going live with a Web site that would give the American people — or anyone in the world, for that matter — important news and updates, including a detailed agenda for the Obama administration.
The new millennium appears to have ushered in a technology platform that allows candidates to get increasingly competitive, not to mention creative.
Are the days of the Roosevelt fireside chats officially over? It seems so, as Obama appears to want to move beyond one-way communication and spur interactivity between the government and its people.
For instance, Change.gov urges visitors to share their visions for the Obama presidency. A few months ago, CNN reported that prior to signing any nonemergency legislation, the president would wait a period of five days to allow the general public to post their thoughts online.
Technology and the Internet may have made an indelible mark on the 2008 U.S. presidential campaign — and, undoubtedly, many campaigns to come — but the newly elected president isn’t the only political figure going online as a means of communicating with his constituency. Toward the end of December, San Francisco Mayor Gavin Newsom used the ever-popular YouTube site to publicly broadcast his entire State of the City speech.
In his online message, Newsom said he expects to inform the public on matters such as universal health care, education and the budget without the running commentary of various media outlets.
Even British Prime Minister Gordon Brown has applauded the Internet as a practical way to communicate directly with the public. Almost a year ago, Brown created an online version of “Questions to the Prime Minister,” a constitutional convention in the U.K. in which members of Parliament are given half an hour to ask the prime minister questions.
According to a Telegraph article, the new version would allow any member of the general public to post video questions on the YouTube-hosted site, and Brown would pick the most popular questions to respond to via video messages.
Opening up this avenue of communication via technology leads to greater transparency in the government, which certainly is a good thing. However, it also allows for more prodding on the part of the people, so even the slightest misstep or faux pas can be magnified.
In my estimation, even politicians with the best intentions who choose to leverage technology to their advantage must brace themselves, as they could be viewed and criticized more harshly than those who don’t.
Nonetheless, it seems to me that the Internet — and indeed technology in general — will continue to play a fundamental role in the political arena for decades to come. Just looking at the technology-related strides made by the Obama campaign, the future of elections and campaign races is ripe for even more creative and innovative tactics.
– Deanna Hartley, firstname.lastname@example.org
A faster Mac is always a better Mac, and there are many things you can do to get the most performance from your computer. But what really works? Here are some common myths about Macs and what does--and does not--affect performance.
1. More processing cores always means better performance.
To test this theory, we ran benchmarks on two 2012 Mac Pros, one with 12 processing cores running at 2.4GHz and one with a quad-core processor running at 3.2GHz. With a MathematicaMark score of 5.70, the 12-core Mac Pro's result was twice that of the quad-core Mac Pro. The 12-core Mac Pro also finished the Cinebench CPU test in the half the time of the quad-core Mac Pro.
But despite all of those extra cores, the 12-core Mac Pro posted slower times than the quad-core system in our iTunes encode, Aperture test, and file compression tests.
While some professional applications can benefit greatly from multiple processors, most applications aren't designed to take advantage of more than four cores. For the majority of applications, fewer but faster processors are preferred.
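Those results are consistent with Amdahl's law, which caps the speedup from adding cores by the fraction of a workload that can actually run in parallel. A quick illustration follows; the parallel fractions are hypothetical values chosen only to show the shape of the effect, and per-core clock speed is folded in as a simple multiplier.

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# fraction of the work that parallelizes. Clock speed is modeled as a
# simple linear factor. The p values are hypothetical illustrations.

def throughput(p: float, cores: int, ghz: float) -> float:
    """Relative throughput vs. a single 1 GHz core."""
    speedup = 1.0 / ((1.0 - p) + p / cores)
    return speedup * ghz

for p in (0.95, 0.50):  # highly parallel vs. mostly serial workload
    twelve = throughput(p, cores=12, ghz=2.4)  # 12-core 2.4GHz Mac Pro
    quad = throughput(p, cores=4, ghz=3.2)     # quad-core 3.2GHz Mac Pro
    print(f"p={p:.2f}: 12-core/quad ratio = {twelve / quad:.2f}")

# p=0.95: ratio ~1.67 -- the 12 slower cores win (Cinebench-like).
# p=0.50: ratio ~0.87 -- the 4 faster cores win (iTunes-like).
```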
2. Having an external monitor plugged into your MacBook will slow down performance.
We tested a late 2013 11-inch MacBook Air, with and without a 27-inch Apple Cinema Display attached, and found almost no performance differences between the two configurations in the 14 tests in our Speedmark 9 benchmarking suite.
We thought switching to an older, slower MacBook Air might show more of a difference, but we were wrong. The only test to show any real difference was our iMovie test, which was less than 4 percent faster on the 2013 MacBook Air without the external monitor and just over 2 percent faster on a 2011 Air without the external monitor. Differences like that are hardly worth mentioning, much less unplugging a monitor over.
3. Lower capacity SSDs are slower than high capacity SSDs.
When it comes to solid-state drive (SSD) performance, capacity matters. We took a pair of Toshiba Q series Pro drives and two Samsung EVO 840 drives and ran our performance tests on them.
The 512GB Samsung EVO 840 was 39 percent faster than the 256GB EVO 840 in our 10GB large file write test and 26 percent faster than the lower capacity drive in our 10GB files and folders write test. Read speeds, however, were unaffected by the capacity. Blackmagic and AJA tests both showed the 512GB drive's write speed to be about 32 percent higher than the 256GB model's, with read speeds again showing little change.
The difference in write speeds was even more pronounced in our Toshiba tests. Testing two Q series Pro drives, one at 128GB and the other at 512GB, the 512GB Toshiba was 2.5 times as fast as the 128GB SSD in our large file write test and 2.3 times as fast in our files and folders write test. Read times were within a percentage point of each other.
It's also worth noting that the smaller drives were wildly erratic in their write times. Occasionally they would spike to the speeds found in the larger capacity drives, and other times they dropped way below the average. On the other hand, the larger capacity drives were highly consistent in their read and write speeds throughout our testing.
4. Keeping lots of free space on your startup drive will improve your Mac's performance.
Our tests on a late 2012 27-inch iMac with 2.9GHz quad-core Core i5 processor, 8GB of RAM and a 7200-rpm 1TB hard drive showed some serious performance degradation as the drive filled up. The two tests that showed the biggest change in performance were in our 6GB files and folders copy test and unzipping a 6GB compressed file. Our baseline tests, with the disk about 5 percent full, showed the iMac taking 93 seconds in the copy test and 84 seconds in the unzip test. When we filled the drive to 50 percent of its capacity, the results slowed down by 4.3 percent on the copy test and just under 8 percent on the unzip test. Filling the drive to 80 percent capacity, the baseline results were more than 11 percent faster than the almost-full iMac in the copy test and 17.6 percent faster in the unzip test. Pushing it even further, we ran the tests again at 97 percent of capacity. This time the baseline results were nearly 21 percent faster in the copy test and almost 35 percent faster on the unzip test.
With SSDs, it was a different story. Only at the 97 percent full capacity did we see any difference in our SSD results. The baseline result for SSD in the MacBook Pro was 35 percent faster, but only in the unzip test.
5. Adding RAM always improves performance.
The lab has done quite a bit of testing on this subject over the years; our most recent coverage was last May with Mountain Lion and older versions of apps. This time out, we took a mid 2012 15-inch MacBook Pro with quad-core 2.3GHz Core i7 processor and a 512GB hard drive and ran it with 4, 8, and 16GB of RAM on loan from Crucial.
The tasks in our Photoshop tests showed the greatest benefit from increased RAM. Using our standard Speedmark 9 action script with a 100MB test file, the 8GB setup was about 14 percent faster than the 4GB configuration. Upping the RAM to 16GB shaved another couple of seconds off the time and was 15.5 percent faster than the 4GB baseline configuration. We ran a more intensive test, one that uses more hardware-accelerated tasks, and found an even greater benefit from increased RAM. In this test, the 4GB configuration took almost exactly 10 minutes to complete, upgrading the RAM to 8GB brought the time down to 7 minutes 18 seconds, and the 16GB configuration finished the test in just under five minutes.
Many other tests, however, were unaffected by the addition of RAM. These tests included Cinebench CPU and Open GL tests, HandBrake, iMovie, Heaven and Valley graphics benchmarks, and PCMark 8's Office application tests. Some tests actually ran slower with more RAM. Our iPhoto import test took 112 seconds with 4GB of RAM, 117 seconds with 8GB of RAM, and 138 seconds with 16GB of RAM. Similarly, our Aperture import and process test showed the 4GB configuration taking just over 121 seconds to complete, 8GB took an extra 10 seconds and the 16GB configuration added another twenty seconds to the time. Copy, zip and unzip tests were also slower with 8GB and even slower with 16GB of RAM installed.
From our tests, 8GB would probably be the sweet spot for most users. It offers a performance boost in applications like Photoshop, but with fewer performance penalties in apps like iPhoto and Aperture.
6. Faster graphics cards only improve gaming performance.
While faster graphics cards certainly can pump up 3D gaming frame rates, more and more applications are using OpenCL to take advantage of those powerful GPUs. Two such applications are Photoshop and Final Cut Pro X. Much of the time, the GPU acceleration makes for a smoother interface, faster previewing and other UI enhancements. Photoshop has a handful of effects, filters and manipulations that are GPU accelerated.
We took a 2012 Mac Pro--the most recent Mac to offer easy swapping of graphics cards--and ran a Photoshop action script made up of these GPU accelerated tasks on the stock AMD Radeon HD 5770 with 1GB of VRAM and a Sapphire HD 7950 with 3GB of VRAM. The Sapphire finished the test in 239 seconds, 5 percent faster than the stock card.
In the Heaven and Valley graphics benchmarks, the biggest differences showed up in high 2560 by 1600 resolution tests, where the Sapphire was able to push 14.2 frames per second in the Heaven benchmark versus the 5770's unplayable 1.15 frames per second. Valley results at that high resolution were similar, with the Sapphire achieving 18.3 fps versus the Radeon's 1.25fps.
This story, "Fact or fiction: What does (and doesn't) actually speed up your Mac" was originally published by Macworld. | <urn:uuid:2fb9e7a6-7749-45ac-ae79-8b4f8fb90de8> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2174146/data-center/fact-or-fiction--what-does--and-doesn--39-t--actually-speed-up-your-mac.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00357-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947942 | 1,714 | 2.640625 | 3 |
Tokoro M. (Asada Ladies Clinic Medical Corporation), Fukunaga N. (Asada Ladies Clinic Medical Corporation), Yamanaka K. (RIKEN), Itoi F. (Asada Ladies Clinic Medical Corporation), and 5 more authors. PLoS ONE, 2015.
Generally, transportation of preimplantation embryos without freezing requires incubators that can maintain an optimal culture environment with a suitable gas phase, temperature and humidity. Such incubators are expensive to transport. We reported previously that normal offspring were obtained when the gas phase and temperature could be maintained during transportation. However, that system used plastic dishes for embryo culture and is unsuitable for long-distance transport of live embryos. Here, we developed a simple low-cost embryo transportation system. Instead of plastic dishes, several types of microtubes, usually used for molecular analysis, were tested for embryo culture. When they were washed and attached to a gas-permeable film, the rate of embryo development from the 1-cell to blastocyst stage was more than 90%. The quality of these blastocysts and the rate of full-term development after embryo transfer to recipient female mice were similar to those of a dish-cultured control group. Next, we developed a small warm box powered by a battery instead of mains power, which could maintain an optimal temperature for embryo development during transport. When 1-cell embryos derived from the BDF1, C57BL/6, C3H/He and ICR mouse strains were transported by a parcel-delivery service over 3 days using the microtubes and the box, they developed to blastocysts at rates similar to controls. After the embryos had been transferred into recipient female mice, healthy offspring were obtained without any losses except for the C3H/He strain. Thus, transport of mouse embryos is possible using this very simple method, which might prove useful in the field of reproductive medicine. © 2015 Tokoro et al.
Preventing companies and government agencies from gathering embarrassing or damaging personal information about you may be a fool’s errand, a White House panel on privacy in the age of big data said on Thursday.
Instead, lawmakers and regulators should focus their efforts on preventing the dissemination or other use of damaging personal data, according to the report from the President's Council of Advisors on Science and Technology (PCAST).
In some cases that damaging data may be as personal as an individual's genome that, if shown to a potential employer, could lead to job discrimination based on her likelihood of developing a degenerative disease. In other cases, organizations may collect the most common characteristics of terrorists or criminals, which could lead to discrimination against people who have those characteristics but are neither terrorists nor criminals.
The report follows a three-month study led by White House Counselor John Podesta on how the explosion of new data sources and new tools to gather intelligence from them will affect Americans’ privacy.
The term “big data” generally refers to data from Internet and smartphone activity, video and other sources that don’t fit neatly into a spreadsheet. Within the past decade, computer programmers have developed new tools to analyze those masses of data with positive and negative consequences.
“For example, large-scale analysis of research on disease, together with health data from electronic medical records and genomic information, might lead to better and timelier treatment for individuals but also to inappropriate disqualification for insurance or jobs,” the report found. “GPS tracking of individuals might lead to better community-based public transportation facilities, but also to inappropriate use of the whereabouts of individuals.”
Protecting individuals’ privacy by limiting data collection is likely impractical, first, because “the beneficial uses of near-ubiquitous data collection are large, and they fuel an increasingly important set of economic activities,” the report found.
Second, much of the data that companies collect -- either through the device you create the data on, such as a smartphone app, or through sensors that collect “born-analog data” -- is perfectly innocuous unless combined with other innocuous data to paint a larger picture. Government attempts to regulate the collection of any small grain of data that might one day form part of a larger privacy invasion would be impossible and impractical, the report said.
Technologies that anonymize data are “increasingly easily defeated by the very techniques that are being developed for many legitimate applications of big data,” the report found. “In general, as the size and diversity of available data grows, the likelihood of being able to re-identify (that is, re-associate their records with their names) grows substantially.”
Similarly, it’s impractical to expect citizens’ privacy rights to be protected by terms of service agreements with digital companies, the report found. This is, first, because very few people actually read those agreements and, second, because, in many cases, competition has not produced a viable but fully private competitor to services such as Google and Amazon.
“PCAST believes that the responsibility for using personal data in accordance with the user’s preferences should rest with the provider rather than the user,” the report said.
Consumer protection organizations or app stores might serve as intermediaries, rating companies on their privacy protections and allowing users to choose based on those ratings, the report suggests.
“The federal government could encourage the development of standards for electronic interfaces between the intermediaries and the app developers and vendors,” the report states.
The report also recommends that:
- Policies and regulations aimed at addressing data privacy focus on what's being exposed rather than on the technology that's exposing it, which may soon be superseded or out of date.
- Federal agencies increase research into both the ways personal data is being exposed and the social structures that are enabling or inhibiting that exposure.
- The White House should work with universities and professional societies to encourage digital privacy.
- The U.S. should “take the lead both in the international arena and at home by adopting policies that stimulate the use of practical privacy-protecting technologies.”
The report also urges Congress and the White House to:
- Advance the administration’s Consumer Privacy Bill of Rights
- Pass national data breach legislation along the lines of the administration's 2011 Cybersecurity legislative proposal.
- Extend privacy protections to non-U.S. persons
- Enhance protections for data collected about U.S. students
- Use technical expertise to limit the ways big data can be used to discriminate against protected classes in employment, housing and other fields.
- Update the Electronic Communications Privacy Act to reflect modern technology
TechAmerica, a leading coalition of government vendors, gave the report a mixed reception.
“We appreciate the report’s focus on the overall benefits that the effective use of big data can achieve but are somewhat confused as to why the administration has also focused on hypothetical concerns about the use of data,” Mike Hettinger, TechAmerica’s senior vice president for federal government affairs, said. “This creates uncertainty in the minds of Americans about a technology that has so much potential.”
The New America Foundation’s Open Technology Institute praised the report for focusing on the possible discriminatory effects of big data as more information about individuals is potentially exposed.
“The White House report highlights how more work must be done to ensure that big data does not lead to discrimination when it comes to key services and economic opportunities such as housing, employment, and credit,” the group said. | <urn:uuid:582ca5bb-04ca-4812-992c-5fa8ceabf53c> | CC-MAIN-2017-09 | http://www.nextgov.com/big-data/2014/05/trying-limit-collection-personal-data-would-be-lost-cause/83614/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00530-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940854 | 1,166 | 2.703125 | 3 |
Last week, HPC in the Cloud discussed what types of HPC applications are best suited for cloud technologies. While capabilities offered by cloud providers (minimal upfront costs, high scalability and quick time to deployment) remain attractive to HPC users, the needs of their workloads are sometimes at odds with the technology. One particular hurdle is the amount of bandwidth between the end user and their provider of choice. Earlier this week, a scalability.org blog covered this dilemma, calling it a “non-trivial” issue.
Most public cloud providers are best suited for Web hosting, email services and similar ongoing tasks. Their infrastructures are geared toward these purposes, scaling up capacity relative to end user demand. However, if a single user wants to store and process massive datasets, the lack of high bandwidth connectivity can severely hinder their research.
NASA is familiar with this problem. The agency recently launched a program called NEX, which houses 40 years of Earth satellite data in a storage cluster next to its Pleiades supercomputer. NASA Ames Earth scientist Ramakrishna Nemani spoke to us about the project. He described how long it took to migrate a large collection of Landsat images from a datacenter in South Dakota to the Ames facility.
“I’ll give you an example about how difficult this has been. We brought about 400 terabytes of data from the EROS datacenter in Sioux Falls, South Dakota. I was blown away, it took us nearly 6 ½ months.”
With a turnaround time like that, it probably would have been easier to FedEx the dataset on a set of hard drives. The scalability blog directs blame for this kind of issue at the lack of competition between ISPs in the US.
They priced an asymmetric connection delivering 100Mbit/s down and 10-15Mbit/s up at roughly $300/mo. That translates to 12.5MByte/s down and 1.25MByte/s up.
Given that performance, an end user could download roughly one terabyte per day. But since the upload transfers at 10 percent of the download speed, it would take approximately 10 days to upload a single terabyte.
Although standard service providers have been lacking in their ability to match throughput with demand, they may receive more incentive from Google. The Internet search giant has decided to throw themselves into the mix, launching their own fiber service in Kansas City. For $70 a month, users can get symmetrical 1,000Mbit/s (1 Gb/s) connectivity. With that performance, the 10 day/TB upload becomes a more practical, two hour transfer.
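The arithmetic behind those estimates is easy to check. A quick back-of-the-envelope sketch in Python (our own illustration, using decimal units and the rates quoted above):

def transfer_days(size_tb, rate_mbit_s):
    # Days needed to move size_tb terabytes at rate_mbit_s megabits per second.
    size_bits = size_tb * 8e12  # 1 TB = 8e12 bits
    return size_bits / (rate_mbit_s * 1e6) / 86400

print(transfer_days(1, 100))        # download at 100 Mbit/s -> ~0.9 days per TB
print(transfer_days(1, 10))         # upload at 10 Mbit/s    -> ~9.3 days per TB
print(transfer_days(1, 1000) * 24)  # Google Fiber, 1 Gb/s   -> ~2.2 hours per TB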
By effectively eliminating the bandwidth bottleneck, end users have the ability to implement a new range of cloud-based services. This includes high capacity storage and data-intensive research. Unfortunately Google’s service is limited to Kansas City and no plans to expand the program have been announced. | <urn:uuid:04d9a409-004a-4265-96d8-1f1e52ae65df> | CC-MAIN-2017-09 | https://www.hpcwire.com/2012/08/16/the_bandwidth_bottleneck/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00530-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95238 | 606 | 2.5625 | 3 |
Setting Up Files and Web Sites for Offline Access
So far in this series, I've explained how the various types of offline file caching work. In this article, I'll explain how to actually set up files for offline access. I'll then go on to explain how to make Web sites available offline as well.
Before you can make any folder available offline, you must first share it. To do so, open My Computer and navigate to the folder you want to make available offline. Next, right-click on the folder and select the Sharing command from the resulting context menu. When you do, you'll see the folder's properties sheet with the Sharing tab selected.
To share the folder, select the Share This Folder radio button and enter a share name. You can now make the folder available for offline use. To do so, click Caching on the folder's Sharing tab. You'll see the dialog box shown in Figure 1.
As you can see, the Caching Settings dialog box contains a drop-down list containing the various types of caching. You can select Automatic Caching, Manual Caching, or Automatic Caching for Programs. The dialog box also contains a brief description of each type of caching and what it's good for.
If you set a folder to use Automatic Caching or Automatic Caching for Programs, then the caching process is... well, automatic. However, if you decide to manually cache a folder, the caching process requires some user intervention. Before a user can use a manually cached folder offline, he must go through a process called pinning.
Pinning is the process of selecting which files should be available offline. Once you manually cache a folder, any user who normally has access to the folder also has rights to pin the folder. However, you can modify the group policy so that only a select few individuals have pinning privileges.
To pin a folder, the user must be online. Once the user is logged in, he must navigate through the directory structure to the folder to which he needs offline access. After selecting the folder, the user must select File|Make Available Offline. Windows 2000 will then launch the Offline Files Wizard.
The wizard's initial screen simply gives an explanation of the wizard's purpose, and the user can click Next to move on. The next screen asks if the user wants to automatically synchronize the offline files when he logs on and off the computer. The user makes the selection using the check box provided and then clicks Next. The wizard's final screen gives the user a chance to see a periodic reminder that he isn't online. After the wizard completes, a dialog box asks whether the user wants to make the selected folder the only thing that's available offline, or if he would also like to include the contents of the folder's subfolders. After the user selects the appropriate radio button and clicks OK, the folder will be available offline.
Caching Web Pages
As I mentioned earlier, you can also make Web sites available for offline use. For example, you might like to take a copy of your company's Web site with you when you go on business trips. To make a page available offline, you must first add it to your favorites. To do so, go to the desired page and select Add To Favorites from Internet Explorer's Favorites menu.
Next, return to the Favorites menu and right-click on the Web site you've just added. Select Make Available Offline from the resulting context menu. At this point, you'll see the Offline Favorite Wizard.
Begin by clicking Next to get through the wizard's introduction screen. Next, you'll see a screen similar to the one shown in Figure 2. By default, the wizard makes only a single page of the Web site available for offline use. However, you can make the entire site available offline, if you wish.
To do so, click Yes to make the page's links valid. You can then use the dialog box's counter to tell Internet Explorer how many layers deep you want to make available offline. If you make enough layers available offline, you can download an entire Web site. Be careful about doing that, though--some Web sites are huge, and trying to download the entire thing can cause you to run low on hard disk space.
Click Next to continue. The next screen informs you that you can update the page any time you're online by selecting Tools|Synchronize. However, you can also use this screen to establish an automatic synchronization schedule.
After deciding on your synchronization schedule, click Next. The wizard's final screen asks if the Web site requires a password, and gives you the opportunity to supply one. When you complete the process, Internet Explorer will begin downloading the page for offline use.
In this series, I've explained that mobile users sometimes need access to network resources and Web sites when no network or dial up connection is available. In answer to this problem, Windows 2000 offers several ways to make files, folders, programs, and Web sites available for offline use through caching. In my discussion, I've explained the pros and cons of each type of offline caching, as well as the setup procedures for each.
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all. | <urn:uuid:bb72a392-2ad2-4557-ad4f-a4b0afa83814> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netos/article.php/625491/Setting-Up-Files-and-Web-Sites-for-Offline-Access.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00582-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.905834 | 1,134 | 2.515625 | 3 |
Public Key Encryption
In order to use public key encryption, you must generate two keys: a public key and a private key. You keep the private key for yourself and give the public key to the world. In a similar manner your friends will generate a pair of keys and give you their public keys. Public key encryption is marked by two distinct features.
1. When you encrypt data with someone’s public key, only that person’s private key can decrypt it.
2. When you encrypt data with your private key, anyone else can decrypt it with your public key.
You may wonder why the second point is useful at all: Why would you want everybody else to be able to decrypt something you just encrypted? The answer lies in the purpose of the encryption. Although encryption changes the original message into unreadable ciphertext, the purpose of this encryption is to provide a digital signature. If the message decrypts properly with your public key, only you could have encrypted it with your private key, proving that the message is authentic. Combining these two modes of operation yields privacy and authenticity. You can sign something with your private key so that it is verified as authentic, and then you can encrypt it with your friend’s public key so that only your friend can decrypt it.
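To make the two modes concrete, here is a minimal sketch using Python's third-party "cryptography" package. The library and padding choices are our own illustration, not something PGP or GnuPG (discussed below) require:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a keypair; keep the private half secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Mode 1: anyone encrypts with the public key; only the private key decrypts.
ciphertext = public_key.encrypt(b"for your eyes only", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"for your eyes only"

# Mode 2: the private key signs; anyone verifies with the public key.
signature = private_key.sign(b"I wrote this", pss, hashes.SHA256())
public_key.verify(signature, b"I wrote this", pss, hashes.SHA256())
# verify() raises InvalidSignature if the message or signature was altered.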
Public key encryption has three major shortcomings.
1. Public key encryption algorithms are generally much slower than symmetric key algorithms and usually require a much larger key size and a way to generate large prime numbers to use as components of the key, making them more resource intensive.
2. The private key must be stored securely and its integrity safeguarded. If a person’s private key is obtained by another party, that party can encrypt, decrypt, and sign messages impersonating the original owner of the key. If the private key is lost or becomes corrupted, any messages previously encrypted with it are also lost, and a new keypair must be generated.
3. It is difficult to authenticate the origin of a key, that is, to prove who it originally came from. This is known as the key-distribution problem and is the raison d’etre for such companies as VeriSign.
Algorithms such as RSA, Diffie-Hellman, and El-Gamal implement public key encryption methodology. Today a 512-bit key is considered barely adequate for RSA encryption and offers marginal protection; 1,024-bit keys are expected to withstand determined attackers for several more years. Keys that are 2,048 bits long are now becoming commonplace and rated as espionage strength. A mathematical paper published in late 2001 and reexamined in the spring of 2002 describes how a machine can be built (for a very large sum of money) that could break 1,024-bit RSA encryption in seconds to minutes (www.counterpane.com/crypto-gram-0203.html#6). Although the cost of such a machine is beyond the reach of most individuals and smaller corporations, it is well within the reach of large corporations and governments.
Symmetric Key Encryption
Symmetric key encryption is generally fast and simple to deploy. First, you and your friend agree on which algorithm to use and a key that you will share. Then either of you can decrypt or encrypt a file with the same key. Behind the scenes, symmetric key encryption algorithms are most often implemented as a network of black boxes, which can involve hardware components, software, or a combination of the two. Each box imposes a reversible transformation on the plaintext and passes it on to the next box, where another reversible transformation further alters the data. The security of a symmetric key algorithm relies on the difficulty of determining which boxes were used and the number of times the data was fed through the set of boxes. A good algorithm will cycle the plaintext through a given set of boxes many times before yielding the result, and there will be no obvious mapping from plaintext to ciphertext.
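In code the symmetry is plain to see. A minimal sketch with the same Python package (Fernet is an AES-based recipe, standing in here for whatever algorithm the two parties agreed on):

from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # agreed upon in advance over a secure channel

token = Fernet(shared_key).encrypt(b"meet at noon")          # either party encrypts...
assert Fernet(shared_key).decrypt(token) == b"meet at noon"  # ...either party decrypts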
The disadvantage of symmetric key encryption is that it depends heavily on a secure channel to send the key to your friend. For example, you would not use e-mail to send your key; if your e-mail is intercepted, a third party is in possession of your secret key, and your encryption is useless. You could relay the key over the phone, but your call could be intercepted if your phone were tapped or someone overheard your conversation.
Common implementations of symmetric key algorithms are DES (Data Encryption Standard), 3-DES (triple DES), IDEA, RC5, Blowfish, and AES (Advanced Encryption Standard). AES is the new Federal Information Processing Standard (FIPS-197) algorithm endorsed for governmental use and chosen to replace DES as the de facto encryption algorithm. AES uses the Rijndael algorithm, chosen after a thorough evaluation of 15 candidate algorithms by the cryptographic research community.
None of the aforementioned algorithms has undergone more scrutiny than DES, which has been in use since the late 1970s. However, the use of DES has drawbacks, and it is no longer considered secure, as the weakness of its 56-bit key makes it unreasonably easy to break. With advances in computing power and speed since DES was developed, the small size of its key renders it inadequate for operations requiring more than basic security for a relatively short period of time. For a few thousand U.S. dollars, you can link off-the-shelf computer systems so that they can crack DES keys in a few hours.
The 3-DES application of DES is intended to combat its degenerating resilience by running the encryption three times; it is projected to be secure for years to come. DES is probably sufficient for such tasks as sending e-mail to a friend when you need it to be confidential, or secure, for only a few days (for example, to send a notice of a meeting that will take place in a few hours). It is unlikely that anyone is sufficiently interested in your e-mail to invest the time and money to decrypt it. Because of 3-DES’s wide availability and ease of use, it is advisable to use it instead of DES.
In practice, most commercial software packages use both public and symmetric key encryption algorithms, taking advantage of the strengths of each and avoiding the weaknesses. The public key algorithm is used first, as a means of negotiating a randomly generated secret key and providing for message authenticity. Then a secret key algorithm, such as 3-DES, IDEA, AES, or Blowfish, encrypts and decrypts the data on both ends for speed. Finally, a hash algorithm, such as DSA (Digital Signature Algorithm), generates a message digest that provides a signature that can alert you to tampering. The digest is digitally signed with the sender’s private key.
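Sketching that combination shows how few moving parts it has. The helper names and padding choices below are our own illustration, not the internals of any particular package:

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

def send(message, sender_priv, recipient_pub):
    # Sign with the sender's private key, then encrypt for the recipient.
    session_key = Fernet.generate_key()                    # random secret key
    body = Fernet(session_key).encrypt(message)            # fast symmetric pass
    wrapped = recipient_pub.encrypt(session_key, oaep)     # only the recipient can unwrap
    sig = sender_priv.sign(message, pss, hashes.SHA256())  # authenticity
    return wrapped, body, sig

def receive(wrapped, body, sig, recipient_priv, sender_pub):
    session_key = recipient_priv.decrypt(wrapped, oaep)
    message = Fernet(session_key).decrypt(body)
    sender_pub.verify(sig, message, pss, hashes.SHA256())  # raises if tampered with
    return message

alice = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob = rsa.generate_private_key(public_exponent=65537, key_size=2048)
packet = send(b"the plans are in the usual place", alice, bob.public_key())
print(receive(*packet, recipient_priv=bob, sender_pub=alice.public_key()))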
The most popular personal encryption packages available today are GnuPG and PGP. GNU Privacy Guard was designed as a free replacement for PGP, a security tool that made its debut during the early 1990s. Phil Zimmerman developed PGP as a Public Key Infrastructure (PKI) featuring a convenient interface, ease of use and management, and the security of digital certificates. One critical characteristic set PGP apart from the majority of cryptosystems then available: PGP functions entirely without certification authorities (CA). Until the introduction of PGP, PKI implementations were built around the concept of CAs and centralized key management controls.
PGP and GnuPG use the notion of a ring of trust:(2) If you trust someone and that person trusts someone else, the person you trust can provide an introduction to the third party. When you trust someone, you perform an operation called key signing. By signing someone else’s key, you are verifying that that person’s public key is authentic and safe for you to use to send e-mail. When you sign a key, you are asked whether you trust this person to introduce other keys to you. It is common practice to assign this trust based on several criteria, including your knowledge of a person’s character or a lasting professional relationship with the person. The best practice is to sign someone’s key only after you have met face to face to avert any chance of a person-in-the-middle(3) scenario. The disadvantage of this scheme is the lack of a central registry for associating with people you do not already know.
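The introduction mechanism can be modeled as reachability in a graph of signatures. A toy sketch (real PGP trust is more nuanced, with trust levels and marginal trust, so treat this only as the core idea):

signed_by = {
    "alex": {"jenny"},  # alex has verified and signed jenny's key
    "jenny": {"sam"},   # jenny has signed sam's key
    "sam": set(),
}

def introduced(me, target):
    # Accept target's key if a chain of signatures leads there from me.
    seen, frontier = set(), {me}
    while frontier:
        person = frontier.pop()
        if person == target:
            return True
        seen.add(person)
        frontier |= signed_by.get(person, set()) - seen
    return False

print(introduced("alex", "sam"))  # True: alex -> jenny -> sam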
PGP is available without cost for personal use, but its deployment in a commercial environment requires you to purchase a license. This was not always the case: Soon after its introduction, PGP was available on many bulletin board systems, and users could implement it in any manner they chose. PGP rapidly gained popularity in the networking community, which capitalized on its encryption and key management capabilities for secure transmission of e-mail.
After a time, attention turned to the two robust cryptographic algorithms RSA and IDEA, which are an integral part of PGP’s code. These algorithms are privately owned. The wide distribution and growing user base of PGP sparked battles over patent violation and licenses, resulting in the eventual restriction of PGP’s use.
Enter GnuPG, which supports most of the features and implementations made available by PGP and complies with the OpenPGP Message Format standard. Because GnuPG does not use the patented IDEA algorithm but uses BUGS instead, you can use it almost without restriction: It is released under the GNU GPL. The two tools are considered to be interchangeable and interoperable. The command sequences for and internal workings of PGP and GnuPG are very similar.
The GnuPG System Includes the gpg Program
GnuPG is frequently referred to as gpg, but gpg is actually the main program for the GnuPG system.
GNU has a good introduction to privacy, The GNU Privacy Handbook, available in several languages listed at www.gnupg.org/docs.html. Listed on the same Web page is the Gnu Privacy Guard (GnuPG) Mini Howto, which steps through the setup and use of gpg. And, of course, there is a gpg info page.
(2) For more information, see the section of The GNU Privacy Handbook titled Validating Other Keys on Your Public Keyring.
(3) Person in the middle: If Alex and Jenny try to carry on a secure e-mail exchange over a network, Alex first sends Jenny his public key. However, suppose that Mr. X sits between Alex and Jenny on the network and intercepts Alex’s public key. Mr. X then sends his own public key to Jenny. Jenny then sends her public key to Alex, but once again Mr. X intercepts it and substitutes his public key and sends that to Alex. Without some kind of active protection (a piece of shared information), Mr. X, the person in the middle, can decrypt all traffic between Alex and Jenny, reencrypt it, and send it on to the other party. | <urn:uuid:70cf7a26-2de5-41c1-9904-fce6e9a63510> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2003/05/13/linux-security-public-key-and-symmetric-key-encryption/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00106-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943731 | 2,265 | 3.921875 | 4 |
DARPA: Your Tech Will Self-Destruct
Defense Advanced Research Projects Agency seeks a new class of electronic devices that can dissolve on command as a way of staying out of enemy hands.
The Defense Advanced Research Projects Agency is launching a research initiative to develop sensors and other electronic devices that self-destruct when no longer needed.
DARPA's Vanishing Programmable Resources (VAPR) program aims to create "transient" electronics capable of dissolving into the environment around them or otherwise transforming into useless blobs. The U.S. military doesn't want its remote sensors, radios, mobile phones and other high-tech gear to fall into the wrong hands.
Electronics developed under VAPR would have functionality on par with products currently in use, but they could be completely or partially damaged with relative ease. By way of illustration, DARPA shows an image of a processor that appears to be melting. "The breakdown of such devices could be triggered by a signal sent from command or any number of possible environmental conditions, such as temperature," said DARPA program manager Alicia Jackson in a written statement.
DARPA has begun accepting proposals for research into materials, devices and manufacturing processes, and for applicable designs, with a goal of producing "a new class of electronics" that are characterized both by performance and transience.
Potential applications include sensors for buildings and transportation and environmental monitoring. Networks of sensors, for instance, could be used during a military mission, then dissolve into the environment.
This isn't the first time DARPA has delved into transient technologies. Last year, agency researchers unveiled a new class of electronics, intended for implantable medical treatment, that dissolve in liquid. They use ultra-thin sheets of silicon and magnesium wrapped in silk, so they can dissolve harmlessly into the body to prevent infection. "We want to develop a revolutionary new class of electronics for a variety of systems whose transience does not require submersion in water," said Jackson.
The agency will hold a conference on Feb. 14 in Arlington, Va., to discuss requirements for the VAPR program. The event aims to bring together organizations that have expertise, resources and facilities that are relevant to research and development in the area.
| <urn:uuid:61a3d179-1b87-48a6-801a-40d77051e609> | CC-MAIN-2017-09 | http://www.darkreading.com/risk-management/darpa-your-tech-will-self-destruct/d/d-id/1108426?piddl_msgorder=thrd | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00350-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929375 | 549 | 2.890625 | 3 |
Storage Virtualization is the newest of the three management technologies and is far and away the most efficient and reliable technology. In a storage virtualization implementation, a storage controller sits between a pool of centralized storage and the servers that have storage needs. With virtualization, IT managers can share storage resources with all servers regardless of storage hardware type (direct-attached SCSI and IDE storage can be shared, as well as large Fibre Channel RAID units) or physical location (Fibre Channel links can run 10km in length). The most common implementation uses a specialized server running storage virtualization software acting as the gateway between the storage and the servers (see diagram). Two solutions that worked well in our tests in house are FalconStor's IPstor and DataCore's SANsymphony virtualization software packages.
To make storage virtualization work, the SAN is configured with two zones: a zone for the servers and a second zone for the storage. To communicate with both sides, the storage controller (i.e., the server with the virtualization software) runs two Fibre Channel HBAs (one dedicated to each zone), and the software routes the traffic from one zone out to the other.
Since the servers and storage are in different zones, there is no danger of servers or users accidentally using and corrupting the data on the shared storage. On the server side, a specialized device driver that allows the server to communicate with the storage controller needs to be installed. Once the software and hardware are configured, the storage controller will be able to distribute LUNs out to the servers easily through a central management console. For mission-critical sites, it's extremely important to set up redundant storage controllers, since an outage at the storage controller will cripple all of the servers that rely on the SAN. Since storage controllers essentially act like RAID controllers in a storage virtualization scenario, beefing up a storage controller with RAM and additional processors makes it possible to boost the overall performance of a SAN by doing some caching on the controllers. | <urn:uuid:1156101d-e4b8-48c6-b7a6-fd3acc4f43fe> | CC-MAIN-2017-09 | http://www.eweek.com/c/a/Data-Storage/Build-Your-Own-SAN/9 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00350-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.922649 | 408 | 2.6875 | 3 |
The way students learn about subjects from the Middle East to Mars and from history to health care is undergoing a remarkable transformation. No longer will faculty be limited by pictures in a textbook or videos on classroom televisions. A new wave of technological innovation is redefining what it means to learn — bringing to life new places, topics and experiences in ways that will revolutionize learning for students, no matter where they live.
Today colleges and universities are exploring the power of immersive learning technologies to solve real learning challenges, offering benefits for students in fields from nursing education and engineering to construction and surveyor training. This is being done through immersive learning, as facilitated through virtual reality and new technologies such as Microsoft HoloLens, an “augmented” or “mixed” reality device that allows users to see and interact with holograms in their own environment.
An Explanation of Immersive Technologies
Augmented/Mixed Reality — In Augmented Reality (AR), learners can still see the environment around them, but digital content is overlaid onto their space. Mixed Reality (MR) is a subset of AR and is powered by a headset: digital content, usually a 3D holographic model, is superimposed over the user's current surroundings. MR allows the user to walk around and interact with that model, analyze it from different angles, or select specific areas with which to interact. Mobile-based AR allows users to view digital content via a handheld device. The user can be guided by a voiceover in the headset and only needs to use their hands and body movements to control interactivity within the environment.
Virtual Reality — Virtual Reality (VR) is a completely immersive experience in which users are taken from their real world surroundings and transported virtually into an entirely new digital and game-like environment. The user can look around and see a full panoramic view of what is happening in the virtual space, and can listen to accompanying audio, and interact with things that they see. In being unable to see what is happening outside of the headset, the user is fully transported into this virtual world, allowing us to use visualization in new and previously unimagined ways.
360 Content – This is a full panoramic video or photographic view of a real environment – similar to VR but with video. This 360 content can be viewed in a headset or via PC.
Immersive Learning in Action at Colleges and Universities Around the World
Pearson is collaborating with Microsoft to explore the power of mixed reality to solve real challenges in areas of learning, ranging from online tutoring and coaching, nursing education, and engineering to construction and surveyor training. With Microsoft HoloLens, the world’s first self-contained holographic computer, Pearson is developing and piloting mixed reality content at colleges, universities and secondary schools in the United States and around the world.
HoloLens embraces virtual reality and augmented reality to create a new reality – mixed reality. With virtual reality, the user is immersed in a simulated world. Augmented reality overlays digital information on top of the real world. Mixed reality merges the virtual and physical worlds to create a new reality whereby the two can coexist and interact. By understanding the user’s environment, mixed reality enables holograms to look and sound like they are part of that world. This means learning content can be developed for HoloLens that provides students with real world experiences, allowing them to build proficiency, develop confidence, explore and learn.
For example, at Bryn Mawr College, a women’s liberal arts college in Pennsylvania, faculty, students, and staff are exploring various educational applications for the HoloLens mixed reality devices. They are testing Skype for HoloLens for connecting students with tutors in Pearson’s 24/7 online tutoring service, Smarthinking. If successful, this out-of-the-box solution could provide struggling students with richer, more personalized just-in-time support from expert tutors as if they were sitting side-by-side. Bryn Mawr will also experiment with using holographs and mixed reality to explore 3D content and concepts in a number of academic disciplines, including physics, biology, and archaeology.
Texas Tech University Health Sciences Center in Lubbock and San Diego State University are both part of a Pearson mixed reality pilot aimed at leveraging mixed reality to solve challenges in nursing education. Today many nursing programs hire and train actors to simulate scenarios nurses will face in the real world — a process that is hard to standardize and even harder to replicate. As part of the mixed reality pilot, faculty at the two universities’ schools of nursing are collaborating with Pearson to improve the value and efficacy of the types of simulations in which students participate. To develop the content for this pilot, Pearson will use Microsoft’s holographic video capture capability, filming actors to simulate patients with various health concerns and then transferring that video into holograms for the student nurses to experience in a clinical setting. When student nurses participate in the simulations using HoloLens, they will have a real world experience diagnosing patients, building the confidence and competence that they will need in their careers.
In today’s technological and budgetary climate, a technology audit automation solution is an essential element of your technology plan. Knowing what assets you have, how they’re used, and who is using them is critical to efficient and cost-effective asset management, especially when budgets are tight. Download our FREE guide and discover if an automated technology audit will work for your organization.Can You Benefit from an Automated Technology Audit
Pearson’s work with mixed reality and HoloLens isn’t limited to higher education. The company is in the early stages of evaluating the impact of holographic learning at the late grammar school stage. At Canberra Grammar School in Australia, Pearson is working with teachers in a variety of disciplines to develop holograms for use in their classrooms. The University of Canberra is partnering with Pearson to provide support for the project and evaluate the impact these holograms have on teaching and learning.
It’s exciting to see how these technologies are being leveraged to create high-quality immersive learning experiences designed to meet specific learning needs in higher education and vocational training. With the addition of effective faculty training to help educators become more confident with the use of these technologies in the classroom, immersive learning can make a measurable difference in the lives of students and instructors. | <urn:uuid:989c8650-ea07-42e4-8b1a-8dc0deab825a> | CC-MAIN-2017-09 | https://techdecisions.co/mobility/can-immersive-technologies-improve-learning-higher-ed/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00350-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937151 | 1,310 | 3.75 | 4 |
Touchscreens were a pivotal innovation for computing: Much like the graphical user interface, much like the trackpad, much like the mouse, they gave us a brand-new way to put human bodies to the service of human technology. But the touchscreens we have today are also -- just like graphical interfaces and trackpads and mice once were -- a first step toward something more intricate and sophisticated and better. Mice were, in short order, improved with scrolling and right-click capabilities. Trackpads got adapted to distinguish between the gestures of one finger, or two, or three. Graphical interfaces became more intuitive and more attractive.
And now touchscreens are approaching the moment that all good innovations do: the end of the beginning. Computer scientists are developing software that expands the capability of the touchscreen by expanding the idea of touch itself.
So researchers at Microsoft are working on 3-D gestural control that would transfer users' hand gestures to a screen. Scientists at Purdue are experimenting with haptic feedback (pdf) that approximates the feel of discrete buttons -- even on a piece of flat glass. There's the bendable touchscreen. There's the capacitive fingerprinting that allows screens to distinguish among different users. And over at Carnegie Mellon, the computer scientist Chris Harrison has developed software that is able to distinguish between touches delivered from the fingertip, the knuckle, and even the fingernail. The whole of one hand, communicating with the machine held by the other.
Ok, this is strange! At least that was my first reaction when I saw that, in one of the CCIE labs I am trying to solve, all the links between routers are addressed with a /31 subnet.
Isn’t that weird that something like this you see for this first time after couple of years in networking. For me it was. It blow my mind out. I asked my more experienced networking colleagues later but for them it seemed new too. They said at first: Ok men, that’s not possible!
Well, try to type it on a router interface and you will see that it is possible. It's strange for sure, but it's possible. The router OS (Cisco IOS in this case) will try to make sure that you use this kind of subnetting only for point-to-point links. That's why it will issue a warning message if you apply this subnet mask on an Ethernet interface. On a serial interface it will go through without the warning.
The idea behind this is of course simple if you put it this way:
On point-to-point links we do not actually need a special broadcast address for the subnet, because there is only one way a packet can go across a point-to-point link. All we have is the IP address on the other side of the link. If we want to send a broadcast, we know it will go there regardless of whether the destination is a dedicated broadcast address or any other address. There cannot be more than one destination, so the router knows that a broadcast is directed down the same link as normal unicast traffic for the link's destination address.
And why should we have the network name defined as the first address of a range but not be able to use it on the interface? We want to use that one too.
A /24 range gives us 256 different addresses. Why should we divide it into 64 subnets of 4 addresses each when we only want to use the two addresses on either side of each link? This is the idea: from one /24 subnet we can carve out /31 subnets for point-to-point links and thereby double the number of point-to-point links we can cover, 128 instead of 64.
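If you want to check the arithmetic, Python's standard ipaddress module does it in a few lines (a quick sketch; note that it even implements the RFC 3021 rule that both addresses of a /31 are usable hosts):

import ipaddress

block = ipaddress.ip_network("192.168.0.0/24")
print(len(list(block.subnets(new_prefix=30))))  # 64 /30 point-to-point subnets
print(len(list(block.subnets(new_prefix=31))))  # 128 /31 point-to-point subnets

link = ipaddress.ip_network("192.168.0.0/31")
print(list(link.hosts()))  # both 192.168.0.0 and 192.168.0.1 are usable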
R1(config)#int fa 0/0
R1(config-if)#ip add 192.168.0.0 255.255.255.254
% Warning: use /31 mask on non point-to-point interface cautiously
R1(config-if)# | <urn:uuid:88133807-b4aa-4202-a9b5-7edf0c0e79f7> | CC-MAIN-2017-09 | https://howdoesinternetwork.com/2014/point-to-point-subnet | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00578-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.93763 | 513 | 2.71875 | 3 |
An original cassette tape of a 1983 speech made by late Apple co-founder Steve Jobs has been unearthed, revealing that Apple was thinking about iPads and the app store more than 20 years before they were launched.
The speech, which took place at the International Design Conference in Aspen, has been previously documented, but the Q&A session that followed the speech was left out. Now, tech blogger Marcel Brown has digitized a recording of the speech for the world to hear.
"We are putting a lot of computers out that are made to be used in a standalone mode, one person, one computer," said Jobs. "But it isn't very long before you're going to get a community of users that want to hook them all together. Because ultimately, computers are going to be a tool for communication."
"Apple's strategy is really simple," Jobs continued. "What we want to do is we want to put an incredibly great computer in a book that you can carry around with you and learn how to use in 20 minutes. That's what we want to do and we want to do it this decade. And we really want to do it with a radio link in it so you don't have t hook up to anything and you're in communication with all of these larger databases and other computers."
Jobs' ambitious aim to make a tablet within the 1980s was about 27 years out. Alternatively, Jobs may have been referring to a MacBook; however, as noted by The Next Web, Jobs spoke about mobile, pocketable computers.
Jobs also spoke about an idea that would eventually become the App Store. "Where we'll be going in transmitting this stuff electronically over the phone line. So where when you wanna buy a piece of software we'll send tones over the phone to transmit directly from computer to computer, that's what we'll be doing."
Intriguingly, Brown claims that the attendee who gave him the tape of the speech met Steve Jobs at the conference. "During their interaction, Steve Jobs gave him something to put in a time capsule that was buried at the conference. To our knowledge this time capsule has yet to be dug up."
Other highlights of the recording include Jobs' prediction that people would soon be spending more time interacting with computers than they do with cars, and his comment that voice recognition is a difficult thing to master. "This stuff is hard," he said.
To listen to the recording, visit Marcel Brown's blog.
This story, "Recording of 1983 Steve Jobs speech reveals Apple was working on iPad, App Store 30 years ago" was originally published by Macworld U.K.. | <urn:uuid:a48b19ab-83e3-4dcf-9298-03d1d035f603> | CC-MAIN-2017-09 | http://www.itworld.com/article/2722008/consumerization/recording-of-1983-steve-jobs-speech-reveals-apple-was-working-on-ipad--app-store-30-years-ago.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00454-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.982804 | 536 | 2.5625 | 3 |
For those outside the HPC/science realm who question why there need to be ever-more powerful supercomputers, one need only look at the amazing breakthroughs that the petascale age has facilitated. Astrophysics research out of Caltech is the latest example. Because of leadership-class systems like Stampede and Blue Waters and their experienced support staff, researchers from Caltech were able to perform fully 3D model simulations of supernova explosions.
The scientists are studying a somewhat rare phenomenon called extreme core-collapse supernovae. While these events comprise only one percent of all observed supernovae, they are "extreme" in the amount of energy that's emitted into space.
Up until recently, simulations in this field were mainly relegated to two dimensions (2D), and due to computational limitations codes could not incorporate all of the relevant physics; for example, general relativistic effects were intentionally excluded. This study marks the first time that scientists are running fully general relativistic three-dimensional (3D) simulations.
Because of the added realism, the research team, led by Philipp Mösta, postdoctoral scholar at Caltech, and Christian D. Ott, professor of astrophysics at Caltech, is discovering that previously held theories about how these explosions work might not be accurate.
The heart of the new finding is that the explosion is a highly dynamic process.
“What we’ve shown is that the jets that appear stable in 2D are actually unstable in 3D,” explained Mösta in an article by Liz Murray at the XSEDE website. “They twist, rotate and become unstable due to a phenomenon that is called the magneto-hydrodynamic kink instability. This instability of the magnetic field itself is the same that is also seen in fusion reactors that are using magnetic fields to confine the plasma.”
Supercomputing has been instrumental to the project since it started in 2013 with an Extreme Science and Engineering Discovery Environment (XSEDE) allocation on the Stampede supercomputer, installed at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.
The early work centered on code optimization, tweaking the code to take advantage of modern computing architectures. This crucial step enabled the team to run larger simulations without using up their allotted CPU hours too quickly.
“We were able to perform the first fully general relativistic 3D simulations without any symmetries and the difference in comparison to 2D was drastic,” stated Ott. “We now know if we want to predict what the signature of these extreme supernova explosions might look like, we need to do it in full 3D.”
After performing the initial general-relativistic magnetohydrodynamics (GRMHD) simulations on Stampede, the team hit a wall when trying to computationally reproduce the shockwave that extends out from the core of a massive star as it collapses to a proto-neutron star. To simulate this part of the process in 3D, they moved over to the Blue Waters supercomputer, a larger resource managed by the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign.
This short video below shows the time-evolution of the shock wave. The 2D simulation is depicted on the left, while the corresponding meridional slice from a 3D simulation is shown on the right.
The scientists credit both of these supercomputers with their foray into 3D simulation. After further refining their code and running additional simulations, their goal is to create full 3D kinetic models of these extreme supernova explosions. The project involves connecting the simulations to actual observations collected from one of the NASA satellite telescopes.
Aside from being a valuable breakthrough for human understanding of supernovas, the emergence of 3D simulation has greater implications according to Mösta. “It will probably indicate to other groups who, so far, have focused on performing simulations with symmetries imposed, that they will have to move to full 3D simulations as well, which will ultimately strengthen our community,” he stated. | <urn:uuid:a707a631-5cb5-4c87-a97a-c0a0b2ef82e5> | CC-MAIN-2017-09 | https://www.hpcwire.com/2014/07/03/3d-simulations-raise-bar-astrophysics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00454-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941873 | 862 | 3.046875 | 3 |
The first step in helping your teen deal with cyberharassment is for you to be aware there is a problem. The recent article "What Parents Can Do to Help Teen Victims of Cyber Bullying" provides great tips on how to help your teen open up, and work together to end the harassment:
Have the ‘Cyber Bullying’ Conversation: Children don’t like to talk about bullying, but according to Roberts, “the reason for this is they have likely bullied themselves, been bullied or been a bullying bystander and the talk brings up these memories and feelings of shame.” Parents need to have an open conversation and respond without judgment as their children open up about what they know.
Explain How What You Don’t Know Does Hurt You: Some kids minimize or justify cyber bullying by saying that the target didn’t even know what was said. Roberts suggests explaining to your kids that it still hurts. “Use their life experiences to illustrate how badly they feel when people talk about them negatively,” she says.
Set Cyber Safety Rules: Whenever your children interact online, remind them that they never really know who is on the other end of cyber communication. With that in mind, Roberts recommends enforcing the guideline of “don’t do or say anything online that you wouldn’t do or say in person.”
Monitor Online Use: Know what your children are doing online to help them prevent cyber bullying and cope with it. Limit time spent on technology to naturally minimize access to and involvement with cyber bullying, suggests Roberts. | <urn:uuid:dcda1ed5-6818-4fc6-9f3a-06a51c2b9163> | CC-MAIN-2017-09 | https://interwork.com/parents-can-help-teen-victims-cyber-bullying/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00046-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954885 | 327 | 3.21875 | 3 |
Along with the continuous development of optical fiber communication, optical cables are also in constant development. A fiber optic cable is a communication cable composed of two or more glass or plastic optical fibers; each fiber core sits inside cladding and a protective outer sleeve, usually covered in PVC plastic. Signals are generally transmitted along the internal optical fibers as infrared light. Fiber optic cables are usually made of glass or plastic, but those materials actually slow down the transmission of light ever so slightly.
Recently, researchers at the University of Southampton, UK, created a kind of hollow optical fiber cable. The middle of this fiber is hollow, filled only with air, yet its claimed transport rate is 1,000 times that of other fiber optic cables. The researchers note that the velocity of light in air is about 99.7% of its speed in a vacuum.
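That 99.7% figure is what drives the per-kilometer speed advantage. A rough check in Python (the ~1.45 refractive index for a solid silica core is our assumption for illustration, not a figure from the researchers):

C = 299_792_458.0  # speed of light in a vacuum, m/s

def delay_us_per_km(fraction_of_c):
    # One-way propagation delay in microseconds per kilometer of fiber.
    return 1000.0 / (fraction_of_c * C) * 1e6

print(delay_us_per_km(1 / 1.45))  # solid silica core: roughly 4.8 us/km
print(delay_us_per_km(0.997))     # hollow, air-filled core: roughly 3.3 us/km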
The idea was not put forward only recently, but in the past, whenever light hit a bend in the fiber, the signal would diminish. The researchers optimized the design, bringing the new hollow fiber cable's data loss down to an ideal level of 3.5 dB/km, which makes it suitable for use in supercomputer and data center applications.
In a hollow fiber optic cable (indoor/outdoor fiber optic cable), light travels through air rather than glass; therefore, in many areas it has many more advantages than traditional optical fiber and will eventually replace it.
The advantage of using a hollow core, rather than the traditional high-purity doped-silica core, is that the fiber's performance is not restricted by the material characteristics of the core. In traditional fiber, the optical damage threshold and parameters such as attenuation, group velocity dispersion, and nonlinear effects are all constrained by the silica material and its corresponding properties. Through careful design, a hollow fiber can keep more than 99% of the light in the air instead of in the glass, greatly reducing the influence of material properties on fiber performance. So in many important areas, hollow fiber optic cables and transceivers have more advantages than traditional optical fiber.
Theoretically, because this kind of fiber optic cable has no solid core, it reduces loss, increases communication distance, prevents dispersion-related interference, supports more wavelengths, and allows stronger optical power injection; its communications capacity is estimated to reach 1,000 times that of present cables.
Research on hollow optic fiber cable is ongoing. With the extensive application of optical fiber and cable, existing fiber optic cables can no longer meet every need; therefore, new fiber optic cables must continue to be studied to adapt to those needs. From Fiberstore, we supply many different types of fiber optic cables, and customers have the flexibility to choose a cable plant to best fit their needs. If you need some cables, welcome to Fiberstore to find them. | <urn:uuid:00e3f898-bbc8-43b7-9c45-20bfb5710a57> | CC-MAIN-2017-09 | http://www.fs.com/blog/new-fiber-optic-cable-the-advantages-of-the-hollow-fiber-cable.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00446-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.927929 | 602 | 3.546875 | 4 |
Government agencies, like companies, are struggling with the difficult task of delivering meaningful services online. There are legitimate reasons as to why the difficulties exist. Among the most frequently cited roadblocks are lack of access and lack of security.
The lack of access argument is becoming less prevalent as Internet user estimates now run as high as 50 million. Further, many states are exploring significant funding mechanisms to make public access to technology, and more specifically the Internet, more widely available. Texas, for instance, has created the Telecommunications Infrastructure Fund (HB2128), which will award $150 million in grants and loans per year for technology upgrades in schools, libraries and hospitals. The fund is expected to exceed $1.5 billion over the next 10 years.
The perceived lack of security of financial transactions is a significant reason why conducting direct business over the Internet has not exploded. There are at least two reasons why government Internet service delivery may have a distinct advantage over services offered by the private sector.
First, although government services are frequently transaction-based, they do not always involve a direct financial transaction. Also, while there are often confidentiality issues for government transactions, which raise similar security concerns, voluntary disclosure of personal information will demonstrate the public's perceived value of online service delivery.
If a service delivered online effectively reduces the time and effort required to accomplish a task and comply with government regulation, citizens will weigh the risk they feel regarding the disclosure of personal information vs. the convenience being provided. It seems likely convenience will win. It is not difficult to draw the comparison of registering a car online against going to a motor vehicle office. Both kiosks and mail service have already become popular delivery vehicles for this type of service. The added convenience of Internet-delivered services should be well-received.
The second reason government Web sites have an advantage is that the widest possible dissemination of government information will frequently lead to increased public awareness and benefit. Increased public access to government information is generally viewed as a value-added service by itself.
For example, if a state signs a multi-year, multi-million dollar agreement over the use of public lands, after which there is a period of public comment, a Web site with details about the deal and proposed land use that solicits feedback via e-mail would likely bring much broader public discussion and participation than regional public hearings that are often poorly attended.
Despite the arguments against Internet-delivered services, there are several examples where governments are successfully using this medium. America's Job Bank at offers job opportunities online that were traditionally available only by visiting a regional Labor Department office. According to Point, The Top
Sites of the Web at , America's Job Bank site is among the top five percent of sites being visited anywhere on the Internet.
South Dakota currently sends hunting license applications to interested parties via an online request at . New York state helps those interested in starting or expanding a business identify which permits are required through interactive forms. The Governor's Office of Regulatory Reform will send the permit applications to those who request them online at . While none of these services require a direct financial transaction, they all greatly reduce the time and effort required to get information or comply with government regulations.
Leveraging purchasing power has long been a tool governments use to buy goods and services at the best possible rate. Making bid information available online increases the number of companies made aware of the sales opportunity, which could increase competition and lower the price governments pay for goods or services.
Several state and local governments are beginning to take advantage of the Internet for this purpose and the results are impressive. According to Cary Paul Peck, supervisor of vendor relations for Los Angeles County Metropolitan Transportation Authority, the county has saved 7 percent on contracts awarded since they made contract notices available on a dial-up bulletin board.
Another site using the Web to deliver a creative service to a number of local governments is hosted by the Association of Bay Area Governments (ABAG). What makes this effort unique is that it is a consortium of local governments pooling their buying power by making their contract notices available together. The ABAG Contracts Exchange (ACE) is one service available from abagOnline, which can be reached at .
The abagOnline project organizes government information for nearly 100 city and nine county governments in the San Francisco Bay Area. The initiative began in January 1994 and is partially funded by a National Telecommunications and Information Administration (NTIA) grant. ACE began in May 1995 and nearly 25 percent of the governments are participating.
According to Terry Bursztynsky, director of abagOnline services, the response was slow at first, but has steadily increased. "Local governments are not used to this sort of advertising. It will be a while before they realize actual dollars saved, but several governments have gotten inquiries and responses to contract notices from companies outside of the area."
Governments are not charged for this service, nor are companies charged for accessing the information. According to Bursztynsky, the next step will be toward online commerce. "The way we feel this service can be improved and the way to get greater participation is to put this information on a secure server to allow governments to receive bids online."
Increasing awareness is one of the major concerns for the ABAG staff. They are busy educating as many local government officials as possible in order to promote their service and increase participation. They currently offer free, simple home pages to many local governments. ABAG intends to offer secure chat lines for city managers, information systems managers and city planners. With over 3,500 hits per day and an average increase of 10 percent per month, the ABAG site should provide dividends to participating governments very soon.
Michael Nevins is a co-founder and director of State Technologies Inc., a nonprofit research group. State Technologies publishes the Web service Government On Line: .
E-mail address: . | <urn:uuid:cab19047-ec6e-4f22-8c16-0c94a3248e6b> | CC-MAIN-2017-09 | http://www.govtech.com/magazines/gt/100555779.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00266-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951661 | 1,194 | 2.671875 | 3 |
The Body condition is used to define the message body (content of an email) that once met will trigger an action to be processed by the program.
To define the message body within the Conditions tab, expand the Field's context menu and choose Body (Fig. 1.).
|Fig. 1. Choosing Body in the Conditions tab.|
There are two factors (Fig. 2.) that need to be configured within this condition:
- Operator - enables definition of how the condition will be executed. The execution method may be only set to true.
In this particular condition, the operator may be set to trigger an action if the message body:
- contains keyword(s)
- Actual condition value - here you define the expected value of the condition that will trigger the rule to apply the action.
|Fig. 2. Configuring the message Body.|
Please note that the value field depends on the operator, and its definition type may differ in regards to the chosen operator. What is more, this field is always case-insensitive.
Choosing contains keyword operator, you will be able to define strings of characters to be searched for within the message body. Additionally, besides defining strings of characters (letters and numbers) as keywords, you can also make use of wildcards as a prefix or a suffix. Either way, If the defined keywords are found while processing messages, then the condition will be met yet the defined action executed.
The program will not recognize keywords defined with $ (dollar) and ; (semicolon) characters.
This feature lets you also decide if the found keyword should be removed or left unchanged. To define a keyword for the contains keyword operator, click Edit and then the Add button. In the window that opens enter the keyword and mark the Remove from message checkbox to delete it from the message body if found. On the other hand, leave the checkbox unmarked if you do not want to remove the found keyword. Be aware, this option works regardless of other conditions and exceptions.
|Fig. 3. Defining a keyword to be searched for within the body of the message.|
Message Direction - this article describes how to configure the Message Direction condition. | <urn:uuid:02b7b9d4-8d2e-4db1-aec7-f334557e04cb> | CC-MAIN-2017-09 | https://www.codetwo.com/userguide/exchange-rules-family/body.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171166.18/warc/CC-MAIN-20170219104611-00618-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.819033 | 451 | 2.53125 | 3 |
Just how big are big data? Not the big data hype bubble, mind you-we know that's enormous. Rather, how large do data sets have to be before we can consider them big data?
There is no one answer. Big data is a relative term. It refers to data sets, and the corresponding data challenges, so large that traditional data management and analytics approaches aren't up to the task of squeezing all the value we desire from the information we have. As a result, as our tools and techniques improve, the "bigness" threshold for big data will continue to rise.
This threshold also depends upon the context for the data, which generally aligns with the industry responsible for them. Genomics research, weather prediction and other scientific pursuits push the limit of data set size, but any business that collects information about its customers may also have big data challenges.
Keep in mind Parkinson's Law of Data: the amount of data available expands to fill the available space for it. As our technology for creating, moving and storing data improves, the big data threshold will continue to rise. If anything, it seems the relentless advance of technology is driving the ever-increasing acquisition of information-and this deluge promises to swamp even the most facile of big data strategies.
Analysis: Explosion in "Big Data" Causing Data Center Crunch
The central big data challenge, of course, is how to derive value from such immense data sets, essentially recovering those rare gems in the rough-identifying the important, meaningful and insightful nuggets in the onslaught of noise.
Counterintuitively, the more information we have, the less we actually desire, since we only prize the results of careful analysis of our big data, not the data themselves. A mountain containing gold is worthless, regardless of the size of the mountain, if the cost of extracting the precious material exceeds its value.
U.S. Government Sitting on Big Data Goldmine
Today, the U.S. government faces the mother of all big data mountains. From National Oceanic and Atmospheric Administration (NOAA) weather data to earth science information from the U.S. Geological Survey (USGS) to the genomics data at the National Institutes of Health (NIH), the government-and, therefore, the American people-own perhaps the largest collection of big data sets on this planet.
This is extraordinarily valuable in theory, true, but worthless if we're unable to extract the important nuggets. To mine this gold, the Obama Administration announced its
These are valuable nuggets, to be sure, and, in the grand scheme of things, $200 million is a bargain. But the administration's investments in Big Data don't stop there. In August the White House announced its Presidential Innovation Fellows program, which brings a crack team of innovators together to collaborate on projects with the goal to "improve the lives of the American people, save taxpayer money and fuel job creation." On the initial list of target projects are Blue Button for America, an extension of the Department of Veterans Affair's Blue Button initiative, as well as an open-ended set of projects the White House calls Open Data Initiatives.
News: White House Launches Big Data R&D Push
The Open Data Initiatives have a different mandate than the Big Data Initiative, but the synergy between them is obvious. Open Data focuses on "liberating" government data (as well as contributed corporate data) in order to achieve the strategic goals of the Innovation Fellows program.
What does it mean to liberate data? The two examples cited are NOAA weather data (now at the core of every weather report on television) and the Global Positioning System, without which we'd all literally be lost.
Of these examples, NOAA weather data most obviously present big data challenges. The value in such large data sets doesn't simply depend on the weather data themselves, but in the ability to forecast weather based upon those data-a classic big data problem. From the perspective of the American citizen, we value accurate forecasts; the immense quantity of historical weather data that feed the forecasting engines is merely the ore we must mine to find the nuggets we desire.
Such is the challenge facing the Open Data Initiative. The more data we have, the less we value the data sets themselves. The information we truly desire lies buried under increasing quantities of irrelevant or otherwise useless information. The danger is that the more data the government provides us, the better hidden are the nuggets we desire. In other words, in the absence of effective big data solutions, truly open government may be out of reach-or, worse, misapplied to obscure the very information that citizens would find most valuable.
Big Data Challenges Heightened by Citizens' Right to Information
This undesirable outcome is clearly not the intention of President Barack Obama's Open Government Initiative, which calls for a presumption of openness. True, there are types of information that the government may not or should not share, including military secrets, private data about individuals, and information relevant to ongoing criminal investigations. However, the list of such sensitive information categories is explicit and limited. All other government information is up for grabs.
News: Obama Promotes New Open Government Initiative
If you want access to such information, typically all you have to do is go to the relevant agency website, as the Obama Administration ordered them to proactively make information available to all citizens. If you can't find what youre looking for, you may make a Freedom of Information Act request. The act was passed in the 1960s, and Congress extended FOIA in 1974 as a result of Watergate. Today, the Government receives more than 500,000 FOIA requests per year, with a current backlog of more than 80,000 requests.
Typically a citizen makes a FOIA request for a particular document or other information- Steve Job's FBI background check, for example. While such documents have a historical as well as human interest value, their worth pales in comparison to the nuggets of gold that Big Data analyses can potentially reveal.
However, it would be impossible to submit a FOIA request for a big data analysis conclusion, since there may be no way to form such a request. Big data analyses typically ask, "What are the important or interesting conclusions I can draw from these large data sets?" They don't request a particular piece of information. The best big data analytics tell you what information you should think is important, rather than expecting you to know what information is important ahead of time.
Analysis: Big Data Analytics Today Lets Businesses Play Moneyball
Government agencies, therefore, face two strategic big data challenges. First, they must avoid swamping relevant information with noise; second, they must let citizens request important information from the government without having to know ahead of time why that information is important. Furthermore, the larger the available data sets become, the greater these challenges will be.
Our government can talk about open data and open government all it want, but if it doesn't get big data solutions right, then we risk floundering in an ocean of irrelevant information. The Presidential Innovation Fellows have their work cut out for them.
Read more about data management in CIO's Data Management Drilldown.
This story, "Can the government handle big data analytics?" was originally published by CIO. | <urn:uuid:d4fb8fc3-21dd-489d-8020-e965efd75659> | CC-MAIN-2017-09 | http://www.itworld.com/article/2719211/big-data/can-the-government-handle-big-data-analytics-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00142-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.921964 | 1,489 | 2.953125 | 3 |
The Next Tech Wave
How machine learning changes the game
- By Konstantin Kakaes
- Jul 15, 2013
What do video surveillance, speech recognition and autonomous vehicles have in common? They're all getting better amazingly quickly -- and needing less and less human help to do so. (FCW illustration)
In the past decade, computer scientists have made remarkable progress in creating algorithms capable of discerning information in unstructured data. In controlled settings, computer programs are now able to recognize faces in photographs and transcribe spoken speech. Cars laden with laser sensors can create 3-D representations of the world and use those representations to navigate safely through chaotic, unpredictable traffic.
In the coming decade, improvements in computational power and techniques will allow programs such as voice and face recognition to work in increasingly robust settings. Those technological developments will affect broad swathes of the American economy and have the potential to fundamentally alter the routines of our daily lives.
There is not one single reason for these improvements. Various approaches have proven effective and have improved over the years. However, many of the best-performing algorithms share a common trait: They have not been explicitly programmed by humans. As David Stavens, a Stanford University computer scientist, wrote about Junior, an autonomous car that earned Stanford second place in the Defense Advanced Research Projects Agency's 2007 Urban Challenge: "Our work does not rely on manual engineering or even supervised machine learning. Rather, the car learns on its own, training itself without human teaching or labeling."
In a wide range of examples, techniques that rely on self-supervised learning have leapfrogged traditional computer science approaches that relied on explicitly crafted rules. Supervised learning — in which an algorithm is first trained on a large set of data that has been annotated by a human and then is let loose on other, unstructured data — is effective in cases where an algorithm benefits from some initial structure. In both cases, the rules the computer ultimately used were never explicitly coded and cannot be succinctly described.
Such so-called machine learning algorithms have a long history. But for much of that history, they were more interesting for their theoretical promise than on the basis of real-world performance. That has changed in the past few years for a variety of reasons. Chief among them are the availability of large datasets with which to train learning algorithms and cheap computational power that can do such training quickly. Just as important, though, are developments in methodology that make it possible to use that data — millions of images tagged online by, say, Flickr users, or linguistic data stretching to the billions of words — in advantageous ways.
The new generation of learning techniques holds the promise of not only being able to match human performance in tasks that have heretofore been impossible for computers but also to exceed it.
The market for speech recognition is huge and will only grow as the technology improves. Call centers alone account for tens of billions of dollars in annual corporate expenditure, and the mobile telephony market is also worth billions. Nuance, the company behind Apple's Siri voice-recognition engine, announced in November 2012 that it is working with handset manufacturers on a telephone that could be controlled by voice alone.
According to Forbes, Americans spend about $437 billion annually on cars and buy 15.1 million automobiles each year. According to the General Services Administration's latest tally, federal agencies own nearly 660,000 vehicles. As technologies for autonomy improve, many and eventually most of those cars will have detectors and software that will enable them to drive autonomously, which means the potential market is enormous.
The impact of image-analysis technologies such as facial recognition will also be transformative. Government use of such technologies is already widespread, and commercial use will increase as capabilities do. Video surveillance software is already a $650 million annual market, according to a June report by IMS Research.
Just as the commercial stakes for those and other applications of machine learning are high, so too are the broader questions the new capabilities raise. How does the nature of privacy change when it becomes possible not only to record audio and video on a mass scale but also to reliably extract data — such as people's identities or transcripts of their conversations — from those recordings? The difficult nature of the questions means they have largely escaped public discussion, even as the debate over National Security Agency surveillance programs has increased in recent weeks following Edward Snowden's disclosures.
Li Deng, a principal researcher at Microsoft Research, wrote in a paper in the May issue of IEEE Transactions on Audio, Speech and Language Processing that there are no applications today for which automated speech recognition works as well as a person. But machine learning techniques, he said, "show great promise to advance the state of the art."
There are many machine learning techniques, including Bayesian networks, hidden Markov models, neural networks of various sorts and Boltzmann machines. The differences between them are largely technical. What the techniques have in common is that they consist of a large set of nodes that connect with one another and make interrelated decisions about how to behave.
Those complicated networks can "learn" how to discern patterns by following rules that modify the way in which a given node reacts to stimuli from other nodes. It can be done in a way that simply seeks out patterns without any human-crafted prompting (in unsupervised learning) or by trying to duplicate example patterns (in supervised learning). For instance, a neural network might be shown many pairs of photographs along with information about when a pair consisted of two photographs of the same person and when it consisted of photographs of two different people, or it might be played many audio recordings paired with transcriptions of those recordings.
Deep neural networks have, since 2006, become far more effective. A shallow neural network might have only one hidden layer of nodes that could learn how to behave. That layer might consist of thousands of nodes, but it would still be a single layer. Deep networks have many layers, which allow them to recognize far more complex patterns because there is a much larger number of potential ways in which a given number of nodes can interconnect.
But that complexity has a downside. For decades, deep networks, though theoretically powerful, didn't work well in practice. Training them was computationally intractable. But in 2006, Geoffrey Hinton, a computer science professor at the University of Toronto, published a paper widely described as a breakthrough. He devised a way to train deep networks one layer at a time, which allowed them to perform in the real world.
In late May, Google researchers Vincent Vanhoucke, Matthieu Devin and Georg Heigold presented a paper at the IEEE International Conference on Acoustics, Speech and Signal Processing describing the application of deep networks to speech recognition. The Google researchers ran a three-layer system with 640 nodes in each layer. They trained the system on 3,000 hours of recorded English speech and then tested it on 27,327 utterances. In the best performance of a number of different configurations they tried, the system's word error rate was 12.8 percent. That means it got slightly more than one word in 10 wrong. There is still a long way to go, but training a network as complicated as this one would have been a non-starter just a few years ago.
Agencies that interact with the public on a massive scale will have to decide to what extent they wish to replace human operators with automated voice-recognition systems.
Nevertheless, speech-recognition technologies have already had a dramatic impact on call and contact centers. As the technology improves, agencies that interact with the public on a massive scale — such as the Social Security Administration, the National Park Service and the Veterans Health Administration — will have to decide to what extent they wish to replace human operators with automated voice-recognition systems.
On June 17, Stanford associate professor Andrew Ng and his colleagues presented a paper at the annual International Conference on Machine Learning describing how even larger networks — systems with as many as 11 billion parameters — can be trained in a matter of days on a cluster of 16 commercial servers. They do not yet know, they say, how to effectively train such large networks but want to show that it can be done. They trained their neural network on a dataset of 10 million unlabeled YouTube video thumbnails. They then used the network to distinguish 13,152 faces from 48,000 distractor images. It succeeded 86.5 percent of the time.
Again, that performance is not yet at a level that is of much practical use. But the remarkable thing is that Ng and his team achieved it on a dataset that wasn't labeled in any way. They devised a program that could, on its own, figure out what a face looks like.
Current commercial facial recognition systems include NEC's NeoFace, which won a competition run by the National Institute of Standards and Technology in 2010. NeoFace matches pictures of faces against large databases of images taken from close up and under controlled lighting conditions. NeoFace can work with images taken at very low resolution, with as few as 24 pixels between the subject's eyes, according to NEC. In the NIST evaluation, it identified 95 percent of the sample images given to it.
In a May 2013 paper, Anil Jain and Joshua Klontz of Michigan State University used NeoFace to search through a database of 1.6 million law enforcement booking images, along with pictures of Dzhokhar and Tamerlan Tsarnaev, who are accused of setting off the bombs at the Boston Marathon in April. Using the publicly released images of the Tsarnaev brothers, NeoFace was able to match Dzhokhar's high school graduation photo from the database of millions of images. It was less successful with Tamerlan because he was wearing sunglasses.
Eigenfaces allow computers to deconstruct images of faces into charcteristic components to enable recognition technoologies. (Image: Wikimedia Commons)
Jain and Klontz make the point that, even today, facial recognition algorithms are good enough to be useful in a real-world context. The methods for automatically detecting faces, though, are likely to get much better with machine learning. NeoFace and other commercial tools work in part by deconstructing faces into characteristic constituents, called eigenfaces, in a way roughly analogous to the grid coordinates of a point. A picture of a face can then be described as a distinct combination of eigenfaces, just as any physical movement in the real world can be broken down into the components of up-down, left-right and forward-backward.
But that approach is not very adaptable to changes in lighting and posture. The same face breaks down into very different eigenfaces if it is lit differently or photographed from another angle. However, it is easy for a person to recognize that, say, Angelina Jolie is the same person in profile as she is when photographed from the front.
Honglak Lee, an assistant professor at the University of Michigan in Ann Arbor, wrote recently with colleagues from the University of Massachusetts that deep neural networks are now being applied to the problem of facial recognition in a way that doesn't require any explicit information about lighting or pose. Lee and his colleagues were able to get 86 percent accuracy on a 5,749-image database called Labeled Faces in the Wild, which now contains more than 13,000 images. Their results compared favorably to the 87 percent that the best handcrafted systems achieved.
But the deep learning systems remain computationally demanding. Lee and his colleagues had to scale down their images to 150 by 150 pixels to make the problem computationally tractable. As computing power grows, there is every reason to believe that machine learning techniques applied on a larger scale will become still more effective. At present, it might seem that facial recognition programs are of interest only to law enforcement and intelligence agencies. But as the systems become more robust and effective, other agencies will have to decide whether and how to use them. The technology has broad potential but also threatens to encroach fundamentally on privacy.
In a sense, the machine learning algorithms for facial recognition are doing something analogous to speech recognition. Just as speech-recognition programs can't try to match sounds against all possible words that might have generated those sounds, the new generation of face-recognition techniques doesn't attempt to match patterns. Instead, the learning methodology allows it to discern global structure in a way loosely analogous to human perception.
The pace of such progress can perhaps best be seen in the case of autonomous cars. In 2004, DARPA ran a race in which autonomous cars had to navigate a 150-mile desert route. None of the 21 teams finished. The best-performing team, from Carnegie Mellon University, traveled a little more than 7 miles. In 2005, five teams finished DARPA's 132-mile course. Last year, Google announced that about a dozen of its autonomous cars had driven more than 300,000 miles.
Suddenly, DARPA's efforts to bring driverless vehicles to the battlefield look a lot closer to reality. Many elements must come together for this to work. As Chris Urmson, engineering lead for Google's Boss, which won the 2007 DARPA Urban Challenge, autonomous vehicles combine information from many sources. Boss had 17 sensors, including nine lasers, four radar systems and a Global Positioning System device. It had a reliable software architecture broken down into a perception component, mission-planning component and behavioral executive. For autonomous cars to work well, all those elements must perform reliably.
But the mind of an autonomous car — the part that's fundamentally new, as opposed to the chassis or the engine — consists of algorithms that allow it to learn from its environment, much as speech recognizers learn to recognize words out of vibrations in the air, or facial recognizers find and match faces in a crowd. The capacity to effectively program algorithms that are capable of learning implicit rules of behavior has made it possible for autonomous cars to get so much more capable so quickly.
A 2012 report by consultants KPMG predicts that self-driving cars will be sold to the public by 2019. In the meantime, the Transportation Department's Intelligent Transportation Systems Joint Program Office is figuring out how the widespread deployment of technologies that will enable autonomy will work in the coming years. DOT's effort is focused on determining how to change roads in ways that will enable autonomous vehicles. Besides the technical challenges, it raises a sticky set of liability issues. For instance, if an autonomous car driving on a smart road crashes because of a software glitch, who will be held responsible — the car's owner, the car's passenger, the automaker or the company that wrote the software for the road?
Clearly, autonomy in automobile navigation presents a difficult set of challenges, but it might be one of the areas in which robots first see large-scale deployment. That is because although part of what needs to be done (perceiving the environment) is hard, another part (moving around in it) is relatively easy. It is far simpler to program a car to move on wheels than it is to program a machine to walk. Cars also need to process only minimal linguistic information, compared to, say, a household robot.
Groups such as Peter Stone's at the University of Texas, which won first place in the 2012 Robot Soccer World Cup, and Yiannis Aloimonos' at the University of Maryland are creating robots that can learn. Stone's winning team relied on many explicitly encoded rules. However, his group is also working on lines of research that teach robots how to walk faster using machine learning techniques. Stone's robots also use learning to figure out how to best take penalty kicks.
Aloimonos is working on an even more ambitious European Union-funded project called Poeticon++, which aims to create a robot that can not only manipulate objects such as balls but can also understand language. Much as autonomous vehicle teams have created a grammar for driving — breaking down, say, a U-turn at a busy intersection into its constituent parts — Aloimonos aims to describe a language for how people move. Having come up with a way to describe the constituent parts of motions, called kinetemes — for instance rotating a knee or shoulder joint around a given axis — robots can then learn how to compose them into actions that mimic human behavior.
This is all very ambitious, of course. But if machine learning techniques continue to improve in the next five years as much as they have in the past five, they will allow computers to become very powerful in fundamentally new ways. Autonomous cars will be just the beginning. | <urn:uuid:904ea266-a6ed-47c0-9945-7d4e31384b89> | CC-MAIN-2017-09 | https://fcw.com/articles/2013/07/15/machine-learning-change-the-game.aspx?m=1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00614-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957667 | 3,379 | 3.171875 | 3 |
Kaspersky Lab presents ‘End of the Line for the Bredolab Botnet?’, an article by malware analyst Alexey Kadiev. The botnet emerged in mid-2009 and comprised some 30 million infected computers all over the world. In October, the Dutch police force’s Cybercrime Department announced the shutdown of 143 Bredolab botnet control servers. Alexey Kadiev’s article reveals both the business models and the malware technologies used to construct a botnet that managed to operate successfully over a prolonged period of time.
Bredolab’s key purpose is to download other malicious programs onto victim computers. One of the botnet’s most distinguishing features was its method of operation: legitimate websites that had been hacked were used to spread the botnet’s payload. Visitors to these websites were redirected to malicious resources which resulted in their computers being infected with Backdoor.Win32.Bredolab. In turn, Bredolab downloads other malicious programs, including a Trojan that steals passwords to FTP accounts. After some time, the website for which the account details were stolen also becomes infected. Using stolen usernames and passwords for FTP accounts some of the website’s contents are downloaded and then uploaded back onto the website having been injected with malicious code from the server. After another user visits the infected site, the process described above begins all over again. The botnet’s self-sustaining capability as described above is no doubt effective, if only for the way that it automated the process of infecting ever more computers. Nevertheless, the cybercriminals continued to come up with new ways of spreading their malicious net ever wider. For example, the malicious code could be embedded into highly popular sites, distributed in spam mail imitating messages from Twitter, YouTube, Amazon, Facebook, Skype etc.
“Due to its complexity, the Bredolab botnet was most likely controlled by more than one person. However, at this point only one cybercriminal has been arrested in connection with this botnet,” says Alexey Kadiev. “It is possible that the other participants in this criminal group are still engaging in these activities, since the scheme that they came up with and put into operation is rather effective.”
Vulnerabilities in website coding can be used to infect a website. In order to minimize the chances that cybercriminals will take advantage of a vulnerability, it is necessary to monitor the software updates released and promptly update website software. It is worth remembering that some services also provide malware code scanning and scanning for unauthorized content changes. For security purposes, it is best to switch off any autosave functionality for FTP passwords and FTP clients. Many programs that steal FTP account passwords, particularly Bredolab’s Trojan-PSW.Win32.Agent.qgg, search for passwords that have been saved on an infected computer. For site administrators, it may be useful to make a backup copy of a website from time to time, including any databases and files that may contain important data, so that data is safe in the event of infection.
The full version of the article is available at www.securelist.com/en. | <urn:uuid:a5a500b0-a7aa-4055-b273-212a51a753d0> | CC-MAIN-2017-09 | http://www.kaspersky.com/au/about/news/virus/2010/Workings_of_30_Million_Strong_Bredolab_Botnet_Laid_Bare | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00134-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948855 | 662 | 2.65625 | 3 |
Richard Staton (Chair of Examiners for GCSE History) will take you through our key resources to help you plan, teach and assess our new GCSE History specification. In this 75 minute briefing, he'll cover the schemes of work, guidance on the Historic Environment, our guide to assessment and more.
This webinar has been specifically designed in response to the feedback that you’ve provided to us and the questions that our customer support team receive. Progressing through a new specification can be challenging so log in to our webinar to get the answers to your questions.
Our webinar has been specifically designed in response to the feedback that you’ve provided to us and the questions that our customer support team receive. Progressing through a new specification can be challenging so log in to our webinar to get the answers to your questions.
In this in-depth 90 minute briefing, Keith Milne (a highly experienced Principal Examiner and a Head of History with over 20 years teaching experience) will help you prepare for the new AS exams. He’ll provide an overview of teaching strategies for the analysis of interpretations, and primary and contemporary extracts at AS level. He’ll also discuss what skills your students will need to succeed at writing essays, including the expectations.
In this two hour briefing Richard Staton, Chair of Examiners for GCSE History, will explain the assessment details for our newly accredited GCSE History specification. He’ll talk you through our approach to question setting and mark schemes. You’ll also have the opportunity to learn more about our forthcoming support and post questions.
You have login credentials for AQA’s e-Library, but what next? If you’ve yet to get stuck in, this webinar is for you.
Led by Natalie Starkey from Cambridge University Press, in just 30 minutes you can gain ‘administrator level’ knowledge of our intuitive resource. The webinar will help you:
•Learn to navigate the English e-Library from page one.
•Confidently explain the features and functions to colleagues
•Get equipped to plan engaging lesson, homework and revision tasks
•Understand the role of the e-Library administrator, and why you need one.
Please note, the webinar will start at 16:00 on Friday 29th, so you won't be able to join it until 15:55. We suggest you check your settings in advance using links on the BrightTalk support pages.
If you have subject-specific questions, please email the AQA English team directly using either 'English-GCSE@aqa.org.uk' or 'English-GCE@aqa.org.uk'.
Good News! Our GCSE French, German and Spanish specifications have now been accredited. This webinar will provide you with an update on the changes to the specification and specimen assessment materials.
Hosted by Lindsay Porter, maths teacher and AQA Maths Advocate & Rick Horne, a Qualifications Developer on the AQA Maths Team
Now that we’ve (nearly) got the first term of teaching done it’s a good time to take a breath and look at how it’s gone. Lindsay will talk through:
* How her students have found the new GCSE
* Her department’s plans for the new assessment objectives
* Closing knowledge gaps around the new content
* Good resources they’re using
* The impact on their Key Stage 3 planning
Rick will be on hand to help with any questions for AQA and to tell you a bit about our new Key Stage 3 tests, how they help prepare students for the new GCSE exams, and a quick overview of our latest support and resources.
You’ve been teaching our new AS and A-level Economics specifications for a term now – we want to hear how it’s gone. Join us for an online webinar at 16:00 on Thursday 3 December to review the resources available to support your teaching, as well as the opportunity to have your questions about key areas of the new specifications answered by our presenter Mike Egan.
This free 60 minute webinar is a great opportunity for you to explore and share best practice – and make sure your voice is heard.
Keith Milne, Principal Moderator for the NEA and a Head of History with over 20 years teaching experience, will share his experiences of delivering the new AS and A-level History. In this two hour briefing, you'll receive guidance and practical advice on teaching the new specifications, cover strategies for handling the new demands and identify areas where support can be more effectively directed.
Government reform means that from September 2017, we won’t be able to offer GCSE Performing Arts (and neither will the other exam boards).
However, we’re looking into the possibility of developing a Technical Award in Performing Arts.
Still aimed at 14 - 16 Year olds, it will provide a stimulating alternative to the current GCSE and form part of a suite of new Technical (more vocationally focussed) qualifications. They will count towards school performance tables and students can study up to three of them alongside their GCSEs.
This session will give an update on the progress of the development of the Performing Arts Technical Award, and give you the opportunity to ask questions and help shape the design of the qualification.
Join us at our free webinar to learn more about our new ELC Mathematics specification. ELC Mathematics has been designed to build basic and relevant mathematical skills and is suitable for students of all ages. During the webinar you will learn more about the content, assessment structure and resources for the new specification. You will be provided with ideas for delivery from project-based approaches to co-teaching with GCSE Maths.
Join us at our free of charge webinar to learn more about AQA's new specifications for A-level Music. Our new A-level Music specifications are relevant and contemporary and will feature more music styles and genres, more artists and composers, and more opportunities to compose and perform to inspire your students and prepare them for further study.
What can the student of GCSE past tell you about GCSE papers yet to come?
Andrew Taylor, Head of Maths at AQA, and Craig Barton, Maths Advanced Skills Teacher, TES Secondary Maths Adviser and creator of mrbartonmaths.com will show you how last year’s Year 11’s performed when they sat question papers designed for the new Maths GCSE.
Andrew will dive into the data and what it can tell you, while Craig will share the lessons learned from reviewing all the students’ answers and how he’s planning to apply them to his preparation and lessons.
A repeat of our popular webinar before the summer break, find out more about our new specimen papers and see the great resources we have in place to help you plan, teach and assess our new Maths GCSE.
Get the detail on:
Our new papers: understand the latest changes, and learn about the key features of our assessments
Ready for first teaching: why we’ve got the best resources package you can get from an exam board, what it can do for you, and how to get started quickly and easily
Plan: how our route maps save you time and effort when writing your schemes of learning
Teach: how the route maps link your planning to practical classroom resources and lesson plans
Assess: measuring progress for the new GCSE, topic tests, mock exam analysers and more.
How our maths team can support you and who to ask for help | <urn:uuid:118cb5d0-cd66-44c6-ba07-4ef85d90cccc> | CC-MAIN-2017-09 | https://www.brighttalk.com/webcast/8193/206191 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00134-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.92731 | 1,579 | 2.71875 | 3 |
File Anti-Virus intercepts all file operations (such as reading, copying or execution) using the klif.sys driver and scans the files being accessed. If a file is infected, the operation is blocked, and by default the file is either disinfected or deleted.
Even if the Mail Anti-Virus and the Web Anti-Virus components are disabled, the user cannot run an infected file received via e-mail or downloaded from the Internet, because once the file is saved on the hard drive, it will be detected and blocked by File Anti-Virus. You cannot run a file from an e-mail attachment or from a website without saving it to the hard drive.
So, File Anti-Virus is of primary importance for the file system protection, which at the same time makes it the most important protection component in general.
File Anti-Virus uses the following scanning technologies:
Signature analysis. A virus detection method that uses signatures. A signature is a part of executable code, a checksum or some other binary string that helps detect whether a file is infected by the corresponding virus. Consecutive checks of a file against the signatures of known viruses return a verdict on whether the file is infected. This scanning method is very reliable, but it only detects viruses whose signatures have been added to the Anti-Virus databases
Heuristic analysis. This scanning method applies only to executable files. Kaspersky Endpoint Security starts the scanned file in a virtual environment, isolated from the operating system, and analyzes its behavior. This method requires more time when compared with the signature analysis, but allows the detection of some new viruses
Check against KSN lists. This method also applies to executable files only. A checksum is calculated for every scanned file, which is compared with the records in the local KSN database. Further, the following alternatives exist:
If neither signature nor heuristic analysis has detected an infection, the decision is made based on the information available in the local KSN cache on the client computer. If the local cache lacks information about this file, access to the file is allowed, and a background request is simultaneously sent to the KSN cloud. If the answer is received that the file is dangerous, File Anti-Virus scans it again. If KSN returns information that the file is harmless or if KSN servers cannot be reached, file scanning is finished
If either signature or heuristic analysis has detected that the file is infected, File Anti-Virus sends the request to KSN. If the local database lacks information about the file, File Anti-Virus will wait for the answer from the KSN cloud. If KSN considers the file to be clean, it is treated as non-infected despite the verdicts of signature and heuristic analysis. If the verdict is reaffirmed or information cannot be received from KSN (connection with KSN servers cannot be established), the file is processed as an infected one
As you can see from the scanning algorithm, the check against the KSN database complements the signature analysis and helps to decrease the probability of false positives.
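The decision flow can be summarized in pseudocode. The sketch below is illustrative only: the helper names, the verdict strings and the use of OSError for an unreachable KSN server are all invented, and the real product logic is more involved.

```python
# Illustrative sketch of the File Anti-Virus verdict flow, not Kaspersky code.
# signature_hit / heuristic_hit are the results of the two local analyses;
# the KSN lookups are passed in as callables returning "dangerous", "clean"
# or None (no record).

def final_verdict(signature_hit, heuristic_hit, ksn_cache_lookup, ksn_cloud_query):
    if not (signature_hit or heuristic_hit):
        cached = ksn_cache_lookup()          # local KSN cache on the client
        if cached == "dangerous":
            return "infected"                # a dangerous record triggers a rescan
        # cached is "clean" or None; with None, access is allowed immediately
        # and a background request is sent to the KSN cloud
        return "clean"
    # Signature or heuristic analysis flagged the file: wait for the cloud.
    try:
        if ksn_cloud_query() == "clean":
            return "clean"                   # KSN overrides the local detection
    except OSError:
        pass                                 # KSN unreachable: keep the local verdict
    return "infected"
```

Called as final_verdict(True, False, lambda: None, lambda: "clean"), the sketch returns "clean": a local detection suppressed by a clean cloud verdict, which is precisely the false-positive reduction mentioned above.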
File Anti-Virus settings that define the protection scope and other scanning parameters are gathered in the Security level group of parameters. In the policy, these parameters share a common lock, that is, they are locked or unlocked together. Considering the importance of File Anti-Virus, users should not be allowed to change the scanning parameters, so the lock in the Security level area should be closed.
By default, Protection scope of the File Anti-Virus includes:
All removable drives
All hard drives
All network drives
In other words, all drives from which malware can be run. The protection scope allows adding individual drives and folders instead of drive groups. However, disabling any standard scan scope considerably decreases the protection level, which is why this group of settings should be modified very cautiously. For example, if Cisco NAC or Microsoft NAP guarantees that all network nodes are protected with anti-virus software, All network drives can be removed from the protection scope. In this case, when a file on a network drive is accessed, it will be scanned by the Anti-Virus installed on the computer where the drive is located.
Types of files to be scanned
The File types setting can take one of three values:
All files—all files are scanned regardless of their format or extension
Files scanned by format—i.e. files that can contain executable malware code; in this case the file format is determined by analyzing the file header rather than by the file extension
Files scanned by extension—i.e. files with extensions characteristic of potentially infectable formats
The optimum value for File Anti-Virus is the middle one. Scanning all files requires considerably more resources without a dramatic improvement in protection. Scanning based on file extensions, on the other hand, risks skipping a malware object that has been renamed to a non-typical extension; such a file may later be opened or even run unchecked.
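To make the difference between the format-based and extension-based checks concrete, the sketch below detects an executable by its header (magic bytes). The function name is invented and the signature list is deliberately tiny; a real engine recognizes far more formats.

```python
# Detecting a file's format from its header rather than from its extension.
# Only a few well-known magic byte sequences are listed for illustration.

EXECUTABLE_MAGIC = (
    b"MZ",          # Windows PE executable
    b"\x7fELF",     # Linux ELF executable
    b"PK\x03\x04",  # ZIP container (also used by .docx, .jar, ...)
)

def looks_executable(path: str) -> bool:
    with open(path, "rb") as f:
        header = f.read(8)
    return header.startswith(EXECUTABLE_MAGIC)  # tuple of accepted prefixes
```

Renaming virus.exe to notes.txt changes nothing here, because the MZ header is still the first thing in the file.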
Heuristic analysis parameters are configured in the Scan methods group. Heuristics levels—Light, Medium or Deep—define the period of observing the object in the virtual environment. In the context of the File Anti-Virus operation this means an increased delay when a program is run. Therefore, completely disabling heuristic analysis within the File Anti-Virus component is acceptable.
The Scan only new and changed files option considerably decreases the number of scans performed by File Anti-Virus. If an object has been scanned and has not been modified since, it will not be scanned again. Kaspersky Endpoint Security receives information about the changes using the iSwift and iChecker technologies, whose settings are located on the Additional tab.
It is not recommended to scan compound files with File Anti-Virus. Unpacking these files consumes a lot of resources, while the files themselves pose no direct threat. Even if an archive contains a virus, no infected file can be run without unpacking it, and during unpacking it will be detected and blocked as a regular file. It is sufficient to scan compound files with on-demand scan tasks.
iSwift and iChecker
iSwift and iChecker scanning technologies are responsible for collecting data about the changes made to files. The iSwift technology extracts the data about changes from the NTFS file system. Therefore, the iSwift technology is used for the files located on NTFS drives. The iChecker technology is efficient for executable files located on the drives with non-NTFS file systems, for example, FAT32. The iChecker technology calculates and saves the checksums of the scanned executable files. If the checksum remains the same at the next check, it means that the file has not been changed. Both technologies save information about the file scan date and the version of the databases used for the scanning.
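A deliberately simplified model of such a record is shown below. The structure and field names are invented, but they capture the two facts the technologies store: the content checksum and the database version used for the last scan.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ScanRecord:
    checksum: str     # hash of the file contents at the last scan
    db_version: str   # anti-virus database version used for that scan

def needs_rescan(path: str, record: ScanRecord, current_db: str) -> bool:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # Rescan if the file has changed or the databases have been updated;
    # the real logic is subtler (see the quarantine and trusted periods below).
    return digest != record.checksum or record.db_version != current_db
```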
If the Scan only new and changed files option is enabled, the iSwift Technology and iChecker Technology checkboxes are of no importance. Even if you clear them, these technologies will still be used because without them Kaspersky Endpoint Security will not be able to determine which files have already been scanned and which of them have not been changed since the last scanning.
If the Scan only new and changed files setting is disabled, the iSwift Technology and iChecker Technology settings are relevant. In this case, either a quarantine period or a trusted period is associated with each file. During the quarantine period the file will be scanned even if it has not been modified, while during the trusted period the file will not be scanned.
The quarantine period is assigned to all files that have not been scanned yet or that have changed since the last scan. During the quarantine period, the file will not be scanned if it was already scanned with the same database version; for this purpose, the iSwift and iChecker technologies register the version of the anti-virus databases used for the scan. In all other cases, standard scanning is performed.
Once the quarantine period is over, the trusted period is assigned to the file. During the trusted period, the file is not scanned if it has not changed. Once the trusted period is over, the file is scanned once again when the necessity arises, and if it is not infected, a new trusted period is assigned, longer than the previous one. In case of any change, the file gets a quarantine period and everything begins from scratch.
When the Scan only new and changed files setting is enabled, the trusted period is not restricted in time. The trusted period expires only if the file is changed.
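The interplay of the two periods can be modeled as a small state machine, as sketched below. The initial quarantine length and the growth factor are invented; only the transitions follow the description above, and the trusted period is extended optimistically before the scan result is known.

```python
from dataclasses import dataclass

@dataclass
class FileState:
    phase: str          # "quarantine" or "trusted"
    phase_ends: float   # timestamp when the current period expires
    trusted_len: float  # length of the next trusted period, in seconds
    last_db: str        # database version used for the last scan

def should_scan(s: FileState, now: float, db: str, changed: bool) -> bool:
    if changed:                       # any change starts everything from scratch
        s.phase, s.phase_ends = "quarantine", now + 24 * 3600
        return True
    if s.phase == "quarantine":
        if now < s.phase_ends:
            return s.last_db != db    # skip only if scanned with these databases
        s.phase, s.phase_ends = "trusted", now + s.trusted_len
        return True                   # quarantine over: scan once more
    if now < s.phase_ends:
        return False                  # unchanged file within the trusted period
    s.trusted_len *= 2                # each clean rescan earns a longer period
    s.phase_ends = now + s.trusted_len
    return True
```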
Disabling the iSwift and iChecker technologies makes no sense in File Anti-Virus. This will either have no effect (if the Scan only new and changed files feature is enabled) or will lead to more scans and a general decrease of the computer performance.
The Scan mode determines the file operations that trigger scanning. It is simpler to describe them in the reverse order of their appearance:
On execution—only executable files are scanned, and only when they are started. Copying an infected executable file will go unnoticed. Switching File Anti-Virus into this mode decreases the security level considerably
On access—files are scanned when they are opened for reading or execution. The user may download malicious code from a website but will not be able to do anything with this file
On access and modification—files are scanned when any operation is performed on them. This is the safest mode, yet the most resource-consuming
Smart mode—the order of operations performed with the file is analyzed. If a file is opened for writing, the scan will be performed after it is closed and all changes to it are made. Intermediate changes made to the file are not analyzed. If a file is opened for reading, it will be scanned once on opening, but will not be rescanned on intermediate read operations until the file is closed
Essentially, Smart mode ensures the same protection as On access and modification but consumes fewer resources, which is why it is recommended for most computers. The On access or On execution modes can be used on computers where efficiency is more important than security, with the understanding that the probability of infection or virus spreading increases.
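The four modes differ only in which file operations trigger a scan, as the simplified mapping below shows; the operation names are invented, and Smart mode's close-time behavior is noted in the comment.

```python
SCAN_TRIGGERS = {
    "On execution":               {"execute"},
    "On access":                  {"execute", "open_for_read"},
    "On access and modification": {"execute", "open_for_read",
                                   "open_for_write", "close_after_write"},
    # Smart mode scans once when a file is opened for reading and, for
    # modified files, once when the file is closed, ignoring the
    # intermediate read and write operations.
    "Smart mode":                 {"execute", "open_for_read",
                                   "close_after_write"},
}

def triggers_scan(mode: str, operation: str) -> bool:
    return operation in SCAN_TRIGGERS[mode]
```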
Pausing File Anti-Virus
File Anti-Virus can be paused while a resource-consuming operation is performed using the settings in the Pause task area:
By schedule—the schedule (daily only) is set by specifying the time when the File Anti-Virus is to be paused and when it is to resume its normal operation. The time is specified in hours and minutes
At application startup—File Anti-Virus will pause when the specified program loads in the memory and will resume its operation when this program is unloaded from the memory
Standard security levels
The security levels can be managed using the three-position switch: Low, Recommended and High.
If any setting is modified, the security level is changed to Custom. In order to return to the standard level, click the By Default button.
When an infected object is detected, File Anti-Virus can try to disinfect or delete it. Most infected files cannot be disinfected, because they consist of nothing but malicious code.
Before a file is disinfected or deleted, its copy is placed into the backup storage. That way, if it contains important information or is deleted because of a false positive, the file can be recovered.
In some cases, it is impossible to say whether a file is definitely infected. If the threat is detected using heuristic analysis or the KSN database, or if the file merely resembles a virus signature, it receives the "suspicious" verdict.
Instead of being disinfected, suspicious files are moved from their original location into a separate repository called Quarantine. The files in the quarantine can be rescanned so as to update their status.
If the Roll back malware actions during disinfection option is enabled within the properties of the System Watcher component, after deleting an infected object, Kaspersky Endpoint Security rolls back its actions.
Malware detected by File Anti-Virus should not be left unprocessed, which is why the settings that regulate File Anti-Virus actions should be locked. The optimal choice is to disinfect and, if disinfection is impossible, to delete infected files.
Exclusions for objects
Sometimes File Anti-Virus erroneously returns the "infected" verdict. Such cases are rare and usually involve tailor-made software. The problem is mitigated by creating exclusion rules for objects.
Exclusions are configured in a separate group of settings, which are used by all protection components. An exclusion rule for objects consists of three attributes:
Object—the name of the file or folder to which the exclusion applies. The name of the object may include environment variables (systemroot, userprofile and others) and also “*” and “?” wildcard characters
Threat type—the name of the threat to be ignored (usually corresponds to a malware name), which can also be specified using wildcard characters
Component—the list of protection components to which the rule applies
Of the three attributes, the component list is mandatory, together with at least one of the first two. You can create a full-fledged exclusion rule for a particular file or folder without specifying the threat type—the selected components will then ignore any threats in the specified objects. Conversely, you can create an exclusion rule for certain threat types, for example for the UltraVNC remote administration tool, so that the selected components do not respond to this threat regardless of where it is detected.
All three attributes can also be specified simultaneously. For example, the exclusion list may contain a set of rules for widespread remote administration tools—UltraVNC, RAdmin, etc.—that specify both the threat type and the object (the typical location of the executable file). In this case, Kaspersky Endpoint Security does not respond to the administration tools run from Program Files, but if the user runs UltraVNC from another folder, Kaspersky Endpoint Security still considers it a threat.
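A hedged sketch of how such three-attribute rule matching could work — a toy model using shell-style wildcards, not Kaspersky's real matching engine; the rule data and threat names are purely illustrative:

```python
import fnmatch

# Illustrative rules: each has an object mask, a threat mask, and components.
EXCLUSION_RULES = [
    {"object": r"C:\Program Files\UltraVNC\*",
     "threat": "*RemoteAdmin*UltraVNC*",
     "components": {"File Anti-Virus"}},
]

def is_excluded(path: str, threat: str, component: str) -> bool:
    for rule in EXCLUSION_RULES:
        object_ok = rule["object"] is None or fnmatch.fnmatch(path, rule["object"])
        threat_ok = rule["threat"] is None or fnmatch.fnmatch(threat, rule["threat"])
        if object_ok and threat_ok and component in rule["components"]:
            return True
    return False

# Matches when the tool runs from its expected folder...
print(is_excluded(r"C:\Program Files\UltraVNC\vncviewer.exe",
                  "not-a-virus:RemoteAdmin.Win32.UltraVNC", "File Anti-Virus"))  # True
# ...but not when the same tool runs from elsewhere.
print(is_excluded(r"C:\Temp\vncviewer.exe",
                  "not-a-virus:RemoteAdmin.Win32.UltraVNC", "File Anti-Virus"))  # False
```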
Exclusions for applications
Security level settings can be adjusted so as to achieve the optimal performance-reliability balance for an average computer. But if the computer runs resource-consuming programs, their operation can be slowed down by the File Anti-Virus. This is especially true for the programs that perform numerous file operations, for example, backup copying or defragmentation. To avoid slowdowns, special measures can be taken.
The first thing to do is to configure an exclusion so that File Anti-Virus ignores file operations performed by the program. When adding exclusions under Trusted applications, within the Exclusions for Application window, specify the full or partial path to the executable file of the program and select the action—Do not scan opened files.
If the program has many processes, and the data files are located in one directory, it might be worthwhile to exclude this directory from the File Anti-Virus scan scope: Under Exclusion rules, add the rule, specify the necessary directory as its object, do not specify any threat type, and select File Anti-Virus in the list of components to apply the rule.
If the desired effect cannot be achieved by setting up exclusions, as a last resort, configure pausing File Anti-Virus while the program runs (in the Security Level settings, on the Additional tab).
Exclusion settings should be locked: users are often unable to configure exclusions properly, and the ability to change them can be abused to considerably weaken the protection of the computer.
When a policy is applied, all local exclusions are disabled and replaced with centralized ones. The default exclusions configured in the standard policy apply only to the remote administration tools; moreover, they are disabled. Therefore, in order to create a useful set of exclusions, the administrator should find out which exclusions are required to minimize impact to the users, and to set them up in the policy. The best way to do this is to create exclusions in the local Kaspersky Endpoint Security interface and then import them into the policy. | <urn:uuid:c875d823-8acd-4fa4-9de1-d0f258e88e26> | CC-MAIN-2017-09 | http://support.kaspersky.com/learning/courses/kl_102.98/chapter2.2/section1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00186-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.917249 | 3,298 | 2.8125 | 3 |
Authentication, Authorization, and Access Control
Identification vs. authentication vs. authorization
Many people puzzle over the idea of authentication in information security, and what tends to happen is that they confuse authentication with identification or authorization. These are in fact three distinct concepts and should be treated as such. Identification is simply claiming to be someone. You identify yourself when, speaking on the phone with someone who doesn't know you and asks who they are talking to, you say, "I'm Tom." You have just identified yourself.
In the information security world, this is analogous to entering a username. It is not the same as entering a password. Entering a password is a method of verifying that you are who you identified yourself as, and that is the next item on our list.
Authentication is how you prove that you are who you say you are. When you claim to be Tom Smith by logging into a computer system as "smith", the system will most likely ask you for a password. You have claimed to be that person by entering the name into the username field (that is the identification part), but now you need to prove that you really are that person. Most systems use a password for this, which is based on "something you know" — a secret shared between you and the system. Another form of authentication is presenting something you have, such as a driver's license, an RSA token, or a smart card. You can also authenticate through something you are; this is the foundation of biometrics. In that case you first identify yourself and then submit a thumbprint, a retina scan, or another form of bio-based authentication. Once you have successfully authenticated, you have done two things: you have claimed to be someone, and you have proven that you are that person. The only thing left is for the system to determine what you are allowed to do. Authorization is what happens after a person has been both identified and authenticated; it is the step that determines what that person can then do on the system. Authorization is the process of giving someone permission to do or have something. In multi-user computer systems, a system administrator defines which users are allowed access to the system and what privileges of use they have (for example, access to which file directories, hours of access, amount of allocated storage space, and so forth).
As mentioned above, authorization is essentially the process of granting permission. Common authorization models and controls include:
Least privilege: users are granted only the access they need to do their jobs, and nothing more.
Separation of duties: responsibilities are divided among people so that no single person can complete a sensitive task alone, preventing conflicts of interest and abuse.
ACLs: access control lists enumerate exactly which users may perform which operations on a resource, so each person receives only the appropriate access.
Mandatory access control: access decisions are enforced by the system according to classification labels, and apply to everyone in the organization.
Discretionary access control: resource owners decide at their own discretion who may access the data they control.
Rule-based access control: access is governed by predefined rules that every request must satisfy.
Role-based access control: permissions are attached to organizational roles, so users receive access based on the role they hold.
Time of day restrictions: access is permitted only during defined hours of the day.
Authentication can be provided in several ways:
Tokens: hardware or software tokens that generate or store one-time credentials.
Common Access Card: a smart-card ID issued to employees that combines identification with authentication.
Smart card: a card with an embedded chip that can be scanned at login.
Multifactor authentication: combining two or more independent factors to strengthen the login.
Beyond these, several standard protocols and algorithms are used (a minimal TOTP sketch follows this list):
- TOTP (Time-based One-Time Password algorithm)
- HOTP (HMAC-based One-Time Password algorithm)
- CHAP (Challenge-Handshake Authentication Protocol)
- PAP (Password Authentication Protocol)
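As one concrete example, here is a minimal, self-contained Python sketch of HOTP/TOTP as defined in RFC 4226 and RFC 6238 (SHA-1, 6 digits, 30-second time step — the defaults most authenticator apps use); the secret shown is an arbitrary demo value:

```python
import base64, hashlib, hmac, struct, time

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(secret_b32, int(time.time()) // step)

print(totp("JBSWY3DPEHPK3PXP"))  # the same value an authenticator app would show
```

Because the code is derived from a shared secret plus the current time window, it expires within seconds — which is exactly what makes it a useful second factor on top of a static password.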
Single sign-on: one authentication event grants access to multiple systems, often via a single card or credential.
Access control: access decisions should be enforced and logged so they can be audited.
Implicit deny: anything not explicitly permitted is denied by default.
Trusted OS: the operating system itself must be hardened and trustworthy, since it enforces all of the above.
The classic authentication factors are these:
Something you are: a physical characteristic of the person, such as a fingerprint or retina.
Something you have: a physical item in the person's possession, such as a token, smart card, or phone.
Something you know: a secret known to the person, such as a password or PIN.
Somewhere you are: the person's location; authentication can fail if someone is not where they are supposed to be.
Something you do: a behavioral trait, such as typing rhythm or a signature, can also serve as an authentication factor.
Here are some technologies commonly used for identification and authentication:
Biometrics is often seen as a panacea for authentication problems, but it clearly is not. Commonly attempted biometric data include fingerprints, retina scans, voice recognition, and face recognition. Fingerprints are the most widely used, with relatively cheap readers (US$50 to $200) that provide reasonably usable data. Hard data is not available on how often fingerprints are similar, but it is generally accepted that false matches are rare. Retina scans are probably just as reliable, though again hard data is not widely available. Voice and face recognition are hard to get right. Biometric credentials of all kinds have several problems: the reader software always matches the incoming image against a set of stored reference images, one for every known user. We would prefer it to output a normalized datum that is the same each time the same user is seen, as a password would be, because some authentication schemes require such a datum to use as an encryption key. The user's body is also not static: for instance, a cut finger may invalidate a fingerprint, and a stuffed-up nose can invalidate a voiceprint. The authentication system must therefore be able, without losing security, to replace the user's reference image on short notice without access to the old authentication token; for some uses, e.g. medical, it is particularly critical to provide service reliably to an injured or ill user.
Personal identification verification card:
The card is used for user authentication in every mobile phone (the SIM), is making inroads in the credit-card business, and is used by some organizations for authenticating users on their workstations. It acts as a key agent, holding a secret key, usually an RSA key. When a server performing authentication sends a message, client software passes it to the smart card, which encrypts or decrypts it. Smart cards have a number of security issues:
Cards connect to the client workstation by physical contact in a USB or hardwired reader (ISO 7810) or by radio (RFID, ISO 14443); IrDA (narrow-beam infrared) is possible, but I have not heard of it being used. With physical contact the holder knows which host the card is inserted in, but RFID can act at a distance, and card skimming, as a thief might do, has been demonstrated. A small subset of cards includes a keypad so the user can enter a password each time the card is used; this hardware is expensive and easily damaged, and is rarely used. A password might be tolerable on a credit card, but it precludes using the card for the transitive authentication that happens frequently, such as file access or message retrieval. Some cards need to see a password (a PIN of four to six digits) from the user before they will communicate, sent over the standard interface. Again, this precludes using the smart card for generic transitive authentication. Worse, in the credit-card setting the PIN would have to be supplied to the merchant's equipment — and to any criminals lurking in the merchant's systems. The remaining cards are always active, so if an adversary physically steals the card or communicates with it surreptitiously (RFID only), he can impersonate the holder. It would be much better if the card required its counterpart to authenticate, for example with an X.509 certificate it has been programmed to trust.
A login ID — also called a user ID or username — is the name given to a user on a computer or computer network. This name is usually an abbreviation of the user's full name or an alias. For example, a person named John Smith might be assigned a username such as smithj: the last name followed by the first letter of the first name. Usernames allow multiple users to share the same workstation or online service with their own individual settings and files. On a website, a username gives you your own settings and identity with that site or service.
In information technology (IT), federated identity management (FIM) amounts to having a common set of policies, practices, and protocols in place to manage the identity of, and trust in, IT users and devices across organizations. Single sign-on (SSO) systems allow a single user-authentication process across multiple IT systems or even organizations; SSO is a subset of federated identity management, as it relates only to authentication and technical interoperability. Centralized identity-management solutions were created to handle user and data security where the user and the systems they accessed were within the same network, or at least the same "domain of control." Increasingly, however, users access external systems that are fundamentally outside their domain of control, and external users access internal systems. This ever more common separation of the user from the systems requiring access is an inevitable by-product of the decentralization brought about by the integration of the Internet into every aspect of personal and business life. These evolving identity-management challenges — particularly the challenges of cross-company, cross-domain access — have given rise to a new approach known as federated identity management. FIM, the "federation" of identity, describes the technologies, standards, and use cases that enable the portability of identity information across otherwise autonomous security domains. The ultimate goal of identity federation is to enable users of one domain to securely access data or systems of another domain seamlessly, and without the need for completely redundant user administration. Identity federation comes in many flavors, including "user-controlled" or "user-centric" scenarios as well as enterprise-controlled or business-to-business scenarios. Federation is enabled through the use of open industry standards and/or openly published specifications, so that multiple parties can achieve interoperability for common use cases. Typical use cases include cross-domain web single sign-on, cross-domain user account provisioning, cross-domain entitlement management, and cross-domain user attribute exchange.
Transitivity determines whether a trust can be extended beyond the two domains between which it was formed. You can use a transitive trust to extend trust relationships to other domains, and a non-transitive trust to deny trust relationships to other domains.
In short, there are many methods available for securing access. Authentication, authorization, and access controls all exist to keep users and systems safe, so it is worth understanding them well enough to stay out of trouble when intrusions of any kind are attempted.
Prevention and protection vital in countering the pernicious threat of ransomware
“Your personal files are encrypted!” glares the headline on a red pop-up window. The text that follows warns the user that all of the photos, videos and documents stored on the computer were encrypted with a secret encryption key. Unless the user pays a $500 ransom, then a virus will destroy those files permanently.
Words like these must have struck fear into the hearts of IT administrators at the Midlothian, Ill., police department when they appeared on a police computer in January 2015. Lacking any solid technical alternative, the department paid a $500 ransom to unknown attackers to restore access to critical files.
While a police department may feel especially embarrassed when successfully extorted by unknown cybercriminals, thousands of people around the world experience this same scenario every day. Ransomware, a fairly new class of malware, infects systems and holds important personal information hostage unless the user meets the attackers’ financial demands. Fortunately, there are simple steps that users and businesses can take to protect themselves against ransomware infection.
What is Ransomware?
From a technical perspective, ransomware isn’t much different from any other form of malware. It spreads to new victims through a variety of mechanisms, including the use of drive-by downloads. In this attack, hackers compromise otherwise normal websites and reconfigure the site to distribute ransomware. When an unsuspecting user visits the compromised site, a hidden download exploits vulnerabilities in the user’s computer to install the ransomware on the system and wreak havoc on personal information.
Ransomware departs from the tactics of its malware brethren by taking advantage of strong cryptographic techniques to prevent legitimate access to files. Cryptography, normally a technique used to protect sensitive information, uses encryption keys to convert normal files into versions that may not be read without the appropriate decryption key. It’s a tactic similar to password protecting a file. If you don’t know the decryption key, you simply can’t access the content.
This is a very effective technique for transferring sensitive information between systems and individuals over otherwise insecure networks. In fact, the HTTPS secure websites users visit every day use encryption to protect information sent back and forth between the user and the web server.
When ransomware uses encryption, however, it has much darker intent. The malware scours the infected system’s hard drive, searching out personal files. Each time it encounters such a file, it encrypts it using a secret key known only to the malware author. When the legitimate user attempts to access his or her files, the encryption stymies that effort and the ransomware pops up a demand for payment in Bitcoin or other anonymous digital currency. If the user pays the ransom, the attacker sometimes (but not always!) provides the decryption key used to restore file access. If the user doesn’t pay the ransom, the encryption may result in the potentially devastating permanent loss of data.
Protecting Against Ransomware
Fortunately, there are ways that users and organizations can protect themselves against the ransomware threat. The same good computer-security practices that IT professionals have advocated for years apply to this new threat. Well-maintained systems should be resistant to most ransomware, although no technique is foolproof.
First and foremost, every system connected to a network should run antivirus software from a reputable vendor with current signature files installed. That means paying the annual license fee to maintain current protection. If users don’t purchase these updates, the antivirus software cannot effectively defend against new risks. Each day that passes without a signature update significantly increases the risk of infection by ransomware or other malware nasties.
Second, IT staffers should install operating system patches and software security updates on a regular basis. The drive-by download technique favored by ransomware creators depends upon exploiting known flaws in operating systems, web browsers and other applications. Running old, unpatched software provides a pathway that may allow malware to enter the system.
Finally, there’s no substitute for practicing safe web browsing habits. Users should avoid visiting suspicious sites, downloading unapproved software, and clicking on unknown attachments. Making one of these simple mistakes, even a single time, can trigger an irreversible ransomware infection. Organizations can complement safe browsing education programs with technical filters that block access to known malicious sites from the organization’s network. This is an effective way to block some infections, but IT staffers must remember that many computers leave the safe confines of the corporate network and access the Internet from unfiltered connections at hotels, airports, coffee shops and similar locations.
The key to avoiding ransomware infection is the same as protecting against many other security risks — practice defense in depth. No single security control is a panacea in the fight against malware. Building a series of layered defenses dramatically increases the safety of Internet-connected systems.
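Defense in depth can also include simple technical tripwires. As a hedged illustration (not a feature of any particular security suite), the following Python sketch flags files whose contents look uniformly random — a common heuristic for spotting freshly encrypted documents. Legitimately compressed formats (ZIP, JPEG, MP4) also score high, so treat a hit as a prompt for investigation, not a verdict; the directory path is illustrative:

```python
import math, os

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 = uniformly random)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def flag_possibly_encrypted(root: str, threshold: float = 7.5):
    """Yield files whose first 64 KB looks uniformly random."""
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    sample = fh.read(65536)
            except OSError:
                continue
            if entropy(sample) > threshold:
                yield path

for suspect in flag_possibly_encrypted(os.path.expanduser("~/Documents")):
    print("high-entropy file:", suspect)
```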
What If You’re Infected?
What happens when defenses fail and a system falls victim to ransomware infection? Unfortunately, the prognosis is bleak. Ransomware uses very strong encryption technology and it is virtually impossible to decrypt files without access to the secret decryption key.
If an organization has backups of the files stored on a computer, the best bet is to simply wipe and rebuild the infected system and then restore the unencrypted files from backup. When taking this path, it’s very important to verify the security controls described earlier are in place. Without antivirus software, content filtering and safe browsing habits, the system may fall victim to the same infection again.
If backups don’t exist, there aren’t many great options. Organizations can take the same path as the Midlothian police department and pay the ransom, but that’s a risky proposition. There’s no guarantee that anonymous criminals will honor their word and provide the decryption key. If the organization refuses to pay the ransom and no copies of the files exist elsewhere, data loss may be inevitable.
Ransomware is big business. Symantec recently issued a report analyzing the ransomware industry and estimated that ransomware developers may rake in as much as $400,000 per month! By taking simple security steps, organizations may protect their computers and critical files from this dangerous threat. | <urn:uuid:bdc55060-2f29-4c5d-9f45-caa811d73ad5> | CC-MAIN-2017-09 | http://certmag.com/protection-prevention-vital-countering-threat-of-ransomware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00062-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.899633 | 1,282 | 3.3125 | 3 |
PON (Passive Optical Network) is a form of fiber-optic access network whose optical distribution network (ODN) contains no electronic devices and requires no electrical power: the ODN is built entirely from optical splitters and other passive components, with no need for expensive active electronics. PON also reduces the amount of fiber and central-office equipment required compared with point-to-point architectures.
A PON consists of an optical line terminal (OLT) at the service provider's central office and a number of optical network units (ONUs) near end users. Between the OLT and the ONUs, the optical distribution network consists of optical fiber and passive optical splitters or fiber-optic couplers.
An OLT — generally an Ethernet switch, router, or multimedia conversion platform — is located at the central office (CO) as the core device of the whole EPON system, providing the data, video, and telephone network interfaces between the EPON and the service provider.
ONUs connect the customer premises equipment, such as PCs, set-top boxes (STBs), and switches. Generally placed at customers' homes, in corridors, or at roadsides, ONUs are mainly responsible for forwarding upstream data sent by customer premises equipment (from ONU to OLT) and for selectively receiving downstream broadcasts forwarded by the OLT (from OLT to ONU).
An ODN consists of optical fibers, one or more passive optical splitters (POSs), and other passive optical components. ODNs provide optical signal transmission paths between OLTs and ONUs. A POS can couple uplink data into a single piece of fiber and distribute downlink data to respective ONUs.
There are two passive optical network technologies: Ethernet PON (EPON) and gigabit PON (GPON). EPON and GPON are applied in different situations, and each offers its own advantages in subscriber access networks. EPON focuses on FTTH applications while GPON focuses on full service support, including both new services and existing traditional services such as ATM and TDM.
EPON is a Passive Optical Network which carries Ethernet frames encapsulated in 802.3 standards. It is a combination of the Ethernet technology and the PON technology in compliance with the IEEE 802.3ah standards issued in June, 2004. A typical EPON system consists of three components: EPON OLT, EPON ONU and EPON ODN. It has many advantages, such as lower operation and maintenance costs, long distances and higher bandwidths.
GPON utilizes a point-to-multipoint topology. The GPON standard differs from other PON standards in that it achieves higher bandwidth and higher efficiency using larger, variable-length packets, and GPON is generally considered the strongest candidate for widespread deployments. GPON has a downstream capacity of 2.488 Gb/s and an upstream capacity of 1.244 Gb/s that is shared among users.
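A quick back-of-the-envelope calculation shows what those shared line rates mean per subscriber at common splitter ratios. This naive division ignores framing overhead and dynamic bandwidth allocation, so real per-user throughput varies with load:

```python
DOWNSTREAM_GBPS = 2.488   # GPON downstream line rate
UPSTREAM_GBPS = 1.244     # GPON upstream line rate

for split in (32, 64, 128):                 # common splitter ratios
    down = DOWNSTREAM_GBPS / split * 1000   # Mb/s per subscriber
    up = UPSTREAM_GBPS / split * 1000
    print(f"1:{split} split -> {down:6.1f} Mb/s down, {up:5.1f} Mb/s up")
```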
There are also many differences between EPON and GPON. EPON, based on Ethernet technology, is compliant with the IEEE 802.3ah Ethernet in the First Mile standard, now merged into IEEE Standard 802.3-2005; it is a solution for the "first mile" of the optical access network. GPON, on the other hand, is an important approach to enabling a full-service access network. Its requirements were set forth by the Full Service Access Network (FSAN) group and later adopted by ITU-T as the G.984.x standards — an addition to ITU-T recommendation G.983, which details broadband PON (BPON).
Both EPON and GPON are accepted as international standards. They cover the same network topologies and FTTx applications, and they incorporate the same WDM technology, using the same upstream and downstream wavelengths together with a third wavelength. PON technology supports triple-play services: Internet, IP television (IPTV), and cable TV (CATV) video.
On the popular television series CSI, detectives and criminalists use technology to gather electronic evidence from cellphones and mobile devices during an investigation. Much like the TV show, the Madison, Wis., Police Department finds that technology is key for collecting such forensic evidence once the devices in question are in their possession.
Madison PD detective Cindy Murphy said that cellphones often contain information relevant to an investigation, particularly in cases that involve stalking, homicide and illegal drugs. To extract information from cellphones, like text messages, call history, photos, or other data, police department staff have utilized Cellebrite technology. Since 2006, they have used the tool for critical mobile data visualization.
According to Cellebrite, the technology can extract data and passwords from thousands of phones, smartphones, portable GPS units and tablets - even phones manufactured with Chinese chipsets. The technology can also perform physical extraction and decoding on platforms including BlackBerry, iOS, Android and Nokia.
By connecting a cellphone or mobile device to Cellebrite’s hardware component, the data can be pulled out of the phone. Using its software component, the department can then analyze the data, Murphy said.
However, the department can only legally extract cellphone data if the device was obtained with a search warrant or if a witness provided consent to have his or her cellphone information used for investigative purposes.
Warrant exceptions do exist, for example, in cases of life and death situations. Murphy said if such conditions arise, the department can get the green light to use the extraction technology. In cases that deal with illegal activity like drug deals, suspects might take photos on their phones of the illegal narcotics. If the police obtain the phone, the photos can be used as evidence later in court, Murphy said.
“Bad guys always like to take pictures of their drugs, their guns and their girlfriends,” Murphy said. “If you want to brag to your buddies about what you’ve just done [sharing photos] is one way to do it.”
But unlike on CSI, where cellphone data is extracted for forensic examination in a matter of seconds, Murphy said the process takes much longer in reality. Phone type and the kind of data the department is looking to extract from a phone factor in to the length of the recovery process.
Extracting text messages may be as quick as 15 minutes, but performing full data recovery on a cellphone requires additional forensics, which can take days or even weeks. Depending on the phone’s make and model, the department may also be able to recover a device's deleted material.
In many cases, Murphy said the department successfully recovered deleted text messages, videos and call history.
Outside Wisconsin, other law enforcement agencies have also deployed Cellebrite technology to help collect evidence from mobile devices. The Anderson, S.C., Police Department and the Sacramento County, Calif., Sheriff’s anti-gang task force, are among the users, according to a Cellebrite press release.
“The more we can do in the field to identify leads and cut short criminal operations, the faster we can complete our investigations,” said Dan Morrissey, the task force’s commander of the Intelligence Operations Group, in the release. | <urn:uuid:43b835f2-7a68-4613-ad35-6c65919ed9cf> | CC-MAIN-2017-09 | http://www.govtech.com/local/Madison-Police-Extract-Forensic-Evidence-from-Cellphones.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00006-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94097 | 667 | 2.65625 | 3 |
The Fourth Industrial Revolution is under way and while it could lead to the loss of more than 7 million jobs over the next few years, it also should add jobs in the fields of computer science, math and engineering.
The global workforce is expected to go through "significant churn" in the next four years, according to a report from the World Economic Forum, a Geneva, Switzerland-based organization focused on analyzing and improving the state of the world.
"Developments in previously disjointed fields, such as artificial intelligence and machine learning, robotics, nanotechnology, 3D printing and genetics and biotechnology are all building on and amplifying one another," the report noted. "Smart systems -- homes, factories, farms, grids or entire cities -- will help tackle problems ranging from supply chain management to climate change."
Analysts expect the impact from many of these changes to arrive within the next five years.
That could mean a net loss of more than 5.1 million jobs between 2015 and 2020 — a total of 7.1 million jobs lost, partially offset by gains elsewhere in the labor market. Two-thirds of those losses could come from white-collar jobs.
It's not all bleak, though -- especially for those in tech fields.
The World Economic Forum predicts that, in the same time frame, the world workforce will gain 2 million jobs in fields related to computer science, engineering and mathematics.
Survey respondents from around the world largely pointed to data analysts as workers who will be in demand across a multitude of industries. Data analysts are expected help companies "make sense and derive insights from the torrent of data generated by technological disruptions," according to the report.
"This makes sense," said Patrick Moorhead, an analyst with Moor Insights & Strategy. "As with the Industrial Revolution, there was job shrinkage in some areas and job additions. I think the same applies here, but we may call it the robotic revolution. We would say it's a good thing, even though some people will lose their jobs."
Echoing some previous reports, this recent survey notes that high-tech advances, like robotics and artificial intelligence, won't simply take jobs away but will change the types of jobs available.
"Technological disruptions such as robotics and machine learning, rather than completely replacing existing occupations and job categories, are likely to substitute specific tasks previously carried out as part of these jobs, freeing workers up to focus on new tasks and leading to rapidly changing core skill sets in these occupations," the report noted. "On average, by 2020, more than a third of the desired core skill sets of most occupations will be comprised of skills that are not yet considered crucial to the job today, according to our respondents."
That makes sense to Moorhead.
"If you have sensors and wireless communications on your gas meters, you don't need as many people checking those gas meters," he explained. "But you need people to monitor the sensors and to design and develop the systems that do that."
In the technical field, the biggest drivers of change will be the cloud and mobility, followed by processing power and big data. New energy supplies will be followed by the Internet of Things, reported the World Economic Forum.
And technical skills alone won't be enough to get that great job. Survey respondents said they want people who also show sound social skills, like persuasion, emotional intelligence and the ability to teach others.
"[They] will be in higher demand across industries than narrow technical skills, such as programming or equipment operation and control," the report said. "In essence, technical skills will need to be supplemented with strong social and collaboration skills."
This story, "The tech revolution could change (or erase) your job by 2020" was originally published by Computerworld. | <urn:uuid:d71be469-5801-483f-9610-7a31511df0bc> | CC-MAIN-2017-09 | http://www.itnews.com/article/3026294/it-careers/the-tech-revolution-could-change-or-erase-your-job-by-2020.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00058-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966379 | 774 | 2.96875 | 3 |
Forget Captcha, Try Inkblots
Researchers propose using an inkblot-matching scheme, dubbed Gotcha, to defeat dictionary-based hacks of the Captcha system.
Psychoanalysis fans, rejoice: You might soon be able to log in to websites using inkblots. So goes the pitch for a new password mechanism developed by researchers at Carnegie Mellon University.
The three researchers have dubbed their new system Gotchas -- for Generating panOptic Turing Tests to Tell Computers and Humans Apart -- which they said boils down to "a randomized puzzle generation protocol, which involves interaction between a computer and a human," according to a summary of their research. They're scheduled to present a related "Gotcha Password Hackers!" paper at the 2013 ACM Workshop on Artificial Intelligence and Security (AISec) next month in Berlin.
Here's how a Gotcha works: first, an inkblot is generated and the user is asked to enter a text description. The site stores both the inkblot and the description. Whenever the user returns, it displays the inkblot and asks the user to recognize their previous description from among multiple candidate selections.
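To make the round trip concrete, here is a toy Python sketch of the recognize-your-own-description flow. It is only a model of the user experience: the researchers' actual construction cryptographically binds the user's choices into the stored password hash (which is what forces an offline cracker to involve a human), and that binding is omitted here. All identifiers and phrases are invented:

```python
import random

# Hypothetical per-user store created at enrollment:
# inkblot id -> the free-text description the user originally typed.
enrolled = {
    "blot-17": "two bears high-fiving",
    "blot-42": "a rusted ship anchor",
}

def build_challenge(blot_id: str, decoy_phrases: list) -> list:
    """Show the stored inkblot alongside shuffled candidate descriptions."""
    options = decoy_phrases + [enrolled[blot_id]]
    random.shuffle(options)
    return options

def verify(blot_id: str, chosen: str) -> bool:
    """The user passes by recognizing their own original phrase."""
    return chosen == enrolled[blot_id]

options = build_challenge("blot-17", ["a melting candle", "a pair of wings"])
print(options)
print(verify("blot-17", "two bears high-fiving"))  # True
```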
Information security researchers have already tested inkblots -- which of course recall the Swiss Freudian psychiatrist and psychoanalyst Hermann Rorschach's pioneering, eponymous work -- as an authentication mechanism. But previous approaches forced users to recall the exact phrase they'd first used to describe the stored inkblot, which created a usability challenge, the Carnegie Mellon researchers argued. By comparison, the construction of their system "relies on the usability assumption that users can recognize the phrases that they originally used to describe each inkblot image," they said.
One use for Gotcha would be to prevent attackers from grabbing password files from servers, then cracking them offline, which continues to be a pervasive problem. "Any adversary who has obtained the cryptographic hash of a user's password can mount an automated brute-force attack to crack the password by comparing the cryptographic hash of the user's password with the cryptographic hashes of likely password guesses," the researchers said in their paper. "This attack is called an offline dictionary attack, and ... [such attacks] are -- unfortunately -- powerful and commonplace." Indeed, numerous companies, including Gawker, LinkedIn, Sony and Zappos, have seen their users' passwords compromised in this manner.
By using Gotchas, businesses could "mitigate the threat of offline dictionary attacks against passwords by ensuring that a password cracker must receive constant feedback from a human being while mounting an attack," the researchers said. In other words, even if attackers recovered usernames and passwords via an offline dictionary attack, they'd still need a human to manually handle one or more Gotcha challenges before gaining access to any given account. From an economic standpoint, such attacks likely wouldn't be worth an attacker's time.
New inkblot test?
As the name Gotcha suggests, the proposed new system might also serve as a replacement for the reviled Captcha tests currently employed by many sites as a challenge-response mechanism. Captcha -- based on the word "capture" -- is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. The technique, likewise developed at Carnegie Mellon, but back in 2000, was intended to allow a computer to tell if it was dealing with a human or another machine.
Since its debut, the Captcha has become a standard challenge-response system for everything from ticket-buying sites to online comment boxes. The underlying goal has always been to make the puzzles easy for real people to solve, and difficult -- if not impossible -- for a computer to conquer. Unfortunately, however, spam syndicates and online criminals keep improving their ability to bypass Captchas, in some cases by designing more automated attack tools, and in other cases by tricking people into solving a site's Captchas for them, for example by offering free porn.
Will the new Gotcha system be stronger than the Captcha that people have come to know and despise? To test that possibility, the Carnegie Mellon researchers have issued an open call to security researchers to try to break their inkblot-matching Gotcha construction techniques via their Gotcha Challenge website. "The goal of this challenge is to see if artificial intelligence techniques can be applied to attack our Gotcha construction," they said.
Participants can download five files associated with passwords generated using Gotcha inkblot-generating techniques. Depending on how tough these password files get cracked, website users might soon be describing inkblots. The psychoanalysis is optional. | <urn:uuid:5e979192-82a5-49b0-8081-8ee5af498956> | CC-MAIN-2017-09 | http://www.darkreading.com/attacks-and-breaches/forget-captcha-try-inkblots/d/d-id/1111975?cid=sbx_iwk_related_mostpopular_default_microsoft_gets_gartners_business_intelli&itc=sbx_iwk_related_mostpopular_default_microsoft_gets_gartners_business_intelli | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00234-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938123 | 989 | 2.875 | 3 |
Cloud computing is the delivery of a computing resource via the Internet and as a service. Commonly associated with software, cloud computing could theoretically be any computing asset or object being delivered via the Internet and as a service.
Cloud computing has its roots in the mid-1990s, when Application Service Providers (ASPs) were popular and delivered hosted applications as a service via the Internet. ASPs were distinctive because their applications were delivered to customers on a one-to-many basis: instead of a customer buying an application, hosting it, and maintaining it themselves, they contracted with the ASP to do it for them. These ASP companies were close cousins of Managed Service Providers, who delivered other forms of computing resources, also as a service and also on a one-to-many basis.
Today, the "cloud" is often used to refer to the Internet or "online" by non-technical people, and has taken on a very generic meaning. Cloud computing today refers to both consumer and business grade solutions, from storing photos and music to performing complex computing services for businesses and organizations.
There are also large companies, like Apple, Microsoft, Google, Amazon, and others, all of whom have their own cloud computing environments. For example, Apple's iCloud is a popular consumer platform used to provide email, photo storage and sharing, music stream, movie viewing, and other services. Amazon, on the other hand, has developed a cloud platform which is widely used by companies for cloud computing solutions like storage, security, etc.
In contrast to these "public cloud" vendors, there is a growing movement of MSPs who are developing "private cloud" platforms, which have a number of benefits and distinctions when compared to public cloud offerings. | <urn:uuid:4c0693f6-9dfc-408d-aa90-f6023d7a9f63> | CC-MAIN-2017-09 | https://mspalliance.com/cloud-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00234-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.975056 | 349 | 3.453125 | 3 |
The FBI's Internet Crime Complaint Center (IC3) recently published a warning about Smishing and Vishing. These mobile phone threats are variations of phishing, but smishing uses SMS texts to initiate the scam, while vishing uses automated phone calls.
These threats are new variations on an old and costly methodology of identity theft. The problem here is that mobile users who are novices with regard to computer security threats are simply unaware they are in jeopardy when they respond to text and audio phishing on their mobiles.
Meanwhile, sophisticated corporate IT users who should know better are compromised via their mobile phones just as easily.
Just to backup a step, SMS stands for short message service. SMS is also often referred to as texting, sending text messages or text messaging. The service allows for short text messages to be sent from one cell phone to another cell phone or from the Web to another cell phone.
Just because the SMS service runs on a phone does not make it impervious to computer phishing.
The particularly nasty form of SMS spam called smishing is the act of phishing by SMS for private information, often to be used for identity theft. These smishing attempts take the form of text messages and voice messages that come to your phone saying things like "We're confirming your parcel delivery," "Your account status has been changed," or "ABC Credit Card is confirming your purchase."
The user is given a phone number to call or a website to log onto to provide account credentials to remedy the issue. Or the victim is directed to a spoofed web site. A spoofed web site is a fake site that misleads the victim into providing personal information, which is in turn routed to the scammer's computer.
If a victim attempts to telephone back to the inbound number of a phishing call they will most probably encounter no voice mail or a constantly busy signal. This is due to attackers calling from throw-away, untraceable phones, rendering these calls virtually untraceable.
The FBI report said a recent smishing scam was used to steal money from customers of a credit union. After receiving a text about an account problem, victims called the number provided and gave out their personal information; within 10 minutes, money was withdrawn from their bank accounts. The same technique was also recently used to attack banking customers, who were told via text that they needed to reactivate their ATM cards at a bogus website.
What to do. What not to do.
Once again, here are old and trusted simple steps to avoid being a victim of identity theft and fraud:
• Do not respond to text messages or automated voice messages from unknown or blocked numbers.
• Do not respond to unsolicited (spam) email.
• Do not click on links contained within an unsolicited email.
• Be cautious of email claiming to contain pictures in attached files, as the files may contain viruses. Only open attachments from known senders. Avoid filling out forms contained in email messages that ask for personal information.
• Do compare the link in the email with the link to which you are actually directed, and check for yourself whether it is the legitimate URL (a small link-auditing sketch follows this list). Better still, just log directly onto the official website for the business identified in the email. If the email appears to be from your bank, credit card issuer, or another company you deal with frequently, your statements or official correspondence from the business will provide the proper contact information.
• Do contact the actual business that supposedly sent the email to verify if the email is genuine.
• Do verify any requests for personal information from any business or financial institution by contacting them using the main contact information.
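For the link-comparison advice above, a small, hedged Python sketch shows one way to automate the check: flag any HTML link whose visible text names a different host than its actual destination. Real phishing filters are far more sophisticated; the example email snippet and domains are made up:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Flag <a> tags whose visible text names a different host than the href."""

    def __init__(self):
        super().__init__()
        self._href, self._text, self.suspicious = None, [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            # Only compare when the visible text itself looks like a URL.
            if shown.startswith("http") and \
                    urlparse(shown).netloc != urlparse(self._href).netloc:
                self.suspicious.append((shown, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example/reset">http://www.mybank.com</a>')
print(auditor.suspicious)  # [('http://www.mybank.com', 'http://evil.example/reset')]
```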
Have a secure week. Ron Lepofsky CISSP, CISM www.ere-security.ca | <urn:uuid:396471ef-5165-406a-97c1-845de4da8866> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2227870/access-control/what-s-your-pain-threshold-for-mobile-phone-identity-theft-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00103-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938564 | 771 | 2.71875 | 3 |
Following our series of posts on digital forensics, we will continue our journey of analyzing our compromised system. In the last two articles we covered Windows Prefetch and ShimCache. Among other things, we wrote that Windows Prefetch and ShimCache artifacts are useful for finding evidence about executed files and about executables that were on the system but weren't executed. While doing our investigation and looking at these artifacts, the Event Logs and the SuperTimeline, we found evidence that REGEDIT.EXE was executed. In addition, from the Prefetch artifacts we saw that this execution invoked a DLL called CLB.DLL from the wrong path. On Windows operating systems CLB.DLL is located under %SYSTEMROOT%\System32; in this case CLB.DLL was invoked from %SYSTEMROOT%.
However, when we looked inside the %SYSTEMROOT% folder we could not find any trace of the CLB.DLL file. This raised the following questions:
- How did this file get loaded from the wrong path?
- Did the attacker delete the file?
Let’s answer the first question.
Inside PE files there is a structure called the Import Address Table (IAT) that contains the addresses of the library routines imported from DLLs. When an application is launched, the operating system checks this table to understand which routines are needed and from which DLLs. For example, when I execute REGEDIT.EXE, the binary has a set of dependencies needed in order to execute. To see these dependencies, you can look at the IAT: on Windows you could use dumpbin.exe /IMPORTS, or on REMnux you could use pedump, as illustrated below.
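If you prefer a scriptable route, the same import table can be dumped with the third-party pefile library; a minimal sketch (the path is illustrative — regedit.exe lives directly under %SYSTEMROOT% on most Windows versions):

```python
# pip install pefile
import pefile

pe = pefile.PE(r"C:\Windows\regedit.exe")
for entry in pe.DIRECTORY_ENTRY_IMPORT:
    print(entry.dll.decode())                 # the DLL the binary depends on
    for imp in entry.imports:
        # imported routines come either by name or by ordinal
        print("   ", imp.name.decode() if imp.name else f"ordinal {imp.ordinal}")
```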
But from where will these DLLs be loaded? The operating system locates the required DLLs by searching a specific set of directories in a particular order. This is known as the DLL Search Order and is explained here. This mechanism can be — and frequently has been — abused by attackers, who plant a malicious DLL in a directory that is part of the DLL Search Order so that Windows loads the malicious DLL instead of the real one. By default on Windows XP and above, the DLL Search Order is the following (a small simulation of it appears after the list):
- The directory from which the application loaded.
- The current directory.
- The system directory.
- The 16-bit system directory.
- The Windows directory.
- The directories that are listed in the PATH environment variable.
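To see how placement wins, here is a simplified Python simulation of that search order (the 16-bit system directory is omitted, and SafeDllSearchMode — discussed below — is modeled as a flag). With REGEDIT.EXE living in %SYSTEMROOT%, a planted clb.dll in that same directory is found before the genuine one in System32:

```python
import os

def resolve_dll(dll_name, app_dir, cwd, safe_mode=True):
    """Walk a simplified DLL search order and return the first hit.
    The 16-bit system directory is omitted for brevity."""
    windir = os.environ.get("SystemRoot", r"C:\Windows")
    system32 = os.path.join(windir, "System32")
    path_dirs = os.environ.get("PATH", "").split(os.pathsep)
    if safe_mode:   # SafeDllSearchMode: current directory demoted below windir
        order = [app_dir, system32, windir, cwd, *path_dirs]
    else:           # legacy order: current directory searched second
        order = [app_dir, cwd, system32, windir, *path_dirs]
    for directory in order:
        candidate = os.path.join(directory, dll_name)
        if os.path.isfile(candidate):
            return candidate
    return None

# regedit.exe sits in %SYSTEMROOT%, so its application directory is searched
# first -- a planted C:\Windows\clb.dll beats the real System32\clb.dll.
print(resolve_dll("clb.dll", app_dir=r"C:\Windows", cwd=r"C:\Windows"))
```

Note how this explains our case: no registry tricks are required, only a file dropped where the application directory outranks System32.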
Not all DLL’s will be found using the DLL Search Order. There is a mechanism known as KnownDLLs Registry Key which contains a list of important DLL’s that will be invoked without consulting the DLL Search Order. This key is stored in the registry location HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\KnownDLLs.
Over the years Microsoft has patched some of the problems with the DLL Search Order mechanism and also introduced improvements. One is the Safe DLL Search Order setting, which changes the order by moving the search of "the current directory" further down the list — after the Windows directory — making it harder for an attacker without admin rights to plant a DLL in a place that is searched first. This feature is controlled by the registry value HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SafeDllSearchMode.
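On a live system you can check whether that value is set with a few lines of Python (Windows only; the value is typically absent on systems that simply use the safe default):

```python
import winreg  # standard library, Windows only

KEY = r"System\CurrentControlSet\Control\Session Manager"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    try:
        value, _ = winreg.QueryValueEx(key, "SafeDllSearchMode")
        print("SafeDllSearchMode =", value)  # 1 = safe order, 0 = legacy order
    except FileNotFoundError:
        print("Value not set; modern Windows uses the safe order by default")
```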
Bottom line: this technique is known as DLL pre-loading, side-loading or hijacking, and it is an attack vector used to take over a DLL and escalate privileges or achieve persistence by taking advantage of the way DLLs are searched. The technique can be pulled off by launching an executable that is not in %SYSTEMROOT%\System32 — like our REGEDIT.EXE — or by leveraging weak directory access control lists (DACLs) and dropping a malicious DLL into the appropriate folder. In addition, for this technique to work, the malicious DLL should export the same functions and functionality as the hijacked DLL, or act as a proxy for it, so that the executed program still runs properly. The picture below shows the routines exported by the malicious DLL; as you can see, these are the same functions REGEDIT.EXE requires from CLB.DLL.
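With pefile again, the export table of the genuine DLL can be dumped and compared against a suspect copy — a working hijack must expose the same names the host executable imports. A minimal sketch (path illustrative):

```python
import os
import pefile

# Dump the genuine DLL's exports; repeat for a suspect copy and diff the lists.
genuine = pefile.PE(os.path.expandvars(r"%SystemRoot%\System32\clb.dll"))
for symbol in genuine.DIRECTORY_ENTRY_EXPORT.symbols:
    print(symbol.name.decode() if symbol.name else f"ordinal {symbol.ordinal}")
```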
To further understand the details, you might want to read a write-up on leveraging this technique to escalate privileges described by Parvez Anwar here and to achieve persistence described by Nick Harbour here. Microsoft also gives guidance to developers on how to make applications more resistant to this attacks here.
Considering the REGEDIT.EXE example, we can see where the DLLs are loaded from on a pristine system using a Microsoft Windows debugger such as CDB.EXE. There we can see that CLB.DLL is loaded from %SYSTEMROOT%\System32.
We now have an understanding of how that DLL file might have been loaded. DLL side-loading is a clever technique for loading malicious code and is often abused either to escalate privileges or to achieve persistence. We found evidence of it using the Prefetch artifacts, but without Prefetch — e.g., on a Windows Server — this wouldn't be so easy to find, and we might need to rely on other sources of evidence, as we saw in previous articles. Based on the evidence we observed, we consider that the attacker used the DLL side-loading technique to hijack CLB.DLL and execute malicious code when REGEDIT.EXE was invoked. However, we could not find this DLL file on our system. We will need to look deeper and use different tools and techniques to find evidence about it and answer the question we raised at the beginning. This will be the topic of the upcoming article!
Luttgens, J., Pepe, M., Mandia, K. (2014) Incident Response & Computer Forensics, 3rd Edition
Carvey, H. (2014) Windows Forensic Analysis Toolkit, 4th Edition
Russinovich, M. E., Solomon, D. A., & Ionescu, A. (2012). Windows internals: Part 1
Russinovich, M. E., Solomon, D. A., & Ionescu, A. (2012). Windows internals: Part 2 | <urn:uuid:d6472e87-8289-4313-80d4-af86a67276f9> | CC-MAIN-2017-09 | https://countuponsecurity.com/2016/05/24/digital-forensics-dll-search-order/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00455-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.920313 | 1,332 | 2.640625 | 3 |
Many people have a great deal of faith in the security of their desktop computers and operating systems. But there is a constant stream of evidence that shows that this faith is unfounded.
Take some time and read about what is contained in your operating system’s next update. How many ‘patch Tuesdays’ have gone by without a security update? It’s terrific that the operating system vendors have been investing in creating those patches, but keep in mind that at any given time there are entire categories of vulnerabilities in your operating system that attackers know about. If you are being targeted, you can be successfully attacked.
In March, eight groups of security researchers at the Pwn2Own contest earned a total of $850,000 by publicly hacking just about every major browser technology. Do you browse the Web or read PDF files? There is a very high probability that the technology you use was compromised during this competition. Thankfully, many patches for these vulnerabilities have been released, but that’s not the point. At any given time, these researchers are aware of other unreleased vulnerabilities. And so are the criminal organizations.
So, if we can’t completely trust our desktop operating systems, what can we do? You probably use social media sites that implemented two-step verification. Twitter’s implementation utilizes their mobile app to authenticate when using new computers. Google Authentication is a mobile app that issues a short-lived, one-time passcode that is utilized for authentication.
There are other examples and most utilize smartphones to enhance authentication security. This is a very good trend. When you are considering second-factor authentication technologies, be sure to match the strength of the technology to the risk level of the resource you are trying to protect.
In a corporate environment, there is an even greater need for second-factor authentication. In an environment of almost any complexity, you may need to continue to use legacy authentication systems alongside stronger modern authentication. The ability to match risk to authentication strength is therefore a consideration for efficiency.
The ability to scale and manage the identities being protected is an important consideration, and this is where identity management systems are vital. If the authentication technique is hard to use, your users may not respond to it. Smartphones can play a strong role as either soft tokens or mobile smart credentials. Strong security and easier usability can be delivered together. | <urn:uuid:fd9a9861-78a7-4b99-892d-7c0ffaa57e68> | CC-MAIN-2017-09 | https://www.entrust.com/desktop-security-need-second-factor-authentication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00047-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947732 | 481 | 2.5625 | 3 |
Social platform attacks target websites with large user bases, such as Facebook, LinkedIn, Twitter, and Instagram. A majority of current attacks simply use the social platforms as a delivery mechanism, and have been modeled after the older Koobface malware. However, researchers are now anticipating that advanced attacks against social media networks will be able to leverage a user’s contacts, location, and even business activities. This information can then be used to develop targeted advertising campaigns toward specific users, or even help spark crime in the virtual or real world.
Most often, social platform attacks are able to breach users’ accounts by stealing their authentication credentials upon login. This information is then used to discreetly pull personal data from users’ online friends and colleagues. A recent Stratecast study states that 22% of social media users have fallen victim to a security-related incident, and recent documented attacks support the numbers. The Pony botnet affected Facebook, Google, Yahoo, and other social media users, stealing more than two million user passwords. Facebook estimates that anywhere from 50-100 million of its monthly active user accounts are fake duplicates, and as many as 14 million of those are "undesirable" on the site.
Another social media attack expected to take hold in 2014 is the "false flag" attack, which tricks a user into revealing personal information or authentication credentials under the guise of the site itself, typically through an "urgent" request to reset a password. When the user complies, the attacker captures the username and new password and uses them to harvest personal information about the user. Users should remain alert to any "urgent" request from a site to reset a password.
Enterprises are also expected to leverage social platforms for "reconnaissance attacks" either directly or through third parties to collect valuable user and organization information about rivals. This data can provide businesses with a competitive edge in future business endeavors, and these attacks are expected to climb in 2014.
To prevent social media breaches, protect user information, and secure company data, increased vigilance by individual users and enterprise policies are the best ways to ensure data breaches are avoided. | <urn:uuid:ca8f3c1c-01f9-4ad5-a83b-92f1d8f9ae0e> | CC-MAIN-2017-09 | https://www.mcafee.com/de/security-awareness/articles/how-cybercriminals-target-social-media-accounts.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00223-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.928213 | 422 | 2.65625 | 3 |
A Virus is a small program that embeds itself into other programs. When those other programs are executed, the virus is also executed, and attempts to copy itself into more programs. In this way, it spreads in a manner similar to a biological virus.
Viruses, by definition, can "infect" any executable code. Accordingly, they are found on floppy and hard disk boot sectors, executable programs, macro languages and executable electronic mail attachments.
Some viruses are self-modifying, in order to make detection more difficult. Such viruses are called polymorphic ("many shapes").
Understanding the IEEE 802.11ac Wi-Fi Standard
IEEE 802.11ac, also known as Gigabit Wi-Fi, is the latest Wi-Fi standard that builds upon 802.11n with improved data rates, network robustness, reliability, and RF bandwidth utilization efficiency.
This white paper provides an overview of the emerging 802.11ac Wi-Fi standard, explains how the technology will be introduced, and describes the importance of 802.11ac Wi-Fi in the enterprise network.
This White Paper on IEEE 802.11ac Wi-Fi Standard Covers:
Introduction to 802.11ac Wi-Fi standard: IEEE 802.11ac will not necessarily require “rip and replace”
Inside IEEE 802.11ac: IEEE 802.11ac is not only for high powered, gigabit-per-second clients
Status of 802.11ac: IEEE 802.11ac will not replace 802.11n
Enterprise 802.11ac Standard Deployment Considerations: IEEE 802.11ac is for consumers and enterprise networks as well
To know more about the IEEE 802.11ac Wi-Fi standard, read this white paper titled “Understanding the IEEE 802.11ac Wi-Fi Standard”. | <urn:uuid:8d876887-a7fc-4882-9419-2bf288fdb8d6> | CC-MAIN-2017-09 | http://wireless.cioreview.com/whitepaper/ieee-80211ac-wifi-standard-wid-36.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00275-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.798954 | 254 | 3.03125 | 3 |
How does security apply to Cloud Computing? In this article, we address this question by listing the five top security challenges for Cloud Computing, and examine some of the solutions to ensure secure Cloud Computing.
Organizations and enterprises are increasingly considering Cloud Computing to save money and to increase efficiency. However, while the benefits of Cloud Computing are clear, most organizations continue to be concerned about the associated security implications. Due to the shared nature of the Cloud, where one organization's applications may be running on the same hardware and databases as another firm's, Chief Security Officers (CSOs) must recognize that they do not have full control of these resources and consequently must question the inherent security of the Cloud. However, it is important to note that Cloud Computing is not fundamentally insecure; it just needs to be managed and accessed in a secure way.
All Cloud Models Are Not the Same
Although the term Cloud Computing is widely used, it is important to note that not all Cloud Models are the same. As such, it is critical that organizations don't apply a broad-brush, one-size-fits-all approach to security across all models. Cloud Models can be segmented into Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). When an organization is considering Cloud security, it should consider both the differences and similarities between these three segments of Cloud Models:
SaaS: this model is focused on managing access to applications. For example, policy controls may dictate that a salesperson can only download particular information from the sales CRM application, such as only the leads within certain geographies or during local office working hours. In effect, the security officer needs to focus on establishing controls regarding users' access to applications.
PaaS: the primary focus of this model is on protecting data. This is especially important in the case of storage as a service. An important element to consider within PaaS is the ability to plan for the possibility of an outage at a Cloud provider. The security operation should provide for load balancing across providers to ensure failover of services in the event of an outage. Other key considerations are the ability to encrypt the data while it is stored on a third-party platform and awareness of the regulatory issues that may apply to data availability in different geographies.
IaaS: within this model the focus is on managing virtual machines. The CSO's priority is to overlay a governance framework that enables the organization to put controls in place regarding how virtual machines are created and spun down, thus avoiding uncontrolled access and potentially costly wastage.
The following check-list of Cloud Security Challenges provides a guide for Chief Security Officers who are considering using any or all of the Cloud models. Note, some of these issues can be seen as supplementing some of the good work done by the Cloud Security Alliance, in particular their paper from March 2010 Top Threats to Cloud Computing [PDF link].
For CSOs focused on PaaS
Challenge #1: Protect private information before sending it to the Cloud
There are already many existing laws and policies in place which disallow the sending of private data onto third-party systems. A Cloud Service Provider is another example of a third-party system, and organizations must apply the same rules in this case. It's already clear that organizations are concerned at the prospect of private data going to the Cloud. The Cloud Service Providers themselves recommend that if private data is sent onto their systems, it must be encrypted, removed, or redacted. The question then arises: how can the private data be automatically encrypted, removed, or redacted before sending it up to the Cloud Service Provider? Encryption, in particular, is a CPU-intensive process which threatens to add significant latency to the process.
Any solution implemented should broker the connection to the Cloud Service and automatically encrypt any information an organization doesn't want to share via a third party. For example, this could include private or sensitive employee or customer data such as home addresses or social security numbers, or patient data in a medical context. CSOs should look to provide for on-the-fly data protection by detecting private or sensitive data within the message being sent up to the Cloud Service Provider, and encrypting it such that only the originating organization can decrypt it later. Depending on the policy, the private data could also be removed or redacted from the originating data, but then re-inserted when the data is requested back from the Cloud Service Provider.
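To make this concrete, here is a minimal sketch of the detect-and-encrypt step such a broker might perform. It assumes the third-party Python cryptography package and uses an illustrative U.S. Social Security number pattern; a real broker would use organization-managed keys and much richer detection rules:

```python
import re
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # in practice, a key the organization controls
fernet = Fernet(key)
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # illustrative pattern only

def redact_for_cloud(message: str) -> str:
    """Encrypt SSN-like tokens so only the originator can decrypt them later."""
    return SSN.sub(lambda m: fernet.encrypt(m.group().encode()).decode(), message)

payload = redact_for_cloud("Employee 123-45-6789 updated her address.")
# `payload` can now go to the Cloud Service Provider; the original value
# can be restored on the way back with fernet.decrypt().
```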
For CSOs focused on SaaS
Challenge #2: Don't replicate your organization in the Cloud
Large organizations using Cloud services face a dilemma. If they have thousands of employees using Cloud services, must they create thousands of mirrored users on the Cloud platform? Providing single sign-on between on-premises systems and the Cloud removes this requirement.
Users with multiple passwords are also a potential security threat and a drain on IT Help Desk resources. The risks and costs associated with multiple passwords are particularly relevant for any large organization making its first foray into Cloud Computing and leveraging applications or SaaS. If an organization has 10,000 employees, it is very costly to have the IT department assign new passwords for each individual user's access to Cloud Services. And when a user forgets the password for the SaaS service and resets it, they now have an extra password to take care of.
By leveraging single sign-on capabilities, an organization can enable a user to access both the desktop and any Cloud Services via a single password. In addition to preventing security issues, there are significant cost savings to this approach. For example, single sign-on users are less likely to lose passwords, reducing the assistance required by IT helpdesks. Single sign-on also helps with the provisioning and de-provisioning of passwords. If a new user joins or leaves the organization there is only a single password to activate or deactivate, versus having multiple passwords to deal with. In a nutshell, the dangers of not having single sign-on for the Cloud are increased exposure to security risks, the potential for increased IT Help Desk costs, and dangling accounts left open to rogue usage after users leave the organization.
For CSOs focused on PaaS
Challenge #3: Keep an Audit Trail
Usage of Cloud Services is on a paid-for basis, which means that the finance department will want to keep a record of how the service is being used. The Cloud Service Providers themselves provide this information, but in the case of a dispute it is important to have an independent audit trail. Audit trails provide valuable information about how an organization's employees are interacting with specific Cloud services, legitimately or otherwise!
The end-user organization could consider a Cloud Service Broker (CSB) solution as a means to create an independent audit trail of its cloud service consumption. Once armed with their own records of cloud service activity, the CSO can confidently address any concerns over billing or verify employee activity. A CSB should provide reporting tools to allow organizations to actively monitor how services are being used. There are multiple reasons why an organization may want a record of Cloud activity, which leads us to the issue of Governance.
For CSOs focused on IaaS
Challenge #4: Governance: Protect yourself from rogue cloud usage and redundant Cloud providers
The classic use case for Governance in Cloud Computing is when an organization wants to prevent rogue employees from mis-using a service. For example, the organization may want to ensure that a user working in sales can only access specific leads and does not have access to other restricted areas. Another example is that an organization may wish to control how many virtual machines can be spun up by employees, and, indeed, that those same machines are spun down later when they are no longer needed. So-called "rogue" Cloud usage must also be detected, so that an employee setting up their own accounts for using a Cloud service is detected and brought under an appropriate governance umbrella.
Whilst Cloud Service providers offer varying degrees of cloud service monitoring, an organization should consider implementing its own Cloud service governance framework. The need for this independent control is of particular benefit when an organization is using multiple SaaS providers, e.g., HR services, ERP and CRM systems. However, in such a scenario the CSO and Chief Technology Officer (CTO) also need to be aware that different Cloud Providers have different methods of accessing information. They also have different security models on top of that.
Some use REST, some use SOAP and so on. For security, some use certificates, some use API keys, which we'll examine in the next section. Some simply use basic HTTP authentication. The problem that needs to be solved is that these cloud service providers all present themselves very differently. So, in order to use multiple Cloud Providers, organizations have to overcome the fact they are all different at a technical level.
Again, that points to the solution provided by a Cloud Broker, which brokers the different connections and essentially smooths over the differences between them. This means organizations can use various services together. In situations where a service is relatively commoditized, like storage as a service, providers can be used interchangeably. This solves the issue of what to do if a Cloud Provider becomes unreliable or goes down, and means the organization can spread usage across different providers. In fact, organizations should not have to get into the technical weeds of understanding or mediating between different interfaces. They should be able to move up a level where they are using the Cloud for the benefit of saving money.
For CSOs focused on SaaS, PaaS and IaaS
Challenge #5: Protect your API Keys
Many Cloud services are accessed using simple REST Web Services interfaces. These are commonly called "APIs", since they are similar in concept to the more heavyweight C++ or Java APIs used by programmers, though they are much easier to leverage from a Web page or from a mobile phone, hence their increasing ubiquity. "API Keys" are used to access these services. These are similar in some ways to passwords: they allow organizations to access the Cloud Provider. For example, if an organization is using a SaaS offering, it will often be provided with an API Key. The protection of these keys is very important.
Consider the example of Google Apps. If an organization wishes to enable single sign-on to their Google Apps (so that their users can access their email without having to log in a second time) then this access is via API Keys. If these keys were to be stolen, then an attacker would have access to the email of every person in that organization.
The casual use and sharing of API keys is an accident waiting to happen. Protection of API Keys can be performed by encrypting them when they are stored on the file system, or by storing them within a Hardware Security Module (HSM).
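A minimal sketch of the file-system option follows. It assumes a master Fernet key (generated once with Fernet.generate_key()) stored in a file named master.key; in production that master key would live in an HSM rather than on disk, and the file names here are illustrative:

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

master = Fernet(Path("master.key").read_bytes())  # master key; ideally in an HSM

def store_api_key(name: str, api_key: str) -> None:
    """Persist an API key encrypted at rest."""
    Path(name + ".enc").write_bytes(master.encrypt(api_key.encode()))

def load_api_key(name: str) -> str:
    """Decrypt the key only in memory, immediately before the cloud call."""
    return master.decrypt(Path(name + ".enc").read_bytes()).decode()

store_api_key("google-apps", "AIza-example-not-a-real-key")
print(load_api_key("google-apps"))
```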
Conclusion: Homemade or Off-the-shelf?
When implementing a security framework to address these challenges, the CSO is faced with a buy vs. build option. They could engage developers to put together open source components to build Cloud Service Broker-like functionality from scratch. This approach creates the runtime components of a broker, such as routing to a particular Cloud Service Provider. However, other components of the solution, such as reporting and an audit trail, may not be present. An off-the-shelf Cloud Service Broker product will provide these extra features as standard and should also provide support for all the relevant WS-Security standards at a minimum.
As the Cloud Security Alliance notes in its Security Guidance White Paper: "Cloud Computing isn't necessarily more or less secure than your current environment. As with any new technology, it creates new risks and new opportunities. In some cases moving to the cloud provides an opportunity to re-architect older applications and infrastructure to meet or exceed modern security requirements. At other times the risk of moving sensitive data and applications to an emerging infrastructure might exceed your tolerance." I hope this article provides sufficient data points to guide readers on their journey.
Many people confuse data privacy and data security. While there are similarities, privacy and security are not the same thing. Data security focuses on the confidentiality, integrity and availability of information and information technology resources, whereas data privacy is about an individual’s ability to retain control over his or her personally identifiable information (PII).
As individuals, we should ensure we are responsible “digital citizens” when using the Internet. Part of this responsibility includes understanding how to configure and manage the privacy settings for the Internet services that we use. This includes social networking services like Facebook and Twitter. Social networking services tend to change their privacy options frequently, so it is important to ensure you understand how you have configured the privacy settings for the social networking services you use.
In the case of Facebook, they have recently introduced a powerful new search feature called Facebook Graph Search. This new feature improves the ability to search and find information; however, it can increase the likelihood that other people will find your information through search if your privacy settings aren't configured correctly. Make sure your settings are configured so that your personal information (posts, photos, likes, etc.) doesn't end up as a search result for someone you don't wish to have access to your data. The EFF has an informative article about how to protect your Facebook privacy from the new Graph Search.
In addition to social networking, many of us are now using applications on our smartphones and tablets. Some of these applications are able to access private data from the device on which they run. One example of this is "location settings" for applications. Allowing the application to know your location can improve the application's functionality and ease of use, but it can also put your privacy at risk. Many devices have the capability to restrict an application's ability to determine the user's geographical location (also known as "geolocation"). Mobile devices often use a built-in GPS along with wireless hotspot proximity to determine location. You should carefully consider sharing geolocation information with applications, especially on devices used by minors. Decide which applications should have access to location services and disable access for all others. Does the game app you're playing really need to know where you're physically located? Think about it.
Geolocation privacy concerns are not limited to apps though, as most smart phones include built-in cameras that have the ability to include geolocation metadata in each digital photograph captured by the device. Unless you disable the location awareness setting for the phone’s camera, every photo you take and share will contain geolocation metadata that can be examined by anyone with whom you share the photo.
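If you want to keep location data out of photos you have already taken, the GPS block can be stripped from a JPEG's EXIF metadata before sharing. The sketch below assumes the third-party piexif package and a hypothetical file name:

```python
import piexif  # pip install piexif

def strip_gps(path: str) -> None:
    """Remove only the GPS tags from a JPEG's EXIF metadata, in place."""
    exif = piexif.load(path)
    if exif.get("GPS"):
        exif["GPS"] = {}                        # drop latitude/longitude tags
        piexif.insert(piexif.dump(exif), path)  # write the cleaned EXIF back

strip_gps("vacation.jpg")  # hypothetical photo; other EXIF fields survive
```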
With the explosive growth in the number of applications available, it shouldn't be surprising that some of them have been discovered to have software defects with unintended consequences for privacy. Here is a case in point: a recently popular mobile application, "Crazy Blind Date," coordinates blind dates ("Pick a time, pick a place, we find you a blind date") and claims to keep your personal contact information, such as your phone number and email address, confidential. However, the Wall Street Journal discovered that due to a programming mistake, technically inclined users of the service were able to access the profile information of other users (including birth date and email address). The developer of the application, OKCupid.com, promptly fixed the problem after being informed by the Wall Street Journal.
It is important to be aware that applications may have access to your privacy information, and that there is potential for unintentional disclosure of this information, either as a result of software defects, or improper configurations.
For individuals, the key points in regard to electronic data privacy are:
- Understand the services and devices you’re using to make certain you know how your privacy data is, or isn’t, being shared electronically.
- Take time to review the settings for the Internet services and devices you and your family members use.
- Think about what information you are comfortable sharing, and the impact of the improper disclosure of the information you’ve shared.
Businesses and data privacy | <urn:uuid:34e9a017-2382-4503-b8a0-dee819458d2f> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2013/01/29/privacy-tips-for-social-networking-apps-and-geolocation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00627-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9423 | 873 | 3.28125 | 3 |
The Department of Transportation’s support will help advance vehicle-centric information and communication technologies, but it must offer clearer V2V communication guidelines and timelines to be useful to the industry.
On 3 February 2014, the U.S. Department of Transportation's (DOT's) National Highway Traffic Safety Administration (NHTSA) announced that it will take steps to enable vehicle-to-vehicle (V2V) technology for passenger cars and light trucks. Later this year, NHTSA will publish a research report that analyzes the DOT’s research findings on technical feasibility, privacy and security, and preliminary estimates on costs and safety benefits. NHTSA will then work on a regulatory proposal that would require V2V devices in new vehicles at a future date.
The DOT announcement is the strongest signal thus far that the U.S. government plans to use IT to improve road traffic safety and utilization, and to reduce environmental impact. The DOT's action is the earliest globally among government entities, and is likely to:
- Accelerate the public policy decision-making of other countries that are evaluating or piloting technologies.
- Positively impact other technologies that will affect automobiles and road infrastructure, including e-call functions, vehicle-to-infrastructure communications and self-driving vehicles.
The DOT's announcement does create uncertainty that will require clarification or further action:
- It lacks a deployment timeline and is short on details regarding the pending communication technology requirements, which the automotive industry requires to create consumer-targeted offerings. As a result, the final bill may be watered down or offer limited or delayed benefits. The technologies are likely to center on the dedicated short-range communication (DSRC) protocol, but might also include Wi-Fi and cellular spectrums to incorporate infrastructure and pedestrian notifications.
- The initial mandate calls for the V2V communication technology to trigger warnings (such as a flashing light to warn drivers when another vehicle runs a red light), rather than utilizing active safety systems (such as automatic braking). Making the driver responsible for reacting to traffic problems limits the systems' effectiveness and appeal to consumers, and makes them more difficult for automakers to market and price advantageously.
- Even if the DOT addresses the announcement's shortcomings, V2V communication benefits will not be fully realized for years, until vehicles that can communicate with each other attain critical mass on the roads. This makes a government mandate for automakers' compliance critical. If adoption is widespread, safety benefits will be apparent within eight years; in approximately 15 years, nearly all U.S. vehicles would include V2V technology.
Automakers, automotive suppliers and technology providers:
- Work with regulators to define the implementation timeline for V2V based on your in-vehicle technology road map. For example, integrate a DSRC-based communication module with a connectivity offering that includes other advanced driver assistance system functions, especially active safety systems, and communication technologies. Appeal to drivers' need to extend their digital lifestyle to the vehicle, which will increase their willingness to pay for the technology, as V2V capability by itself offers fewer initial benefits.
- Seek ways to offset the cost of enabling V2V communication by meeting other regulatory requirements. For example, V2V technologies can help address fuel efficiency mandates by improving road utilization and minimizing vehicle congestion.
- Leverage U.S.-centric efforts to comply with V2V mandates globally and to expand technology usage scenarios. For example, use DSRC for automated payments for road tolling and parking, to enable automakers to tap into commercial transactions initiated from within the vehicle.
Cyber attacks generally refer to criminal activity conducted via the Internet.
These attacks can include stealing an organization's intellectual property, hijacking online bank accounts, creating and distributing viruses on other computers, posting confidential business information on the Internet and disrupting a country's critical national infrastructure.
The focus of this report is to quantify the economic impact of cyber attacks and observe cost trends over time.
Consistent with the previous two US studies, the loss or misuse of information is the most significant consequence of a cyber attack, and it comes at significant financial cost. Based on this finding alone, organizations need to be more vigilant in protecting their most sensitive and confidential information. | <urn:uuid:75a24c5d-9286-4c6a-aebd-9c2ce82b49bc> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2013/09/20/cost-of-cyber-crime-study-united-states/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00091-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.917556 | 136 | 2.953125 | 3 |
When it comes to security, most mobile devices are a target waiting to be attacked. That's pretty much the conclusion of a report to Congress on the status of the security of mobile devices this week by watchdogs at the Government Accountability Office.
Combine the lack of security with the fact that mobile devices are being targeted by cybercriminals and you have a bad situation. For example, the number of variants of malicious software aimed at mobile devices has reportedly risen from about 14,000 to 40,000 or about 185% in less than a year, the GAO stated.
"Mobile devices face an array of threats that take advantage of numerous vulnerabilities commonly found in such devices. These vulnerabilities can be the result of inadequate technical controls, but they can also result from the poor security practices of consumers," the GAO stated. "Private [companies] and relevant federal agencies have taken steps to improve the security of mobile devices, including making certain controls available for consumers to use if they wish and promulgating information about recommended mobile security practices. However, security controls are not always consistently implemented on mobile devices, and it is unclear whether consumers are aware of the importance of enabling security controls on their devices and adopting recommended practices."
The GAO report came up with a list of mobile vulnerabilities it says are common to all mobile platforms, and it offered a number of possible fixes for the weaknesses. From the report:
" Mobile devices often do not have passwords enabled. Mobile devices often lack passwords to authenticate users and control access to data stored on the devices. Many devices have the technical capability to support passwords, personal identification numbers (PIN), or pattern screen locks for authentication. Some mobile devices also include a biometric reader to scan a fingerprint for authentication. However, anecdotal information indicates that consumers seldom employ these mechanisms. Additionally, if users do use a password or PIN they often choose passwords or PINs that can be easily determined or bypassed, such as 1234 or 0000. Without passwords or PINs to lock the device, there is increased risk that stolen or lost phones' information could be accessed by unauthorized users who could view sensitive information and misuse mobile devices.
" Two-factor authentication is not always used when conducting sensitive transactions on mobile devices. According to studies, consumers generally use static passwords instead of two-factor authentication when conducting online sensitive transactions while using mobile devices. Using static passwords for authentication has security drawbacks: passwords can be guessed, forgotten, written down and stolen, or eavesdropped. Two-factor authentication generally provides a higher level of security than traditional passwords and PINs, and this higher level may be important for sensitive transactions. Two-factor refers to an authentication system in which users are required to authenticate using at least two different "factors" something you know, something you have, or something you are before being granted access. Mobile devices can be used as a second factor in some two-factor authentication schemes. The mobile device can generate pass codes, or the codes can be sent via a text message to the phone. Without two-factor authentication, increased risk exists that unauthorized users could gain access to sensitive information and misuse mobile devices.
" Wireless transmissions are not always encrypted. Information such as e-mails sent by a mobile device is usually not encrypted while in transit. In addition, many applications do not encrypt the data they transmit and receive over the network, making it easy for the data to be intercepted. For example, if an application is transmitting data over an unencrypted WiFi network using http (rather than secure http), the data can be easily intercepted. When a wireless transmission is not encrypted, data can be easily intercepted.
" Mobile devices may contain malware. Consumers may download applications that contain malware. Consumers download malware unknowingly because it can be disguised as a game, security patch, utility, or other useful application. It is difficult for users to tell the difference between a legitimate application and one containing malware. For example, an application could be repackaged with malware and a consumer could inadvertently download it onto a mobile device. the data can be easily intercepted. When a wireless transmission is not encrypted, data can be easily intercepted by eavesdroppers, who may gain unauthorized access to sensitive information.
" Mobile devices often do not use security software. Many mobile devices do not come preinstalled with security software to protect against malicious applications, spyware, and malware-based attacks. Further, users do not always install security software, in part because mobile devices often do not come preloaded with such software. While such software may slow operations and affect battery life on some mobile devices, without it, the risk may be increased that an attacker could successfully distribute malware such as viruses, Trojans, spyware, and spam to lure users into revealing passwords or other confidential information.
" Operating systems may be out-of-date. Security patches or fixes for mobile devices' operating systems are not always installed on mobile devices in a timely manner. It can take weeks to months before security updates are provided to consumers' devices. Depending on the nature of the vulnerability, the patching process may be complex and involve many parties. For example, Google develops updates to fix security vulnerabilities in the Android OS, but it is up to device manufacturers to produce a device-specific update incorporating the vulnerability fix, which can take time if there are proprietary modifications to the device's software. Once a manufacturer produces an update, it is up to each carrier to test it and transmit the updates to consumers' devices. However, carriers can be delayed in providing the updates because they need time to test whether they interfere with other aspects of the device or the software installed on it.
In addition, mobile devices that are older than two years may not receive security updates because manufacturers may no longer support these devices. Many manufacturers stop supporting smartphones as soon as 12 to 18 months after their release. Such devices may face increased risk if manufacturers do not develop patches for newly discovered vulnerabilities.
" Software on mobile devices may be out-of-date. Security patches for third-party applications are not always developed and released in a timely manner. In addition, mobile third-party applications, including web browsers, do not always notify consumers when updates are available. Unlike traditional web browsers, mobile browsers rarely get updates. Using outdated software increases the risk that an attacker may exploit vulnerabilities associated with these devices.
" Mobile devices often do not limit Internet connections. Many mobile devices do not have firewalls to limit connections. When the device is connected to a wide area network it uses communications ports to connect with other devices and the Internet. A hacker could access the mobile device through a port that is not secured. A firewall secures these ports and allows the user to choose what connections he wants to allow into the mobile device. Without a firewall, the mobile device may be open to intrusion through an unsecured communications port, and an intruder may be able to obtain sensitive information on the device and misuse it.
" Mobile devices may have unauthorized modifications. The process of modifying a mobile device to remove its limitations so consumers can add features (known as "jailbreaking" or "rooting") changes how security for the device is managed and could increase security risks. Jailbreaking allows users to gain access to the operating system of a device so as to permit the installation of unauthorized software functions and applications and/or to not be tied to a particular wireless carrier. While some users may jailbreak or root their mobile devices specifically to install security enhancements such as firewalls, others may simply be looking for a less expensive or easier way to install desirable applications. In the latter case, users face increased security risks, because they are bypassing the application vetting process established by the manufacturer and thus have less protection against inadvertently installing malware. Further, jailbroken devices may not receive notifications of security updates from the manufacturer and may require extra effort from the user to maintain up-to-date software.
" Communication channels may be poorly secured. Having communication channels, such as Bluetooth communications, "open" or in "discovery" mode (which allows the device to be seen by other Bluetooth-enabled devices so that connections can be made) could allow an attacker to install malware through that connection, or surreptitiously activate a microphone or camera to eavesdrop on the user. In addition, using unsecured public wireless Internet networks or WiFi spots could allow an attacker to connect to the device and view sensitive information.
The GAO report went on to state that connecting to an unsecured WiFi network could let an attacker access personal information from a device, putting users at risk for data and identity theft. One type of attack that exploits the WiFi network is known as man-in-the-middle, where an attacker inserts himself in the middle of the communication stream and steals information.
So what can be done to secure mobile devices? The GAO report offers a number of ideas including:
" Enable user authentication: Devices can be configured to require passwords or PINs to gain access. In addition, the password field can be masked to prevent it from being observed, and the devices can activate idle-time screen locking to prevent unauthorized access.
" Enable two-factor authentication for sensitive transactions: Two-factor authentication can be used when conducting sensitive transactions on mobile devices. Two-factor authentication provides a higher level of security than traditional passwords. Two-factor refers to an authentication system in which users are required to authenticate using at least two different "factors" something you know, something you have, or something you are before being granted access. Mobile devices themselves can be used as a second factor in some two-factor authentication schemes used for remote access. The mobile device can generate pass codes, or the codes can be sent via a text message to the phone. Two-factor authentication may be important when sensitive transactions occur, such as for mobile banking or conducting financial transactions.
" Verify the authenticity of downloaded applications: Procedures can be implemented for assessing the digital signatures of downloaded applications to ensure that they have not been tampered with.
" Install antimalware capability: Antimalware protection can be installed to protect against malicious applications, viruses, spyware, infected secure digital cards,b and malware-based attacks. In addition, such capabilities can protect against unwanted (spam) voice messages, text messages, and e-mail attachments.
" Install a firewall: A personal firewall can protect against unauthorized connections by intercepting both incoming and outgoing connection attempts and blocking or permitting them based on a list of rules.
" Install security updates: Software updates can be automatically transferred from the manufacturer or carrier directly to a mobile device. Procedures can be implemented to ensure these updates are transmitted promptly.
" Remotely disable lost or stolen devices: Remote disabling is a feature for lost or stolen devices that either locks the device or completely erases its contents remotely. Locked devices can be unlocked subsequently by the user if they are recovered.
" Enable encryption for data stored on device or memory card: File encryption protects sensitive data stored on mobile devices and memory cards. Devices can have built-in encryption capabilities or use commercially available encryption tools.
" Enable whitelisting: Whitelisting is a software control that permits only known safe applications to execute commands.
" Establish a mobile device security policy: Security policies define the rules, principles, and practices that determine how an organization treats mobile devices, whether they are issued by the organization or owned by individuals. Policies should cover areas such as roles and responsibilities, infrastructure security, device security, and security assessments. By establishing policies that address these areas, agencies can create a framework for applying practices, tools, and training to help support the security of wireless networks.
" Provide mobile device security training: Training employees in an organization's mobile security policies can help to ensure that mobile devices are configured, operated, and used in a secure and appropriate manner.
" Establish a deployment plan: Following a well-designed deployment plan helps to ensure that security objectives are met.
" Perform risk assessments: Risk analysis identifies vulnerabilities and threats, enumerates potential attacks, assesses their likelihood of success, and estimates the potential damage from successful attacks on mobile devices.
" Perform configuration control and management: Configuration management ensures that mobile devices are protected against the introduction of improper modifications before, during, and after deployment.
This story, "The 10 most common mobile security problems and how you can fight them" was originally published by Network World. | <urn:uuid:2e25a784-fdde-41ee-be9f-d83b2371cdb6> | CC-MAIN-2017-09 | http://www.itworld.com/article/2721318/mobile/the-10-most-common-mobile-security-problems-and-how-you-can-fight-them.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00495-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925797 | 2,546 | 2.546875 | 3 |
Amidst internal and external security threats, natural disasters, hacking attempts and technological changes, banks and service providers today are constantly faced with the possibilities of data loss, security breaches and breaks in business continuity. These institutions are being asked more frequently than ever what plans they have in place for speedy recovery should systems be compromised. Following a number of hard-hitting storms in the United States, including Hurricane Sandy and the devastation wrought on the Midwest following recent tornadoes, attention is focused on preparing for a recovery after natural disasters. Though preparing for natural impact is important, it becomes easy to forget there is just as much, if not more, potential for malicious manmade threats from a security and technology perspective.
All disaster recovery efforts, whether they are for natural disasters or security threats, must ultimately be tested for efficiency and reliability. While banks across the board conduct regular tests, the way in which these tests are conducted is crucial to determining a bank’s true ability to recover in the event of a disaster. In most instances, testing can be considered either static or dynamic. Most disaster recovery tests currently conducted are static in nature, meaning they are crafted to be sterile and built for success, to allow banks to ‘prove’ they have the ability and tools needed to succeed in the event of disruption. In these instances, banks and service providers are able to conduct tests and prove they have a perfect fail-over recovery system in place. The issue here is that these tests are rarely built to actually mimic any real disaster.
An alternative to static testing is dynamic testing. In this instance, banks implement tests that stress their systems, processes and procedures to provide a more accurate look at how disaster recovery systems in place may work in the event of true disruption. These tests are designed to push bank systems to their limits and are undoubtedly more difficult. The risk with dynamic tests is that by adding more variability, more uncertainty and more issues requiring resolution, the likelihood of institutions being able to complete the tests and prove complete fail-over is more complicated. The benefit is that because these tests are designed to evaluate systems and processes in the most real-world, worst-case scenarios, institutions learn a great deal about the true ability of their disaster recovery plans. As a result, they are able to make necessary adjustments to better prepare themselves for prospective disaster. Though peppered with potential for test failure, the benefits of dynamic testing strongly outweigh potential perception risk.
Another important aspect of a sound disaster recovery infrastructure is the ability to deal with and rapidly recover from denial of service attacks, which have quickly become one of the largest, most common threats to banks over the recent years. These attacks, often from overseas, can easily infiltrate thousands of computers and overwhelm entire networks and servers, rendering sites useless until service can be restored. Banks need multiple layers of protection to be best prepared for these seemingly random attacks. This starts with an institution’s ISP and includes hardware and software at all data centers, as protecting each piece is an imperative part of being prepared for these potential attacks. Particularly for smaller regional and community banks, finding a vendor solution provider that can provide the best technological capabilities and tools for intrusion detection and prevention is extremely important.
Finally, in addition to regular testing and security measures, continual education of IT personnel is also a key factor in ensuring banks and service providers are properly prepared. While testing aims to stress systems and processes in case of a disaster, investing in a knowledgeable IT staff can actually serve as a preventative measure. In both small and large banks alike, regular employee education and training is an important step in the disaster recovery process as many technological threats derive from virus-infected emails, links and other Trojan horses employees may encounter.
Modern advancements in technology have increased the general expectation that services provided by banks and service providers are invincible, secure and always available. These institutions are expected to find ways to keep the lights on even through the storm. With regular dynamic testing, investments in security technologies, and staff education, banks and service providers will be best prepared to face threats of manmade or natural disasters.
Danne Buchanan is EVP, Head of North America Operations for Fundtech. | <urn:uuid:cf0c4e2f-faec-4551-b371-549556955677> | CC-MAIN-2017-09 | http://www.banktech.com/disaster-recovery-test-invest-and-educate/a/d-id/1296398?cid=sbx_banktech_related_commentary_default_us_sec_chairman_mary_schapiro_to_step_do&itc=sbx_banktech_related_commentary_default_us_sec_chairman_mary_schapiro_to_step_do | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00615-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957466 | 841 | 2.546875 | 3 |
The Department of Homeland Security's Federal Emergency Management Agency announced new guidance which provides communities additional time to gather data needed to assess the protective capabilities of levees while still allowing new Flood Insurance Rate Maps to be released on time.
"When levees fail, they fail catastrophically. The flooding may be much more intense and damaging than if the levee was not there," said David Maurstad, FEMA's Mitigation Director and Federal Insurance Administrator. "No levee system will provide full protection from floods. Levees are designed to provide a specific level of protection, and they can be overtopped in larger flood events. People need to be aware of the risks they face living behind levees -- including levees credited as providing protection from the one percent annual chance flood." | <urn:uuid:705adc89-706f-4909-b16f-68dfc5f83499> | CC-MAIN-2017-09 | http://www.govtech.com/policy-management/FEMA-Clarifies-Policy-on-Mapping-Areas.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00615-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941873 | 157 | 2.515625 | 3 |
Using sudo to Keep Admins Honest? sudon't!
The consensus among many Unix and Linux users seems to be that sudo is more secure than using the root account, because it requires you type your password to perform potentially harmful actions. While sudo is useful for what it was designed for, this thinking is flawed, and usually comes from inexperience.
The concept behind sudo is to give non-root users access to perform specific tasks without giving away the root password. It can also be used to log activity, if desired. Similar functionality can be found in operating systems with role-based access control (RBAC). Solaris 10, for instance, has greatly improved RBAC capabilities, so you can easily give a junior admin access to Web server restart scripts with the appropriate access levels. And while Linux has recently acquired RBAC capabilities through the integration of SELinux, sudo remains in common use, even though more widespread use of RBAC will eventually make it a redundant choice.
Sudo is supposed to be configured to allow a certain set of people to run a very limited set of commands, as a different user. Unfortunately, sysadmins and home users alike have begun using sudo for everything. Instead of running 'su' and becoming root, they believe that 'sudo' plus 'command' is a better alternative. Most of the time, sysadmins with full sudo access just end up running 'sudo bash' and doing all their work from that root shell. This is a problem.
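For contrast, here is the kind of narrowly scoped configuration sudo was actually designed for. The group name and script path are illustrative, and entries like these would be added with visudo:

```
# /etc/sudoers (illustrative entries; edit with visudo)

# Members of the webteam group may restart the web server as root,
# on any host, and run nothing else:
%webteam  ALL = (root) /usr/local/bin/restart-httpd.sh

# The pattern this article argues against: a blanket grant that turns
# a user password into full root access.
# %admins  ALL = (ALL) ALL
```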
Using a user account password to get a root shell is a bad idea.
Why is there a separate root account anyway? It isn't to simply protect you from your own mistakes. If all sysadmins just become root using their user password by running sudo bash, then why not just give them uid 0 (aka root) and be done with it? For a group of sysadmins, the only reason they should want to use sudo is for logging of commands. Unfortunately, this provides zero additional security or auditing, because an attacker would just run a shell. If sysadmins are un-trusted such that they need to be audited, they shouldn't have root access in the first place.
Surprisingly, the home-user rational makes its way into the workplace as well. The recurring argument is that running a root shell is dangerous. Partially to blame for this grave misunderstanding are X login managers, for allowing the root user to login. New users are always scolded and told that running X as root is wrong. The same goes for many other applications, too. As time progressed, people started remembering that "running as root" is wrong, passing this notion down to their children, but without any details. Now that Ubuntu Linux doesn't enable a root account by default, but instead allows full root access to the user via sudo, the world will never be the same.
People praise sudo, while demeaning Windows at the same time for not having any separation of privileges by default. The answer to security clearly is a multi-user system with privilege separation, but sudo blurs these lines in its most common usage. The Ubuntu usage of sudo simply provides a hoop to jump through, requiring users to type their password more often than they'd like. Of course this will prevent a user's web browser from running something as root, but it isn't security.
We'd really like to focus on the enterprise, where sudo has very little place.
The sudo purists, or sudoists, we'll call them, would have you run sudo before every command that requires root. Apparently running 'sudo vi /etc/resolv.conf' is supposed to make you remember that you're root, and prevent mistakes. Sudoists will also say that it protects against "accidentally left open root shells" as well. If there are accidental shells left on computers with public access, well that's an HR action item.
Sudo doubters will quickly point out that using sudo without specifically defined commands in the configuration file is a security risk. Sudoist user account passwords have root access, so in essence, sudo has undone all the security mechanisms in place. SSH doesn't allow root to log in, but with sudo, a compromised user password removes that restriction.
In a true multi-user environment, every so often a root compromise will happen. If users can login, they can eventually become root, and that's just a fact of life. The first thing any old-school cracker installs is a hacked SSH program, to log user passwords. Ideally, this single hacked machine doesn't have any sort of trust relationship with other computers, because users are allowed access. The next time an administrator logs into the hacked machine, his user account is compromised. Generally this isn't a big deal, but with sudo, this means a complete root compromise, probably for all machines. Of course SSH keys can help, as will requiring separate passwords for administrators on the more important (non user accessible) servers; but if they're willing to allow their user account access to unrestricted root-level commands, then it's unlikely that there's any other security in place elsewhere.
As we mentioned, sudo has its place. Allowing a single command to be run with elevated privileges in an operating system that doesn't support such things is quite useful. Still, be very careful about who gets this access, even for one item. As with all software, sudo isn't without bugs.
No matter where you choose to fit sudo into your workflow, do not use it for full root access. Administrators keep separate, non-UID 0 accounts for a reason, and it's not for "limiting the mistakes." Everything should be done from a root shell, and you should have to know an uber-secret root password to access anything as root. | <urn:uuid:19741665-728d-4017-8070-9446b8111f68> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/3641911/Using-sudo-to-Keep-Admins-Honest--sudont.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00491-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953513 | 1,193 | 2.78125 | 3 |
If time marches on, computing marches up. Currently, in the terascale and early petascale era, we are seeing thousands of processors on a given machine. Connecting all these processors requires even more hardware. And the more hardware there is, the greater the odds of component failure. Such is the subject of an article at Scientific Computing. Author Doug Baxter urges his audience to think about accommodating hardware failure by redesigning the software.
Hardware fault-tolerance measures are in use today, but the drawbacks are many. Predicting when hardware is about to fail, making it hot-swappable, and proactively rescheduling software running on parts about to fail are all current ways to deal with the problem of faulty hardware. These methods are helpful, but only for hardware that is actively monitored. Another workaround is hardware redundancy, but the expense can make it impractical. There's checkpoint restarting, but the cost and logistics issues involved with checkpointing massive volumes of distributed memory can cancel out the benefits.
It is for these reasons that Baxter recommends looking to the software design community to achieve fault-tolerant computing. He reports that researchers have started working on this goal and categorizes their efforts into two groups: data-centric software and process-centric software. Baxter proceeds to explore a process-centric strategy. In order for process-centric HPC codes to accommodate hardware failures, Baxter says that there must first be a shift in software design paradigms and a discarding of outmoded assumptions. Some examples of the latter are that input/output operations never fail and are relatively inexpensive, and that communications calls always succeed. But the idea that Baxter sets himself to debunking, and one he says is particularly entrenched, is that a consistent set of resources is available for the duration of a computation. He goes on to make his case in detail, including possible pitfalls with suggested solutions.
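As a rough illustration of what such a dynamically scheduled design might look like, here is a minimal Python sketch (not from Baxter's article): a task pool reissues work lost to failed workers, so the computation completes despite a modest number of failures. The failure rate and names are invented for illustration.

```python
import random

def run_on_worker(worker, task):
    """Pretend to execute a task; occasionally a worker 'dies' mid-task."""
    if random.random() < 0.02:                 # simulated hardware failure
        raise RuntimeError(f"worker {worker} failed")
    return task * task                         # stand-in for real computation

def schedule(tasks, workers):
    results, pending = {}, list(tasks)
    while pending:
        task = pending.pop()
        worker = random.choice(workers)
        try:
            results[task] = run_on_worker(worker, task)
        except RuntimeError:
            if len(workers) > 1:               # retire the failed node
                workers = [w for w in workers if w != worker]
            pending.append(task)               # reissue the task elsewhere
    return results

print(len(schedule(range(100), workers=list(range(8)))))  # all 100 tasks complete
```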
In the end, Baxter calls for the software developer community to “design locally synchronized, dynamically scheduled, and hierarchically managed applications that can complete computations despite the expected modest number of hardware component failures.” Imagine an application that can sense a hardware failure and just work around it, like a car avoiding a large pothole, able to continue to its destination.
Is the Supply Chain Ready for 3D Printing?
The word "printer" typically conjures up the image of a box-like machine that spits out flat, one-dimensional paper versions of what someone is looking at on his or her computer screen. That perception could shift over the next few years as 3D printing becomes more affordable and accessible for a larger number of users.
Traditionally priced at $10,000 and up, 3D printers turn digital models into three-dimensional objects. Those objects are made by "layering" plastic, metal, or other materials - a process that distinguishes 3D printing from machining equipment (which uses cutting and drilling techniques to "remove" materials).
Thermal cameras use infrared radiation to form images, and the technology has become more affordable as it advances. Thermal cameras are an excellent addition to an advanced surveillance system. Because they only see heat signatures, they are not considered a replacement for a conventional security camera; rather, a thermal IP camera complements a security system by allowing a user to see in extreme weather. For example, traditional network cameras cannot see through heavy rain or smoke, while a thermal camera can see through dense rain, smog, and clouds of smoke. Because the camera detects heat signatures, it is difficult for intruders to avoid its field of view, and it can capture movement at long distances where a network camera might see nothing. Thermal cameras do not use lighting, so they are 100% discreet and draw no extra electricity for a light source.

These are specialized products for applications demanding an extra level of security. Thermal imaging devices can be used together with advanced detection software to create virtual perimeters around high-security areas like airports, refineries, chemical plants, and other sites that require demanding surveillance at the fence line. Thermal cameras are used on fences and even on helicopters for ground surveillance. Some more advanced thermal imaging devices can accurately calculate distance based on a heat signature, and a heightened body temperature or a car that is off but still warm can be detected by thermal network cameras. Please contact our systems specialist for assistance.
Design with Multiplexers
Consider the following design, taken from the 5th edition of my textbook.
It is a correct implementation of the Carry–Out of a Full Adder.
In terms of Boolean expressions, this is F(X, Y, Z) = S(3, 5, 6, 7).
We try this with a common circuit emulator, such as Multi-Media Logic, and find that there is more to think about.
An Eight–to–One MUX in Multi–Media Logic
Here is the circuit element selected in the Multi–Media Logic tool.
It is an 8–to–1 MUX with inputs labeled 7 through 0, or equivalently X7 through X0. This is as expected.
The selector (control) lines are as expected: 2 through 0. In my notes, I use M for the output of the multiplexer. This figure uses the symbol Y (not a problem) and notes that real multiplexers also output the complement.
The only issue here is the enable. Note that the MUX is enabled low; this signal must be set to ground (logic 0) in order for the multiplexer to function as advertised.
Carry–Out of a Full Adder
Here is a screen shot of my implementation of F(X, Y, Z) = S(3, 5, 6, 7).
NOTE: Show simulation here.
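Since the simulation itself is not shown here, a small Python sketch (not part of the original notes) models the enabled-low 8–to–1 MUX and verifies the wiring: data inputs 3, 5, 6, and 7 are tied to logic 1, the rest to 0, and the output matches the carry-out XY + XZ + YZ.

```python
# Data inputs X7..X0 wired for F(X, Y, Z) = S(3, 5, 6, 7).
INPUTS = [1 if i in (3, 5, 6, 7) else 0 for i in range(8)]

def mux8(inputs, s2, s1, s0, enable_bar=0):
    if enable_bar:                       # enabled low: E' = 1 disables the MUX
        return 0
    return inputs[(s2 << 2) | (s1 << 1) | s0]

# Verify against the carry-out of a full adder: XY + XZ + YZ.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert mux8(INPUTS, x, y, z) == (x & y) | (x & z) | (y & z)
print("MUX implements the carry-out correctly")
```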
Gray Codes: Minimal Effort Testing
Consider the above circuit with three basic inputs S2, S1, S0.
How can one test all possible inputs with minimum switching?
One good answer is to use Gray Codes for input. Here are the 2–bit and 3–bit codes.
To generate an (N + 1)–bit code set from an N–bit code set:
1. Write out the N–bit codes with 0 as a prefix, then
2. Write out the N–bit codes in reverse with 1 as a prefix.
Thus 00, 01, 11, 10 becomes 000, 001, 011, 010, 110, 111, 101, and 100.
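For readers who want to generate the codes programmatically, here is a short Python sketch (not part of the original notes) of the same prefix-and-reflect rule:

```python
def gray(n):
    """Return the n-bit Gray code as a list of bit strings."""
    if n == 1:
        return ["0", "1"]
    prev = gray(n - 1)
    # Prefix the N-bit codes with 0, then the reversed codes with 1.
    return ["0" + c for c in prev] + ["1" + c for c in reversed(prev)]

print(gray(2))  # ['00', '01', '11', '10']
print(gray(3))  # ['000', '001', '011', '010', '110', '111', '101', '100']
```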
Testing the Carry–Out Circuit
If the Enable switch is set to 1, the MUX is disabled and the output is always 0, so Y’ = 1.
Set the Enable switch to 0 and generate the following sequence.
Start with S2 = 0, S1 = 0, S0 = 0. 0 0 0
Click S0 to get 0 0 1
Click S1 to get 0 1 1
Click S0 to get 0 1 0
Click S2 to get 1 1 0
Click S0 to get 1 1 1
Click S1 to get 1 0 1
Click S0 to get 1 0 0
Design with Decoders
We now look at another circuit from my textbook. This shows the implementation of a Full Adder with an active high decoder and two OR gates. The outputs are:
F1 is the Sum and F2 is the Carry–Out.
F1(A, B, C) = S(1, 2, 4, 7) = P(0, 3, 5, 6)
F2(A, B, C) = S(3, 5, 6, 7) = P(0, 1, 2, 4)
PROBLEM: Almost all commercial decoders are active low.
Active Low Decoders
Let’s use 3–to–8 decoders to describe the difference between active high and active low.
In the active–high decoder, the active output is set to +5 volts (logic 1), while the other outputs are set to 0 volts (logic 0).
In the active–low decoder, the active output is set to 0 volts (logic 0), while the other outputs are set to +5 volts (logic 1).
Enabled Low, Active Low Decoders
All commercial decoders have an enable input; most are enabled low.
Because the decoder is enabled low, when the input signal E’ = 1, none of the decoder outputs is active. Since the decoder is active low, this means that all of the outputs are set to logic 1 (+5 volts).
Because the decoder is enabled low, when the input signal E’ = 0, the decoder is enabled and the selected output is active. Since the decoder is active low, this means that the selected output is set to logic 0, and all other outputs are set to logic 1.
Why Active Low / Enabled Low?
This is a conjecture, but it makes sense to me.
The active–high decoder provides power to the device it enables. The active–low decoder just provides a path to ground for the device it enables. It is likely that the active–low approach yields a faster circuit.
Back To Active High: A Look At F2
Seeking a gate that outputs 1 if at least one of its inputs is 1, we are led to the OR gate.
Active Low: F2(X, Y, Z) = P(0, 1, 2, 4)
F2 is 1 if and only if none of the outputs Y0, Y1, Y2, or Y4 is selected. Each of those outputs must then be a logic 1. This leads to an AND gate implementation.
Full Adder Implemented with a 3–to–8 Decoder
The sum is at top: F1(X, Y, Z) = P(0, 3, 5, 6).
The carry–out is at bottom: F2(X, Y, Z) = P(0, 1, 2, 4).
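A small Python sketch (again, not in the original notes) models the enabled-low, active-low 3–to–8 decoder and verifies that ANDing the listed outputs yields the sum and the carry-out:

```python
def decoder3to8(a, b, c, enable_bar=0):
    """Active-low outputs: the selected line is 0, all others 1; disabled => all 1."""
    sel = (a << 2) | (b << 1) | c
    return [1 if (enable_bar or i != sel) else 0 for i in range(8)]

def and_all(bits):
    out = 1
    for bit in bits:
        out &= bit
    return out

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            q = decoder3to8(x, y, z)
            f1 = and_all([q[0], q[3], q[5], q[6]])   # sum: P(0, 3, 5, 6)
            f2 = and_all([q[0], q[1], q[2], q[4]])   # carry-out: P(0, 1, 2, 4)
            assert f1 == x ^ y ^ z
            assert f2 == (x & y) | (x & z) | (y & z)
print("decoder + AND gates implement the full adder")
```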
Where are the Decoders?
One will note that the Multi–Media Logic tool does not provide a decoder circuit.
Fortunately, a 1–to–2^N demultiplexer can be made into an N–to–2^N decoder.
Look at the circuit to the left. The control signals C1, C0 select the output to receive the input. This is exactly equivalent to a decoder.
In the circuit at right, the selected output gets the input, now called “Enable”. For the demultiplexers we use, the other outputs get a logic 1. Thus we can fabricate an active–low decoder.
The DEMUX as an Active–Low Decoder
Here is the 2–to–4 demultiplexer as a 2–to–4 active–low decoder.
This is an answer to one of the homework problems: use a 2–to–4 decoder to implement XOR.
The function is either S(1, 2) or P(0, 3).
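A quick Python check of this claim (a sketch, not part of the original notes):

```python
# XOR from a 2-to-4 active-low decoder: F = S(1, 2) = P(0, 3).
def decoder2to4(a, b):
    sel = (a << 1) | b
    return [0 if i == sel else 1 for i in range(4)]   # active-low outputs

for a in (0, 1):
    for b in (0, 1):
        q = decoder2to4(a, b)
        assert (q[0] & q[3]) == a ^ b   # AND the P(0, 3) outputs
print("XOR realized from the 2-to-4 decoder")
```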
OpenSSL is an open source toolkit that implements the Secure Socket Layer (SSL) versions 2 and 3 protocols, the Transport Layer Security (TLS) version 1 protocol as well as a general use cryptography library.
A buffer overflow vulnerability was found in vulnerable versions of OpenSSL's SSL_get_shared_ciphers() function. A malicious user could cause this overflow condition by sending a specially crafted list of ciphers to applications linked to OpenSSL that make use of this function. Few programs are known to make use of this function, and it is often only available when programs are built with debugging symbols. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2006-3738 to this issue.
OpenSSL's client code was found to contain a flaw when handling SSLv2 connections. A malicious server could possibly cause the client to crash when negotiating a connection. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2006-4343 to this issue.
OpenSSL's ability to parse ASN.1 formatted data structures was found to contain two Denial of Service (DoS) vulnerabilities. The first ASN.1 issue related to parsing of particular public key types, which could take excessive amounts of time to parse and cause a DoS. The second ASN.1 issue related to trying to parse invalid ASN.1 data structures, which would lead to an infinite loop condition consuming excessive amounts of system memory, potentially creating a DoS. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2006-2940 and CVE-2006-2937 respectively, to these issues.
More information about these vulnerabilities can be found in the security advisory issued by Red Hat.
| Product | Affected Version(s) | Risk Level | Actions |
| Avaya S87XX/S8500/S8300 | All | Medium | Upgrade to Communication Manager 3.1.4 or later. |
| Avaya Intuity LX | 1.x | Medium | Upgrade to Intuity LX 2.0 or later. |
| Avaya Messaging Storage Server | All | Medium | Upgrade to MSS 3.1 or later. |
| Avaya Message Networking | All | Medium | Upgrade to MN 3.1 or later. |
| Avaya CCS/SES | 3.1.1 | Medium | Upgrade to SES 5.0 or later. |
For all system products which use vulnerable versions of openssl, Avaya recommends that customers restrict local and network access to the server. This restriction should be enforced through the use of physical security, firewalls, ACLs, VPNs, and other generally-accepted networking practices until such time as an update becomes available and can be installed.
Avaya software-only products operate on general-purpose operating systems. Occasionally vulnerabilities may be discovered in the underlying operating system or applications that come with the operating system. These vulnerabilities often do not impact the software-only product directly but may threaten the integrity of the underlying platform.
In the case of this advisory Avaya software-only products are not affected by the vulnerability directly but the underlying Linux platform may be. Customers should determine on which Linux operating system the product was installed and then follow that vendors guidance:
| Product | Affected Version(s) | Risk Level | Actions |
| Avaya Interactive Response (IR) | All | None | Depending on the Operating System provided by customers, the affected package may be installed on the underlying Operating System supporting the IR application. The IR application does not require the software described in this advisory. |
| CVLAN | All | Medium | See recommended actions below. |
| Avaya Integrated Management Suite (IMS) | All | Medium | See recommended actions below. |
Avaya recommends that customers follow recommended actions supplied by the Operating System vendor (e.g. RedHat Linux).
Additional information may also be available via the Avaya support website and through your Avaya account representative. Please contact your Avaya product support representative, or dial 1-800-242-2121, with any questions.
ALL INFORMATION IS BELIEVED TO BE CORRECT AT THE TIME OF PUBLICATION AND IS PROVIDED "AS IS". AVAYA INC., ON BEHALF ITSELF AND ITS SUBSIDIARIES AND AFFILIATES (HEREINAFTER COLLECTIVELY REFERRED TO AS "AVAYA"), DISCLAIMS ALL WARRANTIES, EITHER EXPRESS OR IMPLIED, INCLUDING THE WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND FURTHERMORE, AVAYA MAKES NO REPRESENTATIONS OR WARRANTIES THAT THE STEPS RECOMMENDED WILL ELIMINATE SECURITY OR VIRUS THREATS TO CUSTOMERS' SYSTEMS. IN NO EVENT SHALL AVAYA BE LIABLE FOR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE INFORMATION OR RECOMMENDED ACTIONS PROVIDED HEREIN, INCLUDING DIRECT, INDIRECT, CONSEQUENTIAL DAMAGES, LOSS OF BUSINESS PROFITS OR SPECIAL DAMAGES, EVEN IF AVAYA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE INFORMATION PROVIDED HERE DOES NOT AFFECT THE SUPPORT AGREEMENTS IN PLACE FOR AVAYA PRODUCTS. SUPPORT FOR AVAYA PRODUCTS CONTINUES TO BE EXECUTED AS PER EXISTING AGREEMENTS WITH AVAYA.
V 1.0 - October 12, 2006 - Initial Statement issued.
V 2.0 - September 14, 2007 - Updated Actions for Avaya S87XX/S8500/S8300, Intuity LX, MSS and MN.
V 3.0 - July 24, 2008 - Updated SES recommended actions and ASA status.
Send information regarding any discovered security problems with Avaya products to either the contact noted in the product's documentation or email@example.com.
© 2006 Avaya Inc. All Rights Reserved. All trademarks identified by the ® or ™ are registered trademarks or trademarks, respectively, of Avaya Inc. All other trademarks are the property of their respective owners.
This is the first of several articles discussing the various technologies and design criteria used for HPC systems. Building computer systems of any sort, but especially very large systems, is somewhat akin to the process an apartment real-estate developer goes through. The developer has to have an idea of what the final product will look like, its compelling features, and the go-to-market strategy.
Do they build each unit the same, or provide some level of heterogeneity with different floor plans? Do they construct one monolithic building or a village with walkways? What level of customization, if any, should be permitted?
In contemporary HPC design we face similar decision-making. Do we build tightly coupled systems, emphasizing floating point and internode bandwidth, or do we build nodes with extensive multi-threading that can randomly reference data sets? In either case, we always need to scale out as much as possible.
And finally we have the marketing picture of the system under a beautiful clouded-blue sky with mountains or lake in the background. Since we intend to market to international buyers, we have to figure out which languages to support on our marketing web site. Almost forgot: do we sell these systems outright or base our financial model on a timeshare condo arrangement?
Is programming an HPC system equivalent to the above? For example, there's a choice to be made between extending existing languages or creating new ones. Are the languages domain-specific, unique to a particular application space, like HTML, Verilog, or SQL? Or do we add new features to existing languages, like global address space primitives, as in UPC?
For this initial piece, we will discuss these design issues in the context of "big data." It seems reasonable to suggest that building an exaOPS system for big data is different from building an exaFLOPS machine for technical applications. But is it? While clearly the applications are different, that doesn't necessarily mean the underlying architecture has to be as well.
The following table compares some of the characteristics of OPS versus FLOPS at the node level.
Examining the attributes listed above would initially lead one to the observation that there are substantive differences between the two. However, looking at a hardware logic design reveals a somewhat different perspective. Both systems need as much physical memory as can be directly supported, subject to cooling and power constraints. Both systems also would like as much real memory bandwidth as possible.
For both systems, the logic used by the ALUs tends to be minimal. Thus the amount of actual space used for a custom-design floating point ALU is relatively small. This is especially true when one considers that 64×64 integer multiplication is an often-used primitive for address calculation in big data and HPC applications. In many cases, integer multiplication is part of the design of an IEEE floating point ALU.
If we dig a little deeper, we come to the conclusion that the major gating item is sustained memory bandwidth and latency. We have to determine how long it takes to access an operand and how many can be accessed at once. Given a specific memory architecture, we need to figure out the best machine state model for computation. Is it compiler-managed registers using the RAM that would normally be associated with an L3 cache, or do we keep scaling a floor plan similar to the one below?
The overriding issue is efficiency. We can argue incessantly about this. As the datasets get bigger, the locality of references — temporal and spatial — decreases and the randomness of references increase. What are the solutions?
In HPC classic, programmers (and some compilers) generate code that explicitly blocks the data sets into cache, typically the L2 private or L3 shared cache. This technique tends to work quite well for traditional HPC applications. Its major deficiencies are the extra coding work and the lack of performance portability among different cache architectures.
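As a sketch of the blocking idea, here is a tiled matrix multiply in Python. Production HPC codes would use a tuned BLAS, and the block size shown is an arbitrary placeholder that must be matched to the actual cache.

```python
def matmul_tiled(A, B, n, block=64):
    """Multiply n x n matrices A and B, tiling so each block-sized working
    set of A, B, and C can stay resident in cache between reuses."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            for kk in range(0, n, block):
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + block, n)):
                            C[i][j] += a * B[k][j]
    return C

# Minimal usage:
n = 4
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
assert matmul_tiled(I, I, n, block=2) == I
```

The deficiency the text mentions is visible here: the best value of `block` depends on the cache sizes of the target machine, so the code is not performance-portable.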
Several techniques, especially the ones supported by the auto-tune capabilities of LAPACK, work quite well for many applications that manipulate dense matrices. Consequently, the memory systems are block-oriented and support is inherent in the memory controllers of all contemporary microprocessors.
For big data, however, accesses are relatively random, and this block approach tends not to work. As a function of the data structure — a tree, a graph, a string — different approaches are used to make memory references more efficient.
Additionally, for big data work, performance is measured in throughput of transactions or queries per second and not FLOPS. Coincidentally, perhaps, the optimal memory structure is HPC classic, meaning, highly interleaved, word-scatter/gather-oriented main memory. This was the approach used in Cray, Convex, Fujitsu, NEC, and Hitachi machines.
There is another interesting dynamic of cache- or register-based internal processor storage: power consumption and design complexity. While not immediately obvious, for a given amount of user-visible machine state, a cache has additional transistors for maintaining its transparency, which translates into additional power consumption.
For example, there is storage for tags and logic for the comparison of generated address tags with stored cache tags. There is additional logic required for the control of the cache. It is difficult to quantify the incremental power required, but it is incremental.
Another aspect of cache versus internal state, especially for big data, is the reference pattern. Random references have poor cache hit characteristics. But if the data can be blocked, then the hit rate increases substantially. The efficiency of managing large amounts of internal machine state is proportional to the thread architecture.
We have to determine if we have lots of threads with reasonable size register sets, or a smaller number of threads, like a vector machine, with a large amount of machine state. The latter approach places a burden on physical memory design.
Attaching private L1 and L2 caches per core is relatively straightforward and scales as the number of cores increases. A shared L3 cache increases the complexity of the internal design. We need to trade off bandwidth, simultaneous accesses, latency, and cache coherency. The question that needs to be asked is whether we are better off using internal static RAM for compiler-managed data registers per core/thread.
Obviously both memory structures have their own cost/performance tradeoffs. A cache-based memory system tends to be more cost-effective, but of lower performance. The design of the memory subsystem is easier, given that off-the-shelf DRAM DIMMS are commercially available.
The HPC classic architecture results in higher performance and is applicable to a wider range of applications. The available memory bandwidth is more effectively used, and operands are only loaded and stored when needed; there is no block size to deal with.
In summary, this article discusses the single-node processor architecture for data-centric and conventional high performance computing. There are many similarities and many differences. The major divergence is in the main memory reference model and interface. Data caches were created decades ago, but it's not clear that this architecture is still optimal. Will Hybrid Memory Cube (HMC) and Processor in Memory (PIM) architectures make tradeoffs for newer designs that move away from the traditional memory designs? Time will tell.
The next article will discuss the design approaches for global interconnects.
In the previous installment, we looked at and discussed strategies for business simulation and the infrastructure needed to make such initiatives successful. Now, we’re ready to discuss some practical examples of business simulation. Imagine a mail order company selling products together with the necessary financing. Assume they’re considering replacing one of their credit risk models while at the same time trying to boost sales of a certain widget. Let’s further assume that their overall decision strategy to determine the best product to offer to a customer may be overruled in circumstances where the risk model determines that additional selling is not desirable because the underlying loan is too likely to default.
In the example, this company makes two changes to its in-production decision strategy. First, they replace the existing credit risk model with a new version. Second, they multiply the outcome of the propensity model for the widget by some factor greater than 1 to make it more likely to be prioritized as the product to offer. As the next step they want to apply this revised strategy to a selection of the recorded data. As stated above, every single product recommendation and every credit risk evaluation has been recorded. Because their mail order business sells fashion in addition to other products, it is sensitive to seasons. So to understand the new strategy’s effect during the summer they decide to apply the new strategy to last year’s interactions over the same period and study the deltas.
Slice and dice data
Once that slice of recorded data has been loaded, the company may take a sample from it. With so many millions of interactions recorded, a large enough sample will be representative of all of them. They will then proceed to apply the revised strategy, with all its predictive propensity models, risk models, and rules, and look at the distribution of the results. How many more widgets will be sold? It's possible to simulate this because the company is using propensity models to predict the likelihood of a customer accepting an offer for a widget. Thus, the change they made to boost the offer rate of the widget should produce more (simulated) interactions where the widget is offered and accepted by the customer. Unless, that is, widgets are expensive and it turns out the new risk model will reject more widget offers in favor of lower-prioritized products (per the new strategy) that keep the company's exposure within the desired bandwidth.
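A minimal sketch of this replay loop in Python follows. The field names, boost factor, and risk threshold are hypothetical stand-ins for the company's real models, and the synthetic data exists only so the sketch runs end to end.

```python
import random

def revised_strategy(interaction, boost=1.3, risk_cutoff=0.8):
    """Re-score one recorded interaction under the revised strategy."""
    scores = dict(interaction["propensities"])
    scores["widget"] *= boost                 # the "Go Widget" change
    if interaction["risk_score"] > risk_cutoff:
        return None                           # new risk model overrules selling
    return max(scores, key=scores.get)        # best product to offer

# Synthetic stand-in for last summer's recorded interactions.
recorded = [{"propensities": {"widget": random.random(), "shoes": random.random()},
             "risk_score": random.random()} for _ in range(100_000)]

sample = random.sample(recorded, 10_000)      # a representative sample
offers = [revised_strategy(i) for i in sample]
print("widget offers:", offers.count("widget"),
      "| overruled by risk:", offers.count(None))
```

Comparing these counts against the same sample scored with the old strategy gives exactly the deltas the text describes.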
The company can thus study both metrics. How many widgets would we have sold if this had been the marketing strategy used during last year’s summer season? And how many write-offs on the financing would have been the result of using the new risk strategy alongside the new, Go Widget, sales strategy? If the metrics show favorable improvements the new strategy can be taken into production. If not, the marketing and sales teams and their colleagues from the risk department can tweak their strategies and see if it makes the desired difference when applied to last year’s interactions.
Cause and effect
This simulation is not perfect. For instance, last year's economy may have been worse than this year's, allowing more customers to pay back their loans now. Unless some economic data is part of the credit risk strategy, the overall strategy will not be sensitive to it and the simulation will therefore miss it. A causal chain of events is also increasingly hard to predict. If the revised strategy would have offered product X instead of Y to a customer, the actual service interaction about a problem with product Y, which is part of the recorded data, wouldn't have happened. So while it's quite possible to predict the one-time effects of a strategy change, simulating the downstream effects of those new outcomes quickly becomes less useful conjecture. There are other caveats as well, a bit too detailed to cover here. However, don't compare this with a hypothetical oracle that can tell you exactly how your strategy will fare; compare it to the common practice of making changes and hoping for the best.
The more explicit a company is around the decision strategies that govern its processes – customer processes or otherwise – the fewer surprises. And when those decisions are based on predictive analytics and carefully recorded data, it becomes possible to simulate future business outcomes by replaying the past, making the effect of changes, even in complex strategies, more predictable.
Distributed Denial of Service attacks are nothing new, but they're becoming more and more common, from politically motivated attacks on financial and government institutions to recent attacks on data centers like Digital Ocean. In a DDoS attack, hackers use hijacked computers to flood servers with incoming requests, essentially shutting down services by clogging network traffic or sending mass quantities of junk data. These attacks are increasingly difficult to defend against as they grow in scale, and because they are distributed among various infected machines, it can be difficult to block traffic based on IP address.
Public institutions, financial industries, eCommerce sites, and hosting providers are among the most popular targets, but anyone can be a victim—and if your IT infrastructure is hosted in a data center, you need that facility to provide strong DDoS mitigation to avoid service interruptions of your own.
These days, SYN or HTTP GET flood attacks are very common ways to overload firewalls or IPS systems and make the servers behind them unresponsive. Network switches and servers do not have the resources to respond to every incoming request and therefore begin to drop network packets from any incoming source. The DDoS source traffic can come from either volunteered computers (scoundrels!), a single computer masquerading as many IP addresses, or, as is most common, a botnet of hijacked computers.
A SYN flood attack uses SYN packets, which are the first packet sent to a server to request a connection. This is part of the standard “handshake,” and the server would normally respond with a SYN-ACK message. With a SYN flood, the connecting client does not respond with ACK, causing the server to wait for a response. SYN floods are a type of Bulk Volumetric attack.
Other Bulk Volumetric attacks include ICMP packet floods, which send "PING" commands; TCP/UDP floods, which send traffic to open network ports like TCP 81; fragment floods, which send fragmented packets; anomalous packet floods, which send error scripts within network packets; and DNS amplification, which uses the DNS EDNS0 protocol to amplify the attack. This last example uses public Domain Name Service servers: the attacker sends DNS lookups while pretending to be the target server, so the DNS servers reply to the target.
HTTP GET is an Application Layer attack, which is smaller and more targeted, going after Layer 7 of the OSI model, the top layer of network traffic, rather than the Layer 3 network traffic targeted by Bulk Volumetric attacks. HTTP GET exploits the process of a web browser or other HTTP client making an HTTP request, either GET or POST, to an application or server. Attackers must have some knowledge of their target, as they will usually request the most resource-intensive process. These attacks are hard to defend against because they use standard URL requests rather than broken scripts or huge volumes.
ISPs have DDoS protection at Layer 3 and Layer 4 (network traffic), but that ignores the more targeted Layer 7 attacks, and total coverage is not guaranteed.
DDoS service providers exist. Usually they will reroute your incoming traffic through their own systems and “scrub” it against known attack vectors. They might scan for suspicious traffic from uncommon sources or geolocations, or reroute your legitimate traffic away from botnet sources.
Most modern firewalls and Intrusion Protection Systems (IPS) offer DDoS defense abilities as well. These can take the form of a single device scanning all incoming traffic, or distributed devices or software at the server level. Dedicated DDoS appliances are also available and may offer better protection against Layer 7 attacks.
Network scanning and traffic monitoring with alerts can also help you catch a DDoS attack early and take action to avoid total service loss.
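As a sketch of such alerting, the following Python counts SYN packets per source over a time window and flags sources that exceed a threshold. The packet feed is abstract and the window and threshold values are hypothetical; a real monitor would hook into pcap captures or flow records and tune these numbers to its baseline traffic.

```python
import time
from collections import defaultdict

WINDOW = 10.0      # seconds per counting window (hypothetical)
THRESHOLD = 200    # SYNs per source per window before alerting (hypothetical)

counts = defaultdict(int)
window_start = time.time()

def on_syn(src_ip):
    """Call once for every inbound SYN observed by the capture layer."""
    global window_start
    now = time.time()
    if now - window_start > WINDOW:
        counts.clear()                 # roll over to a fresh window
        window_start = now
    counts[src_ip] += 1
    if counts[src_ip] > THRESHOLD:
        print(f"ALERT: possible SYN flood from {src_ip}")
```

A single-source threshold like this catches naive floods; distributed botnet traffic would additionally need aggregate-rate and geolocation checks like those described above.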
Once you have a DDoS protection system in place, you’ll want to test it before it comes under fire. The first step to take is to identify attack vectors and key applications. What ports are open? What bandwidth do you have available to you? Where are likely network bottlenecks? What critical systems need additional protection?
Note areas of your infrastructure that are vulnerable based on their reliance on other systems—like a central database that could take down functionality for several applications if it is overloaded.
There are a variety of open source software tools you can use to test DDoS mitigation, as well as hardware options that can reach multi-Gigabit traffic levels. However, hardware options are expensive. A professional white hat security firm may be able to offer testing as a service.
DDoS attacks are certainly an annoyance, but with some preparation, you can be ready to intercept or respond to them quickly and avoid service interruptions for your users.
A multi-core system is a computer with multiple central processing units (CPUs) or cores that are unified into one package, with the cores independent of each other. The cores perform basic computing tasks such as running programs, managing data, and executing instructions. The difference between single-core computer systems (e.g., Intel Pentium 4/AMD Athlon 64 FX-55) and multi-core systems is that multi-core systems can comfortably run multiple programs and instruction streams at the same time, increasing the speed and agility of the computer. The cores are usually fitted onto a single integrated circuit die (known as a chip multiprocessor or CMP), or onto multiple dies in a single chip package. Since 2005, single-core systems have almost become history.
Information and communication technology has evolved very fast in the past five years. Computer chip manufacturers are constantly targeting higher clock speeds and transistor density by putting more processing cores and hardware threads on each chip. A multi-core processor could have two cores, e.g., dual-core CPUs such as the Intel Celeron Dual-Core, the world's first dual-core processor for entry-level computers; three cores, as with the AMD Phenom II X3; four to eight cores, as with the IBM Power 7 series; and so on. High-end multi-core processors such as the Intel Xeon Phi series have as many as 57 to 61 processing cores. Multi-core processors are indispensable for many computer application domains such as graphics processing units (GPUs), digital signal processing (DSP), and much more.
Multi-core processors are ideal for use in servers because they boost the number of users that can share server resources simultaneously. Servers also have independent threads of execution that enables web and application servers to have much better throughput.
Advantages of Multi-Core Processor
The location of multiple processing cores on the same die means the cache coherency circuitry can operate at higher clock speeds than if the signals had to be sent off chip. Signals between the cores travel a shorter distance and are therefore less likely to degrade, permitting more data to be transferred in a session without the signal having to be amplified as often.
Since multiple cores are fitted into one die package, a multi-core CPU requires less printed circuit board (PCB) surface than two single-core chips coupled together.
A multi-core processor uses less power than two coupled single-core processors, because less power is required to drive electrical pulses back and forth between chips.
Multiple cores share common circuitry, the L2 cache, and the front-side bus (FSB). Multi-core systems also make better use of proven CPU core library architectures, producing a system with a lower risk of design error.
Multi-core processors deliver higher performance at lower power. This advantage makes it possible to use multi-core processors in battery-powered mobile devices.
Disadvantages of Multi-Core Processor
Before a multi-core processor can be used on any device, both the device’s operating system and existing application software have to be adapted to suit it.
Multi-core processors don’t just improve system performance on their own. The ability of multi-core processors to improve system performance largely depends on the utilization of multiple threads by the applications.
Heat production in multi-core systems, particularly in mobile devices, is hard to manage.
Technically, raw processing power is not the only factor required to boost system performance. Other factors such as memory, motherboard circuitry, cache, and on-board bandwidth also play important roles. Adding more processing cores without addressing those other factors won't bring any significant performance improvement.
The improvement in performance of a multi-core processor depends largely on the software algorithms used to exploit it. Performance is limited by the fraction of the software that can run in parallel (processing large amounts of data simultaneously) on the cores. Many applications run on a single thread, so a multi-core system may be of little use to them: the work cannot be spread evenly across multiple cores, and the single thread does all the processing. Hence, multi-core processing shapes the way modern software is built.
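The standard way to quantify this limit is Amdahl's law: for parallel fraction P running on N cores, speedup = 1 / ((1 − P) + P/N). A two-line Python sketch makes the point:

```python
def speedup(parallel_fraction, cores):
    """Amdahl's law: serial fraction caps the benefit of adding cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

print(round(speedup(0.50, 8), 2))  # ~1.78x: half-serial code barely uses 8 cores
print(round(speedup(0.95, 8), 2))  # ~5.93x: mostly-parallel code fares far better
```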
Some programming languages are not well suited to multi-core systems. Sharing an application's workload among the processors can sometimes be daunting, though there are various ways to deal with the problem, such as coordination languages or higher-order functions. Each block of the application can have a different implementation for different types of processor; during compilation, the compiler chooses the best implementation based on the context. Application developers often rely on numerical libraries written in compatible languages such as FORTRAN and C, which perform computations faster than popular programming languages such as C#.
High power consumption and heat problems mean that more emphasis is placed on multi-core chip design and threading. Multithreading software to take advantage of the multi-core system's parallelism is what actually improves the computer's performance. If a program's developers are unable to design it to fully exploit multiple cores, the program will never reach the system's performance ceiling. To overcome the heat problem, two dual-cores may be implemented with a single unified cache; either dual-core die can then be used, instead of running more than three cores on one die.
Programming multithreaded code requires complex and careful coordination of threads. A simple error can introduce subtle bugs that are difficult to find because of the interleaving of operations on shared data between threads on multiple cores. As a result, such programs are more difficult to debug when they break. Consequently, there aren't many consumer-level threaded applications, because most computer users hardly make maximum use of their computer hardware.
Applications must be designed to share their workload efficiently among multiple cores and hardware threads in order to meet demands for faster performance and efficiency. Any application meant to run in a multi-core environment that does not take this into consideration during design will end up with performance issues.
How to spread tasks across the multiple processors is the main headache when designing software for a multi-core system. The most common approach is a threading model, in which tasks are broken down into separate execution units that run on different processors in parallel. If the threads are independent of each other, their design does not have to consider how they will work together, as in the case of two different applications running on a system as separate processes. Each application runs on its own core without any awareness of the other, and system and application performance is not affected unless the applications contend for a resource such as shared system memory. This gives rise to another issue: how to manage shared memory in a multi-core system.
Memory management is the process of allocating and sharing available computer memory among various running programs when needed and freeing up the memory when the application process has ended. Efficient memory allocation is important to any system that is required to multi-task at any time.
Memory management is a function of the hardware, operating system (OS), and the applications being run.
Hardware Memory: Memory management in the hardware is the function of physical parts of the electronic motherboard that store data such as flash-based solid-state drives (SSDs), ATA/SATA disks, RAM chips and memory caches.
Operating System Memory: Memory management in the operating system requires the OS to constantly allocate and re-allocate memory to individual user programs on demand as they require it and reserve the memory when it is no longer required, after the application has been closed. When available memory is used up, additional applications will no longer be able to run on the system. Memory can be freed up by deleting surplus data and uninstalling rarely used applications.
Application Memory: Applications cannot define in advance how much memory they will require when launched, so they include code that makes memory requests on their behalf. These requests ensure the availability of memory for each running program until it is closed.
Application memory management involves the combination of two related tasks, known as allocation and recycling.
Allocation: When an application needs memory, it requests a block of memory. Memory is then allocated to it by the memory manager called “the allocator.”
Recycling: When an application is closed and its data in previously allocated memory blocks are no longer needed, the memory blocks can be recycled and reassigned till needed again. Recycling can be done automatically by the memory manager or manually by the programmer.
Automatic Memory Management
This is either a part of the programming language used to build an application or an application extension that automatically recycles memory after the program has been closed or uninstalled. Automatic memory managers, also called "collectors," work by recycling blocks that are unreachable by an application, e.g., when the application can no longer reach data that has been moved or deleted. In automatic mode, memory management is clearly more efficient, and there are fewer incidents of memory bugs. On the downside, memory may be erroneously retained: as long as a block is still reachable by the application, the collector won't recycle it for reuse.
Manual Memory Management
Manual memory management requires the programmer to recycle system memory explicitly, using code that manages the control stack or makes direct calls to the heap (a reserved area of computer memory that applications can use to store data temporarily). The collector does not recycle any memory without being invoked by the programmer. While this makes it easier for the system administrator to know everything going on within the system, the programmer has to write such code continually and take regular inventory of the memory.
It is quite common for programmers faced with an inefficient manual memory manager to write code that duplicates the memory manager, recycles memory blocks internally, or allocates large memory blocks and splits them up for use. To write memory management code, programmers could use FORTRAN, C++, COBOL, Pascal, etc. Conservative collection extensions may also be used.
Memory Management Problems
The main problem with memory management is identifying which data to keep, how long to keep it, and when to clear it so that the memory can be freed for reuse. Although this may seem trivial, poor management of a system's memory can degrade the effectiveness and speed of running applications. Common memory management problems include:
API Complexity: An application programming interface (API) must be designed with memory management in mind, especially if objects require constant allocation of memory.
Premature Frees and Dangling Pointer: After being closed, applications are required to give up memory for recycling. If the application attempts to access that memory later, it could behave sluggishly, hang, or crash; this situation is called a premature free. A pointer that still refers to memory that has already been given up is called a dangling pointer. Both premature frees and dangling pointers are more prevalent with manual memory management.
Fragmentation: Memory fragmentation occurs when free memory is split into small blocks separated by memory blocks still in use, leaving the allocator unable to satisfy requests efficiently even though enough total memory may be free. Fragmentation results in wasted storage space.
Memory Leak: A memory leak occurs when an application is continually allocated memory every time it requests it but never gives the memory back after it is done.
Misplaced Locality of Reference: Access to memory is faster when the memory manager arranges related memory blocks close together. This is referred to as locality of reference. A shorter distance means data can be sent back and forth faster; if the memory blocks are located far apart, application performance will likely suffer.
How to Manage Multi-Core Systems Memory
Avoid Memory Contention: Memory contention is the situation in which two different programs try to use the same memory resources, such as disk space, RAM, cache, or processing threads, at the same time. This can result in deadlock or thrashing (a state in which the system constantly swaps data between main memory and secondary-storage blocks called pages).
Memory bus traffic and core interactions should be kept as low as possible by avoiding shared storage drives and shared data. Access to shared memory can be regulated by queuing and by using a good scheduler.
Avoid Heap Contention: As stated earlier, the heap is a reserved area of computer memory that applications can use to store data temporarily, and it is shared among the cores during processing. Heap contention is one of the inconveniences associated with multi-core applications that perform intensive memory allocation. To avoid heap contention, private heaps may be installed on the system. The use of private heaps also improves multi-core system performance compared to using only the global heap.
Avoid False Sharing: False sharing occurs when two or more processors in a multi-core system concurrently use unrelated data that happens to reside on the same cache line. Each write invalidates the cached copies held by the other processors, forcing needless cache-line traffic.
Different processors have different ways of dealing with false sharing. It can be avoided by carefully aligning data structures to cache line boundaries, using the compiler's alignment support for each target processor. Another way of dealing with false sharing is to group frequently used fields of a data structure to ensure that they are located in the same cache line and can be accessed together when needed.
Avoid Lock Contention: Lock contention occurs when a thread attempts to acquire a lock that is already held by another thread. One technique used to avoid lock contention is the adoption of lock-free algorithms and concurrent data structure designs that eliminate locks and synchronization tools such as mutexes; such concurrent data structure algorithms do not need to incorporate separate synchronization mechanisms.
When traditional locking tools such as spinlocks are used, the locks should be broken into pieces instead of using global or monolithic locks, so that each lock protects a specific small area of the data structure. This lets multiple threads use different locks concurrently instead of contending for one lock, and better concurrency can be attained.
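A minimal Python sketch of this lock-splitting idea: a counter table striped across several locks, so threads that touch different stripes never contend. The stripe count and hashing scheme are arbitrary choices for illustration.

```python
import threading

class StripedCounter:
    def __init__(self, stripes=16):
        # One lock and one bucket dict per stripe, instead of one global lock.
        self.locks = [threading.Lock() for _ in range(stripes)]
        self.buckets = [dict() for _ in range(stripes)]

    def increment(self, key):
        i = hash(key) % len(self.locks)   # each key maps to one stripe
        with self.locks[i]:               # only this stripe is locked
            self.buckets[i][key] = self.buckets[i].get(key, 0) + 1

counter = StripedCounter()
threads = [threading.Thread(target=lambda: [counter.increment(k) for k in range(100)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```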
Figure 1: High-level architecture of an example single-core system (left), a dual-core system (middle), and an N-core system (right). The chip is shaded. The DRAM memory system, part of which is off chip, is encircled.
Figure 2: Multi-core System Die
*Thomas Moscibroda, Onur Mutlu: Memory Performance Attacks: Denial of Memory Service in Multi-Core Systems; 16th USENIX Security Symposium, 5 July 2007 (Figure 1 image credit)
*Gurudutt Kumar: Considerations in software design for multi-core multiprocessor architectures; IBM developerWorks, 20 May 2013
*Multi-Core Processor; https://en.wikipedia.org/wiki/Multi-core_processor
*The Memory Management Reference; Ravenbrook Limited, 2016. Available online at: http://www.memorymanagement.org/mmref/begin.html
Virtualization breaks a high-performance computing barrier
- By John Breeden II
- Jul 11, 2014
Virtualization is all the rage in most places, even for large organizations like federal data centers. However, for all the advantages virtualization can bring, there is one piece of the computing arena the technology has not been able to crack, until now: high-performance computing and simulation environments.
The processing requirements of most scientific computing call for hefty hardware blocks with lots of CPUs and GPUs in massive racks supported by complicated cooling schemes. Computing tasks are carefully loaded into those systems, and competition for computing cycles can be both intense and political. Simulation applications have operating system requirements that are not often compatible with virtualized environments.
Almost nowhere are all the forces against virtualization at work quite so intensely than at the Johns Hopkins University Applied Physics Laboratory Air and Missile Defense Department's Combat Systems Development Facility.
Edmond DeMattia, a senior system engineer and virtualization architect at the facility, described its configuration, which is typical of many government simulation laboratories.
"We had two stovepipe systems with one running Windows and one running Linux," he said. "There are 1,500 cores per cluster, and everyone was sharing that computer using grid scheduling."
The tasks required of the system are intense. DeMattia explained that most simulations use the Monte Carlo method, a class of algorithms where repeated random sampling is inserted into equations to obtain concrete numerical results. That means that simulations need to be run many thousands of times in most cases.
"Some simulations take five seconds per task, and we run that same task up to a million times," DeMattia said. "While others may take 15 hours per task, but are only run 1,000 times."
The problem facing DeMattia and the lab is that while the computing requirements are intense, the resources are not used all the time. This is further complicated by working in different operating systems, a necessity because some of the simulation jobs come from outside sources and the lab has to accept their programming requirements.
However, it also means that in many instances, either the Windows or Linux stacks might be maxed out while the other was idle. To keep up with such demand or to expand capacity, most labs purchase more computers.
This leads to wasted resources when cycles aren't being used, even as more power, cooling and physical space are being called for. Faced with the situation, DeMattia believed there had to be a way to tap into those idle computing cycles, but it simply wouldn’t be possible using traditional computing methods.
To tackle the problem, DeMattia began to experiment with ESXi from VMware, a bare-metal hypervisor based on the VMkernel operating system that manages virtual machines running on top of it.
ESXi acts as the one part of the computing grid that knows its maximum limits. With the tool in place, the various operating systems don't know the others exist, yet each operates as if it has double the computing nodes, because it can tap into nodes normally reserved for another OS when they are available. The ESXi layer manages the entire system.
Turning a high-performance computing environment into a virtualized one was new territory for DeMattia. But with the added layer, he wasn't really expecting too much.
In fact, in doing the initial experiments, he was trying to figure out how much performance loss would be acceptable in the new system, offset by the gains of opening up the new nodes. "I had it figured that doing it this way would result in a 6 to 8 percent loss, which would have been acceptable," DeMattia said. "But I was shocked when we measured a 2 percent gain instead."
Once the optimal virtual grid configuration was established, DeMattia's team, which included lead automation engineer Irwin Reyes and security office systems administrator Valerie Simon, removed physical nodes from the stove-piped HPC grids and simultaneously incorporated them into the vGrid architecture.
The end result was a seamless migration of independent grids into a fully virtualized environment, an end result that more than doubled the usable CPU cores for each OS platform.
"My team fundamentally redesigned how high-performance scientific computing is performed in the Air and Missile Defense Department by utilizing virtualization and distributed storage as the framework for pooling resources across multiple departments," DeMattia said.
"By leveraging the ESXi abstraction layer, multiple stove-piped high-performance computing grids are aggregated into a single 3,728 core vGrid, hosting multiple operating systems and grid scheduling engines. This has allowed our engineers to achieve decreased simulation runtimes by an order of magnitude for many studies."
It took a while for others to appreciate what DeMattia's team had done: They had created a virtualized high-performance computing environment that could reduce idle computing cycles and run more efficiently at the same time. Even with a gain of just a few percentage points in performance, it represents a big jump when pushing millions of calculations through the system. Another significant advantage is the ability to tap into every node at the same time.
Back at the Air and Missile Defense Department, vGrid led to quite a cost savings. Instead of buying new computing nodes, which can run up to $10,000 each, IT managers can tap their existing infrastructure to its fullest. This has led to an estimated $504,000 savings in hardware costs as the need to buy additional computing resources to meet peak demand was eliminated. Beyond that, it would have cost another $40,000 per year to cool and power the expansion.
DeMattia said he believes that other high-performance simulation and government labs can make use of this new technology. Even though most have shied away from trying to take high-performance computing into a virtualized space, he said they are highly complementary technologies when deployed correctly.
"I get a little embarrassed when people ask me to talk about what we did," DeMattia said. "At its core, it's really just a simple process using technology in a way other than it was designed."
John Breeden II is a freelance technology writer for GCN. | <urn:uuid:46875373-36b7-443c-9c1a-63985b93f33f> | CC-MAIN-2017-09 | https://gcn.com/articles/2014/07/11/vgrid-hpc-virtualization.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00483-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966971 | 1,254 | 2.6875 | 3 |
Make a High MP camera with multiple sensors but just one lens.
Here's a picture of a sensor. Others are different, but generally you have an array of photodiodes (pixels) with sensor output leads, coming out below and to the side of the chip itself. Because of this encumbrance, you can't get them adjacent to each other to make a seamless array.
So, would it be a worthwhile idea to make sensors so they can be placed right next to each other with no gap? Like making the wires come out the back of the sensor instead of the side. So you could do this:
Then you could take 4 4K sensors and make a 48 MP camera with one lens instead of the 4 you need with normal multi-view cameras. Send 4 independent streams to the VMS and do a simple, borderless quad (writing a simple driver if necessary). Or send 4 streams to 4 monitors and put the monitors in a bezel-less display wall quad.
And unlike any typical multi-imager/lens quad that might suffer from pincushion lens distortion in the corners, the sensor array with a shared lens would be naturally seamless.
So maybe it would be a good idea if they did make sensors that can just be arranged domino style to make a larger area sensor of varying dimensions. But I doubt it actually is, since they don't seem to make them, but I'm not sure why.
- It can't be technically done, not at a reasonable cost at least because ?
- Even if you could make sensors like that for the same cost, the "innovation", (1 lens, 4 sensors) does not provide any value because ?
- They actually do make sensors like that, but ?
What do you think, shoot the idea, or steal it? | <urn:uuid:2bcc161d-dcd3-47b3-a0ac-82dbc9785b87> | CC-MAIN-2017-09 | https://ipvm.com/forums/video-surveillance/topics/manufacturers-shoot-this-idea-down-if-you-can | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00007-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937602 | 376 | 2.5625 | 3 |
When Wired News reporter Mat Honan had his digital life hacked and subsequently virtually wiped out in August, the significant loss of data he endured wasn't the scariest part of the experience. Much more terrifying was the method by which hackers drilled into his digital accounts.
Using clever social engineering exploits, the hackers posed as Honan and succeeded in extracting key bits of personal information from Amazon and Apple customer support. With the critical data in hand, the hackers then locked Honan out of his Google account, commandeered his Twitter stream, seized control of his Apple ID number, and wiped his computing devices clean.
It was momentarily life-wrecking, at least.
If a hacker wanted to ruin your life, whether by identity theft or by a simple Honan-esque data wipe, how difficult would that objective be to achieve? The answer is that it's likely a lot easier than you think.
Are you an easy target?
According to a recent Harris Interactive poll commissioned by Dashlane, a company that manages passwords and personal data, most online Americans are concerned that their personal data might be used online without their knowledge. Approximately 88 percent of the 2208 adults surveyed cited being at least "somewhat concerned," and 29 percent claimed to be "extremely concerned." In addition, three out of five respondents were worried that they were vulnerable to being hacked.
John Harrison, a group manager at Symantec Security and Response, says that people should be concerned, because they're sharing more than they think they are.
Because social networks, public records, and high-profile security breaches are so prevalent, a lot of potentially sensitive information is just floating around the Internet.
"Each piece of information adds to the puzzle," Harrison says. "We don't throw everything out there at once, but it eventually comes together. For example, you may not put your full birthday on Facebook, but it's not difficult for someone to find out what year you graduated from high school and put two and two together."
In other words, you may not think you're sharing too much (just a snippet here and a snippet there), but to a hacker, you're building an easily harvested online profile.
Protect yourself the easy way
If you use the Internet in any meaningful way (sending email, uploading photos, frequenting social networks, shopping), your online profile is likely already floating around in the ether. And even if you haven't been online all that much, bits of your personal data may be available for online viewing via digitized public records. An interested person could readily find out if you have a mortgage, for example, or if you've recently gotten married or divorced.
You probably know that a typical five-character, dictionary-word password is easy to hack, and perhaps you rely on something far less penetrable. But you probably don't have the time or bandwidth to memorize a complicated mix of numbers and letters. So here are a few quick, easy-to-implement security tips that will drastically reduce your hackability.
Search for yourself: Before you start worrying, it's a good idea to get a handle on how much information about you is out there by searching for yourself. Type your name into Google, both with quotation marks and without, and with relevant keywords, such as your address, phone number, email addresses, job title, company, and alma mater.
See what you find, and try to look at the information the way a hacker would. Is there enough data there for someone to piece together your life? If so, you need to take steps to improve your personal security.
Use passphrases instead of passwords: Passwords are a tricky security issue. The best passwords are computer-generated mixtures of letters, numbers, and special characters (such as exclamation points and question marks). Unfortunately, the resulting alphanumeric strings are also extremely difficult for most people to remember. But since most passwords are hacked via brute-force methods (that is, by having a computer go through all possible combinations of characters), longer passwords are more secure simply because they take longer to discover.
For example, an Intel Core i7 processor takes just hours to crack a five-character password, but it takes more than 10 days to crack a seven-character password. That's why security experts recommend using passphrases instead of passwords. See Alex Wawro's password primer for pointers on building a good passphrase.
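For the curious, that exponential growth is easy to sketch; the guess rate below is an assumed round number for illustration, not a benchmarked figure for any particular processor:

```python
# Why length matters: the brute-force search space grows exponentially.
charset = 95               # printable ASCII characters
guesses_per_second = 1e9   # assumed attacker speed, for illustration only

for length in (5, 7, 12):
    keyspace = charset ** length
    days = keyspace / guesses_per_second / 86_400
    print(f"{length} chars: {keyspace:.2e} combinations, ~{days:,.2f} days to exhaust")
```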
Stay updated: One of the easiest ways to prevent intruders from compromising your computer is to make sure that you're always running the latest version of all your PC applicationsincluding your antivirus program.
"Drive-by downloadsmalware that downloads to your computer when you click on a malicious linkoften work by exploiting known bugs in software," Harrison says. "These bugs are usually fixed in updated versions of the software, but that won't help you if you're still running the old version."
Prioritize accounts: You may not be able to remember complex passphrases for every account you have, and that's okay. According to Doug McLean, senior director of product marketing at McAfee's Global Threat Intelligence, the average online American has more than 100 accounts, not all of which are important.
Instead of creating different passwords for every account, create unique ones only for the important accounts: email accounts, online banking accounts, social networks, and other accounts that contain sensitive information. For relatively trivial accounts, such as message boards, it's fine to use an insecure, hackable password.
McLean also suggests creating a "junk mail" email address for accounts that you don't really care about. You can use this junk email address to sign up for message boards, contests, and newsletters. Then, if one of the junk accounts is compromised, hackers won't have your real email address or your real passwords.
Lie: Speaking of junk accounts, be careful about what information you give away to random websites. Sure, your bank needs to know your home address, but does a message board really need to know your zip code or your full birthday? If you can't get past a screen because the website wants you to give up too much information, Harrison suggests that you make things up. After all, he notes, message boards are notoriously hackable, and they really just want to verify that you're over a certain age.
Protect yourself offline: According to McLean, offline identity theft is still much more common than online identity theft. The reason: Email addresses have passwords, while mailboxes, dumpsters, and lost wallets do not. To protect yourself offline, McLean suggests that you get a locking mailbox (if you don't already have one), shred all important bills and documents before you throw them away, and never carry your Social Security card with you.
Use a password manager: Though password managers require a little setting up, they're worth it if you're worried about the integrity of your passwords or passphrases. Password managers such as Dashlane, 1Password, and LastPass not only store all of your passwords in a neat little encrypted program that you can unlock with a master password; they can also create secure, computer-generated passwords that even you don't know.
In choosing a password manager, it's important to pick one that's compatible with all of your devices, including your phone and tablet. Dashlane, 1Password, and LastPass are compatible with Windows, Mac OS X, iOS, and Android; and LastPass is also compatible with Linux, BlackBerry, Windows Phone, WebOS, and Symbian. Password managers can store form data, so you don't have to park credit card information on the Web.
Freeze your credit report: Freezing your credit report is the single most effective way to prevent identity theft, according to McLean. If you're over 30 and you're not getting married or divorced, you probably won't be applying for new credit cards, loans, or mortgages, so you don't need your credit report to be readily available.
To freeze your credit report, you must contact each of the three major credit bureaus (Equifax, Experian, and TransUnion), fill out a form, provide proof of identity, and pay a small fee (around $10, depending on your state). You'll then receive a PIN or password that will allow you to "thaw" your credit report (either temporarily or permanently) if you ever need to use it. Temporarily thawing your credit report usually takes less than a minute, McLean says.
Credit report freezes are free in the United States for victims of identity theft.
Even a little security goes a long way
McLean suggests that taking minimal security precautions is like outrunning a bear: You don't have to be faster than the bear; you just have to be faster than your friend who's also being chased.
Hackers are smart, but they're also somewhat lazy. So unless you happen to be a high-profile target, a hacker will likely give up if your data defenses prove to be too difficult to breach. Mat Honan's hackers even admitted that their attack was nothing personal; they simply wanted to break into his Twitter account because the three-character handle "@mat" signified the property of a Twitter superuser. Nothing more, and nothing less.
Ultimately, even taking small security steps, such as creating an eight-character password instead of a five-character password, can protect your personal information just well enough to convince hackers to move on to the next digital door.
This story, "Just How Hackable is Your Digital Life?" was originally published by PCWorld. | <urn:uuid:6b10b0b4-70c4-47d6-8253-94a009a253b3> | CC-MAIN-2017-09 | http://www.cio.com/article/2391961/security0/just-how-hackable-is-your-digital-life-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00359-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954981 | 1,971 | 2.5625 | 3 |
Jellyfish-like robot, developed with Navy funds, refuels itself with hydrogen and oxygen extracted from the sea. The goal: Perpetual ocean surveillance.
Scientists at the University of Texas at Dallas and Virginia Tech have built a jellyfish-inspired robot that can refuel itself, offering the possibility of perpetual ocean surveillance.
Like Slugbot, a robot designed to be able to hunt garden slugs and devour them for fuel, Robojelly, as the machine is called, is self-sustaining. It extracts hydrogen and oxygen gases from the sea to keep itself running.
"We've created an underwater robot that doesn't need batteries or electricity," Yonas Tadesse, assistant professor of mechanical engineering at UT Dallas, told the UT Dallas news service. "The only waste released as it travels is more water."
The robot offers one way around a problem that continues to vex researchers developing autonomous machines: operational limitations imposed by the need for frequent refueling. Scientists at Sandia National Laboratories and Northrop Grumman last year concluded that nuclear power would extend the capabilities of aerial drones but couldn't be implemented due to political considerations. The U.S. government presumably would rather avoid the political outrage that would follow from a downed nuclear drone.
A self-sustaining surveillance bot that doesn't involve hazardous materials and doesn't pollute would be much more politically palatable, not to mention operationally useful.
Robojelly looks as if it could be related to a novelty umbrella hat, except that it has two hemispherical canopies, stacked one on top of another (an earlier version had a single canopy). These bell-like structures are made of silicone and are connected to artificial muscles that contract when heated. The contractions, like those in a real jellyfish, propel the device.
The muscles are made of a nickel-titanium alloy encased in carbon nanotubes, coated in platinum, and housed in a casing. The chemical reaction arising from contact between the mixture of hydrogen and oxygen and the platinum generates heat, which causes the artificial muscles to contract and move the silicone canopies while expelling water.
Tadesse says the next step in the project is to revise the device's legs so it can move in different directions. Right now, Robojelly's fixed supports allow it to move in only one direction.
Robojelly was funded by the Office of Naval Research, which has an obvious interest in monitoring the seas. In addition to scanning the waves, Tadesse suggests the device could be used to check the water for pollutants.
Humans are fairly sophisticated when it comes to understanding the complex meanings beneath the spoken or written word. For example, we can tell that a statement like, “My car had a flat. Brilliant!” is sarcastic, not actually brilliant.
And with the help of machine learning, computers are beginning to get better at reading between the lines of our tweets, Facebook updates, and email messages, resulting in a new kind of analytics: sentiment analysis.
Sentiment analysis, also known as opinion mining, seeks to determine the attitude of an individual or group regarding a particular topic or overall context – be it a judgment, evaluation, or emotional reaction – from text, video, or audio data.
For example, Expedia in Canada used sentiment analysis to determine that the music accompanying one of their commercials was receiving an overwhelmingly negative response online, and they were able to respond to that sentiment appropriately: by releasing a new version of the commercial in which the offending violin was abruptly smashed.
What Do You Really Think?
Say you have a lot of text data from your customers originating from emails, surveys, social media posts, etc. There are several hundred thousand words in the English language. Some are neutral in terms of emotional import, but others have a distinctly positive or negative connotation. This polarity of sentiment can be applied to your customer text to establish what your customers, as a stakeholder group, really think of you.
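At its simplest, polarity scoring can be sketched in a few lines; the tiny lexicon below is invented for illustration, while production tools rely on far larger dictionaries and statistical models:

```python
# Minimal lexicon-based polarity scoring -- the simplest form of
# sentiment analysis. Real systems use much richer models.
POLARITY = {
    "brilliant": 1, "love": 1, "great": 1, "happy": 1,
    "flat": -1, "hate": -1, "miserable": -1, "disappointed": -1,
}

def score(text):
    words = text.lower().replace("!", " ").replace(".", " ").split()
    return sum(POLARITY.get(word, 0) for word in words)

print(score("My car had a flat. Brilliant!"))     # 0: sarcasm defeats the lexicon
print(score("I hate this miserable hold music"))  # -2: clearly negative
```

Note how the first example scores as neutral: simple word counting cannot see sarcasm, which is exactly why a human's watchful eye is still needed.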
There are a number of software tools that can help you measure text sentiment around your product or service. Twitrratr, for example, allows you to separate the positive tweets about your company, brand, product, or service from the negative and neutral tweets so you can see how well you are doing in the Twitterverse.
People have long known that surveys and focus groups aren’t necessarily indicative of broader sentiment. The people who choose to respond to a survey may be the ones who have the most to complain about or the most to praise, but not the middle-of-the-road customers. People brought in for a focus group may alter their opinions based on what they think the company wants to hear.
With something like Twitter analysis, however, you’re getting the unfiltered opinions of millions of users, not a dozen people sitting in a white room.
Sentiment analysis can help you to gauge opinion, which, in turn, can guide strategy and help decision making. In the current business landscape, it’s increasingly important that we know what our customers, competitors, and employees think about the business, products, and brand. And sentiment analytics can help us do that – often relatively inexpensively.
More than Market Research
The technology also is being put to good use outside the marketing and sales arenas.
Researchers at the Microsoft Research Labs in Washington discovered that it was possible to predict with text-based sentiment analysis which women were at risk of postnatal depression just by analyzing their Twitter posts. The research focused on verbal cues that the mother would use weeks before giving birth. Those who struggle with motherhood tended to use words that hinted at an underlying anxiety and unhappiness. There was more negativity in the language used, with an increase in words such as disappointed, miserable, and hate, as well as an increase in the use of “I” – indicating a disconnection from the “we” of impending parenthood.
Co-director of Microsoft Labs Eric Horvitz acknowledged that this type of information can be incredibly useful in reaching out and helping women at this vulnerable time, and also to help break down the stigma around postnatal depression. It would be a relatively simple step, for example, for a welfare group to create an app that could run on a smartphone and alert pregnant women to the onset of potential postnatal depression and direct them to resources to help them cope.
Beyond Text Analytics
Audio sentiment analytics is being used to measure stress levels in call centers so that customer service representatives can gauge how upset the caller is and intervene earlier, before things escalate. Callers often talk into the receiver while they are on hold or listening to the soothing music, and they can also make various sounds, such as heavy sighing, which can indicate that they are growing increasingly frustrated.
Even Wimbledon began using sentiment analysis this year to help predict which headlines and news topics emerging from the tournament would most interest its fans and followers. Their systems could analyze existing Tweets, updates, and comments and make predictive suggestions about the types of stories that fans would be most likely to react to positively.
Of course, sentiment analysis is not yet 100 percent accurate and it still needs a human’s watchful eye to ensure that the nuances of human speech are being fully understood by the computer.
In addition, it’s important to note that not all communications can be classified as positive, negative, or neutral. Human language, feelings, and the way we communicate are just too complex for that. As a result, experts predict sentiment analytics soon will move beyond a simple positive/negative scale and expand into classifying a broader range of human emotions. And as sentiment analytics grows in its ability to accurately recognize a wider range of feelings and shades of meaning, organizations will become more comfortable with the idea of sentiment analytics and begin using it in new and even more exciting ways.
Bernard Marr is a bestselling author, keynote speaker, strategic performance consultant, and analytics, KPI, and big data guru. In addition, he is a member of the Data Informed Board of Advisers. He helps companies to better manage, measure, report, and analyze performance. His leading-edge work with major companies, organizations, and governments across the globe makes him an acclaimed and award-winning keynote speaker, researcher, consultant, and teacher.
Subscribe to Data Informed for the latest information and news on big data and analytics for the enterprise, plus get instant access to more than 20 eBooks. | <urn:uuid:8de9bced-fa1b-4654-9119-81f5ebd2724d> | CC-MAIN-2017-09 | http://data-informed.com/social-media-and-the-power-of-sentiment-analysis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00056-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953403 | 1,210 | 2.734375 | 3 |
Cogswell M.E.,Division for Heart Disease and Stroke Prevention |
Yuan K.,Division for Heart Disease and Stroke Prevention |
Gunn J.P.,Division for Heart Disease and Stroke Prevention |
Gillespie C.,Division for Heart Disease and Stroke Prevention |
And 8 more authors.
Morbidity and Mortality Weekly Report | Year: 2014
Background: A national health objective is to reduce average U.S. sodium intake to 2,300 mg daily to help prevent high blood pressure, a major cause of heart disease and stroke. Identifying common contributors to sodium intake among children can help reduction efforts. Methods: Average sodium intake, sodium consumed per calorie, and proportions of sodium from food categories, place obtained, and eating occasion were estimated among 2,266 school-aged (6–18 years) participants in What We Eat in America, the dietary intake component of the National Health and Nutrition Examination Survey, 2009–2010. Results: U.S. school-aged children consumed an estimated 3,279 mg of sodium daily with the highest total intake (3,672 mg/d) and intake per 1,000 kcal (1,681 mg) among high school–aged children. Forty-three percent of sodium came from 10 food categories: pizza, bread and rolls, cold cuts/cured meats, savory snacks, sandwiches, cheese, chicken patties/nuggets/tenders, pasta mixed dishes, Mexican mixed dishes, and soups. Sixty-five percent of sodium intake came from store foods, 13% from fast food/pizza restaurants, 5% from other restaurants, and 9% from school cafeteria foods. Among children aged 14–18 years, 16% of total sodium intake came from fast food/pizza restaurants versus 11% among those aged 6–10 years or 11–13 years (p<0.05). Among children who consumed a school meal on the day assessed, 26% of sodium intake came from school cafeteria foods. Thirty-nine percent of sodium was consumed at dinner, followed by lunch (29%), snacks (16%), and breakfast (15%). Implications for Public Health Practice: Sodium intake among school-aged children is much higher than recommended. Multiple food categories, venues, meals, and snacks contribute to sodium intake among school-aged children supporting the importance of populationwide strategies to reduce sodium intake. New national nutrition standards are projected to reduce the sodium content of school meals by approximately 25%–50% by 2022. Based on this analysis, if there is no replacement from other sources, sodium intake among U.S. school-aged children will be reduced by an average of about 75–150 mg per day and about 220–440 mg on days children consume school meals. © 2014, Department of Health and Human Services. All rights reserved. Source
Losby J.L.,Division for Heart Disease and Stroke Prevention |
Patel D.,Division for Heart Disease and Stroke Prevention |
Schuldt J.,Schenectady County Public Health Services |
Hunt G.S.,Schenectady County Public Health Services |
And 2 more authors.
Journal of Public Health Management and Practice | Year: 2014
This article describes lessons learned from implementing sodium-reduction strategies in programs that provide meals to older adults in 2 New York counties, with one county replicating the approaches of the other. The implemented sodium-reduction strategies were product substitutions, recipe modifications, and cooking from scratch. Both counties were able to achieve modest sodium reductions in prepared meals. Lessons learned to implement sodium reduction strategies include the following: (1) identifying partners with shared experience and common goals; (2) engaging experts; (3) understanding the complexity of the meals system for older adults; (4) conducting sodium nutrient analysis; (5) making gradual and voluntary reductions to sodium content; and (6) working toward sustainable sodium reduction. © 2014 Wolters Kluwer Health | Lippincott Williams & Wilkins. Source
Wang C.-Y.,Centers for Disease Control and Prevention |
Cogswell E.M.,Division for Heart Disease and Stroke Prevention |
Loria M.C.,U.S. National Institutes of Health |
Chen T.-C.,Centers for Disease Control and Prevention |
And 9 more authors.
Journal of Nutrition | Year: 2013
Because of the logistic complexity, excessive respondent burden, and high cost of conducting 24-h urine collections in a national survey, alternative strategies to monitor sodium intake at the population level need to be evaluated. We conducted a calibration study to assess the ability to characterize sodium intake from timed-spot urine samples calibrated to a 24-h urine collection. In this report, we described the overall design and basic results of the study. Adults aged 18-39 y were recruited to collect urine for a 24-h period, placing each void in a separate container. Four timed-spot specimens (morning, afternoon, evening, and overnight) and the 24-h collection were analyzed for sodium, potassium, chloride, creatinine, and iodine. Of 481 eligible persons, 407 (54% female, 48% black) completed a 24-h urine collection. A subsample (n = 133) collected a second 24-h urine 4-11 d later. Mean sodium excretion was 3.54 ± 1.51 g/d for males and 3.09 ± 1.26 g/d for females. Sensitivity analysis excluding those who did not meet the expected creatinine excretion criterion showed the same results. Day-to-day variability for sodium, potassium, chloride, and iodine was observed among those collecting two 24-h urine samples (CV = 16-29% for 24-h urine samples and 21-41% for timed-spot specimens). Among all race-gender groups, overnight specimens had larger volumes (P < 0.01) and lower sodium (P < 0.01 to P = 0.26), potassium (P < 0.01), and chloride (P < 0.01) concentrations compared with other timed-spot urine samples, although the differences were not always significant. Urine creatinine and iodine concentrations did not differ by the timing of collection. The observed day-to-day and diurnal variations in sodium excretion illustrate the importance of accounting for these factors when developing calibration equations from this study. © 2013 American Society for Nutrition. Source
In a nod to the value of car manufacturers' mile-per-gallon ratings, researchers at Columbia University’s School of Engineering and Applied Science published a study that led to a color-coded map of the five boroughs of New York City that displays energy cost and consumption data.
Like a temperature map or Doppler radar, the New York Energy Mapping Project uses a range of colors, from dark red to forest green, to indicate energy consumed per square meter of each tax lot, both in terms of heat and electricity.
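Under the hood, such color coding amounts to binning each lot's energy intensity into a ramp, along these lines (the thresholds here are invented for illustration and are not the project's actual values):

```python
# Sketch of the choropleth idea: map energy intensity to a color ramp.
def color_for(kwh_per_m2):
    ramp = [(50, "forest green"), (150, "yellow"), (300, "orange")]
    for limit, color in ramp:        # illustrative thresholds only
        if kwh_per_m2 <= limit:
            return color
    return "dark red"                # heaviest consumers

for lot in (30, 120, 500):
    print(lot, "kWh/m2 ->", color_for(lot))
```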
“This map will enable New York City building owners to see whether their own building consumes more or less than what an average building with similar function and size would,” said Vijay Modi, a Columbia University professor of mechanical engineering, reported the New York Times. Midtown Manhattan has more energy use than the whole country of Kenya, Modi said.
The interactive map can be used to show where energy is being used, but it still does not provide building-specific information. As government data becomes more accessible, as in New York, maps like this one should become more informative. Starting in 2011, private buildings larger than 50,000 square feet were required to file energy consumption data (and sometimes water usage) with New York City, where it will be accessible through a public database.
To learn more about the New York Energy Mapping Project, visit IBM's Building a Smarter Planet website. | <urn:uuid:f5f0ea20-3996-4146-9fbc-d03602d354b3> | CC-MAIN-2017-09 | http://www.govtech.com/technology/How-Much-Energy-Does-Your-Home-Really-Consume.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00408-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.919634 | 292 | 3.25 | 3 |
Black Box Explains 8 Advantages of Fiber
Fiber optic cable is one of the most popular mediums for both new cabling installations and upgrades, including backbone, horizontal, and even desktop applications. Fiber offers a number of advantages over copper.
1. Greater bandwidth
Fiber provides more bandwidth than copper and has standardized performance up to 10 Gbps and beyond. More bandwidth means fiber can carry more information with greater fidelity than copper wire. Keep in mind that fiber speeds depend on the type of cable used. Single-mode fiber offers the greatest bandwidth; because it carries only a single mode of light, it has no modal-bandwidth limitation. Laser-optimized OM3 50-micron cable has an effective modal bandwidth (EMB) of 2000 MHz·km. Laser-optimized OM4 50-micron cable has an EMB of 4700 MHz·km.
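As a rough, first-order illustration of what those EMB figures mean for reach (available modal bandwidth is approximately EMB divided by link length; real link budgets also account for attenuation, dispersion penalties, and transceiver specifications):

```python
# First-order view: modal bandwidth available on a multimode link
# ~= EMB (MHz-km) / length (km). Simplified for illustration only.
emb_mhz_km = {"OM3": 2000, "OM4": 4700}

for fiber, emb in emb_mhz_km.items():
    for meters in (100, 300, 550):
        available_mhz = emb / (meters / 1000.0)
        print(f"{fiber} at {meters} m: ~{available_mhz:,.0f} MHz")
```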
2. Speed and distance
Because the fiber optic signal is made of light, very little signal loss occurs during transmission, and data can move at higher speeds and greater distances. Fiber does not have the 100-meter (328-ft.) distance limitation of unshielded twisted-pair copper (without a booster). Fiber distances depend on the style of cable, wavelength, and network. Distances can range from 550 meters (984.2 ft.) for 10-Gbps multimode up to 40 kilometers (24.8 mi.) for single-mode cable.
3. Security

Your data is safe with fiber cable. It doesn’t radiate signals and is extremely difficult to tap. If the cable is tapped, it’s very easy to monitor because the cable leaks light, causing the entire system to fail. If an attempt is made to break the physical security of your fiber system, you’ll know it. Fiber networks also enable you to put all your electronics and hardware in one central location, instead of having wiring closets with equipment throughout the building.
4. Immunity and reliability
Fiber provides extremely reliable data transmission. It’s completely immune to many environmental factors that affect copper cable. The core is made of glass, which is an insulator, so no electric current can flow through it. It’s immune to electromagnetic interference and radio-frequency interference (EMI/RFI), crosstalk, impedance problems, and more. You can run fiber cable next to industrial equipment without worry. Fiber is also less susceptible to temperature fluctuations than copper and can be submerged in water.
5. Design

Fiber is lightweight, thin, and more durable than copper cable. To get higher speeds using copper cable, you need to use a higher grade of cable, which typically has a larger outside diameter, weighs more, and takes up more space in cable trays. With fiber cable, there is very little difference in diameter or weight as speeds increase. Plus, fiber optic cable has pulling specifications that are up to 10 times greater than copper cable’s, depending on the specific cable. Its small size makes it easier to handle, and it takes up much less space in cabling ducts. And fiber is easier to test than copper cable.
6. Migration

The proliferation and lower costs of media converters are making copper-to-fiber migration much easier. The converters provide seamless links and enable the use of existing hardware. Fiber can be incorporated into networks in planned upgrades. In addition, with the advent of 12- and 24-strand MPO cassettes, cables, and hardware, planning for future 40- and 100-GbE networks is easier.
7. Field termination
Although fiber is still more difficult to terminate than copper, advancements in technology have made terminating and using fiber in the field easier. Quick fusion splicers with auto-alignment enable fast splicing in the field. Auto-aligning pins ensure accuracy. And the use of pigtails and pre-terminated cable makes field connections quick and easy.
8. Cost

The cost of fiber cable, components, and hardware has steadily decreased. Overall, fiber cable is more expensive than copper cable in the short run, but it may be less expensive in the long run. Fiber typically costs less to maintain, has less downtime, and requires less networking hardware. In addition, advances in field termination technology have reduced the cost of fiber installation as well.
When people discuss Bitcoin, one of its properties that is often considered is its presumable anonymity. In this respect, it is often compared to cash. However, it shall be recognized and understood that Bitcoin is not as anonymous as cash; far from it, actually. Its anonymity relies on the concept of pseudonyms, which delivers some (unjustified) sense of anonymity, but very weak anonymity in practice.
A pseudonym is an identifier of yours that is not directly linked to you by any readily available mapping. If you have a nickname that does not clearly resemble your real name, and if you make sure that the mapping from your nickname to your real name is not trivial (e.g., by not using both your real name and your nickname interchangeably), then your nickname is a pseudonym.
Bitcoin uses a similar concept to identify players. Each participant in the system has a random-looking string -- the hash of his public key -- which is used as his identity. This string identifies you in the Bitcoin system. When Bitcoin money changes hands, it is moved from one such identity to another, and a public ledger records which identity paid which identity and how much. There is no clear mapping between your real-life identity and your identifier in the Bitcoin system; moreover, you can have as many identities as you like in the Bitcoin system, and move funds between those identities as you like.
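For the technically curious, such a pseudonym can be sketched in a few lines of Python; the public key below is made up, and RIPEMD-160 support in hashlib depends on the local OpenSSL build, hence the fallback:

```python
import hashlib

# A Bitcoin identity is just a hash of a public key -- no name attached.
public_key = bytes.fromhex("02" + "11" * 32)   # made-up compressed key

sha = hashlib.sha256(public_key).digest()
try:
    # Bitcoin's actual construction: RIPEMD-160 over SHA-256.
    pseudonym = hashlib.new("ripemd160", sha).hexdigest()
except ValueError:
    # Some OpenSSL builds lack RIPEMD-160; fall back for illustration.
    pseudonym = hashlib.sha256(sha).hexdigest()
print(pseudonym)   # the identifier other participants see
```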
The Bitcoin use of pseudonyms does not provide anonymity, certainly not of the type offered by cash.
Both academic and practical research has shown methods of "unmasking" pseudonyms, that is, of mapping them back to real identities, based on their being persistent across uses. When a pseudonym is used multiple times, its level of anonymity erodes. Each particular event in which it is used potentially narrows the circle around its owner. For example, imagine that you are "anonymously" surfing the web, identifying yourself only with a pseudonym. As you repeatedly use your pseudonym, it can be used to link your surfing actions over time. After some time, your real identity may be inferred from your surfing pattern alone. Narrowing in on your identity is especially easy if you also surf to one or more uncommon websites. Also, one instance in which you surf to a location that discloses your real identity burns the entire pseudonym mask forever, and retroactively.
This situation is the same with Bitcoin, and is even more severe, for two reasons:
Bitcoin security is based on crowd-sourcing. The entire ledger of transactions is always available to the public. Therefore, finding patterns is made easier. Unlike with cash, all changes of hands are clearly documented. The fact that you can have many identities does not make a difference when all your internal transfers are documented along with the external ones. (A toy sketch of how anyone can cluster these public records appears below.)
The second point applies if you ever buy or sell Bitcoins for "real" currency. Trading Bitcoins for other currencies is done by entities that are bound by banking, fraud-prevention, and money laundering prevention regulations, and thus obtain your real world identity. As soon as a single Bitcoin finds itself out of your system, or gets into your system, your relevant Bitcoin identity is eternally revealed, as well as most likely any other Bitcoin identity you maintain.
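To see why the public ledger is so damaging to anonymity, consider a toy version of one standard analysis step, the common-input heuristic: addresses that co-sign the inputs of a single transaction are assumed to share an owner. All transactions and addresses below are invented:

```python
# Cluster pseudonyms from a public ledger via the common-input heuristic.
transactions = [
    {"inputs": ["addrA", "addrB"], "outputs": ["addrX"]},
    {"inputs": ["addrB", "addrC"], "outputs": ["addrY"]},
    {"inputs": ["addrD"], "outputs": ["addrZ"]},
]

parent = {}  # union-find forest over addresses

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path compression
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

for tx in transactions:
    for other in tx["inputs"][1:]:
        union(tx["inputs"][0], other)   # co-signed inputs share an owner

clusters = {}
for addr in list(parent):
    clusters.setdefault(find(addr), set()).add(addr)
print(list(clusters.values()))  # addrA, addrB, addrC collapse into one cluster
```

Once any single address in a cluster is tied to a real identity, say through an exchange, the whole cluster is unmasked.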
We conclude that Bitcoin is not as anonymous as cash. For cash, there is no clear recorded trail of all its changes of hands. Consequently, even when you withdraw or deposit cash, you only surrender information about your present ownership of the bills and coins you trade, nothing about what you did or will do with them. Moreover, I stress that Bitcoin may be considered less anonymous than credit cards. The credit card company indeed knows all about your transactions, without de-anonymizing any pseudonym. However, it is only the credit card issuer that has this information (at least theoretically). In the Bitcoin case, revealing your identity requires some effort of de-anonymization, but this effort can be undertaken by anyone on earth, using the fully public records.
Bitcoin has the power to change economy, if it ever gains its critical mass and if it is fully commercialized. It has many advantages over traditional currency, such as by being decentralized and free of artificial inflation. Bitcoin has its advantages; but cash-like anonymity is not one of them.
Is Bitcoin pretending to be as anonymous as cash?
If yes, your essay is quite amazing!
Bitcoin was never officially claimed to be anonymous. However, it is rightfully claimed to be decentralized and unbacked by any state or financial institution. These features led to an implicit assumption of anonymity by some people. For example, Bitcoin is accepted by some websites that sell illegal goods over the net, as well as by criminals running extortion schemes.
Also, the common idea that Bitcoin will prevent sanctions and taxation has its roots based on an assumption of anonymity.
There are solutions for the lack of anonymity. Use coin remixers like coinjoin. blockchain.info provides a free, hassle-free, opt-in remixer service.
Bitcoin is not anonymous, and it is so boring and old. There are some truly anonymous cryptos. This year, anonymous cryptocurrencies will be trending.
Just look at duckNote, one of my favorite crypto. duckNote!
duckNote brings the ideas of mixing and ASIC resistance. Best crypto ever! Sorry for my emotions, but
duckNote is a true anonymous coin.
The Original Story: In July 2012, Government Technology wrote about a coalition of federal, state and local interests, including Fresno County, working to secure high-speed wireless broadband to take California’s San Joaquin Valley agricultural sector to the next level. Wireless broadband would allow farmers to put moisture sensors into the soil beneath individual trees, like olives and almonds, so that each tree gets exactly the right amount of water. Wireless technology also would allow farmers to incorporate GPS into their operations.
Project Update: In the year since the story appeared, the players working to develop an “ag-tech cluster” around Fresno have continued to collaborate on bringing together sources of innovation. Fresno was already one of six cities receiving assistance from the federal government’s Strong Cities, Strong Communities Initiative, which is designed to help ramp up economic development by supporting community programs. In addition, Fresno is participating in the IBM Smarter Cities Challenge. IBM researchers noted that the city already had access to super high-speed Internet, but that many businesses in the downtown area aren’t taking advantage of it, said CIO Carolyn Hogg, so one short-term goal is to increase those businesses’ digital presence.
Rachel Audino, government affairs manager in the Office of Community and Economic Development at California State University, Fresno, who leads the San Joaquin Valley’s broadband consortium, said the group is working to identify an agricultural pilot site to study broadband-enabled technologies that will promote water-efficient farming practices in the region.
“We went to the World Agricultural Expo in Tulare, Calif., and talked to farmers about their needs and expectations,” she said. “There was definitely a lot of interest and some existing technology use. Some farmers are now using GPS-enabled tractors that have increased furrowing efficiency by 7 percent. They want to work on the same kind of efficiency gains around watering.”
Robert Tse, a community planning and development specialist at the U.S. Department of Agriculture, said that to further develop the ag-tech sector, the region needs to have a source of innovation much like Stanford University is a source of innovation for California’s Silicon Valley. He said a memorandum of understanding has been created between the USDA and the U.S. Energy Department to work together on applications of technology related to water usage and the use of wireless broadband.
In August, the San Joaquin Valley Regional Broadband Consortium planned to hold an agriculture technology showcase in Fresno where researchers would present their ideas to entrepreneurs and venture capitalists. “The hope is that they will follow up and commercialize the technology,” Tse said. A Central Valley Business Incubator already exists to host such businesses. -- David Raths
The Original Story: Chattanooga calls itself The Gig City — in reference to the fiber-to-the-home network built across 600 square miles of Chattanooga and surrounding Hamilton County. Chattanooga’s municipally owned utility, EPB, built a fiber-optic grid with up to 1 gigabit-per-second service now available to all businesses, residences, and public and private institutions. The network has the business community dreaming big, with aspirations of becoming a Silicon Valley of the South. In April 2012, then-Mayor Ron Littlefield told Government Technology: “Here is a community with a Southern quality of life, has a pretty good university, has a lot of amenities, and once was the dirtiest city in America. And now [it has] this great technological tool that we can use to build a future.”
Project Update: EPB’s original purpose for rolling out a $300 million fiber-to-the-home network was to create a far more efficient electric grid. EPB spokeswoman Danna Bailey said the utility can point to several improvements from that smart grid investment. “We are seeing reductions in outage minutes because of real-time monitoring,” she said. “On Jan. 14, 2013, a huge tree fell on a line. Because of the way we can identify outages and reroute power, customers lost power for only three minutes.”
Bailey said the network’s subscriber base has grown to approximately 50,000 residential and 4,500 commercial customers. The utility has increased the network’s base speed from 30 megabits per second to 50 Mbps. That is 10 times faster than average residential rates, she said. “We also reduced the cost of the gigabit service from $350 to $300 per month.”
J.Ed. Marston, vice president of marketing and communications for the Chattanooga Area Chamber of Commerce, said the fiber network gives Chattanooga a recruiting edge. “Some companies are interested in the data infrastructure and others such as manufacturers are interested in the smart grid,” he said. “Many power-sensitive organizations have dual feeds to guarantee they have power if one source goes down. EPB has a way to do that virtually now that obviates the need to have those two feeds.
“We see the fiber network invigorating the entrepreneurial scene,” Marston added. “We have GIGTANK, the world’s only business accelerator on a fiber network, and the Chamber’s INCubator, which has 20 tech companies and a 91 percent success rate.”
Sheldon Grizzle, who runs the GIGTANK accelerator, said the entrepreneurial community has rallied around the fiber grid. “It is a huge thing for us,” he said, pointing to a Florida-based startup called Banyan that relocated to Chattanooga after using the GIGTANK last year. The company created a platform for scientists around the world to collaborate to find cures for diseases. “They came from Tampa last summer and really embraced the platform the city can offer, including our mentor network,” Grizzle said. Although the company founders went home when their GIGTANK program ended, they soon returned to Chattanooga permanently, saying they lost momentum when they left, according to Grizzle.
“They could have located anywhere or worked for any tech company, and they chose Chattanooga,” he said. “So I think we are making phenomenal progress, although there is always room for improvement.” -- David Raths
The Original Story: Two years ago, Wyoming surprisingly became the first state to roll out Google Apps enterprisewide, showing that the cloud isn’t just for big cities.
Gov. Matt Mead unveiled the new solution in 2011 at a news conference, announcing that 10,000 state employees had been shifted to Google’s cloud-based email and productivity suite. Mead said the new tools would improve communication and collaboration, and provide better storage capacity and cybersecurity protection. State officials predicted the hosted solution easily would save $1 million annually.
Project Update: State CIO Flint Waters said the state comfortably made its savings target, cutting email costs by more than $1 million per year. But the biggest benefit, Waters said, is a “significant cultural shift in how we capture creative thought.”
With Google Docs, state workers can collaborate on documents in real time, a process that’s cutting approval and processing time. The new approach is required when an agency submits a business case to the state’s IT department for approval, although Waters conceded that many of Wyoming’s agencies have retained their legacy workflows internally.
Mead recently released his energy policy on a Google Plus Hangout. Soon Wyoming will save $1.3 million a year by decommissioning its legacy Tandberg video-conferencing solution, Waters said.
Google and Wyoming are finding ties elsewhere too. The company is helping the state develop a SourceForge-style engine for software development, and has added new functionality to Google Apps for Government when the state has requested it, Waters said. Next up, Wyoming is adopting Google Apps Vault for records retention. -- Matt Williams
The Original Story: Early last year, Government Technology reported that the Minnesota Office of Enterprise Technology had moved almost 40,000 workers to Microsoft Office 365 for email services and collaborative tools under an enterprisewide service agreement that the state signed with Microsoft in 2010. Minnesota was the first state to fully deploy Microsoft’s cloud-based Office 365 product, according to the company. Shortly after that announcement came word that the city of St. Paul would share the state email system and was in the process of transferring more than 3,000 city email accounts to the Office 365 platform.
Project Update: The project appears to be paying off for both Minnesota and St. Paul. State agencies have used the cloud-based platform — dubbed Enterprise Unified Communication and Collaboration (EUCC) by the state — for about a year and a half. The custom-built, cloud-based system integrates Office 365 tools such as SharePoint and Lync.
According to the Minnesota Office of Enterprise Technology, the EUCC is being used by more than 70 agencies, commissions and boards. It gives users new features like the ability to co-edit documents in real time, conduct tutorials by sharing desktop access with colleagues across town and actively participate in meetings while away from the office. Gov. Mark Dayton and other key officials can share information statewide with a single email post and coordinate activities in times of crisis.
“MN.IT continues to work on quantifying the long-term cost savings of this initiative,” said Tarek Tomes, assistant commissioner of Customer and Service Management. “However, the benefits from system improvement, new communication and collaboration capabilities have been substantial, allowing interagency collaboration on an unprecedented level.”
As of June, the communications platform had brought in more than 47,000 Exchange mailboxes and provisioned 35,000 SharePoint users, including external customers such as the city of St. Paul.
Cindy Mullan, St. Paul’s deputy CIO, said the state and the city knew from the start that moving together into the cloud would be a high-profile project with little room for error. She credits disciplined project management and teamwork between city and state tech staff for moving the project along. One key decision that helped, she said, was splitting off the most challenging work — email archiving — from the rest of the project. St. Paul expected to have access to the state’s archive system beginning in July.
Mullan said St. Paul also is saving money with Office 365. The cost per seat for the city’s 3,270 email boxes has gone from $56 a year to $43. -- Matt Williams | <urn:uuid:bed21ee0-e3e2-442b-b434-2f59387fe735> | CC-MAIN-2017-09 | http://www.govtech.com/network/Whatever-Happened-To--Chattanoogas-Gigabit-Internet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00104-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952273 | 2,193 | 2.921875 | 3 |
The term “watering hole” refers to initiating an attack against targeted businesses and organizations. In a watering hole attack scenario, threat actors compromise a carefully selected website by inserting an exploit resulting in malware infection.
Senior threat researcher Nart Villeneuve documented the use of the watering hole technique in both targeted and typical cybercriminal attacks as early as 2009 and 2010.
How does a watering hole technique work?
A watering hole attack typically works this way:
Attackers gather strategic information that they can use to gain entry into their targeted organization. This step can be compared to a military reconnaissance mission. The information gathered may include insights on trusted websites often visited by the employees or members of their targeted entity. The process of selecting websites to compromise was initially dubbed “strategic web compromises.”
Attackers insert an exploit into the selected sites.
Once targeted victims visit the compromised site, the exploit takes advantage of software vulnerabilities, either old or new, to drop malware. The dropped malware may be in the form of a remote access Trojan (RAT), which allows attackers to access sensitive data and take control of the vulnerable system.
Where is this attack technique used?
Watering hole attacks were previously documented in several high-profile cases which include:
Attack on high-profile groups. Just before the end of 2012, the Council on Foreign Relations (CFR) website was compromised to host a zero-day exploit in Internet Explorer. Those who visited the site were served with a backdoor malware. Microsoft addressed this vulnerability though the Microsoft Security Bulletin MS13-008.
Why is it effective?
Attackers incorporate strategies to circumvent the targeted organizations’ defenses in order for watering hole attacks to be effective. These may come in the form of outdated systems or simply human error.
In watering hole attacks, the goal is not to serve malware to as many systems as possible. Instead, the attackers run exploits on well-known and trusted sites likely to be visited by their targeted victims. This makes the watering hole technique effective in delivering its intended payload.
Aside from carefully choosing sites to compromise, watering hole attacks are known to incorporate zero-day exploits that target unpatched vulnerabilities. Thus, the targeted entities are left with little or no defense against these exploits.
This doesn’t mean that attackers ignore vulnerabilities for which patches already exist. Because of patch management difficulties in an enterprise setting, IT administrators may delay deploying critical updates. This window of exposure may lead to a targeted attack leveraging old, but reliable, vulnerabilities.
Who are the targets of a watering hole attack?
The watering hole technique is used in targeted attacks that aim to gather confidential information and intelligence from the following organizations:
Human rights groups
The stolen information, in turn, may be used to initiate more damaging attacks against the affected organization.
What is the impact of these attacks?
The social engineering technique used in watering hole attacks is strategic. Unlike a usual social engineering attack, threat actors employing the watering hole technique carefully select the most appropriate legitimate sites to compromise, instead of targeting random sites. Because the watering hole technique targets trusted and frequented sites, relying on solely visiting trusted sites to avoid online threats may not be an effective practice.
In cases where watering hole attacks lead to a RAT, attackers can also execute commands on infected systems, including spying on and monitoring the activities of the target organization. Because the attackers have infiltrated the targeted organization’s network, they can also initiate attacks that harm the organization’s operations, such as modifying or deleting files containing crucial information.
We may see more watering hole attacks in the future. Trend Micro vice president for cyber security Tom Kellermann predicted that, because of its superior methodology, the watering hole technique would become a more popular way to pollute trusted sites in 2013.
What can I do to prevent these attacks?
Timely software updating. For watering hole attacks that employ old vulnerabilities, an organization’s best defense is to update systems with the latest software patches offered by vendors.
Vulnerability shielding. Also known as “virtual patching,” this technique operates on the premise that exploits must take a definable network path to reach a vulnerability. Vulnerability shielding helps administrators scan for suspicious traffic and deviations from typical protocol usage, empowering them to block exploits before they land.
Network traffic detection. Though attackers may incorporate different exploits or payloads in their attack, the traffic generated by the final malware when communicating with the command-and-control servers remains consistent. By detecting these communications, organizations can readily implement security measures to prevent the attack from further escalating. Technologies such as Trend Micro Deep Discovery can aid IT administrators in detecting suspicious network traffic.
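To make the idea concrete, below is a minimal sketch of the kind of check that network traffic detection automates: matching outbound connections against an indicator feed and flagging hosts that “beacon” to the same destination at suspiciously regular intervals. The blocklist entries and log format are hypothetical, and a production tool such as Deep Discovery does far more than this.

    # Flag outbound connections to known-bad addresses and hosts that
    # "beacon" (reconnect to one destination at near-constant intervals).
    from collections import defaultdict
    from statistics import pstdev

    KNOWN_C2 = {"203.0.113.10", "198.51.100.77"}  # hypothetical indicator feed

    def find_suspects(conn_log):
        """conn_log: iterable of (timestamp_seconds, src_ip, dst_ip) tuples."""
        alerts = []
        timeline = defaultdict(list)
        for ts, src, dst in conn_log:
            if dst in KNOWN_C2:
                alerts.append((src, dst, "matches known C2 indicator"))
            timeline[(src, dst)].append(ts)
        for (src, dst), times in timeline.items():
            if len(times) < 5:
                continue
            gaps = [b - a for a, b in zip(times, times[1:])]
            if pstdev(gaps) < 2.0:  # near-constant spacing suggests beaconing
                alerts.append((src, dst, "regular beacon-like interval"))
        return alerts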
Correlating well-known APT activities. Using big data analytics, organizations can gain insight into whether they are affected by a targeted attack by correlating and associating in-the-wild cybercrime activities with what is happening on the enterprise’s network.
Organizations should also consider building their own local intelligence to document previous cases of targeted attacks within the company. These enable organizations to spot possible correlations and insights needed to create an effective action or recovery plan.
“Watering hole attacks will grow in popularity as polluting trusted websites is a far better targeted attack methodology than targeting individual users.” – Tom Kellermann, vice president for cyber security
"While cybercriminals use “drive-by” exploits to indiscriminately compromise as many computers as they can, the use of this technique in relation to APT activity is what Shadowserver aptly described as “strategic web compromises. The objective is to selectively target visitors interested in specific content. Such attacks often emerge in conjunction with a new drive-by exploit." – Nart Villeneuve, senior threat researcher | <urn:uuid:5e07d6c2-2449-4ebe-a050-1283021df885> | CC-MAIN-2017-09 | http://www.trendmicro.com.au/vinfo/au/threat-encyclopedia/web-attack/137/watering-hole-101 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00100-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937942 | 1,188 | 3.09375 | 3 |
Emerging Access and Authentication Methods for Healthcare
Medical records are now, by and large, available in electronic form – in fact, almost 8 in 10 physicians use EHRs. Conveniently accessing them in a secure and compliant way is the challenge that everyone involved in the healthcare industry faces. In 2015, the top three healthcare breaches alone resulted in more than 100 million compromised records. While full details of these attacks have not been disclosed, the key for criminals is often stolen credentials – whether those belong to a user, an administrator, or someone else with privileged system access. These attacks show bravado and hit the major headlines. Alongside the big hacks, there is a growing rash of small crimes at healthcare facilities: stolen medications, illicitly written prescriptions and theft of targeted individuals' health records. For example, at a Cleveland clinic, four nurses were accused of stealing patient medications such as Oxycodone (an opioid painkiller sought after by drug addicts).
Implementing strong access and authentication controls is the next step healthcare organizations must take to comply with HIPAA and to harden the attack surface against sophisticated criminals and petty insider thieves alike. Healthcare organizations are still standardizing on the right approach – let's take a closer look at some of the technologies currently in use and explore them from both a security and a hacker's perspective.
RFID (Radio Frequency Identification)
You may have one and not even know it. RFID technologies make up the majority of the market; most white access badges that you swipe to gain access to a door, or potentially a computer, have sophisticated microcircuitry built in. Some of the amazing things that you might not know about RFID:
- There is no battery! The circuitry is powered by the energy it receives from the antenna when it is near a card reader.
- Some RFID chips can contain up to 1K of data; that doesn't sound like a lot, but it is enough to hold your name, address, social security number and perhaps your last transaction.
- RFID chips can be so small they may be imperceptible. Hitachi has a chip that is 0.15 x 0.15 millimeters in size and 7.5 micrometers thick. That is thinner and smaller than a human hair.
The good news for security professionals at healthcare organizations is that there are many choices and uses for RFID technology. Cards and readers purchased in mass quantities drive the price down and provide a homogeneous system that is easy to administer as it becomes part of the onboarding and provisioning process. In addition to door access for staff, RFID cards can be given to patients on check-in so that they have another form of identification. The bad news is that hackers are drawn to consistent, well-documented systems, and they like hacking esoteric data transmissions like the ones RFID uses. Using inexpensive parts that are on my workbench, like an Arduino microcontroller, a criminal could build a system to capture the transmission, essentially clone the data on a card, and then pose as an insider.
There seems to be an ever-growing array of biometric devices: vein readers, heartbeat sensors, iris readers, facial recognition and fingerprint readers. When implemented properly, a live biometric – that is, a device that samples both a unique physical characteristic and liveness (a pulse, for example) – is almost always a positive match; in fact, fingerprint reading is used at border control in the US and other countries. There have been hacking demonstrations with molded gummy fingers, scotch-tape fingerprint lifts and even, supposedly, cutting off a finger. Those attacks are at the far end of practical hacking, as they are neither repeatable nor easy for a criminal. The hurdles that biometrics face are:
- Near-100% match – This is good news, as we truly want to admit valid users; however, skin abrasions, irregular vital signs, and aging are just some of the factors that can cause the current set of biometrics to produce false rejections.
- Processing time – There are several steps to the fingerprint and biometric authentication process. Reading, evaluating the match, then validating with an authentication service can take up to a second. The process is not instantaneous – I can enter my password faster on my iPhone than I can get a positive fingerprint match. Doctors, nurses, and patients simply don't have the seconds to spare.
- Convenience – Taking off gloves or staring at a facial or retinal reader is simply not an option when staff are serving potentially hundreds of patients a day.
As the technology and processing improve, I think we will see a resurgence of biometrics in healthcare, but for now my local clinic has decommissioned its vein reader.
Bluetooth technology is becoming ubiquitous. It is being built into almost all devices – some estimate that it will be in 90% of mobile devices by 2018. Bluetooth is still emerging in the healthcare market, which is dominated by RFID; however, Bluetooth has several advantages over RFID cards:
- Contactless – Bluetooth Low Energy relies on proximity rather than physical contact. This might not seem like a huge advantage, but in a high-traffic, critical setting such as an emergency room, seconds count. In addition, systems that require contact, such as a card swipe or tap, require maintenance to keep the contact clean.
- BYOD cost – For smaller clinics and cost-conscious organizations, using employee devices as a method of authentication may be the way to go, as they will not incur the expense and management of cards and proprietary readers. In fact, a Bluetooth reader can be purchased for as little as $4, compared with $100 card readers.
- BYOD convenience – Many organizations recognize an added convenience factor in using their employees', partners' and customers' mobile devices as a method of authentication. Individuals are comfortable with, and interested in, using their phones as access devices. Administrators can quickly change access controls just in time for access to different applications, workstations and physical locations, rather than having to restripe cards.
On the hacker side, Bluetooth signals, just like RFID, can be cloned; however, combined with an OTP (one-time password) as another layer of authentication, criminals could be thwarted.
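For illustration, here is a minimal sketch of the time-based OTP (TOTP) scheme from RFC 6238 that such a second factor could use; a real deployment would rely on a vetted library and securely provisioned secrets rather than this toy code.

    # Toy TOTP generator/verifier (RFC 6238), standard library only.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, interval=30, digits=6):
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // interval          # current time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    def verify(candidate, secret_b32):
        return hmac.compare_digest(candidate, totp(secret_b32))

Pairing a cloneable proximity credential with a short-lived code like this means a captured Bluetooth or RFID transmission alone is not enough to get in.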
I contacted Jim Gerkin, Identity Director at NovaCoast, and he mentioned that we may see an uptick in small and mid-sized clinics using authentication devices in 2017. They are looking for cost-effective, open-standard systems based on FIDO standards. Bluetooth has the potential to meet requirements from both a cost and a security perspective – again, if OTP is used in conjunction.
The good news is that Micro Focus's Advanced Authentication works with multiple types of authentication methods, whether legacy systems, RFID, biometrics or now Bluetooth. In addition, Micro Focus is part of the FIDO Alliance, which ensures a standardized approach. I look forward to evaluating emerging authentication technologies in 2017 that may use DNA, speech recognition and other nano-technology – watch this space!
We carry miniature computers around with us – but are we doing enough to protect them and our data? [Article written by Nick Booth, and published in The Review, October 2012]
We take them for granted these days, but smartphones are actually miniature computers with more processing power than NASA used to put a man on the moon. Thanks to mobile broadband, smartphones now outsell PCs and laptops because they can do anything that a desktop computer can do - and more.
But while we routinely protect our PCs from viruses and hackers, a third of mobile users have no protection, according to McAfee research. This makes us - and our devices - incredibly vulnerable. If a criminal can plant some software on your phone, they can take control of it, steal all your banking details, spy on you and run up huge phone bills on your account. And with the number of smartphone users increasing by the day and ever more services being created, there is an increasing need for vigilance.
Criminals only need to find one open window for the virus writer, hacker or identity thief to steal everything. So what are the windows of opportunity for criminals, and how do you close them down? The most obvious way to safeguard your privacy is to passcode-protect your phone in case you lose it. Some handset vendors now offer biometric recognition; Motorola, for example, created a fingerprint sensor for the Atrix, its Android mobile phone.
Malware – rogue software used for criminal purposes – is the next biggest threat. Criminals can fool you into allowing rogue software onto your phone when you download apps, respond to texts or visit Facebook. As with desktop PCs, downloading apps from unknown sources is the biggest risk, as they can be conduits for malware. But it is SMS texting, still phones' most-used feature, that creates a hacker's biggest opportunity to steal from you. Mobile malware can make your phone send thousands of premium-rate SMS texts, and you won't even know it until your six-figure phone bill arrives. By the end of 2011, there were 130,000 malware apps in existence for Android phones alone, according to Trend Micro, and most were for SMS fraud.
Even legitimate mobile apps have their security vulnerabilities, and cybercriminals are finding these coding weaknesses and beginning to load their rogue code into them.
The moral is that you must never assume your software is safe, even if it comes from a reputable supplier. So how do you minimize the risk of falling prey to all these online threats? Here are some strategies to adopt.
Limit the number of downloads you make. The sites you visit most frequently are also likely to be havens for criminals, who try to exploit popular apps, URLs, attachments, social media or email. By clicking a link or downloading an attachment on your mobile device, you may end up installing mobile malware instead.
App stores are a danger area. Although the proprietors try to monitor their stores for malware, rogue software vendors can sneak in. Malware disguised as a stock market app – that was actually designed to steal information from the downloader’s device – made it into the iTunes App Store recently.
Apple users should avoid the temptation to "jailbreak" their iPhones using software that allows them to break out of the confines of iOS. This can lead to a malware invasion. If you use an Android phone, jailbreaking isn't an issue, as Android imposes no such boundaries. That's not to say Android phones are risk-free, however: in the last seven months of 2011, malware targeting Android grew by 3,325%, and Android malware accounted for about 46.7% of unique malware samples, according to Juniper Networks. Google is now attempting to secure its app market with an internal malware detector called Bouncer that scans apps submitted to the Android Market.
Even the most vigilant mobile users drop their guard at times, so it is vital to install security management systems. These software solutions and gadgets will create a secure foundation. The rest is up to you.
Robert Winter, 48, mobile data recovery manager, UK
1 - Go into the Settings menu and set up a passcode for your phone.
2 - While in Settings, Android users should turn off the Access from Unknown Sources option.
3 - Check the reputation of any publisher before you buy an app from it.
4 - When you install an app, check the permissions it asks for. Be very careful about granting any. No game app needs to know your contacts or location.
5 - Watch out for social media – hackers are now placing malicious links on your friends’ profiles that install malware on your device when you click them.
6 - Keep your phone updated with the latest security firmware to correct possible vulnerabilities.
7 - Block the installation of rogue software by using the Tools menu of your internet browser to disable Java.
8 - Don’t trust public Wi-Fi, especially for financial or other secure personal transactions. | <urn:uuid:d7666f11-ecf2-4dec-bd6f-34f42c35743b> | CC-MAIN-2017-09 | http://www.gemalto.com/mobile/inspired/mobile-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00096-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.927265 | 1,022 | 2.609375 | 3 |
Cloud Computing: Exciting Future for IT Virtualization
Cloud computing is a relatively new (circa late 2007) label for the subset of grid computing that includes utility computing and other approaches to the use of shared computing resources. Cloud computing is an alternative to having local servers or personal devices handle users' applications. Essentially, the idea is that computing capability should "hover" over everything and be available whenever a user wants it.
Although the early publicity on cloud computing was for public offerings over the public Internet by companies such as Amazon and Google, private cloud computing is starting to come of age. A private cloud is a smaller, cloudlike IT system within a corporate firewall that offers shared services to a closed internal network. Consumers of such a cloud would include the employees across various divisions and departments, business partners, suppliers, resellers and other organizations.
Shared services on the infrastructure side such as computing power or data storage services (or on the application side such as a single customer information application shared across the organization) are suitable candidates for such an approach. Of course, IT virtualization would be the basis of the infrastructure design for the shared services, and this will help drive energy efficiency for our green data centers of the future.
Because a private cloud is exclusive in nature and limited in access to a set of participants, it has inherent strengths with respect to security aspects and control over data. Also, the approach can provide advantages with respect to adherence to corporate and regulatory compliance guidelines. These considerations for a private cloud are very significant for most large organizations.
Cluster architecture for virtual servers
There are now many IT vendors offering virtual servers and other virtual systems. Cluster architecture for these virtual systems provides another significant step forward in data center flexibility and provides an infrastructure for very efficient private cloud computing. By completely virtualizing servers, storage and networking, an entire running virtual machine can be live-migrated from one physical server to another with negligible downtime.
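As a concrete illustration, live migration of this kind can be scripted against a hypervisor management API. The sketch below uses the libvirt Python bindings; the host names and VM name are assumptions, and a real environment would add error handling and shared-storage checks.

    # Hedged sketch: live-migrate a running VM between two KVM hosts.
    import libvirt

    src = libvirt.open("qemu:///system")
    dest = libvirt.open("qemu+ssh://host2.example.com/system")  # assumed peer host

    dom = src.lookupByName("shared-services-vm")  # hypothetical VM name
    # VIR_MIGRATE_LIVE copies memory while the guest keeps running.
    dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)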
The pace of change for Information Technology is challenging established notions of "What is IT?" and "What is Information Security in the modern age?" For one example, the "new" data center technologies such as virtualization, Software-Defined Networking (SDN), service-oriented delivery models, and cloud computing have radically changed the typical IT infrastructure from a defined set of assets owned and controlled by the organization to a constantly fluctuating roster of resources that can come and go from IT department visibility and control.
As this has occurred, we have witnessed the equivalent of a Cambrian explosion of new Internet-connected life forms: mobile devices, tablets, sensors, actuators, home appliances, monitoring systems, content access devices, and wireless terminals. Applications running on these devices range from recreation to services critical to the functioning of our social and economic infrastructure. Put it all together, and we expect that the world population of Internet-connected devices will grow from today's 10 billion to over 50 billion by the year 2020.
From a security point of view, these IT changes, including the expansion of Internet-connected devices, lead to a corresponding increase in attack surface. Instead of the mission of protecting a reasonably known and enclosed IT perimeter, we now must be ready to secure any connected device humans can make against any threat a hacker can innovate. Clearly, applying established security practices, merely at a larger scale, will not suffice.
Plainly said, we need to think differently about cybersecurity.
One classic strategy and two new ones
The challenges I just described may sound overwhelming, but I remain optimistic that methods exist to contain damage to the assets, processes, and people that make use of information technology. Ironically, some of this is a case of what is old being new again, while the rest calls for genuinely new approaches. Of the many to surface, I'd like to talk about three in particular.
Do the basics and do them well
This includes taking a diligent approach to software patching, user identity management, network management, and eliminating any dark space in your infrastructure. The main objectives in this endeavor include reducing attack surfaces available to adversaries and basing resource access policies on need-to-know/need-to-use principles. Even just getting better at patching can reduce available attack surface by 70 percent. Organizations that perform thorough asset inventories are often surprised by how many previously undocumented systems they discover connected to their network.
This do-the-basics strategy might sound commonplace, but it can be quite demanding when one takes into account the diversity and sheer numbers of devices and systems that today's IT operations must secure. A sophisticated identity management program that brings together the latest strong password, federated identity, privilege management and anomalous behavior detection technologies would not have been possible a few short years ago, but it can go far in improving the ability of security teams to prevent, see, and contain security incidents.
Strive to spread doubt and confusion in the adversary's mind
There are plenty of ways to do this. You can start by making your infrastructure a moving target by changing addresses, infrastructure topologies, and available resources daily. An activist approach to virtualization makes it possible to build up and tear down resources at will. SDN technology can virtualize the deception process while streamlining the process of building security management and control features into the network fabric. In short, do what you can to prevent the adversary from seeing the same infrastructure twice.
You can also set up honey pots and Potemkin villages on your network that can waste the adversaries' time, divert them from real assets, lead them to tainted intellectual property, or cause them to stumble into alarms that announce their presence in your domain. At their most advanced, these techniques can shake adversaries' confidence in their hacking prowess and increase their anxiety over being caught, exposed and prosecuted.
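As a small illustration of the honey pot idea, the sketch below listens on an unused port, presents a decoy banner, and logs every probe; the port, banner, and log path are arbitrary choices, and real deception platforms are far more elaborate.

    # Minimal honey pot: a fake service that logs whoever touches it.
    import socket, datetime

    def run_honeypot(port=2222, logfile="honeypot.log"):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(5)
        while True:
            client, addr = srv.accept()
            stamp = datetime.datetime.utcnow().isoformat()
            with open(logfile, "a") as log:
                log.write(f"{stamp} probe from {addr[0]}:{addr[1]}\n")
            client.sendall(b"SSH-2.0-OpenSSH_6.0\r\n")  # decoy banner
            client.close()

Any connection to a service that no legitimate user should ever touch is itself a high-fidelity alarm.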
Collect, correlate, and analyze as much operational data as you can
This strategy is significant as it signals a shift in the remediation mode to detecting and defeating attacks and intrusions quickly and thoroughly when they do occur. In the data, you are looking for Indicators of Compromise (IoCs) -- anomalous device or user behavior, network traffic to and from known addresses, and other tip-offs. Data subject to analysis can include local telemetry from your infrastructure, information and intelligence from beyond your infrastructure, or data traffic that doesn't conform to normal patterns of activity.
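A toy version of this correlation, shown below, joins local DNS telemetry against an external indicator feed and applies a crude per-host volume baseline; the feed format, log schema, and threshold are all assumptions.

    # Sketch: correlate local DNS logs with an external IoC feed.
    import json

    def load_ioc_domains(path):
        with open(path) as fh:
            return {e["value"] for e in json.load(fh) if e["type"] == "domain"}

    def correlate(dns_log_lines, ioc_domains, baseline_queries=500):
        hits, per_host = [], {}
        for line in dns_log_lines:             # e.g. "host42 evil.example.net"
            host, domain = line.split()
            per_host[host] = per_host.get(host, 0) + 1
            if domain in ioc_domains:
                hits.append((host, domain, "IoC match"))
        for host, count in per_host.items():
            if count > 10 * baseline_queries:  # crude anomaly check
                hits.append((host, count, "query volume far above baseline"))
        return hits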
Changing your mental approach is just as essential
This new approach carries with it a nontrivial change in how we think about security. Formerly, we thought of security as defending perimeters and hardening assets against attack. The new model calls for assuming that if people, things, and business processes haven't been compromised, they will be shortly. Established security tools and products like firewalls, security appliances, and anti-malware software do a good job of blocking known threats and leave us freer to detect, recognize, and contain those threats that manage to slip through basic defenses.
Increasingly, we have come to understand that the most dangerous threats do their work quietly and quickly, and then disappear. A threat of this kind will typically wreak its damage in minutes, hours or days. By contrast, too many security teams require days, weeks, or months to discover and remediate an intrusive threat of this kind. That's not good enough.
We also need accountability shifts, a measure by which to define efficacy, and a willingness to "break some glass" to change what we have; otherwise, we will continue to get more of what we have today, and that isn't acceptable.
The strategies recommended in this article do three things to make adversaries' life more difficult:
- Shrink attack surfaces and vulnerabilities through the basics
- Shift the burdens of fear, uncertainty, and doubt onto the bad actors
- Reduce latencies between the moment a threat lodges in an infrastructure, its detection, and its disposal
While we have a challenging road ahead to secure IT-enabled social and economic processes from deliberate harm, new technologies, network intelligence, and new ways of thinking about cybersecurity itself give us a fighting chance.
John N. Stewart is the CSO of Cisco Systems, Inc.
This story, "3 Strategies for the New Era of Enterprise Cybersecurity" was originally published by CSO. | <urn:uuid:38841814-18bd-4c2e-b5f6-24b5b5531b19> | CC-MAIN-2017-09 | http://www.cio.com/article/2375420/cybercrime/3-strategies-for-the-new-era-of-enterprise-cybersecurity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00444-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948382 | 1,299 | 2.546875 | 3 |
For the first time in history, more than half of the people in the world live in urban areas. Now more than ever, cities are supporting rapidly increasing populations while struggling to maintain services, operations, and quality of life for their inhabitants.
As cities grow, the task of understanding how they work is becoming a pressing global issue. Currently, about 80 percent of the U.S. population and about 50 percent of the world's population reside in urban areas, with urban populations growing by over 1 million people per week. In the face of unprecedented growth, cities are faced with a unique challenge: refurbishing and maintaining existing infrastructures to support their current inhabitants while also planning sufficiently to accommodate future populations. If growth patterns continue at this speed, by 2050, 64 percent of people in the developing world, and 85 percent of people in the developed world, will call an urban area their home.
But while global urbanization seemingly presents myriad challenges, it also offers a potential solution – in the form of data. Thanks to the digital revolution, we now have more information at our disposal than ever before, and the amount of data that urban areas are generating is truly staggering. In New York City alone, the local government creates a terabyte of raw data every day, with information on everything from parking tickets to electricity.
So how do we use this data to extract valuable information? The emerging field of urban science is dedicated to answering that question.
Scientists and governments are finding ways to unite two extraordinarily profound developments in human history: the digital revolution and global urbanization. The result is the nascent field of urban informatics, the use of data to better understand how cities work. Using urban informatics, large-scale data and analytics can be interpreted to address problems and create solutions for operations, planning, and development.
The chief task of urban scientists is to give structure and new meaning to the sea of information that people produce every day. Cities collect data from two main sources: the digitized records of commercial and government files from years past, and the ever-growing pool of sensors and data-collection tools throughout our society.
The Role of Physicists
Given the amount of data available to interpret, many urban scientists have adopted methods from another field: physics. In the past, the set of tools and methods that physicists use has been applied to other sciences, such as astronomy and biology. Physicists are trained to solve complicated problems, handle large data sets, develop new instrumentation, work with interdisciplinary teams, and apply procedures to avoid self-deception. They have a tradition of organizing large groups of scientists focused on specific research questions. The sheer amount of data that a metropolis can produce makes urban science studies especially suited to the same concepts of scientific inquiry that physicists use on a daily basis.
Urban Informatics in Action
By bringing big data into the public sphere, researchers can analyze and improve the ways in which city agencies work together to provide services, as well as the ways in which they interact with their citizens. Sensors can report real-time traffic conditions, utility supply and consumption, public transportation activity, environmental quality, and crime.
In a recent project, my colleagues at New York University's Center for Urban Science and Progress and I began monitoring noise in Brooklyn, NY. We mounted sound sensors on streetlight poles and building facades to gauge the volume of house parties and car horns. This data can be distributed to city agencies, giving officials research that can help them enforce noise ordinances that are often overlooked in large cities like New York.
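The on-sensor processing for a project like this can be quite simple. The hedged sketch below converts raw microphone samples into a relative decibel estimate and flags readings above a threshold; the calibration reference and limit are illustrative, not the values our sensors actually use.

    # Sketch: estimate loudness from normalized ADC samples and flag spikes.
    import math

    def rms_db(samples, reference=1.0):
        """samples: floats in [-1, 1] from the sensor's ADC."""
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20 * math.log10(max(rms, 1e-12) / reference)

    def flag_violations(readings, limit_db=-20.0):
        """readings: list of (timestamp, samples); limit is sensor-relative."""
        return [ts for ts, samples in readings if rms_db(samples) > limit_db]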
Big data offers the potential to provide citizens with new ways to observe and interact with their cities. Officials can track smartphones to understand road congestion and send more accurate news alerts out to the public. Knowing pollution levels block by block can help families choose where to live. In addition to these more practical uses, social media tools like Facebook, Twitter, and other mobile devices provide detailed data streams on what people are doing, how they are feeling, and what they are observing. In aggregate, these data streams are signatures of the functioning of the metropolis and the quality of life of its inhabitants. This information can be collected and analyzed to identify issues and provide insight into potential solutions for everything from public Wi-Fi connection problems to neighborhood crime levels.
Beyond its potential impact on urban life, municipal data can provide a valuable resource to a city’s economy. Knowledge of noise and pollution levels can help cities to collect greater revenue for violations. Retailers may use pedestrian traffic data to choose optimal store locations. Sensors in trash cans can help the sanitation department optimize collection schedules and routes.
As urban science continues to evolve, researchers are seeking new ways to identify new uses for the data that cities collect every day. With this information and these data-collection tools at our disposal, we hope to refurbish and improve existing cities, and to build future cities with efficiency, quality of life, and resilience in mind.
Steven E. Koonin was appointed as the founding Director of NYU’s Center for Urban Science and Progress in April 2012. This consortium of academic, corporate, and government partners pursues research and education activities to develop and demonstrate informatics technologies for urban problems in the “living laboratory” of New York City. Prior to his NYU appointment, Dr. Koonin served as the second Under Secretary for Science at the U.S. Department of Energy from May 2009 through November 2011. In that capacity, he oversaw technical activities across the Department’s science, energy, and security activities and led the Department’s first Quadrennial Technology Review for energy. Before joining the government, Dr. Koonin spent five years as Chief Scientist for BP plc, where he played a central role in establishing the Energy Biosciences Institute. Dr. Koonin was a professor of theoretical physics at California Institute of Technology (Caltech) from 1975-2006 and was the Institute’s Provost for almost a decade. He is a member of the U.S. National Academy of Sciences and the JASON advisory group. Dr. Koonin holds a B.S. in Physics from Caltech and a Ph.D. in Theoretical Physics from MIT (1975) and is an adjunct staff member at the Institute for Defense Analyses.
Subscribe to Data Informed for the latest information and news on big data and analytics for the enterprise. | <urn:uuid:83d6dbdd-fa37-49f5-a4a3-fb9c3fe87e17> | CC-MAIN-2017-09 | http://data-informed.com/urban-informatics-putting-big-data-to-work-in-our-cities/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00320-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.939251 | 1,313 | 3.359375 | 3 |
BARCELONA -- Intel researchers envision a future of driverless smart cars that can be updated at any time with the latest technology and apps.
Intel hopes to play a major role in the new age, creating small, energy-efficient multi-core chips that can make cars more intelligent.
"In the next generation, we are talking about quad-core," said Michael Konow, an Intel engineering manager based in Germany.
"We are looking far ahead to safe driving cars," he said. "We would need a lot of compute power for a car to understand that if there's a ball rolling on the street, there might be a kid running after it. This is very, very difficult. As humans, we have intuition. We need to find a way to get this intelligence into the system."
Konow, who presented a smart car demo at Intel's European Research and Innovation Conference here today, told Computerworld that the auto industry is several years away from having many-core chips in cars, but that lab work on the technology is well underway.
"A car that drives autonomously and has a 100% guarantee that an accident won't happen would require a lot more compute performance," he said. "How much? We don't really know yet."
Today's cars, said Konow and Enno Luebbers, a research scientist at Intel Labs Europe, are getting overloaded with single-core chips. That's a problem, because eventually there won't be enough room for the additional chips that would be necessary to accommodate the ever-growing user demand for new functionality.
Adding a new function requires adding a new chip, said Konow. That means "you [might need] more than 100 single cores in one high-end car," he said. "You cannot keep up this trend."
With so many single-core chips stuffed into one vehicle, onboard computer systems are becoming too large and complicated, he said.
The goal now is to save power and space, "which is critical because there is basically no space left," added Konow. "[Researchers] are trying to come up with weird shapes of boxes to squeeze them into the tiny amount of space left."
When automakers are able to integrate multi-core chips -- from quad-core to 8-core, 12-core and beyond -- into vehicles, they'll be able to add a lot more functionality, such as updated navigation options, more safety features and social applications.
Today, you have to buy a new car if you want the latest automotive apps. In the future, automakers will offer programmable cars, and users will be able to simply download new apps or upgrades if they want state-of-the-art systems, say Intel execs.
"It's almost like 'What applications wouldn't you want in your car?'" said Intel CTO Justin Rattner. "Once the car is a programmable platform, you'll see all kinds of innovation."
Rattner noted that the smarter cars could work together to make commutes easier.
For instance, cars could have sensors, cameras and computer chips programmed to report potholes to road maintenance crews, and to report traffic jams or accidents to other cars in the area.
In-car apps also could tell drivers which local parking garage has spaces available, or if any of their friends are driving nearby.
"We'll start thinking of our cars more like we think of our laptops and phones -- updateable," said Luebbers.
"For me, it's about synthesis," said Martin Curley, director of Intel Labs Europe. "We're thinking about how these can be integrated into a system of systems that helps us achieve a sustainable society."
Luebbers said a key challenge for engineers working on smarter cars is to ensure safety and security. It's one thing for an entertainment system to be breached; it's another for hackers to access a rearview monitoring system, for instance.
"One of the main challenges is integrating functions of different criticality," he said. "You have to treat the testing and development differently."
A higher level of security will be necessary when we start driving connected cars.
"Over the last 20 or 30 years, [onboard car computers] weren't built with security in mind. It was not required," said Konow. "[Automakers] were looking to save costs. They did not need to design it to be secure."
Widespread connectivity, though, presents the potential for significant problems, he said. "[Automakers] don't want to re-engineer a whole system, but have to find a way to protect systems from external attacks."
Luebbers also noted that car makers have traditionally focused on making sure vehicles did not fail by accident. Now they have to focus on making sure they do not fail because of a digital attack.
That, he said, forces OEMs to think about security in a new way.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, or subscribe to Sharon's RSS feed . Her email address is email@example.com. | <urn:uuid:10f8e2b3-daee-435d-ad4e-8ce495eecc9d> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2492782/emerging-technology/intel-readies-for-programmable-smart-cars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00016-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.96768 | 1,073 | 2.5625 | 3 |
In a move to fight back against governments that try to block their citizens' Internet access, Google released tools to keep people around the world online.
"As long as people have expressed ideas, others have tried to silence them," wrote Jared Cohen, director of Google Ideas in a blog post. "Today one out of every three people lives in a society that is severely censored. Online barriers can include everything from filters that block content to targeted attacks designed to take down websites. For many people, these obstacles are more than an inconvenience -- they represent full-scale repression."
Bringing together security experts, entrepreneurs and dissidents, Google focused the summit on talking about the changing nature of conflict along with ways to address online censorship.
At the summit, Google took the wraps off uProxy, which acts as a digital underground railroad, connecting people in censored areas with a pathway to an online connection. uProxy is a browser extension, which Google said is still under development. It is designed to let people in the U.S. or Canada, for example, provide friends in countries where Internet access is restricted with a connection to the Web.
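Conceptually, the idea resembles a simple traffic relay run on a trusted friend's machine, as in the sketch below. This illustrates the general pattern only, not uProxy's actual implementation, which adds encryption, authentication and peer discovery.

    # Conceptual sketch: relay a user's traffic through a trusted friend's host.
    import socket, threading

    def pipe(src, dst):
        """Copy bytes one way until the connection closes."""
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def relay(listen_port, upstream_host, upstream_port):
        srv = socket.socket()
        srv.bind(("0.0.0.0", listen_port))
        srv.listen(1)
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection((upstream_host, upstream_port))
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()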
Google Ideas funded the research for the tool developed by programmers at the University of Washington and at the nonprofit Brave New Software.
Dan Olds, an analyst with The Gabriel Consulting Group, said uProxy can be a helpful tool for anyone living in a country, such as Iran and Sudan, where the governments have sometimes blocked online access.
"Yes, it can definitely help but users would need to have a friend in another country that they can connect through in order to use this tool," Olds said. "And the level of trust between the two parties needs to be high. The person using the connection needs to trust that his friend truly has a secure access point. And the person who is providing the connection needs to trust that the person using it isn't doing anything illegal."
Google also is working with Arbor Networks to create what they're calling a Digital Attack Map, a real-time map of DDoS attacks on Websites around the world.
According to Google, the map lets users explore historic trends and see related news reports of outages happening on any given day. | <urn:uuid:c982b15d-c7d5-4fdb-846b-d85dfcdb4222> | CC-MAIN-2017-09 | http://www.computerworld.com.au/article/530034/google_fights_internet_freedom_new_tools/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00312-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.965102 | 449 | 2.71875 | 3 |
IBM is working to develop microservers based on low-power processors but isn't sure yet when the systems will be introduced.
The company has already built a prototype board that could function as microserver but has yet to determine what workloads it would be used for, said Gaurav Chaudhry, IBM worldwide marketing manager for System x high-performance computing.
Dozens of microservers packed into a chassis could reduce energy use, space requirements and cost compared to a smaller number of traditional 1U or 2U servers, he said, but IBM is still researching the systems and figuring out who would be the target audience.
After a few delays, Hewlett-Packard launched its first Moonshot server earlier this year and recently updated it with Intel's newer Avoton chips. The server is aimed at Web-scale workloads and HP claims Moonshot is 90 percent more energy-efficient than a more traditional Proliant DL380 server.
IBM, not surprisingly, says it's determined to do better. "We want to go out and beat the competition," Chaudhry said.
While it works toward microservers, the company is also introducing other types of high-density server design.
On Tuesday it introduced the NeXtScale System, a rack server system that can accommodate up to 84 x86-based systems and 2,016 processor cores.
One component of the rack is the NeXtScale n1200, a 6U enclosure that can hold up to 12 NeXtScale nx360 M4 servers. Each NeXtScale nx360 M4 server will be able to host up to two processors, up to 256GB of RAM, two hard drives or four solid-state drives. The server is targeted at analytics, databases and technical computing.
The 6U enclosures can share cooling resources and power supplies, reducing power costs.
"You cut down on the number of power supplies ... and double the density," Chaudhry said.
The first NeXtScale System will be based on Intel's Xeon E5-2600 v2 chips, which were announced Tuesday at the Intel Developer Forum. The new server chips, built on the Ivy Bridge architecture, will have up to 12 cores and draw between 70 watts and 130 watts of power.
"The whole idea is there are different people who have different requirements. Cloud guys don't care about a ton of compute, they care about density," Chaudhry said.
Each server board has its own storage and network components and connects directly to the top of the rack for networking. Other components, such as graphics processors, can be attached to a PCI-Express 3.0 slot, and a mezzanine card can bring different types of networking such as InfiniBand into the servers.
"We thought a lot about it. Turns out 6U is the optimal space for many things that we can do to satisfy the industry requirements," Chaudhry said
The NeXtScale chassis is flexible and could be extended to microservers in the future, Chaudhry said, adding that IBM didn't want to have to go back and design a different chassis when its microservers come out.
The Ivy Bridge-based NeXtScale System will start at US$4,049 and vary in price based on configuration, IBM said.
Editor’s Note: Steve DuScheid is marketing director of Maponics, a developer of polygonal map data, such as neighborhood boundaries, ZIP codes and school attendance zones.
Every year, federal and state government agencies collect, analyze and publish an enormous amount of data — directly and through grants to universities and foundations. Researchers and policymakers often segment this data by geographic area to compare regions, analyze trends and draw conclusions. One challenge to effectively grouping data by geography is finding the right level of granularity suited to answering particular questions. Too often, researchers simply use what’s readily available or must be satisfied with the level of geography inherent in the processes or organizations used to collect it.
Some common geographic entities used to segment and analyze data include: county, ZIP code and U.S. Census Bureau geography (i.e., block groups).
While there are real benefits to using these defined areas — including wide availability, broad geographic coverage, and the ability to link and compare multiple data sets — none of them truly reflect social and cultural boundaries at the local level. Therefore, they may not answer fundamental research questions or address key factors for policy decisions. ZIP codes and similar entities were defined to facilitate and administer government operations and services — and while some may take into account population characteristics — their borders aren’t meaningful to local citizens.
Standard geographic entities will always be important in how researchers analyze data and how policymakers draw conclusions. But with the availability of new geographic data sets and the growing volume of geotagged data, it’s now possible for researchers to consider questions in new ways that align data to the geographic areas most relevant to answering them.
Below are some of pros and cons of using the standard geographic entities in research and some alternatives that offer new ways to look at data.
County. There are many data sets collected and managed at the county level and made available to federal, state and local government agencies. There are many reasons for this — not least of which is the established infrastructure in place within county governments. Also, data at the county level is manageable to work with because there are only about 3,100 counties in the U.S. But counties are far too large (averaging more than 3,000 square miles) and too varied in population (from as few as 45 to as many as 9 million people) to get at many local socio-economic questions. Population groups within counties are often too diverse for researchers to characterize behaviors or outcomes.
ZIP code. Zone Improvement Plan codes were created by the U.S. Post Office Department in 1963 to improve mail delivery service. ZIP Codes are defined and made up of carrier routes, also designed to optimize mail delivery. Researchers are drawn to ZIP Codes for obvious reasons—they are essentially ubiquitous in databases and they can be easily linked to households and related demographics.
Because ZIP codes were so prevalent for data collection and aggregation, beginning with the 2000 Census, the U.S. Census Bureau compiled and released a new set of geographic areas called ZIP Code Tabulation Areas (ZCTAs) intended to align census-tabulated data to ZIP code areas.
While ZIP codes and ZCTAs are generally easy to use, the geographic areas that they represent are a function of process — not people. Other than knowing a ZIP code to address a piece of mail, people don’t use or relate to them and certainly don’t place any cultural significance on their boundaries.
Census. The primary way researchers organize and analyze data is by the geographic entities defined by the Census. This is because when it comes to demographics, almost all data — whether published directly by the Census or by private companies — originates from the core decennial Census dataset. In terms of small-area analysis, the following Census geographic entities are often used (along with the number of each entity as of the 2010 Census): blocks (11.1 million), block groups (220,000) and census tracts (65,000).
Census geography was developed primarily to facilitate, execute and tabulate the decennial census. As a result, it not only covers the entire U.S. and its territories but also is organized into a clean hierarchy, with larger areas (e.g., counties) composed of a set of smaller areas (census tracts). The Census boundaries also largely obey administrative entities, ensuring, for instance, that block groups don’t cross county lines. And while Census entities are designed to be relatively homogeneous with respect to their population characteristics, they are still derived through an administrative process and are not determined organically by the people who live in them. As a result, analysis performed strictly by these geographic units is limited in terms of how well it represents populations segmented according to locally defined boundaries.
When examining cultural and social trends at the local level, neighborhoods are typically the geographic areas that best reflect how local residents think about the places where they live, work and play. People don’t think about the area around them in terms of ZIP Codes or census tracts — in fact, very few people have any idea where these begin and end in the area immediately surrounding their homes and communities. But people can almost certainly identify and describe their neighborhood as well as the surrounding ones. This is, of course, because neighborhoods are social constructs that reflect the history, values and culture of the people who live in them.
In fact, research often cites statistics, characteristics and trends by neighborhood. But in reality, the delimiter used is almost always some kind of neighborhood surrogate, like a census tract. When true neighborhood boundaries are overlaid onto census tracts for the same area, it's clear that the correlation is far from one-to-one.
So, for researchers to adjust U.S. Census geography to conform to areas local citizens identify with, they would need to manually aggregate block groups or census tracts together to align with what people on the ground would consider true neighborhood boundaries. In other cases, census tracts would have to be split to accurately reflect true neighborhood boundaries. For research purposes, neighborhood boundaries would need to be determined and then redrawn. This may be possible at a very small scale but is generally not feasible for larger geographic areas due to the time and expertise needed. At the very least, researchers would need to somehow translate tract numbers to neighborhood names — no trivial task. It isn’t generally meaningful when illustrating a point to say something like, “… as we can see from the results in census tracts 36061006300 and 36061005600 …”
An argument can be made that in small areas, there won’t be a significant statistical difference between using census tracts or block groups compared to true neighborhoods. But it really depends on the area of study. And in many instances, alternate geography can be used to augment traditional methods. After all, looking at intractable problems and policy questions in new ways is the only way to come up with new solutions and ideas.
So how can data be tagged, aggregated and analyzed by neighborhood?
Nationwide Neighborhood Boundaries Data Set. In recent years, geographic data sets have been developed to map tens of thousands of neighborhoods across the U.S. and abroad. Neighborhoods are informal in nature and don’t necessarily follow administrative boundaries or physical features. And while not all local citizens would agree on the exact borders for any given neighborhood, multiple sources can be used to represent a consensus view of the boundaries.
Other Alternate Geography for Small Area Analysis. In addition to neighborhood data sets, there are other alternatives. While neighborhoods are a recognized geographic unit in urban areas, other spaces are important across the suburban landscape. In terms of residential real estate, much of the development in the U.S. during the last half century has been organized around subdivisions — which can include everything from a few homes within a gated community to a development with hundreds of properties. Attributes tied to subdivisions impact everything from quality of life to housing values.
A common research topic is education. Whether stratifying a sample by education level or examining the impact of funding levels on student performance, the relationship between numerous variables and education can be significant. In terms of geography, when looking at the public education system, researchers can use school district boundaries from the U.S. Census. But school districts often cover large areas (nearly 300 square miles on average) and have heterogeneous populations — which can make drawing conclusions about data aggregated by school district difficult.
An alternative geographic entity — and one that is significant for many research questions — are the areas that define which households attend specific public schools. These attendance zones, or catchment areas, have only been available from local school authorities until recently. But there is now detailed attendance zone data available for schools covering more than 70 percent of the U.S. student population.
There are two primary approaches to conducting analysis based on the alternate geographic entities discussed above. Direct methods simply add attributes to data records to assign the proper geographic entity and indirect methods perform some type of translation of data organized by standard entities to alternatives.
Direct. For studies that include source data collection (versus using pre-existing data sets), researchers can simply tag data points with the appropriate alternate geography as they are collected. Also, any data that can be geocoded (basically, data with an address or even just a ZIP code) or that is already geotagged (has latitude/longitude associated with it) can be related directly to any type of geographic entity, including the alternate areas discussed previously. For example, the addresses of a set of health clinics can be geocoded, and once the latitude/longitude is determined, each clinic's location can be resolved to the boundary it falls within. With the proliferation of GPS-enabled devices, there is now a massive amount of geotagged data available. Everything from point-of-sale data to individual tweets is tagged with a lat/lon attribute and can be resolved to, and then analyzed by, virtually any geographic entity.
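A minimal sketch of this direct method appears below, using the open-source geopandas and shapely libraries; the boundary file name and record fields are hypothetical, and it assumes a reasonably recent geopandas (for the spatial join's "predicate" argument).

    # Sketch: resolve geotagged records to the neighborhood polygon they fall in.
    import geopandas as gpd
    from shapely.geometry import Point

    neighborhoods = gpd.read_file("neighborhoods.shp")  # one polygon per neighborhood

    def tag_records(records):
        """records: list of dicts with 'lat' and 'lon' keys."""
        pts = gpd.GeoDataFrame(
            records,
            geometry=[Point(r["lon"], r["lat"]) for r in records],
            crs=neighborhoods.crs,
        )
        # The spatial join attaches each point's containing polygon attributes.
        return gpd.sjoin(pts, neighborhoods, how="left", predicate="within")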
Indirect. In many cases, researchers must combine one or more pre-existing data sets or join collected data to demographics and other statistics that are only available in standard Census geographic areas. In these cases, it’s often still possible to use a variety of statistical and spatial processing methods to transpose data from Census areas to alternatives that are more meaningful for evaluation. For example, if basic demographics are needed as part of data analysis and the data is only available by block group, this data can be transposed to neighborhood areas using several techniques. One approach would be to take the geographic center-point (i.e., centroid) of the block groups and determine which neighborhoods they fall within and aggregate the data accordingly. Or, if more precision is required, the overlay of two sets of geographic entities can be calculated to assign demographic values based on overlay proportions.
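The overlay-proportion approach mentioned above (often called areal interpolation) can be sketched in a few lines; the column names are hypothetical, and both layers are assumed to share an equal-area projection so that polygon areas are meaningful.

    # Sketch: apportion block-group counts to neighborhoods by area of overlap.
    import geopandas as gpd

    def reaggregate(block_groups, neighborhoods, value_col="population"):
        bg = block_groups.copy()
        bg["bg_area"] = bg.geometry.area
        pieces = gpd.overlay(bg, neighborhoods, how="intersection")
        # Weight each block group's count by the fraction of it that overlaps.
        pieces[value_col] *= pieces.geometry.area / pieces["bg_area"]
        return pieces.groupby("neighborhood_name")[value_col].sum()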
There are so many ways that alternate geography can be applied to answer interesting research questions and address policy and funding decisions. Even if only a subset of the data in a given study is examined in new ways — it may provide new insights into age-old questions. Here are several examples of how research or policy decisions might be improved by looking at data in a new way.
Health Policy. The U.S. Centers for Disease Control and Prevention track the spread of infectious diseases. Geography is an important element given the nature of how infections spread among populations. In many ways, proximity is the key determinant in looking at concentrations and movement of contagions. Proximity is an easy variable to consider in analysis. A simple radius approach can be used to draw virtual perimeters around infection clusters.
Of course, proximity is a function of social ties and tendencies — and neighborhoods represent a unit of geography that reflects social groupings. In this way, neighborhoods are natural population boundaries that can be useful in looking at how diseases spread. Since school-age children are also a key factor in the spread of infections, another geographic entity that can be used by epidemiologists is the school attendance zone. Adding a geographic layer that shows the exact households from which children attend public school can provide meaningful data that allow health-care professionals to understand trends at a deeper level and take corrective action more quickly.
Consumer Lending. In 1977, the Community Reinvestment Act (CRA) was passed to help ensure banks offered services and credit in all areas — including low-income regions. The CRA created a set of self-reporting requirements for banks to demonstrate compliance. Because the CRA is tied directly to geographic areas and socioeconomic data, it makes sense that regulators would dictate that banks use Census geographic units as a way to group data in compliance reporting. While true neighborhoods can’t necessarily be substituted for census tracts in regulatory reporting, they can provide an interesting way to examine trends and contrast data sets. This kind of analysis is useful for governing bodies and the financial institutions themselves. Imagine if financial products and services could be tailored and marketed based on the population characteristics and preferences of true neighborhood areas. This type of target marketing can take advantage of the social connections inherent in locally defined spaces.
Crime. Every year, thousands of studies are conducted that examine crime in the U.S. America incarcerates a higher percentage of its population than any other nation, and crime is linked to many other socioeconomic variables. There is a growing trend to tag crime incidents with location data, and local communities are using this data to display crime statistics in interactive Web maps and make citizens more aware. In large urban areas, there is so much data that showing individual incidents is overwhelming; as a result, metro areas must be divided into smaller areas with statistics summarized for each. What better way to segment and present the data than in the terms local residents actually use: neighborhoods. Similarly, for research conducted at the national, regional or metro level, slicing and dicing crime statistics by neighborhood offers a great way to align results to the geographic entities that reflect local cultural distinctions and norms.
The Air Force launches a 'collaboratory'
The Air Force recently launched a website called "The Air Force Collaboratory," which entices young people to participate in an online dialogue and share ideas for solving (initially) three unclassified research projects in which the Air Force is engaged. None of the projects is directly (or at least exclusively) military in nature. One involves developing technology to allow the Air Force to quickly determine the location of survivors of building collapses; a second focuses on a new kind of robot with various search and rescue capabilities; and a third involves determining the proper point in space to which a new GPS satellite should be launched.
The first-level purpose of the site is to involve young people, in a collaborative way, in dealing with a tough technical challenge for the government. This is not a contest; there are no prizes. Instead, the site appeals both to a desire to excel and a desire to serve, both important themes for the Air Force.
"I hope you're up to this," a voice says about the GPS project. "Your idea will save lives," reads a computer-generated text for the collapsed building project. It is nice to involve young people in something bigger than themselves, and it also sends a good message about the ability of the government to innovate that the Air Force is trying out a new way to get technical input on important projects.
But information about the site's backstory tells you its other purpose. According to a New York Times article, the site was developed by GSD&M in Austin, Texas. That would be the same agency that holds the Air Force's recruitment advertising contract. The site is also a way to encourage kids interested in technology to consider joining the Air Force.
So there is also sort of a contracting story here – an example of what the government wants from a good vendor, which is original and innovative ideas that can be used to help further the organization's mission.
All in all, an interesting effort. Check out the site – it's hardly an example of stodgy government in action. The Air Force Collaboratory has been up for about a month now. I'd be curious to know what kind of response it's getting, either in terms of generating ideas or in terms of whetting the appetites of potential recruits.
Posted by Steve Kelman on Aug 27, 2013 at 2:06 PM | <urn:uuid:6106485f-7072-4377-8f0c-db9e92da5747> | CC-MAIN-2017-09 | https://fcw.com/blogs/lectern/2013/08/air-force-collaboratory.aspx?admgarea=TC_Opinion | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00484-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959444 | 485 | 2.640625 | 3 |
Although the cost of 3D printers continues to drop and more people have them in their homes, they are still far from ubiquitous. But innovative minds keep turning out new and improved 3D printers, such as these three new types: one can print soft and cuddly objects from fabric; another includes actuators that allow an object to morph after being exposed to external stimuli; the last has a retrofit kit that turns 3D printers into 3D food printers.
Disney 3D-prints soft objects from fabric
You know how little kids can be super attached to one particular item like a toy or a blanket? And if that item gets lost or destroyed, it's a red alert unless you can find another exactly like the first. If that beloved object is a soft cuddly toy, wouldn't it be great if you could 3D print another? Disney Research has come up with a 3D printer that can create soft interactive objects, like a printed fabric bunny.
According to Disney Research, its new type of 3D printer "can form precise, but soft and deformable 3D objects from layers of off-the-shelf fabric."
Our printer employs an approach where a sheet of fabric forms each layer of a 3D object. The printer cuts this sheet along the 2D contour of the layer using a laser cutter and then bonds it to previously printed layers using a heat sensitive adhesive. Surrounding fabric in each layer is temporarily retained to provide a removable support structure for layers printed above it. This process is repeated to build up a 3D object layer by layer.
But Disney didn't stop at soft and cuddly as the researchers have also 3D-printed a smartphone case "with an embedded conductive fabric coil for wireless power reception." The phone case was printed with a fabric antenna inside that can harvest electricity from the phone's NFC chip to make an LED light blink. Disney also printed a "touch slider" for a laptop and a starfish that contains a touch sensor. The 3D printer accomplishes this by "automatically feeding two separate fabric types into a single print. This allows specially cut layers of conductive fabric to be embedded in our soft prints."
4D printing: 3D-printed objects that morph into something new
But 3D printing is "so last year," according to the ARC Centre of Excellence for Electromaterials Science (ACES), which is working on the "ground-breaking science" of 4D printing that would allow a printed object to include actuators so it can "transform" from "one shape into another, much like a child's Transformer toy." Basically, 4D printing consists of "3D printed materials that morph into new structures, post production, under the influence of external stimuli such as water or heat."
ACES researchers are working on a 3D-printed "valve that actuates in response to its surrounding water's temperature," making it a 4D-printed object.
"The cool thing about it is, is it's a working functioning device that you just pick up from the printer," said ACES Professor Marc in het Panhuis. "There's no other assembly required. It's an autonomous valve, there's no input necessary other than water; it closes itself when it detects hot water."
Retrofit kit turns 3D printer into 3D food printer
Meanwhile, German startup Print2Taste thinks it has come up with a way to bring more people into the realm of 3D food printing by selling Bocusini kits to retrofit existing 3D printers into food printers. Users can create a 3D-printed food design in the company's app and then send it to the food printer via mobile device over Wi-Fi.
3DPrint said retrofit kits will initially be offered for the Printrbot Simple, Ultimaker 2, and Printrbot Metal. The company will also sell its own standalone Bocusini printer and food pastes. Reloadable food capsules, which are loaded into the food extruder, "can contain anything from cookie dough, chocolate, and jelly, to vegetable paste, mashed potatoes, and even liver pâté."
Print2Taste will kick off a Kickstarter campaign on May 12. | <urn:uuid:e29e4cc2-9874-4799-ba07-afe03edb3a5c> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2915007/microsoft-subnet/3-new-types-of-3d-printers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00008-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.932078 | 870 | 2.875 | 3 |
A US carbon tax seems more likely than ever with the return of the Obama administration. A national carbon tax would not take effect overnight, but it could come into force within the next 12 to 18 months. Data center managers should be aware of this possibility and of the impact such a tax would have on the future price of electricity. A national carbon tax has several advantages; among them, as a green tax, it would encourage the use of renewable energy over sources like coal and oil.

Even a moderate-sized tax could raise $1.25 trillion over the next ten years. A carbon tax is also much simpler to implement than a more complex cap-and-trade plan. Data center operators can begin implementing energy-efficiency measures, including virtualization and cooling improvements.
Read More About US Carbon Tax | <urn:uuid:efe45eea-204c-4238-9ab7-7e3ecd79cc96> | CC-MAIN-2017-09 | http://www.datacenterjournal.com/us-carbon-tax-likely-on-data-center-industry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00184-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.963147 | 190 | 2.765625 | 3 |
More than half of schoolchildren check Facebook during lessons instead of studying, according to a new survey of more than 1,000 UK pupils.
Global Secure Systems (GSS), an IT security consultancy, found 52% of the 1,000 children aged between 13 and 17 who participated in the study confessed that they looked at social networking sites during lessons.
The survey, conducted through Facebook, aimed to discover just how widespread children's use of such sites at inappropriate times was. More than a quarter said they were Facebooking in class for more than 30 minutes a day.
David Hobson, managing director of GSS, made the initial discovery when he spent a day at a local public school speaking to its pupils about internet ethics and behaviour. During his presentation to 13-year-olds, who were all diligently tapping away on their laptops, he asked how many had visited social networking sites during their lessons. He was shocked when they all raised their hands. This ignited his determination to uncover if this was an isolated case or whether it was rife among school children.
"I am disturbed, but not surprised, by the findings," he said. He was concerned for the safety of youngsters on the web and worried by time lost for lessons.
"The time youngsters spend on the internet, and more specifically on social networking sites, is a huge challenge for parents and those of us in education," said Toby Mullins, head of Seaford College.
"Youngsters are not only using lesson time but often quietly continue late into the night, leaving them short of sleep and irritable the next day. I think a study like this to highlight the problem is very timely. We now need to plan for a solution."
Hobson said, "Kids are spending up to 2.5 hours a week of lessons on Facebook. I recognise that there is a place for social networking, with a whole new generation now relying on it to communicate, but not at the expense of an education. Schools could learn a lesson from industry and ensure school children use the internet productively. With the right software it is easy to limit access to inappropriate websites or limit it to break-time."
A separate GSS poll conducted with Infosecurity Europe 2008 discovered that social networking sites, such as Facebook, MySpace and Bebo are costing UK corporations close to £6.5 billion a year in lost productivity.
GSS itself clamped down on social networking during working hours. When asked for more bandwidth, Hobson analysed the company's traffic and discovered that it could save the cost of the upgrade simply by restricting the times people could access social network sites to lunchtimes and after hours. | <urn:uuid:4d1d6857-9a89-45eb-a4f2-b042a5e919f2> | CC-MAIN-2017-09 | http://www.computerweekly.com/news/2240085386/Half-of-schoolchildren-use-Facebook-during-lessons-study-says | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00536-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.975091 | 568 | 2.828125 | 3 |
Microsoft will release a CTP (Community Technology Preview) of a new type of compiler its researchers have been building, code-named Project Roslyn, the company executive overseeing the C# programming language announced Thursday.
"This project is about revising what compilers do," said Anders Hejlsberg in a talk at Microsoft's Build conference, being held this week in Anaheim, California. "[It] is about opening the compiler and making all that information available so [the developer] can harness all of this knowledge," he said.
Roslyn is a compiler for C# and Visual Basic with a set of APIs (application programming interfaces) that developers can use to fine-tune their code. It resembles a model developed by Miguel de Icaza's Mono Project, in which the information the compiler generates about a program can be reused as a library.
Today's commercial compilers are black boxes, Hejlsberg said. A compiler is a program that converts source code into a binary executable. Internally, a compiler generates a lot of information about the program it is building, he explained, although typically the developer doesn't have access to that data.
Roslyn can offer access to this data, Hejlsberg said. The data can then be used by Visual Studio to generate more options for programmers.
Developers could also use the output of such software for tasks like refactoring (reorganizing) their code more easily, or adding C# and Visual Basic functionality to programs written in other languages. Roslyn also brings dynamic behavior to the statically typed C# and Visual Basic, allowing developers to add objects and new variables to a program on the fly.
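Roslyn itself is a .NET technology, but the "open compiler" idea it embodies can be illustrated with Python's standard ast module, which similarly exposes the tree the language front end builds from source code. The snippet below is an analogy, not Roslyn's API.

```python
# An analogy in Python (not Roslyn's API): the standard ast module exposes
# the syntax tree the front end builds, so tools can inspect or rewrite code.
import ast

source = "total = price * quantity"
tree = ast.parse(source)

# Walk the tree and collect every variable name -- the kind of structural
# information a refactoring tool needs.
names = [node.id for node in ast.walk(tree) if isinstance(node, ast.Name)]
print(names)   # ['total', 'price', 'quantity']
```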
A compiler of this sort may offer programmers the ability to build more dynamic applications, noted attendee Michael Wolf, who is the principal architect for Microsoft technologies at enterprise software development firm Cynergy Systems. He also warned that the technology, if not well-understood, could pave the way to badly designed programs.
Hejlsberg demonstrated a few of the program's advanced functions. He showed off a command line interface that allows users to enter code that can be run directly against the compiler. Scripts can also be run against the compiler, which can be useful in generating information about a program being compiled.
He also demonstrated how Roslyn could convert Visual Basic code to C# code, and vice versa, much to the delight of the audience.
The CTP should be available in about a month or so, Hejlsberg said. He offered no time frame for when the software would be incorporated into the Visual Studio IDE (integrated development environment).
At the Web 2.0 Summit in San Francisco this week, NVIDIA announced a GPU-powered 3D Web platform. Called the NVIDIA RealityServer, it consists of Tesla GPUs, rendering software and a Web service environment, all integrated into a platform designed to deliver photorealistic image streams via a cloud computing model. The new offering is yet another example of how the company intends to push its high-end GPUs into CPU territory.
The basic idea behind RealityServer is to do all the heavy computation lifting of image rendering on the server side, such that photorealistic 3D content can be delivered interactively across the Web. That means mass-market devices from smart phones to desktops and everything in between can be used to do high-end imaging. Applications include architectural design, product design, manufacturing and apparel styling, as well as HPC visual applications in such areas as oil and gas, medical diagnostics, and scientific research. As a result, potential users span the entire population: consumers, artists, product designers, doctors, architects, engineers, and scientists.
The big emphasis here is on photorealistic images. Generating such content is extremely compute intensive since the software must calculate the effects of light bouncing off the objects in a scene. Rendering a single photorealistic frame for a complex image can take a whole day on a typical CPU-based workstation. So unless one happens to own a deskside HPC machine (which may themselves contain NVIDIA GPUs), client-side processing is usually not able to deliver this interactive user experience.
Significantly, NVIDIA is not yet claiming this can be used to deliver photorealistic animation. For that to happen, presumably gamers and graphics animators will have to wait until GPU horsepower increases to the point where real-time photorealistic animation is practical. Theoretically, someone could build a big enough GPU cluster to do this today (or with Fermi GPUs next year), but computing 60 photorealistic frames per second is not likely to be economically feasible in the near term.
The critical 3D software component of RealityServer is iray, a photorealistic rendering technology developed by mental images, an NVIDIA subsidiary the company bought two years ago. The iray software is essentially a GPU-accelerated rendering mode of its flagship mental ray product. The iray software uses global illumination, which requires a lot more computational horsepower than garden variety ray-tracing (which usually only approximates global illumination or just uses direct illumination). True global illumination, however, blends the effect of direct and indirect light and will produce a much more refined image, almost indistinguishable from a photograph. Rolf Herken, founder, CEO and CTO of mental images, characterized iray as “the first physically correct renderer.”
In this case, the quality of the image is dependent on the fidelity of the input data rather than the algorithm. The feature that makes this practical in a cloud environment is iray’s ability to scale across many GPUs. According to the iray FAQ (PDF), the software scales “completely linearly on a local system, almost linearly on RealityServer across multiple machines.”
The RealityServer software itself encompasses the iray renderer as well as the rest of the software stack that turns 3D imaging into a Web service. OpenGL is also supported for situations where iray computation would be too slow to deliver interactive rendering. As one might suspect, RealityServer includes support for standard CAD and digital content creation formats and can run under either Linux or Windows.
The hardware environment for RealityServer is NVIDIA’s new Tesla RS platform, which comes in medium (8-31 GPUs), large (32-99 GPUs), and extra-large (100-plus GPUs) configurations. The Tesla device was presumably used since the high-end graphics chip and the larger memory capacity is specifically aimed at big GPU computing workloads. The smallest RS configuration is aimed at workgroups (for example, a group of collaborating architects), while the largest configuration is designed for thousands of concurrent users. This is only a general guideline, since some applications, like medical or oil & gas imaging, require multiple GPUs per user, while others, such as online entertainment, can support many users with a just single GPU.
NVIDIA is pointing interested parties who want to build RealityServer GPU server infrastructure to its OEM partners (which include HPC vendors Colfax, Appro, and Penguin Computing), but is not indicating which manufacturers are actually offering these configurations today. The RealityServer software itself will be available on Nov. 30, when a developer edition will be made available free of charge, including the right to deploy non-commercial applications. No mention was made of licensing RealityServer or iray for commercial applications.
As far as who will end up offering RealityServer infrastructure, NVIDIA is hoping public cloud providers, such as Amazon, will be interested in adding this capability into their offerings. Private GPU clouds are also on the table, and frankly, are the more likely scenario in the short term, since I'm guessing a critical mass of RealityServer applications will need to be developed for the big cloud providers to be interested. In the NVIDIA press release, there were a handful of comments from some initial RealityServer customers, including mydeco.com, SceneCaster, and Wichita State University's Virtual Reality Center at the National Institute for Aviation Research. Undoubtedly, there is more low-hanging fruit out there waiting to be picked.
The ease of developing these RealityServer applications will likely portend the success of the business in general. Users, of course, may be squeamish about locking their software to a specific vendor’s platform, but with no competing offering currently on the market, the choice may become simple. And if NVIDIA supports RealityServer efforts in the same manner it is using to develop the CUDA ecosystem, the company may indeed have a winning model for GPU computing in the cloud. | <urn:uuid:a34d3608-30e1-4728-883f-72184ea97b8e> | CC-MAIN-2017-09 | https://www.hpcwire.com/2009/10/21/nvidia_pitches_gpu_computing_in_the_cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00536-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.91854 | 1,214 | 2.53125 | 3 |
It's often argued that the most important era in U.S. history was the Industrial Revolution, that period from roughly 1760 to 1840 when almost every aspect of business and daily life changed.
The shift from manual labor to machines, from wood to coal, from farms to cities--those are just a few of the ways our world changed dramatically when a new era of U.S. economic power was born. We are now at the beginning of the next great economic era, which is being ushered in by the Industrial Internet Revolution. Today's world has approximately 6.8 billion people and 12.5 billion connected devices. At the rate things are going, by 2020 we'll have about 7.6 billion people and 50 billion connected devices.
What happens when we can merge the power of intelligent devices, intelligent systems, and intelligent automation with physical machines, facilities and networks? One answer is that an estimated $10 trillion to $15 trillion--an amount nearly equal to the current U.S. economy--will be added to the global GDP, according to Peter Evans, GE's director of global strategy and analytics, and Marco Annunziata, GE's chief economist.
In their November 2012 report, "Industrial Internet: Pushing the Boundaries of Minds and Machines," Evans and Annunziata also predict that 46 percent of the global economy, or $32.3 trillion worth of global output, will benefit from the Industrial Internet.
Why is this important? Because "emerging markets have an advantage in 20th-century things like labor costs," says Kenneth Cukier, co-author of Big Data: A Revolution that Will Transform How We Work, Live and Think. He argues that big data "lets the West claim an advantage in the 21st-century way, as one can become more efficient and productive by harnessing the data."
This emerging ability to draw together fields such as machine learning, big data, and the Internet of Things will not only improve efficiency, it will also drive new revenue streams and create new markets, which we see happening all around us already.
As GE's Evans and Annunziata point out in their report, that powerful combination of increasingly intelligent machines, advanced analytics and a constantly connected, mobile population is "pushing the boundaries of minds and machines."
How are these changes rolling out in your life? I'd love to hear your perspective on the Industrial Internet, so drop me a line. | <urn:uuid:05632184-fa53-4c9d-b67f-09d5684d1f2f> | CC-MAIN-2017-09 | http://www.cio.com/article/2385671/big-data/the-industrial-internet--the-next-great-economic-revolution.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00060-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943221 | 501 | 2.71875 | 3 |
Sweet Password Security Strategy: Honeywords

To improve detection of database breaches, businesses should store multiple fake passwords and monitor attempts to use them, according to researchers at security firm RSA.
Businesses should seed their password databases with fake passwords and then monitor all login attempts for use of those credentials to detect if hackers have stolen stored user information.
That's the thinking behind the "honeywords" concept first proposed this month in "Honeywords: Making Password-Cracking Detectable," a paper written by Ari Juels, chief scientist at security firm RSA, and MIT professor Ronald L. Rivest, who co-invented the RSA algorithm (he's the "R").
The term "honeywords" is a play on "honeypot," which in the information security realm refers to creating fake servers and then learning how attackers attempt to exploit them -- in effect, using them to help detect more widespread intrusions inside a network.
"[Honeywords are] a simple but clever idea," said Bruce Schneier, chief security technology officer of BT, in a blog post. "Seed password files with dummy entries that will trigger an alarm when used. That way a site can know when a hacker is trying to decrypt the password file."
The honeywords concept is also elegant because any attacker who's able to steal a copy of a password database won't know if the information it contains is real or fake. "An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword," Juels and Rivest pointed out. "The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the "honeychecker") can distinguish the user password from honeywords for the login routine and will set off an alarm if a honeyword is submitted."
The researchers recommend honeywords as a step beyond creating fake accounts. "Sometimes administrators set up fake user accounts ("honeypot accounts") so that an alarm can be raised when an adversary who has solved for a password for such an account by inverting a hash from a stolen password file then attempts to login," they said. "Since there is really no such legitimate user, the adversary's attempt is reliably detected when this occurs." But they said that attackers may find viable techniques for spotting bogus accounts.
Accordingly, they recommend adding multiple fake passwords to every user account and creating a system that allows only the valid password to work and that alerts administrators whenever someone attempts to use a honeyword. "This approach is not terribly deep, but it should be quite effective, as it puts the adversary at risk of being detected with every attempted login using a password obtained by brute-force solving a hashed password," they said.
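A toy sketch of the scheme might look like the following. It uses a plain SHA-256 placeholder where a real deployment would use a slow password hash, and the honeywords are invented.

```python
# A minimal sketch of the honeywords idea (illustration only, not the
# paper's full construction): each account stores k hashed sweetwords,
# and a separate honeychecker knows which index is the real password.
import hashlib, random

def h(pw):
    return hashlib.sha256(pw.encode()).hexdigest()   # placeholder hash

honeywords = ["blue42dog", "red7cat", "green9fox"]   # decoys
real_password = "purple3owl"

sweetwords = honeywords + [real_password]
random.shuffle(sweetwords)
stored_hashes = [h(w) for w in sweetwords]           # kept by the web server
real_index = sweetwords.index(real_password)         # kept by the honeychecker

def login(attempt):
    try:
        i = stored_hashes.index(h(attempt))
    except ValueError:
        return "wrong password"
    # The honeychecker compares indices, never the passwords themselves.
    return "login ok" if i == real_index else "ALARM: honeyword used!"

print(login("purple3owl"))   # login ok
print(login("red7cat"))      # ALARM: honeyword used!
```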
If honeyword use is detected, that doesn't mean that the password database has been compromised. Instead, attackers may simply be launching brute-force-guessing attacks against the site. On the other hand, if numerous attempted logins are made using honeywords, or if honeyword login attempts are made to admin accounts, then it's more likely that the password database has been stolen.
One benefit of the RSA researchers' approach is that businesses could improve their security posture without any user intervention. "Honeywords aren't visible to users and don't in any way change their experience when they log in using passwords," read a related FAQ.
The researchers acknowledge that attackers might subvert their system by launching a denial-of-service attack against a honeychecker server. In such an event, they recommend using a failsafe: if a honeychecker server becomes unavailable, temporarily allow honeywords to become valid logins.
Honeywords aren't meant to serve as a replacement for good password security practices. But as numerous breaches continue to demonstrate, regardless of the security that businesses have put in place, they often fail to detect when users' passwords have been compromised. Last month, for example, LivingSocial said that attackers stole information relating to 50 million users, and stolen passwords were reportedly published in underground forums. Two state attorneys general are now investigating. In March, meanwhile, Evernote reset all 50 million users' passwords after the company's security team discovered and blocked suspicious activity on the Evernote network.
Those are hardly isolated incidents. In the space of a single week last year, 6.5 million LinkedIn, 1.5 million eHarmony and an estimated 17 million Last.fm users' password hashes were uploaded to hacking forums. Although security experts suspect the passwords may have been stolen as early as 2011 or 2010, the affected businesses appeared to learn about the breaches only after the hashes were posted.
Many businesses -- including Evernote -- hashed their stored passwords, sometimes also with salt for added protection. But fast, general-purpose hashes are insecure for this job, and password-security experts have long recommended that businesses use built-for-purpose password hashing algorithms such as bcrypt, scrypt or PBKDF2, which if properly implemented are much more resistant to brute-force attacks.
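For reference, PBKDF2 ships in Python's standard library; a minimal sketch, with an illustrative iteration count, looks like this:

```python
# A minimal sketch of PBKDF2 from Python's standard library.
# The iteration count is illustrative; tune it to your hardware.
import hashlib, hmac, os

password = b"correct horse battery staple"
salt = os.urandom(16)                  # unique random salt per user
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# Verification recomputes the digest with the stored salt and compares
# in constant time.
attempt = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
print(hmac.compare_digest(attempt, stored))   # True
```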
Regardless, no password security system is foolproof. That's why an early warning system such as the use of honeywords might buy breached businesses valuable time to expire passwords after a successful attack, before attackers have time to put the stolen information to use.
It’s finally happened: social media has risen to the top of our everyday terminology. The New Oxford American Dictionary just announced “unfriend” as its 2009 Word of the Year. That’s right: unfriend (verb) “To remove someone as a ‘friend’ on a social networking site such as Facebook.” Other Internet/technology words considered this year include hashtag, netbook and paywall, as well as Twitt, Tweeple and other common Twitter terms. Check out the full list here.
Also big news is the new worldwide campaign to nominate the net for the Nobel Peace Prize in 2010. Wired magazine reports on “Internet for Peace”, launched last week by Wired Italy:
“The internet can be considered the first weapon of mass construction, which we can deploy to destroy hate and conflict and to propagate peace and democracy,” said Riccardo Luna, editor-in-chief of the Italian edition of Wired magazine. “What happened in Iran after the latest election, and the role the web played in spreading information that would otherwise have been censored, are only the newest examples of how the internet can become a weapon of global hope.”
For more info or to sign the petition, go to Internet for Peace.
In case you’re looking for more random thoughts and interesting tidbits for Monday, see these:
- The Funny and Bizarre World of Client Requests – from Inspect Element (via Smashing Magazine)
- Behold, The Future! – from woot!
Have something to share? Send it to us! | <urn:uuid:8174f5af-3b8e-4d3e-b14a-ce5944314ca2> | CC-MAIN-2017-09 | http://www.codero.com/blog/monday-miscellany-unfriend-internet-for-nobel-and-more/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00005-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.928764 | 335 | 2.625 | 3 |
Google has blocked several digital certificates issued in India that could have been used to make bogus websites appear to be run by the Web giant.
The digital certificates were issued by the National Informatics Centre (NIC), part of India’s Ministry of Communications and Information Technology that handles e-government projects, wrote Adam Langley, a Google security engineer, on Tuesday.
How the bogus certificates were issued by NIC is under investigation, Langley wrote. Users are not believed to have been affected.
“We have no indication of widespread abuse, and we are not suggesting that people change passwords,” he wrote.
Web browsers check a domain’s digital certificate to verify it actually belongs to the entity that claims it. The certificate is also used to encrypt communications between a computer and the domain using SSL/TLS (Secure Sockets Layer/Transport Layer Security).
The certificates are issued by authorized authorities. Hackers have occasionally attacked those authorities and created valid digital certificates for illegitimate domains they’ve created, which pass a security check. If users were lured to the fraudulent website, an attacker could decrypt their data traffic.
Security experts have long warned of the problems with wrongly issued digital certificates. To combat the problem, Google has pushed its Certificate Transparency project, which is aimed at quickly detecting SSL certificates that have been mistakenly issued or acquired by hackers.
The certificates were revoked on July 3, a day after Google’s discovery of the problem, by another ministry agency, the Indian Controller of Certifying Authorities (India CCA), which regulates Certificate Authorities that issue digital certificates in India, Langley wrote.
Indian officials could not be immediately reached for comment.
The NIC held intermediate digital certificates, which were trusted by the Indian CCA, Langley wrote. Indian CCA certificates are trusted by most programs running on Windows, including Internet Explorer and Chrome, he wrote.
Firefox is not affected because it uses its own list of trusted certificates that doesn’t include the Indian CCA ones, he wrote. Also, Chrome, Chrome OS, Android, iOS and OS X are not affected.
Chrome running on Windows would not have been fooled by the certificates due to a security measure Google uses called public-key pinning, he wrote. Google has also updated Chrome's CRLSet, the list the browser uses to block certificates that should no longer be trusted.
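As a rough sketch of the pinning idea using only Python's standard library: fetch a server's certificate and compare its fingerprint to a value shipped with the client. Chrome's pinning actually checks the public key (SPKI) rather than the whole certificate, and the pinned fingerprint below is a placeholder.

```python
# A rough sketch of pinning with only the standard library: hash the
# server's DER-encoded certificate and compare against a value shipped
# with the client. The pinned fingerprint is a placeholder.
import hashlib, ssl

host = "example.com"
pem = ssl.get_server_certificate((host, 443))
der = ssl.PEM_cert_to_DER_cert(pem)
fingerprint = hashlib.sha256(der).hexdigest()

PINNED = {"<expected-sha256-fingerprint>"}   # distributed out of band
if fingerprint not in PINNED:
    raise RuntimeError("certificate does not match pin; possible interception")
```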
How NASA beamed Mona Lisa to the moon, one pixel at a time
- By John Breeden II
- Jan 22, 2013
Remember when Dr. Evil put a (air quotes) laser (air quotes) on the moon in an attempt to destroy the United States? NASA also has ideas involving lasers and the moon, only this time, the goal is to test out new methods of space communication by beaming an image of the Mona Lisa to the moon.
Space is mostly empty, which for communications can be good and bad. It’s good because other than a planet here and there, and perhaps an asteroid, there isn’t much to block a signal. On the down side, there isn’t much infrastructure outside of Earth’s orbit. Right now, NASA has to communicate with all of its space probes and exploration vehicles using the Deep Space Network, which relies on low-bandwidth radio waves. It can take up to 15 minutes to send commands as far as Mars, and just as long to get a response back.
The answer might be lasers, which can carry information optically more easily in space than here on Earth. There isn’t much to block the line of sight of a laser in space.
NASA has a satellite circling the moon called the Lunar Reconnaissance Orbiter (LRO), which is already equipped to accept laser signals through its Lunar Orbiter Laser Altimeter (LOLA), an instrument that is mapping the entire surface of the moon. So NASA figured that in addition to sending along the normal tracking data, it could beam regular information, or even a photo, at the same time.
"Because LRO is already set up to receive laser signals through the LOLA instrument, we had a unique opportunity to demonstrate one-way laser communication with a distant satellite," wrote Xiaoli Sun, a LOLA scientist at NASA Goddard in a release following the achievement.
For her journey, the Mona Lisa was reduced to 152 by 200 pixels. Each pixel was sent to the LOLA using a laser during the brief window when the tracking and mission data wasn’t using the beam. NASA specifically had 4,096 of those brief pauses to work with. How much to darken a pixel was determined by delaying the information pulse. The time difference between when the satellite expected to receive the data and the actual time the data arrived determined how much shading was needed for each pixel.
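A toy sketch of that delay-based encoding follows; the slot width and the number of gray levels are illustrative, not NASA's actual parameters.

```python
# A toy sketch of delay-based encoding: each pixel owns a time slot, and
# its shade is the pulse's offset within that slot.
SLOT = 1.0      # duration of one time slot (arbitrary units)
LEVELS = 16     # hypothetical gray levels per pixel

def encode(pixels):
    # Pulse arrival = start of the pixel's slot + shade-proportional delay.
    return [i * SLOT + (v / (LEVELS - 1)) * (SLOT / 2)
            for i, v in enumerate(pixels)]

def decode(arrivals):
    # Recover the shade from how late the pulse arrived within its slot.
    return [round((t - i * SLOT) / (SLOT / 2) * (LEVELS - 1))
            for i, t in enumerate(arrivals)]

pixels = [0, 7, 15, 3]
assert decode(encode(pixels)) == pixels
```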
Because of the Earth’s atmosphere interfering with the laser, the transmission wasn’t perfect. However, using the Reed-Solomon error correcting code, the same type used to keep CDs and DVDs from skipping, NASA was able to perfectly reassemble the photo on the other end. The Mona Lisa was now orbiting the moon, 240,000 miles away. The transfer rate was equal to about 300 bits per second.
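The error-correction step can be illustrated with the third-party reedsolo package (pip install reedsolo); this shows the principle, not NASA's implementation.

```python
# A minimal Reed-Solomon sketch with the third-party reedsolo package.
from reedsolo import RSCodec

rsc = RSCodec(10)                 # 10 parity bytes corrects up to 5 byte errors
sent = rsc.encode(b"mona lisa pixel data")

received = bytearray(sent)        # simulate atmospheric corruption in transit
received[0] ^= 0xFF
received[5] ^= 0xFF

# Recent reedsolo versions return (message, message+ecc, errata positions);
# the corrected message is the first element.
recovered = rsc.decode(received)[0]
print(recovered)                  # b'mona lisa pixel data'
```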
While 300 baud modems are far from high-tech these days, the experiment shows that lasers can be used for effective communication, even over very long distances. They are also highly directional, compared to radio waves that radiate out from their source. Theoretically, that could allow for two-way communications using lasers in the future, with each beam sitting right beside another. NASA is already thinking about using lasers, at least as a backup, for future mission communications.
"This is the first time anyone has achieved one-way laser communication at planetary distances," said LOLA's principal investigator, David Smith of the Massachusetts Institute of Technology. "In the near future, this type of simple laser communication might serve as a backup for the radio communication that satellites use. In the more distant future, it may allow communication at higher data rates than present radio links can provide."
For now, we can simply marvel that the Mona Lisa has become the Moona Lisa, the first piece of classical art beamed by laser into space.
John Breeden II is a freelance technology writer for GCN. | <urn:uuid:0be8858c-fa7b-4704-b9f4-1410fee7baa9> | CC-MAIN-2017-09 | https://gcn.com/blogs/emerging-tech/2013/10/~/~/link.aspx?_id=E5E7125BF61340D4B156A72F89721AC9&_z=z | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00533-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953326 | 814 | 3.515625 | 4 |
Not so long ago, I would head to my friend’s house after school to play Oregon Trail on her desktop computer. Back in those days, you couldn’t just download apps and games from the Internet, because there was no Internet. There were no smartphones, and handheld tablet devices hadn’t even crossed our minds. We loaded our games onto the computer from multiple floppy disks.
Fast-forward to today. I am surrounded by four computing devices at any given time, all of which are capable of accessing the world at lightning speeds. It’s easy to get excited about these technologies, but what I find most fascinating is what goes on “behind-the-scenes.”
Recently, while studying network reference models, I learned about four different standards bodies that govern the way we experience the Internet, allowing what happens on our devices to be a seamless, magical experience.
A journey down the network highway
Our first stop is the International Telecom Union (ITU), with headquarters in Geneva, Switzerland. This organization was established in 1865 and created what are known as letter standards. Some examples are ADSL (Asymmetric Digital Subscriber Line), which enables a faster connection over copper telephone lines, and MPEG4, which allows us to enjoy audio and video content.
The next stop is at the Institute for Electrical and Electronic Engineers (IEEE). This group was founded in 1884 by a few electronics professionals in New York. They created the numbering system that governs how we access the modern internet. Some familiar protocols are 802.3 (Ethernet) and 802.11 (wifi). Simply put, my neighborhood coffee shop without 802.11 would be like enjoying my coffee without cream and sugar. Thanks IEEE!
We continue our journey to our next destination, the Internet Engineering Task Force (IETF). This stop takes us to the west coast in California, where the RFC (Request for Comments) series was created. These standards govern how we reach content via the World Wide Web. Some familiar protocols developed there are RFC 2616, or HTTP (Hypertext Transfer Protocol), and RFC 1034/1035, better known as DNS (Domain Name System).
Our last stop on this network field trip is at W3C, or the World Wide Web Consortium. This organization was founded in 1994 (right about the time I stopped playing Oregon Trail) by Tim Berners-Lee at Massachusetts Institute of Technology. W3C created familiar protocols such as HTML5 (the fifth iteration of Hypertext Markup Language), which allows us to experience multimedia content like never before, and CSS (Cascading Style Sheets) which let us manage and enjoy web pages in a more beautiful way.
Now that you’re in acronym overload, I hope you have a better understanding of how our modern Internet became what it is today. I guess I’ll use those floppy disks for drink coasters while I download the latest app to my tablet. | <urn:uuid:6e8c4715-b276-4184-af4f-f03c366965b3> | CC-MAIN-2017-09 | http://www.internap.com/2013/07/11/behind-the-scenes-on-the-internet-superhighway/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00233-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.939104 | 607 | 3.0625 | 3 |
Harnessing Creativity to Make Powerful Decisions
Creativity is Imagination
The creative process is driven by imagination. The more imaginative one is, the greater one’s potential for creativity. George Bernard Shaw said, “Imagination is the beginning of creation. We imagine what we desire; we will what we imagine; and at last we create what we will.” Imagination is the internal process that drives the external expression which is perceived as creativity.
Unfortunately, the problem is that too few of us are imaginative. In his seminal study on creativity, Why Didn’t I Think of That?, Charles W. McCoy Jr. reports children lose one-half of their creativity between the ages of five and seven, and adults over forty retain less than two percent of what they had as children.
The implication is that the average working-age person is not very creative. If problem analysis is to be imaginative, imagination must therefore be deliberately imposed on the decision-making process. This is where creativity tools come into play. These tools are simple techniques that help ensure the analysis of a problem has greater breadth and depth than it might otherwise possess.
Judgments are the Problem
It is judgmental thinking that the tools of creativity are helping to overcome. Judgments are limits that hold back our thinking. People become increasingly judgmental as they age, and we presume to understand and “know” things. Over time, we become more opinionated and increasingly rigid in our perceptions of ourselves, others, and everything around us.
Being judgmental is the basis of bigotry and preconceived notions about anyone or anything. It is the reason so many decision makers are narrow minded in their perspective or analysis of a problem and its solutions.
Judgments are what keep us all “in a box.” They are why one person will consider a particular act “reasonable” while another would label it “outrageous.” Judgments are boundaries that we impose on ourselves; they are the limits on our imaginations and, therefore, our creativity.
Self-judgment stems from fear of embarrassment or a rigid mindset that does not believe the imagination should be permitted to wander. Left to atrophy, the imagination eventually becomes unable to be spontaneous.
The techniques described below are tools that help to get the creative juices flowing. Regular practice is needed in order for them to work well.
Imagination takes time to do its magic. If you want creative solutions, you need to allow time for the imagination to perform. The optimum solution can only be discovered if imaginative thinking is given the time and tools to conceive it.
Charles W. McCoy Jr. writes, “Imagination plays a crucial role in all genuine creative thinking, because it allows the mind to see the unseen, envision the invisible, and transform ideas into reality.” The more time and technique that is applied to the creative side of problem analysis, the more likely you are to fully understand a problem before arriving at a decision.
The key to being truly creative is the ability and willingness to recognize the assumptions and beliefs that underlie perceptions of a problem and to think beyond them. Questioning the “norm” is an act of courage. To imagine courageously is to question tradition, to defy logic, and to refuse to conform.
Imagining courageously is about openly questioning what we, as well as others, believe to be true about a situation or issue. It is about suggesting the outrageous. Being courageous can be controversial and even dangerous. It takes courage to recognize what is conventional wisdom and to then think beyond it in a creative and productive way.
According to Charles W. McCoy Jr., “Genuine creativity requires raw courage; never flees from adversity, frustration or even failure; challenges conventional wisdom; and vigorously explores beyond the first workable answer to find the very best solution imaginable.”
Imagining courageously is all about suspending judgment. Do not let “group think” control your thought processes. Actively and openly look for the boundaries of colleagues’ mental boxes as well as your own, and then cast your imagination outside those boundaries—even if doing so might offend. | <urn:uuid:03b6342c-ed0f-4fc3-bd91-0f86f5f908d2> | CC-MAIN-2017-09 | https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/harnessing-creativity-to-make-powerful-decisions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00585-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.962458 | 1,067 | 3.375 | 3 |
Carnegie Mellon University computer scientists have developed a new password system that incorporates inkblots to provide an extra measure of protection when, as so often occurs, lists of passwords get stolen from websites.
This new type of password, dubbed a GOTCHA (Generating panOptic Turing Tests to Tell Computers and Humans Apart), would be suitable for protecting high-value accounts, such as bank accounts, medical records and other sensitive information.
To create a GOTCHA, a user chooses a password and a computer then generates several random, multi-colored inkblots. The user describes each inkblot with a text phrase. These phrases are then stored in a random order along with the password. When the user returns to the site and signs in with the password, the inkblots are displayed again along with the list of descriptive phrases; the user then matches each phrase with the appropriate inkblot.
“These are puzzles that are easy for a human to solve, but hard for a computer to solve, even if it has the random bits used to generate the puzzle,” said Jeremiah Blocki, a Ph.D. student in computer science who developed GOTCHAs along with Manuel Blum, professor of computer science, and Anupam Datta, associate professor of computer science and electrical and computer engineering.
These puzzles would prove significant when security breaches of websites result in the loss of millions of user passwords – a common occurrence that has plagued such companies as LinkedIn, Sony and Gawker. These passwords are stored as cryptographic hash functions, in which passwords of any length are converted into strings of bits of uniform length.
A thief can’t readily decipher these hashes, but can mount what’s called an automated offline dictionary attack. Computers today can evaluate as many as 250 million possible hash values every second, Blocki noted.
Given the continued popularity of easy passwords, such as “123456” or “password,” it’s not always difficult to crack these hashes. But even hard passwords are vulnerable to the latest brute force methods, Blocki said.
In the case of a GOTCHA, however, a computer program alone wouldn’t be enough to break into an account.
“To crack the user’s password offline, the adversary must simultaneously guess the user’s password and the answer to the corresponding puzzle,” Datta said. “A computer can’t do that alone. And if the computer must constantly interact with a human to solve the puzzle, it no longer can bring its brute force to bear to crack hashes.”
Because the user’s descriptive phrases for inkblots are stored, users don’t have to memorize their descriptions, but have to be able to pick them out from a list. To see if people could do this reliably, the researchers performed a user study with 70 people hired through Mechanical Turk. First, each user was asked to describe 10 inkblots with creative titles, such as “evil clown” or “lady with poofy dress.” Ten days later, they were asked to match those titles with the inkblots. Of the 58 participants who participated in the second round of testing, one-third correctly matched all of the inkblots and more than two-thirds got half right.
Blocki said the design of the user study, including financial incentives that were too low, might account for the less-than-stellar performance. But he said there also are ways to make descriptions more memorable. One way would be to use more elaborate stories, such as “a happy guy on the ground protecting himself from ticklers.”
The researchers also have invited fellow security researchers to apply artificial intelligence techniques to try to attack the GOTCHA password scheme. Their GOTCHA Challenge is online. | <urn:uuid:e09c7527-208d-438a-8336-d2a841fa8a5b> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2013/11/08/inkblots-could-solve-problem-of-compromised-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00229-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952129 | 791 | 3.484375 | 3 |
Moore’s Law is the observation that the number of transistors capable of being built on an integrated circuit doubles every eighteen months.
The law, first discussed in 1965 by Intel co-founder Gordon Moore, has proven able to consistently predict the pace of 20th and 21st century technological progress. Today, roughly every 18 months computers double in speed and power. At about the same pace, the number of pixels achievable by a digital camera doubles. This is true because Moore’s Law doesn’t just describe the pace of computers; it reveals the hidden exponential curve of innovation. The rule applies across industries and the data guides us to a more functional understanding of technology.
Theories explaining the intangible march of progress had been pondered and published long before Moore’s Law, but it wasn’t until Moore offered a statistical benchmark that these theories became valuable. This is because data has a way of revealing the truth that was always there and, in cases where misinformation is pervasive, speaking loudly and clearly for itself.
The cable industry sees Moore's Law in action every day. Looking at broadband, we can see that peak available speeds have steadily and significantly increased, about 50 percent annually for the past several years. Many of these peak speed increases come along with middle-tier speed bumps as well. What some customers pay now for 50 Mbps is about what they paid for 20 Mbps 18 months ago.
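The arithmetic behind those growth figures is simple compounding; a quick check in Python:

```python
# A quick check of the compounding arithmetic behind these growth rates.
from math import log

# Doubling every 18 months is roughly 59 percent growth per year.
annual = 2 ** (12 / 18) - 1
print(f"{annual:.0%}")                 # ~59%

# Growth of 50 percent per year doubles roughly every 1.7 years.
doubling_years = log(2) / log(1.5)
print(f"{doubling_years:.2f} years")   # ~1.71
```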
Of course, Moore’s Law probably shouldn’t be called a law in the same way that we refer to Newton’s Laws of Motion. Newton described hard-and-fast rules of the universe. Moore’s Law is a compelling observation, but anything can happen. As any computer engineer, technologist, or science fiction writer will tell you, the future is remarkably difficult to predict. Still, for fifty years it’s proven to be a consistent barometer of technological progress.
The point is, without numbers, clear, concise data, and statistical benchmarks backing observations up, abstract arguments like "broadband speeds are getting faster" tend to ring hollow. That's why we take such pride in providing clear, accessible data about the cable industry. In an environment like Washington, where ideas are a commodity and influence is as much a question of credibility as it is accuracy, numbers are invaluable. They tell their own story and they cut through the clutter.
Be sure to visit our Industry Data page to see more informative graphs and key stats that highlight today’s video and Internet marketplace. | <urn:uuid:773903ed-5b94-4135-82bd-cbafac02db00> | CC-MAIN-2017-09 | https://www.ncta.com/platform/broadband-internet/the-curve-of-innovation-at-505-mbps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00577-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947035 | 520 | 3.59375 | 4 |
Smartphones and tablets have taken educational technology into a league of its own. Though BYOD (Bring Your Own Device) and mobile integration were initially met with much skepticism, many schools have now come around, won over by the benefits. Let’s take a closer look at the benefits and the potential risks.
1. Device familiarity
As the students bring devices they are already familiar with, no time is wasted in getting to know the device. They can enroll the devices and start learning right away.
2. Unlimited information
With their smart devices connected to the school Wi-Fi network, students have access to all the information they want, right at their fingertips.
3. Takes the boring out of education
Learning through their beloved gadgets is more engaging and fun. Interactive apps and content keep students glued to their devices.
4. Seamless collaboration
BYOD helps students work together and share content with ease. When it comes to group projects, collaboration is simple and effective.
5. Beyond textbooks
Students can break free of outdated textbooks and access the most up-to-date information on any topic.
6. Anytime, anywhere learning
With smart devices in hand, learning is no longer bound by school hours. Students can keep on learning on their own.
7. Cost savings
Allowing BYOD can save a school a significant amount of money. Providing every student with a dedicated device is no small challenge.
8. High-end devices
Students tend to bring cutting-edge devices. Both students and teachers benefit from the enhanced experience.
9. Personalized learning
With BYOD in place, teachers can more easily meet different learning needs. With education personalized through their own devices, each student can learn at their own pace.
10. Easier evaluation
BYOD makes grading assignments and tests much easier for teachers. With dedicated apps, teachers can look up scores online and see how well their students are doing.
The positives do not end there. However, let’s not overlook these challenges.
1. Theft and loss
The more valuable the items students bring to school, the greater the chance of theft. And kids losing expensive devices at school doesn’t go over well with parents.
2. Overloading the network
Many students will bring more than one device. Connecting all of them to the school Wi-Fi may strain the network.
3. Security threats
Malicious apps installed on a student’s device may open security vulnerabilities on other devices or invite more web-based threats.
4. Distractions
It is easier to get distracted while working on a BYOD device. With fewer restrictions in place, students might end up playing games or browsing instead of learning.
5. App availability
The dedicated learning apps may not support every platform, so they won’t necessarily be available on all the devices students bring to school.
6. The digital divide
BYOD may highlight financial disparities between students. Students who bring low-end tablets can get bullied and may ultimately miss out on the learning experience.
Despite these challenges, going BYOD is worth every bit of the effort. Not because it’s trendy, but because it helps prepare our students and teachers for the future, one of endless knowledge and possibilities.
Computer security systems may one day get a boost from quantum physics, as a result of recent research from the National Institute of Standards and Technology (NIST). Computer scientist Yi-Kai Liu has devised a way to make a security device that has proved notoriously difficult to build: a “one-shot” memory unit, whose contents can be read only a single time.
The research shows in theory how the laws of quantum physics could allow for the construction of such memory devices. One-shot memories would have a wide range of possible applications such as protecting the transfer of large sums of money electronically.
A one-shot memory might contain two authorization codes: one that credits the recipient’s bank account and one that credits the sender’s bank account, in case the transfer is canceled. Crucially, the memory could only be read once, so only one of the codes can be retrieved, and hence, only one of the two actions can be performed—not both.
“When an adversary has physical control of a device—such as a stolen cell phone—software defenses alone aren’t enough; we need to use tamper-resistant hardware to provide security,” Liu says. “Moreover, to protect critical systems, we don’t want to rely too much on complex defenses that might still get hacked. It’s better if we can rely on fundamental laws of nature, which are unassailable.”
Unfortunately, there is no fundamental solution to the problem of building tamper-resistant chips, at least not using classical physics alone. So scientists have tried involving quantum mechanics as well, because information that is encoded into a quantum system behaves differently from a classical system.
Liu is exploring one approach that stores data using quantum bits, or “qubits,” which use quantum properties such as magnetic spin to represent digital information. Using a technique called “conjugate coding,” two secret messages, such as separate authorization codes, can be encoded into the same string of qubits, so that a user can retrieve either one of the two messages. But because the qubits can only be read once, the user cannot retrieve both.
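The trade-off at the heart of conjugate coding can be illustrated with a toy simulation. To be clear, this is only a sketch of the principle, not the scheme from Liu’s paper: it assumes the positions of the two messages are known, and it models the quantum behavior with one classical rule, namely that reading a qubit in the wrong (conjugate) basis returns a uniformly random bit and destroys the encoded one.

```python
import random

PLUS, CROSS = 0, 1  # two conjugate bases (e.g., rectilinear and diagonal)

def encode(msg_a, msg_b):
    """Interleave two messages: msg_a bits in the PLUS basis,
    msg_b bits in the CROSS basis."""
    qubits = []
    for a, b in zip(msg_a, msg_b):
        qubits.append((PLUS, a))   # carries one bit of message A
        qubits.append((CROSS, b))  # carries one bit of message B
    return qubits

def measure_all(qubits, basis):
    """One-shot readout: measuring in the matching basis yields the
    encoded bit; measuring in the conjugate basis yields a random bit.
    Either way, the original state is destroyed."""
    return [bit if enc_basis == basis else random.randint(0, 1)
            for enc_basis, bit in qubits]

msg_a = [1, 0, 1, 1]
msg_b = [0, 0, 1, 0]
qubits = encode(msg_a, msg_b)

# The reader must commit to a single basis for the one measurement.
readout = measure_all(qubits, PLUS)
recovered_a = readout[0::2]  # PLUS-encoded positions: correct bits of A
garbage_b = readout[1::2]    # CROSS-encoded positions: random noise

print("recovered A:", recovered_a)  # matches msg_a
print("noise at B :", garbage_b)    # uninformative
```

Because the single destructive measurement forces a choice of basis, the reader recovers one message intact and only noise for the other, which is the one-shot property described here. An adversary with entangled probes could in principle dodge this trade-off, and that is exactly the loophole Liu’s proof closes for physical systems where entanglement is hard to engineer.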
The risk in this approach stems from a more subtle quantum phenomenon: “entanglement,” where two particles can affect each other even when separated by great distances. If an adversary is able to use entanglement, he can retrieve both messages at once, breaking the security of the scheme.
However, Liu has observed that in certain kinds of physical systems, it is very difficult to create and use entanglement, and shows in his paper that this obstacle turns out to be an advantage: Liu presents a mathematical proof that if an adversary is unable to use entanglement in his attack, that adversary will never be able to retrieve both messages from the qubits. Hence, if the right physical systems are used, the conjugate coding method is secure after all.
“It’s fascinating how entanglement—and the lack thereof—is the key to making this work,” Liu says. “From a practical point of view, these quantum devices would be more expensive to fabricate, but they would provide a higher level of security. Right now, this is still basic research. But there’s been a lot of progress in this area, so I’m optimistic that this will lead to useful technologies in the real world.” | <urn:uuid:21c36405-1a2a-4496-94df-8cbbb7254289> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2014/01/16/quantum-physics-could-make-secure-single-use-computer-memories-possible/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00273-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937287 | 713 | 3.296875 | 3 |
The Moon is a bit of an enigma. In some ways, it's nothing like Earth, as its minerals contain few volatile chemicals and it has a relatively tiny core. But in other ways, it's nearly our twin, with many elements having isotopic signatures that are almost identical.
Currently, our best model for Moon formation involves having a Mars-sized object smack into the early Earth. This could create a Moon that has some similarities to the Earth, but it ends up with most of the iron from the impact being deposited in the Earth's core. The only problem with this is that anything as big as Mars probably originated from elsewhere in the Solar System, and thus would have a very distinct isotope ratio.
Today, Science is releasing two papers that take very different routes to tackling this problem. One models what would happen if, instead of a large size difference between the Earth and its impactor, the two bodies were of roughly equal size. Another models two differently sized bodies colliding, but assumes the proto-Earth was spinning much faster than it is now, with "days" on the order of 2.5 hours long. And, in a dilemma that may interest planetary scientists, both models produce the sort of distribution of materials we currently see.
Models of Solar System formation suggest that its rocky planets were built sequentially, with planetesimals condensing from the dust and debris, and then merging to form protoplanets. Over time, these protoplanets underwent a series of collisions and mergers, building planets like Venus and the Earth. (A few objects, like Mars and Vesta, may have sat out most of these later mergers, leaving much smaller objects behind.)
A collision of this sort could help explain some puzzling aspects of the Moon. If the debris left behind were dispersed enough, some of it could condense into a separate object, explaining how two large bodies could end up in such close proximity. And heavy elements would preferentially end up in the larger one, explaining the Moon's small core.
The simulations that show how these collisions would work involve smacking together two spheres full of particles, with each particle having a distinct identity and location—heavy metal particles in the core, silicate rocks in the crust, etc. By tracing these particles through the collision, it's possible to follow the iron from the impactor, and watch it drop into the Earth's core, and so on. The problem is that they also show that the Moon should end up with a crust that's largely composed of material from the impactor. And that's tough to square with the fact that the different forms of many elements, called isotopes, vary with a body's location in the Solar System.
So, unless the impactor started right next door, we'd expect that the isotope ratios of the material it brought wouldn't look like the ones on Earth. And, as we look more carefully at the material in the Moon, that just doesn't seem to be the case.
One of the new papers, from the Southwest Research Institute's Robin Canup, takes a look at what would happen if two large bodies combined. Typically, the Earth is modeled as being nearly its current size, the product of multiple planetoid mergers; its impactor as being about a tenth of its size. Canup ran a series of models in which the two were much closer to equal size, starting with an impactor that was half the mass and moving up to one that was about 90 percent. Models where the impactor was about 80 percent of the size of the pre-Earth created debris disks that could form a Moon; both it and the resulting Earth ended up with very similar material in their crusts.
A second paper, from Sarah Stewart of Harvard and Matija Ćuk (now with the SETI Institute), went about things completely differently. They focused on the fact that all the collisions involved in building a planet are expected to leave it spinning very rapidly. So they modeled a normal Mars-sized impactor, but had the Earth spinning really fast, with days lasting anywhere from 2.3 to 2.7 hours. These collisions produce a debris disk composed primarily of material from the pre-Earth's mantle, which would explain the Moon's present similarity to the Earth.
An example of one of the runs of this model is seen here. The smash leaves the impactor's matter evenly distributed, and a sufficient amount of debris far enough from the Earth to form a separate body.
The problem with both of these models is that they leave the Earth-Moon system spinning very rapidly. Since the angular momentum of the system has to be conserved, we need some way of getting rid of some of this spin. Tidal forces can do some of that, but the authors of the second paper go on to show that there's a resonance between the lunar orbit and the system's orbit around the Sun. This effectively moves some of the angular momentum out of the Earth-Moon system, and into the system's orbit around the Sun. Combined with tidal forces, this can put a strong enough brake on the system.
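A back-of-envelope calculation shows the scale of the problem. This sketch uses standard textbook values for the Earth (mass, radius, and a moment-of-inertia factor of roughly 0.33); it is only meant to illustrate how much extra spin a 2.5-hour day implies, not to reproduce the papers' models.

```python
import math

M = 5.97e24   # Earth mass, kg
R = 6.37e6    # Earth radius, m
k = 0.33      # moment-of-inertia factor for Earth (I = k * M * R**2)
I = k * M * R**2

def spin_angular_momentum(day_hours):
    """Rotational angular momentum L = I * omega for a given day length."""
    omega = 2 * math.pi / (day_hours * 3600)  # rad/s
    return I * omega

L_fast = spin_angular_momentum(2.5)    # post-impact spin in the model
L_today = spin_angular_momentum(24.0)  # Earth's spin today

print(f"L(2.5 h day): {L_fast:.2e} kg m^2/s")
print(f"L(24 h day) : {L_today:.2e} kg m^2/s")
print(f"ratio       : {L_fast / L_today:.1f}x")
```

Run as written, the fast-spinning Earth carries nearly ten times today's spin angular momentum, roughly more than the entire present-day Earth-Moon system holds, so tidal forces plus the solar resonance have real work to do.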
Which model will win out? Right now, neither paper really addresses the others' model, so it's a bit hard to say. It's possible that, as the details are filled in, one or the other model will provide a better match to the data we already have. Or one of them could identify data we don't have yet. The ongoing GRAIL mission, which is mapping the density of the Moon's crust, may also provide some further information that will help us understand the Moon's formation. | <urn:uuid:f3074463-acfc-41eb-9280-d22be56b604a> | CC-MAIN-2017-09 | https://arstechnica.com/science/2012/10/more-than-one-way-to-smash-the-earth-and-build-our-moon/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00449-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.974884 | 1,151 | 4.59375 | 5 |
The new lenses include innovative, miniaturized sensors to monitor blood sugar levels in human tears as a way to help diabetes patients keep their disease in check.
Google is experimenting with special contact lenses equipped with miniaturized sensors that can analyze the tears in the eyes of diabetes patients to determine when their blood sugar levels need to be adjusted.
The project, which could ultimately make the management of diabetes easier for millions of patients around the world, was unveiled Jan. 17 in a post by project co-founders Brian Otis and Babak Parviz on the Google Official Blog. Otis and Parviz work with the company's Google[x] research branch.
"You've probably heard that diabetes is a huge and growing problem—affecting one in every 19 people on the planet," wrote Otis and Parviz. "But you may not be familiar with the daily struggle that many people with diabetes face as they try to keep their blood sugar levels under control. Uncontrolled blood sugar puts people at risk for a range of dangerous complications, some short term and others longer term, including damage to the eyes, kidneys and heart. A friend of ours told us she worries about her mom, who once passed out from low blood sugar and drove her car off the road."
Managing diabetes for patients can mean wearing glucose monitors and constantly pricking their skin and testing their blood for sugar levels. To change that, alternative methods are always being evaluated and tested.
"Over the years, many scientists have investigated various body fluids—such as tears—in the hopes of finding an easier way for people to track their glucose levels," wrote Otis and Parviz. "But as you can imagine, tears are hard to collect and study. At Google[x], we wondered if miniaturized electronics—think: chips and sensors so small they look like bits of glitter, and an antenna thinner than a human hair—might be a way to crack the mystery of tear glucose and measure it with greater accuracy."
The experimental lenses, which look like typical curved, round lenses, also feature copper-colored "grid" lines that are reminiscent of the rear window heater lines on a modern automobile. The sensors embedded in the grid lines measure glucose levels and analyze the wearer's tears using a tiny wireless chip and a miniaturized glucose sensor that are embedded between two layers of soft contact lens material, according to the post.
So far, the team is testing prototypes that can generate a reading once per second. "We're also investigating the potential for this to serve as an early warning for the wearer, so we're exploring integrating tiny LED lights that could light up to indicate that glucose levels have crossed above or below certain thresholds," they wrote.
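As a purely illustrative sketch of that alert logic (the once-per-second reading rate comes from the post, but the thresholds, units, and LED interface below are invented for the example and are not details of Google's prototype):

```python
import time
import random

LOW_MG_DL = 70    # assumed warning thresholds, for illustration only
HIGH_MG_DL = 180

def read_tear_glucose():
    """Stand-in for the lens sensor; returns a reading in mg/dL."""
    return random.gauss(110, 40)

def update_led(reading):
    """Map a reading to an LED state based on the assumed thresholds."""
    if reading < LOW_MG_DL:
        return "LED: LOW glucose warning"
    if reading > HIGH_MG_DL:
        return "LED: HIGH glucose warning"
    return "LED: off"

# The prototype generates one reading per second.
for _ in range(5):
    glucose = read_tear_glucose()
    print(f"{glucose:6.1f} mg/dL -> {update_led(glucose)}")
    time.sleep(1)
```

Real clinical thresholds, and how tear glucose maps to blood glucose, are exactly the questions the clinical studies mentioned below are meant to answer.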
The project is still in its early phases, but multiple clinical research studies have already been completed that are helping refine the lens prototypes, wrote Otis and Parviz. "We hope this could someday lead to a new way for people with diabetes to manage their disease."
For diabetes patients, they wrote, having the disease is very labor-intensive and is like having a part-time job. "Glucose levels change frequently with normal activity like exercising or eating or even sweating. Sudden spikes or precipitous drops are dangerous and not uncommon, requiring round-the-clock monitoring. Although some people wear glucose monitors with a glucose sensor embedded under their skin, all people with diabetes must still prick their finger and test drops of blood throughout the day. It's disruptive, and it's painful. And, as a result, many people with diabetes check their blood glucose less often than they should."
Otis and Parviz said they are in discussions about their experiments with the U.S. Food and Drug Administration (FDA), "but there's still a lot more work to do to turn this technology into a system that people can use."
The project leaders are now seeking business partners to help invest in and successfully bring the experiments to the marketplace, Otis and Parviz wrote. "These partners will use our technology for a smart contact lens and develop apps that would make the measurements available to the wearer and their doctor," the post stated. "We've always said that we'd seek out projects that seem a bit speculative or strange, and at a time when the International Diabetes Federation is declaring that the world is 'losing the battle' against diabetes, we thought this project was worth a shot."
In September 2013, Google launched a new health care company, called Calico, with a goal of finding ways to improve the health and extend the lives of human beings. The startup is focusing on health and well-being, in particular the challenge of aging and associated diseases, according to Google.
Calico wasn't the first health care-related initiative undertaken by Google. Back in 2008, Google launched its Google Health initiative, which aimed to help patients access their personal health records no matter where they were, from any computing device, through a secure portal hosted by Google and its partners, according to earlier eWEEK reports. Google Health shut down in January 2013.