When a bright 30-something persuaded the authorities to waive the restrictions on age and weight and joined the US Navy in 1944, no one could have foreseen the profound, benevolent impact Grace Hopper would go on to have on computing and the world as we know it today.
This week marks what would have been the 107th birthday of Grace Hopper, the lady who came to be known as the Mother of COBOL. In our own tribute to Amazing Grace, let’s look at some of the details of how she helped drive the creation of the commercial world’s most ubiquitous and successful programming language.
Grace Under Pressure
Grace Hopper was already a well-known computing pioneer by the time she attended the first Conference on Data Systems Languages (CODASYL) in 1959, a consortium that aimed to guide the development of a standard programming language that could be used across multiple computers. The output of that meeting was the blueprint for what was to become the COBOL computer language.
So what did Hopper bring to the party?
Hopper had already established the concept of the compiler – taking coded instructions and translating them into repeatable machine execution. This idea was carried forward into CODASYL.
CODASYL intended that the language should be “used on many computers” (the success criterion was for code to execute on two different classes of machine) – but it was at Hopper’s insistence that the language stipulation was further qualified to be as “close to English” as possible.
Hopper’s later work on language standards, where she was instrumental in defining the relevant test cases to prove language compliance, ensured longer-term portability could be planned for and verified.
Finally, although by no means the first to use it, Hopper popularised the term ‘debugging’ (which at the time literally meant taking a moth out of a piece of computer circuitry), a practice that was to become a vital component of later development products.
An Evolutionary Course
The CODASYL meeting took place in 1959. Computing was in its infancy. Very little survives from then in terms of technology or corporate entity. However, the blueprint for the COBOL language set an evolutionary course that ensured its adoption and usage across decades to come.
New standards bodies emerged as custodians of the language, agreeing updates to the standard in 1968, 1974 and 1985, with additional refinements in later years. Hopper continued to lobby for standardisation and the US Navy played a key role in establishing tests and metrics that allowed vendors and manufacturers to check against the standard.
The language was adopted as a de facto standard by many hardware manufacturers, a plethora of whom started to emerge in the 1980s with the advent of the microprocessor. Soon enough, household-name manufacturers were providing, among other technology, their own “COBOL compiler” product on their machinery. Of course, across the 500 or more platforms on offer to the market since the late 1970s, Micro Focus provided the technology as part of an OEM engineering contract to the hardware industry.
Over time, the commercial world also evolved. The standard and the underlying technology for COBOL had to evolve with it, to support an array of emerging technology. Consider this list –
- Managed Code
No one could have predicted the astronomical growth of technology and how it disrupted entire industries and transformed billions of peoples’ lives. Yet Hopper’s language blueprint of portability, legibility and standardization was a platform from which vendors such as Micro Focus have been able to build out generation after generation of improvement, to enable COBOL to remain relevant, accessible and valuable in today’s commercial world.
Good News Travels Fast
The anniversary of Hopper’s birthday was commemorated by industry giant Google in the best way it knows how: the creation of a unique “doodle” (the picture on the main search page) as an homage to Hopper, including COBOL code being executed on an old machine and even the famous moth making an appearance.
The press were also quick to share in the news. James Bourne’s Developer Tech article asked if we should now re-evaluate COBOL – Hopper’s invention having proved its value for such a long period. It cites how Micro Focus is also providing – with its latest products – a genuine solution to the perennial question surrounding skills.
Elsewhere the Independent online article also described Hopper’s invaluable contribution to the language and to the industry as a whole.
Nick Heath takes an IT-skills slant in his article for ZDNet, highlighting the present-day relevance of COBOL and talking to Micro Focus CTO Stuart McGill about how academia and business can help bridge the skills gap.
Hopper’s Legacy: COBOL today
Hopper remained active in the industry well beyond her scheduled retirement and – as a TV appearance on the Letterman show demonstrates – she remained formidably astute. In most other industries, design principles from over half a century ago would be unlikely to survive. Yet such was the foresight and shrewd thinking of Hopper and her cohorts that her legacy thrives to this day.
COBOL remains portable, scalable, debuggable, easy to learn, and is the preferred language of business for the vast majority of the global Fortune 100. Micro Focus embodies those principles in our COBOL technology: over 500 platforms have been supported with our portable COBOL technology – the industry’s workhorse business language – and we currently support 50 of those platforms today.
Micro Focus’ latest technology supports enterprise class COBOL applications being developed under Visual Studio or Eclipse IDEs and executing across a range of servers including zEnterprise, JVM, .NET, Unix, Windows, Linux and Cloud. Our Amazing Grace may no longer be with us, but the language she pioneered is still at the core of many business systems, and will continue for many years to come.
President Donald Trump’s threats to overhaul the H-1B program, the largest visa program for allowing high-skilled immigrants to work in the U.S., have Silicon Valley shaking in its boots. But American computer scientists might want to root Trump on.
The H-1B visa, created for college graduates with knowledge in a highly specialized field, was first granted in 1990. Its establishment coincided with the rise of the internet, which sent America’s need for skilled computer scientists skyrocketing. Today, the H-1B program is integral to the tech industry: about half of the more than 120,000 H-1B visas granted by the U.S. in 2014 went to those working in computer science.
The H-1B visa has also been a boon for the U.S. economy overall. One study estimated that between 10 percent and 25 percent of all productivity growth in the country between 1990 and 2010 came from foreign workers in science and technology, many of whom are on H-1Bs.
But not everyone wins from the program. Recently published research by economists John Bound, Gaurav Khanna and Nicolas Morales of the University of Michigan found that although the H-1B program is a major contributor to U.S. economic growth, it’s quite bad for domestic computer scientists.
Based on data from 1994 to 2001, the researchers estimate that without the H-1B program, the wages of American computer scientists would have been 3 percent to 5 percent higher in 2001, and Americans’ employment in computer science would have been 6 percent to 11 percent higher. They also find that, in general, the H-1B program makes college graduates worse off, while helping non-college graduates by giving them access to cheaper technology.
The researchers chose to use the period from 1994 to 2001 because it was a time of stable growth in the U.S., during which there was also a large influx of H-1B workers. Though the computer-scientist labor market might be different today, the pain felt by its U.S. participants still likely holds.
The findings were a surprise to the researchers, who had not thought they would discover such a large loss for domestic computer scientists.
“The [H-1B] program led to a lot more innovation and growth in IT, which should raise wages for everyone in that sector,” Khanna told Quartz. “But competition from foreign computer scientists should also keep wages down. We weren’t sure which would be the bigger effect.” The competition effect easily won out.
The study exemplifies the classic immigration trade-off. Almost everybody in the U.S. gains from the H-1B program, and from immigration generally. It helps the economy grow, consumers are better off and company profits are higher. But the workers in direct competition with the immigrants in their industry are usually harmed, and are rarely if ever compensated for that loss.
An important goal of SOA design is the identification of services and their specifications. In other words: Which functions and data should I expose as a service and how do I define and model those identified services? The IBM methodology for defining the SOA analysis and design process is the Service Oriented Modeling and Architecture (SOMA) (see Resources).
SOMA (and many other SOA methodologies) relies heavily on business process analysis and use case design to resolve service interface design at the appropriate level of granularity, establish reuse, and so on. Often, the information perspective of SOA is limited to implementing a small number of services as database queries exposed as Web services. This narrow view completely misses the value that established information architecture concepts and patterns can bring to the SOA solution. To fully support scalable, consistent and reusable access to information, the SOA solution needs to include a broader set of design concerns, reflecting information architecture best practices.
Information as a Service applies a set of structured techniques to address the information aspects of SOA design. The goal is that by understanding what business information exists in the solution, informed decisions can be made to ensure that information is leveraged in ways that best support the technical and business objectives of the SOA solution:
- That services are reusable across the entire enterprise.
- That the business data exposed to consumers is accurate, complete and timely.
- That data shared across business domains and technology layers has a commonly understood structure and meaning for all parties.
- That the core data entities linking together the business domains of an enterprise are consistent and trusted across all lines of business.
- That an enterprise gains maximum business value from its data and data systems.
These objectives are valid for all parts of an SOA solution regardless of technology and implementation choices. Exposing an existing application programming interface (API) as a service, for example, requires an understanding of the data being exposed: Is it reliable and accurate? How does it relate to other data in the enterprise? Is it being presented in an understandable format for the consumers? Applying a structured approach to data analysis, modeling and design in an SOA project leads to a solution implementation that is better at meeting existing business requirements as well as being better prepared to adapt to new ones.
Most of the patterns discussed in the information perspective of SOA design apply to any service. They are independent of how the service is realized and are not limited to information services. These patterns are described in a later section.
However, information architecture concepts -- and in particular IBM's Information on Demand approach to information architecture -- can also provide the best implementation choice for some SOA components. For example, the Data Federation pattern is often the best option to implement an SOA component that aggregates data from disparate systems in real time and then exposes it through a common service interface (see Resources). This article includes considerations related to the realization of information services.
General information-related SOA design patterns
Figure 1 shows the three pillars that the information perspective to SOA design is based on. These pillars are to:
- Define the data semantics through a business glossary
- Define the structure of the data through canonical modeling
- Analyze the data quality
Figure 1. Overview
In subsequent articles in this series, learn about the role and value of the pattern for each pillar. Then, get an introduction to the corresponding IBM technology to this pattern.
The Business Glossary
A foundation for any successful SOA is the establishment of a common, easily accessible business glossary that defines the terms related to processes, services, and data. Often, practitioners discover inconsistencies in terminology while trying to learn the accepted business language and abbreviations within an organization. Without an agreement on the definition of key terms such as customer, channel, revenue and so on, it becomes impossible to implement services related to those terms. If stakeholders differ in their interpretation of the meaning of the parameters of a service, or indeed the data set it retrieves, it is unlikely that a service implementation can be successful.
It is critical that business analysts and the technical community have a common understanding of the terminology used across all aspects of the SOA domain, including processes, services and data. The business glossary eliminates ambiguity of language around core business concepts that could otherwise lead to misunderstandings of data requirements.
A business glossary eliminates misinterpretations by establishing a common vocabulary which controls the definition of terms. Each term is defined with a description and other metadata and is positioned in a taxonomy. Stewards are responsible for their assigned terms: they help to define and to support the governance of those terms. Details for the business glossary pattern are discussed in a future article in this series.
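As a rough, tool-agnostic illustration of the pattern, a glossary entry can be modelled as a term with a definition, supporting metadata, a position in a taxonomy and an assigned steward. The structure and field names in the sketch below are hypothetical and are not taken from any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class GlossaryTerm:
    # Hypothetical shape of a business glossary entry
    name: str                                            # e.g. "Customer"
    definition: str                                       # the agreed business meaning
    steward: str                                          # person accountable for the term
    taxonomy_path: list = field(default_factory=list)    # e.g. ["Party", "Customer"]
    synonyms: list = field(default_factory=list)          # accepted abbreviations and aliases

glossary: dict = {}

def register_term(term: GlossaryTerm) -> None:
    """Add a term to the shared vocabulary, rejecting duplicate definitions."""
    key = term.name.lower()
    if key in glossary:
        raise ValueError(f"'{term.name}' is already defined; reuse the existing term.")
    glossary[key] = term

register_term(GlossaryTerm(
    name="Customer",
    definition="A party that has purchased, or may purchase, the organization's products.",
    steward="jane.doe@example.com",
    taxonomy_path=["Party", "Customer"],
    synonyms=["Client", "Account holder"],
))
```

The important point is not the code but the discipline: one definition per term, an accountable steward, and a single place where everyone looks terms up.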
A key success factor of a business glossary is to make it easily accessible, to link it to other important modeling artifacts, and also to demand that it is actively used in the design phase of the project. This pattern is supported by InfoSphere® Business Glossary, which is part of IBM Information Server. This product is described in more detail in a future article in this series.
As well as a tool to manage and share a glossary, IBM also delivers industry-specific intellectual property, in the form of models. These models contain thousands of business terms, clearly defined, to enable data requirements and analysis discussions with stakeholders.
The canonical data model
Consistent terminology is a good starting point when designing services, but this in itself is not sufficient. You must also have a clear understanding of the way business information is structured. The input and output parameters of services, that is, the messages, are often far more complex than single data types. They represent complex definitions of entities and the relationships between them. The development time and quality of SOA projects can be greatly improved if SOA architects leverage a canonical model when designing the exposed data formats of service models. The resulting alignment of process, service/message, and data models accelerates the design, leverages normative guidance for data modeling and avoids unnecessary transformations. Equally important is surfacing the detailed service data model to stakeholders early in the SOA lifecycle. This facilitates identification of the most reusable data sets across multiple business domains, resulting in service definitions that meet the needs of a wide range of service consumers, thus reducing service duplication.
The key problem addressed in this and subsequent articles is how to best ensure a consistent format for information horizontally across the services and vertically between the process, the service, and the data layers in the SOA context. A canonical data model provides a consistent definition of key entities, their attributes and relationships across the various systems that hold relevant data for the SOA project. The canonical data model establishes this common format on the data layer while the canonical message model defines this uniform format on the services layer. The pattern of a canonical data and message model is presented in a future article in this series.
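As a minimal sketch of the idea (the entity and message names below are hypothetical), the canonical entity is defined once and every service message reuses it, instead of each service and process layer defining its own variant:

```python
from dataclasses import dataclass
from typing import List

# Canonical data model: one agreed definition of the core entities
@dataclass
class Address:
    street: str
    city: str
    postal_code: str
    country: str

@dataclass
class Customer:
    customer_id: str
    legal_name: str
    addresses: List[Address]

# Canonical message model: service payloads reuse the canonical entities,
# so the process, service and data layers all see the same structure
@dataclass
class GetCustomerResponse:
    request_id: str
    customer: Customer

@dataclass
class CustomerChangedEvent:
    event_id: str
    customer: Customer
```

Any transformation to or from system-specific formats then happens once, at the edge, rather than being reinvented for every consumer.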
Industry Models provide an integrated set of process, service and data models that can be used to drive analysis and design of service architectures, ensuring a tight alignment of data definitions across modeling domains. They define best practices for modeling a particular industry domain and provide an extensible framework so that you don't have to constantly redesign your SOA as you add more and more services.
A future article discusses the related data modeling tool Rational Data Architect, and relevant structures from models in greater detail.
Data quality analysis
Practitioners who have considered the concepts described above can deliver service designs with a high degree of consistency across models and metadata artifacts. However, this is no guarantee that the quality of the data that is being returned by services is acceptable. Data which meets the rules and constraints of its original repository and application may not satisfy requirements on an enterprise level. For example, an identifier might be unique within a single system but is it really unique across the enterprise? Quality issues which are insignificant within the original single application may cause significant problems when exposed more broadly through an SOA on an enterprise level. For example, missing values, redundant entries, and inconsistent data formats are sometimes hidden within the original scope of the application and become problematic when exposed to new consumers in an SOA.
The problems therefore are whether the quality of the data to be exposed meets the requirements of the SOA project and how to effectively make that determination. The proposed solution is to conduct a data quality assessment during service analysis and design. After you catalog the source systems that support a service, you can start to investigate them for data quality issues. For example, you should verify that data conforms to the integrity rules that define it. You should verify if data duplication exists and how this can be resolved during data matching and aggregation. On the basis of these types of analysis, you can take appropriate actions to ensure that service implementation choices meet the demanded levels of data accuracy and meaning within the context of the potential service consumers. A future article in this series describes this pattern.
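The following is a deliberately simplified sketch of the kinds of checks such an assessment runs against a candidate source; the field names and the integrity rule are hypothetical, and in practice this analysis is driven by profiling tooling rather than hand-written scripts.

```python
from collections import Counter

def profile(rows, key_field, required_fields):
    """Run basic data quality checks over a list of record dictionaries."""
    findings = {"missing_values": Counter(), "duplicate_keys": set(), "bad_formats": 0}
    seen = set()
    for row in rows:
        for f in required_fields:                 # completeness check
            if not row.get(f):
                findings["missing_values"][f] += 1
        key = row.get(key_field)                  # uniqueness (within this source only!)
        if key in seen:
            findings["duplicate_keys"].add(key)
        seen.add(key)
        postal = str(row.get("postal_code", ""))  # example integrity rule: 5-digit postal code
        if postal and not (postal.isdigit() and len(postal) == 5):
            findings["bad_formats"] += 1
    return findings

suppliers = [
    {"supplier_id": "S1", "name": "Acme",      "postal_code": "30301"},
    {"supplier_id": "S1", "name": "Acme Corp", "postal_code": "3030"},   # duplicate key, bad format
    {"supplier_id": "S2", "name": "",          "postal_code": "10115"},  # missing name
]
print(profile(suppliers, "supplier_id", ["supplier_id", "name"]))
```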
The effectiveness of the data quality assessment can be greatly enhanced with the right tooling decision. InfoSphere Information Analyzer, which is part of IBM Information Server, supports the data quality analysis pattern and is described in a separate article in this series.
The issues and concepts described so far apply to any service in an SOA. Canonical modeling and data quality analysis can provide value to the consistency of services and to its output data regardless of the type of service.
Information services specific patterns
Information services are services whose realization depends on information architecture, or Information on Demand, where a separation of information from applications and processes provides benefits.
Most SOA projects do not start on a green field but are based on an existing IT environment. Some of the challenges are unique to SOA, but, more often than not, well-known problems in traditional information architecture fall within the scope of SOA as well. A typical organization's information environment is often not in an ideal state to enable an effective SOA transformation. From an enterprise perspective, there's often a lack of authoritative data sources offering a complete and accurate view of the organization's core information. Instead, there is a wide variety of technologies used for storing and processing data differently across lines of business, channels or product types. Many large organizations have their core enterprise information spread out and replicated across multiple vertical systems, each maintaining information within its specific context rather than the context of the enterprise. These further drive inconsistencies within the business processes -- which themselves are usually dramatically different within different parts of the enterprise. Information On Demand -- in particular data, content, information integration, master data, and analytic services -- can be leveraged to realize information services that provide accurate, consistent, integrated information in the right context.
Consider the lack of an authoritative, trusted source or single system of record as an illustrative example. Suppose that in an organization's supply chain systems portfolio, there are five systems that hold supplier information internally. Each of these can be considered a legitimate source of supplier data within the owning department. When building a service to share supplier data, what should be the source of supplier data?
- Is it one of the five current systems that have their own copy of the supplier data? If so, which one?
- Is it a new database that's created for this specific purpose? How does this data source relate to the existing sources?
- Does data have to come concurrently from all of the five systems? If so, is it the responsibility of the data architect, the service designer, the business process designer, or the business analyst to understand the rules for combining and transforming the data to a format required by the consumer? (A simplified sketch of such combining rules follows this list.)
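To make those questions concrete, here is a deliberately simplified sketch of what "combining the five sources" can look like in code. The system names, attributes and precedence rules are hypothetical, and in practice this survivorship logic belongs in a governed master data or federation layer rather than being repeated in every consumer.

```python
# Hypothetical extracts of the same supplier from three of the source systems
sources = {
    "procurement": {"supplier_id": "S-100", "name": "ACME Corp.",       "phone": None},
    "finance":     {"supplier_id": "0100",  "name": "Acme Corporation", "phone": "+1-555-0100"},
    "warehouse":   {"supplier_id": "100",   "name": "ACME",             "phone": ""},
}

# Survivorship rules: which system "wins" for each attribute, in order of trust
precedence = {
    "name":  ["finance", "procurement", "warehouse"],
    "phone": ["finance", "warehouse", "procurement"],
}

def consolidate(records, rules):
    """Build a single 'golden' record by taking the first non-empty value per attribute."""
    golden = {}
    for attribute, systems in rules.items():
        for system in systems:
            value = records.get(system, {}).get(attribute)
            if value:
                golden[attribute] = value
                break
    return golden

print(consolidate(sources, precedence))  # the consistent view a supplier service would expose
```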
Often an understanding of these disparate data definitions can only be obtained by mapping back to a reference model (often a logical data model), allowing overlaps, gaps and inconsistencies in data definitions to be identified. Reusable, strategic enterprise information should be viewed as sets of business entities, standardized for re-use across the entire organization and made compliant with industry standard structures, semantics and service contracts. The goal is to create a set of information services that becomes the authoritative, unique, and consistent way to access the enterprise information. Allowing access to any information only through an application limits the scope of the information to the context of the application rather than that of the enterprise as required in an SOA. In this target service-oriented environment, an organization's business functionality and data can be leveraged as enterprise assets that are reusable across multiple departments and lines of business. This enables the following principles of information services:
- Single, logical sources from which to get a consistent and complete view of information through service interfaces. This is often referred to as delivering trusted information.
- The underlying heterogeneity that may exist underneath this information service layer and its related complexity is hidden when required (for example, during runtime). However, the lineage of the information -- the mapping of logical business entities to actual data stores -- is available when appropriate (for example, for data stewards to support data governance, impact analysis, etc.).
- The authoritative data sources of the information service are clearly identified and are effectively used throughout the enterprise.
- Valuable metadata about the information service is available:
- The quality of the information exposed through the service is known and meets the expectations of the business. The information services are compliant with data standards that have been defined.
- The currency of the information (how "old" the data is) is known. Effective mechanisms are available to deliver the information with the required latency.
- The structure and the semantics of the information are known and commonly represented on different architecture layers (data persistence layer, application layer, service/message layer, and process layer)
- The information service may be governed based on appropriate processes, policies and organizational structures:
- The security of the information is guaranteed and incorporated into the solution rather than implemented as an afterthought and follows security and privacy policies.
- The change of the service may be audited.
- The information service is easily discoverable by potential consumers across the organization.
- A holistic governance approach is in place that addresses both the service and the information layer.
Information as a Service is about leveraging information architecture concepts and capabilities -- as defined through Information On Demand -- in the context of SOA. There are important capabilities and concepts in SOA that are not included in Information On Demand and vice versa. But there is also a substantial overlap between them -- such as leveraging content, information integration, and master data services -- which significantly improve the delivery of an SOA project. The following diagram illustrates the alignment between the SOA reference architecture shown on the left (see also Resources) and the Information On Demand reference architecture on the right.
Figure 2. Information services in SOA
As part of the SOA design phase, architects may need to make architecture decisions regarding which patterns to use based on the requirements in the project. Table 1 describes some of the key, but high-level, patterns that may apply.
Table 1. High-level categorization of information service patterns

| Pattern | Problem | Solution |
|---|---|---|
| Data services | How do I expose structured data as a service? | Implement a query to gather the relevant data in the desired format and then expose it as a service (a minimal sketch follows this table). |
| Content services | How do I best manage (possibly distributed and heterogeneous) unstructured information so that a service consumer can access the content effectively? | Provide a consistent service interface to content no matter where it resides, maintaining the relationship between content and master data. |
| Information integration services | How do I provide a service consumer access to consistent and integrated data that resides in heterogeneous sources? | Understand your legacy data and its quality, cleanse it, transform it, and deliver it as a service. |
| Master data services | How can consumers access consistent, complete, contextual and accurate master data even though the data resides in heterogeneous, inconsistent systems? | Establish and maintain an authoritative source of master data as a system of record for enterprise master data. |
| Analytic services | How do I access analytic data out of raw heterogeneous structured and unstructured data? | Consolidate, aggregate and summarize structured and unstructured data and calculate analytic insight such as scores, trends, and predictions. |
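As an illustration of the first row of the table (exposing a query as a service), the following is a minimal, hypothetical sketch built only on the Python standard library. A real information service would add the canonical message format, security, quality and metadata concerns discussed above; the database file and table are assumptions made for the example.

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DATABASE = "enterprise.db"   # hypothetical structured data source

class SupplierDataService(BaseHTTPRequestHandler):
    def do_GET(self):
        # A very small "data service": run a query and return the result as JSON
        if self.path.startswith("/suppliers"):
            with sqlite3.connect(DATABASE) as conn:
                rows = conn.execute(
                    "SELECT supplier_id, name, city FROM suppliers").fetchall()
            body = json.dumps(
                [{"supplier_id": r[0], "name": r[1], "city": r[2]} for r in rows]
            ).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), SupplierDataService).serve_forever()
```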
IBM Information Server plays an important role in the SOA design phase by providing a unified metadata management platform. This platform consists of a repository and a framework that allows various design tools to access, maintain, and share their artifacts with other IBM Information Server components and third party tools. The value of this shared metadata platform is that metadata artifacts can be easily shared between the tools and kept consistent.
The purpose of this article is to give you an introduction to the information perspective of SOA design and some of the key patterns -- the business glossary, canonical models, data quality analysis, and information services. You should see the role of leveraging industry models in those design activities. If any of these topics has sparked your interest, be sure to read the coming articles in this series.
- Check out the rest of this series to learn more about topics that were introduced in this article.
- The "Information service patterns" series discusses the information service patterns addressed in this article. (developerWorks. 2006-2007).
- Read "Design an SOA solution using a reference architecture" to get more information on the SOA reference architecture.
Get products and technologies
- Create, manage & share an enterprise vocabulary and classification system with IBM Information Server and in particular InfoSphere Business Glossary.
- Simplify data modeling and integration design with Rational Data Architect.
- Accelerate projects and reduce risk with IBM Industry Models.
- Understand the structure, content and quality of your data sources with InfoSphere Information Analyzer.
- Participate in developerWorks blogs and get involved in the developerWorks community.
Researchers at MIT and other institutions have demonstrated a new type of magnetism, only the third kind ever found, and it may find its way into future communications, computing and data storage technologies.
Working with a tiny crystal of a rare mineral that took 10 months to make, the researchers for the first time have demonstrated a magnetic state called a QSL (quantum spin liquid), according to Massachusetts Institute of Technology physics professor Young Lee. He is the lead author of a paper on their findings, which is set to be published in the journal Nature this week. Theorists had said QSLs might exist, but one had never been demonstrated before.
"We think it's pretty important," Lee said, adding that he would let his peers be the ultimate judges.
Lee and his fellow researchers didn't carry out their work with IT advancements in mind, but while studying how a QSL works, they saw a phenomenon called "long-range entanglement" that may brighten the prospects for new types of storage, computing or networking.
The two other known forms of magnetism are already widely used. Ferromagnetism, the effect that makes compass needles turn and refrigerator magnets stick to metal, causes attraction or repulsion between two objects. In this type of magnet, the "magnetic moment," or direction of magnetism, of all the atoms inside is the same.
In antiferromagnetism, the atoms within an object have opposite magnetic moments and cancel each other out, which makes them line up in orderly patterns. This effect is used in materials added to hard disk drives to make them more reliable.
In the crystal that the MIT researchers studied, each particle constantly changes its magnetic moment. They never line up with their neighbors or cancel each other out to form patterns. Though this state is called a "quantum spin liquid," that's just an analogy. The crystal is a solid material.
"If the magnetic moments do not order, and they're constantly fluctuating with respect to each other, then we call that a liquid," Lee said.
It wasn't easy to demonstrate this magnetic state. First, the team had to create a pure crystal of herbertsmithite, a mineral that was discovered in the Atacama desert of Chile and named for mineralogist Herbert Smith. That forced the team to invent its own technique, carefully raising and lowering the temperature in a furnace, and produced thin crystals no more than a centimeter across. For the QSL research, which was carried out using a technique called neutron scattering, the crystal had to be cooled to near zero Kelvin, or hundreds of degrees below zero in Celsius or Fahrenheit.
The research on the QSL state revealed other rare and interesting phenomena, Lee said. One was an effect called long-range entanglement, in which two widely separated particles can affect each other's magnetic moments instantaneously.
That effect could aid in the development of quantum computing, which uses a "qubit" based on the quantum state of an atomic particle to represent each bit of information, according to Lee. Impurities in the material around a qubit particle can cause it to change its quantum state unexpectedly, he said.
"There are issues that need to be improved in these qubits so you can have a quantum state that lasts a very long time without, essentially, decaying," Lee said. "This new type of state, with long-range entanglement, is very robust, or protected, against that," Lee said.
In addition to helping to reliably store data and do calculations in quantum computing, long-range entanglement might aid in communication technology, according to Lee. A QSL material might also be turned into a superconductor for efficient electrical transmission over power lines, Lee said.
It's too soon to say how the challenges of building pure herbertsmithite crystals or cooling them down might translate into making quantum storage or other technologies.
"Once we understand a lot more of the basic physics, there could be some good ideas for the engineering aspects, but we're still very early into this research," Lee said. "It's many, many years away from becoming something that's in a technology that a consumer would use." | <urn:uuid:32927773-27ba-4882-90cb-19128ebd88dc> | CC-MAIN-2017-09 | http://www.itworld.com/article/2717157/networking/mit-research-shows-new-magnetic-state-that-could-aid-quantum-computing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00192-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.965012 | 863 | 3.515625 | 4 |
Can Cloud Computing Hubs Alter Food Distribution?
Consumers are increasingly interested in purchasing products from local markets. As people's palates yearn more and more for garden products fresh from the farm, the waiting list is getting longer at places where these are readily available. With decreasing numbers of local producers, there is one practical way for customers to find them without driving a fuel guzzler across the country: cloud computing hubs.
Recent publications from the United Kingdom have shown how cloud computing is bringing vendors and customers together—they can discuss exclusive deals over the Internet. This is eliminating the much-loathed middlemen, namely large grocery stores and retail chains, that often alter the fresh products through processing before consumers can purchase them.
Now customers only need to join a forum and they can chat with the farmers about distribution logistics, such as transport bills, delivery dates, or quantities of products. Thanks to tools like Google Maps and other local directory resources, it will also be possible to find the exact county, small town, or village where the supplier is located. Cloud hubs are also the next big stage towards helping local farmers to overcome the competition of the large grocery stores. Producers will be able to contact potential customers directly.
Analysts are saying that the consumer will be the biggest beneficiary of this arrangement. The consumers will be able to enjoy healthy meals because the products will come right from the farm. They will no longer have to wait to purchase the products in a supermarket in the questionable package form.
Through hub distribution, cloud computing will alter the economics of farm produce. Food will come to the table as a calculated product, with all the mathematics that went into its making. This is because producers and distributors alike will be monitoring food prices closely through the Internet. The same applies to buyers, who can follow market deals in real time and select products from the cheapest sources.
There is also an ecological aspect of this technology. Distributing food through hubs will reduce waste. Produce will no longer go to waste for lack of a buyer. Also, the availability of fresh food will reduce storage and processing costs.
All this will ultimately lead to an exciting situation where food, in all its complexity, can be sourced from across the planet right from one's computer. Even if a product is only available on a certain continent, it will only be a click away for the food connoisseur.
By John Omwamba
Key points in the announcement:
- the OS is designed for Web apps and cloud computing
- it combines the Chrome browser plus a lightweight windowing environment on top of a Linux kernel
- goals are lightweight, fast to boot, fast to allow users to access the Web
- targeted to netbook category of devices initially (via hardware partners), but eventually for desktop machines
- will be released as open source
- relies on a security architecture designed for the Internet era
- it’s not Android
- discussions underway with hardware partners for shipment after mid-2010
If two dots allow one to draw a trend-line, the cloud-centric OS built on a browser can be considered a trend. The two large dots that define this trend-line are Chrome OS plus a Microsoft Research project called Gazelle, a browser-based operating system with an approach similar to Chrome OS. More about Gazelle in a moment.
Based on the limited information that Google has provided thus far, it appears that Chrome OS leverages Google’s core competencies, which include ability to design user experiences that are simple, effective and fast, backed by “rocket-science” software technology under hood. Google’s approach is to first narrow the scope of the problem in order to put more substance and depth into the remainder.
There is some substantive information upon which to make inferences if one looks at the open-source version of the Chrome browser, available at Chromium.org.
The current Chrome browser consists of 1.7 million lines of C++ code, and already incorporates many OS-like aspects (multi-processing, robust isolation). About 60% of the Chrome browser source code is related to rendering HTML, and much of the rest (about 700,000 lines of code) implements OS-like aspects such as multiple process models, interprocess communication (IPC) secure sandbox isolation mechanism, and so on. This system could rest on a foundation of a Linux kernel. The major new piece is the unspecified windowing environment, which presumably would mostly be a pass-through mechanism. Speculating wildly here, I would say the combination of all of the above would result in a system of about 5 to 7 million lines of code, including a stripped-down Linux kernel. This is about the same size as Windows NT 3.1, and about one-tenth the code size of Windows Vista. This is smaller than Google Android, which is variously estimated at between 8M to 11M lines of code. The size estimates perhaps answer the question “Why not use Android?”. That is, Google is betting on a lean and fast browser-based OS rather than one that is built for comfort across a wide range of scenarios.
One might also ask: “Why a browser-based OS? Why is this worthwhile in a time when users can simply procure the combination of Linux, Firefox and OpenOffice on a modest laptop?”. A paper on Chrome’s Multi-Process Architecture provides an answer:
“The current state of web browsers is like that of the single-user, co-operatively multi-tasked operating systems of past. As a misbehaving application in such an operating system could take down the entire system, so can a misbehaving web page in a modern web browser….Modern operating systems are more robust because they put applications into separate processes that are walled off from one another…. We use separate processes for browser tabs to protect the overall application from bugs and glitches in the rendering engine. We also restrict access from each rendering engine process to others and to the rest of the system. In some ways, this brings to web browsing the benefits that memory protection and access control brought to operating systems.”
This text is accompanied by a diagram of the Chrome multi-process architecture.
The multi-process approach is not unique to the Chrome browser. Recent versions of Internet Explorer have something similar. What is different is that this multi-process approach is extended to include the entire machine environment. A crisp explanation comes from Helen Wang, a member of Microsoft Research Labs working on Gazelle, a lightweight, browser-centric OS prototype:
“Everyone accepts that applications need to run on operating systems. However, this has not been the case for Web applications; they depend on browsers to render pages and handle computing resources. Yet browsers have never been constructed to be operating systems. Principals are allowed to coexist within the same process or protection domain, and resource management is largely non-existent.”
Microsoft writer Janie Chang elaborates:
“In the Gazelle model, the browser-based OS, typically called the browser kernel, protects principals from one another and from the host machine by exclusively managing access to computer resources, enforcing policies, handling inter-principal communications, and providing consistent, systematic access to computing devices.”
When Google Wave was introduced in May, I noted how Microsoft Research had seen prototypes of Office that supported real-time collaborative editing in 2003 but never moved forward on this. Now perhaps, this ironic pattern is repeating, assuming that Chrome OS actually sees the light of day.
If Google delivers on its plan, it seems that Chrome OS will be the first cloud-oriented OS to ship. This will be a consumer-oriented offering initially, similar to Google’s past practice in other categories (Web maps, Web email, and the Chrome browser). It will be years (three to five) before it has any impact on the enterprise sector.
For Google to succeed with this initiative, Chrome OS must deliver a user experience that is perceptibly better from the outset. Google was able to achieve this with Google Maps, which reinvented online mapping in a way that users immediately noticed a better user experience. Google has arguably also achieved this objective with Gmail and with the Chrome browser. The question is whether they can achieve a similar goal with an OS — a clearly envisioned target that they will reach for, but one that may prove difficult for them to grasp and hold.
What do you think?
Coral reefs the world over have been taking a beating over the past few decades. Warming water temperatures and decreased pH levels have led to wide-scale bleaching of coral reefs and have decreased the corals' ability to produce the calcium carbonate that forms the structural element of the reef itself. Work published in the open access journal PLoS One on Monday has shown that coral reefs that are protected as marine reserves can bounce back from damage. By placing reefs in marine reserves—areas where dredging and fishing are not permitted—harmful effects from human activity can be mitigated.
The study examined ten different reef sites in and around the Bahamas over the course of two and a half years. The reefs had seen damage both from bleaching and from hurricane Frances in the summer of 2004. At the beginning of the observation period, the reefs had, on average, seven percent coral coverage. At the end, two and a half years later, the reefs in marine reserve had coral coverage increase by 19 percent, when initial distribution was taken into account. The areas not protected as part of a reserve showed no statistically significant recovery.
Professor Peter Mumby of the University of Exeter, lead author of the paper, highlighted the importance of this protection: "Coral reefs are the largest living structures on Earth and are home to the highest biodiversity on the planet. As a result of climate change, the environment that has enabled coral reefs to thrive for hundreds of thousands of years is changing too quickly for reefs to adapt."
Their work shows, for the first time, that reducing the amount of human interference, mainly fishing, can help nature regain lost ground. By limiting the amount of parrotfish taken, the reserves gave these natural herbivores the chance to keep the local seaweed population under check, which gave the reefs the breathing room they needed to bounce back.
PLoS One, 2010. DOI: 10.1371/journal.pone.0008657
A very common question we see here at Bleeping Computer involves people concerned that there are too many SVCHOST.EXE processes running on their computer. The confusion typically stems from a lack of knowledge about SVCHOST.EXE, its purpose, and Windows services in general. This tutorial will clear up this confusion and provide information as to what these processes are and how to find out more information about them. Before we continue learning about SVCHOST, lets get a small primer on Windows services.
Services are Windows programs that start when Windows loads and that continue to run in the background without interaction from the user. For those familiar with Unix/Linux operating systems, Windows services are similar to *nix daemons. For the most part Windows services are executable (.EXE) files, but some services are DLL files as well. As Windows has no direct way of executing a DLL file it needs a program that can act as a launcher for these types of programs. In this situation, the launcher for DLL services is SVCHOST.EXE, otherwise known as the Generic Host Process for Win32 Services. Each time you see a SVCHOST process, it is actually a process that is managing one or more distinct Windows DLL services.
Outlined below are three methods, depending on your Windows version, to see what services a SVCHOST.EXE process is controlling on your computer as well as some advanced technical knowledge about svchost for those who are interested.
Process Explorer, from Sysinternals, is a process management program that allows you to see the running processes on your computer and a great deal of information about each process. One of the nice features of Process Explorer is that it also gives you the ability to see what services a particular SVCHOST.EXE process is controlling.
First you need to download Process Explorer from the Sysinternals site.
Download the file and save it to your hard drive. When it has finished downloading, extract the file into its own folder and double-click on the procexp.exe to start the program. If this is your first time running the program, it will display a license agreement. Agree to the license agreement and the program will continue. When it is finished loading you will be presented with a screen containing all the running processes on your computer as shown in the figure below. Remember that the processes you see in this image will not be the same as what is running on your computer.
Process Explorer Screen
Scroll through the list of processes until you see the SVCHOST.EXE process(es). To find out which services are running within a particular SVCHOST.EXE process we need to examine the properties for the process. To do this double-click SVCHOST.EXE entry in Process Explorer and you will see the properties screen for the process like in the image below.
Finally, to view the services running in this process, click on the Services tab. You will now see a screen similar to the one below.
This window displays the services that are being managed by this particular SVCHOST.EXE process. As you can see the SVCHOST.EXE that we are currently looking at in this tutorial is managing the DCOM Server Process Launcher and Terminal Services.
Using this method you can determine what services a SVCHOST.EXE process is controlling on your computer.
For those who like to tinker around in a Windows command prompt/console window, and have Windows XP Pro or Windows 2003, there is a Windows program called tasklist.exe that can be used to list the running processes, and services, on your computer. To use task list to see the services that a particular SVCHOST.EXE process is loading, just follow these steps:
1. Click on the Start button and then click on the Run menu command.
2. In the Open: field type cmd and press enter.
3. You will now be presented with a console window. At the command prompt type tasklist /svc /fi "imagename eq svchost.exe" and press the enter key. You will see a list of the processes on your computer as well as the services that a SVCHOST.EXE process is managing. This can be seen in the image below.
TaskList /svc output
When you are done examining the output, you can type exit and press the enter key to close the console window.
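If you prefer scripting to a console one-liner, the same mapping can be produced with a short script. The sketch below assumes the third-party psutil package is installed (pip install psutil) and is run from an elevated prompt so that all services can be queried.

```python
# List each SVCHOST.EXE process and the Windows services running inside it.
from collections import defaultdict
import psutil

services_by_pid = defaultdict(list)
for svc in psutil.win_service_iter():
    try:
        info = svc.as_dict()
        if info["pid"]:
            services_by_pid[info["pid"]].append(info["name"])
    except psutil.Error:
        pass  # some services cannot be queried without administrator rights

for proc in psutil.process_iter(["pid", "name"]):
    if (proc.info["name"] or "").lower() == "svchost.exe":
        members = services_by_pid.get(proc.info["pid"], [])
        print(f"svchost.exe (PID {proc.info['pid']}): {', '.join(members) or 'none found'}")
```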
Windows Vista and Windows 7 have enhanced their Windows Task Manager and one of its features allows us to easily see what services are being controlled by a particular SVCHOST.EXE process. To start, simply start the task manager by right clicking on the task bar and then selecting Task Manager. When Task Manager opens click on the Processes tab. You will now be presented with a list of processes that your user account has started as shown in the image below.
Windows 7's Current User Processes
We, though, need to see all of the processes running on the computer. To do this click on the button labeled Show All Processes. When you do this, Windows may prompt you to allow authorization to see all the processes as shown below.
Show all Processes Confirmation
Press the Continue button and the Task Manager will reload, but this time showing all the processes running in the operating system. Scroll down through the list of processes until you see the SVCHOST processes as shown in the image below.
All Windows 7 Processes
Right-click on a SVCHOST process and select the Go to Service(s) menu option. You will now see a list of services on your computer with the services that are running under this particular SVCHOST process highlighted. Now you can easily determine what services a particular SVCHOST process is running in Windows Vista or Windows 7.
The Windows 8 Task Manager makes it much easier to find what services are running under a particular SVCHOST.exe instance. To access the Task Manager, type Task Manager from the Windows 8 Start Screen and then click on the Task Manager option when it appears in the search results. This will open the basic Task Manager as shown in the screenshot below.
To see the list of processes, click on the More details option.
Scroll down until you see the Windows Processes category and look for the Service Host entries as shown in the image below.
Next to each Service Host row process will be a little arrow. Click on this arrow to expand that particular Service Host entry to see what services are running under it.
Under the expanded Service Host, you will now see the list of services that is running under it. This allows you to easily determine what services a particular SVCHOST process is managing in Windows 8.
Now that we know that a single SVCHOST.EXE process can load and manage multiple services, what determines what services are grouped together under a SVCHOST instance? These groups are determined by the settings in the following Windows Registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost
Under this key are a set of values that group various services together under one name. Each group is a REG_MULTI_SZ Registry value that contains a list of service names that belong to that group. Below you will see standard groups found in XP Pro.
| Group | Services in the group |
|---|---|
| LocalService | Alerter, WebClient, LmHosts, RemoteRegistry, upnphost, SSDPSRV |
| netsvcs | 6to4, AppMgmt, AudioSrv, Browser, CryptSvc, DMServer, DHCP, ERSvc, EventSystem, FastUserSwitchingCompatibility, HidServ, Ias, Iprip, Irmon, LanmanServer, LanmanWorkstation, Messenger, Netman, Nla, Ntmssvc, NWCWorkstation, Nwsapagent, Rasauto, Rasman, Remoteaccess, Schedule, Seclogon, SENS, Sharedaccess, SRService, Tapisrv, Themes, TrkWks, W32Time, WZCSVC, Wmi, WmdmPmSp, winmgmt, TermService, wuauserv, BITS, ShellHWDetection, helpsvc, xmlprov, wscsvc, WmdmPmSN |
Each of the service names in these groups corresponds to a service entry under the Windows Registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
Under each of these service entries there is a Parameters subkey that contains a ServiceDLL value which corresponds to the DLL that is used to run the service.
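The relationship between groups, member services and their DLLs can also be inspected directly by walking these keys. The following read-only sketch uses Python's built-in winreg module; it assumes it is run on the local machine and simply prints what it finds.

```python
# Enumerate the Svchost groups and resolve each member service to its ServiceDLL.
import winreg

HKLM = winreg.HKEY_LOCAL_MACHINE
SVCHOST_KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost"
SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"

def service_dll(service_name):
    """Return the ServiceDLL configured for a service, or None if it has none."""
    try:
        with winreg.OpenKey(HKLM, rf"{SERVICES_KEY}\{service_name}\Parameters") as key:
            value, _ = winreg.QueryValueEx(key, "ServiceDLL")
            return value
    except OSError:
        return None

with winreg.OpenKey(HKLM, SVCHOST_KEY) as svchost_key:
    index = 0
    while True:
        try:
            group, members, value_type = winreg.EnumValue(svchost_key, index)
        except OSError:
            break            # no more values to enumerate
        index += 1
        if value_type == winreg.REG_MULTI_SZ:   # each group is a REG_MULTI_SZ list of services
            print(f"[{group}]")
            for name in members:
                print(f"  {name}: {service_dll(name) or '(no ServiceDLL)'}")
```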
When Windows loads, it begins to start services that are enabled and set to start automatically. Some services are started using the SVCHOST.exe command. When Windows attempts to start one of these types of services and there is currently no svchost instance running for that service's group, it will create a new SVCHOST instance and then load the DLL associated with the service. If, on the other hand, there is already a SVCHOST process running for that group, it will just load the new service using that existing process. A service that uses SVCHOST to initialize itself provides the name of the group as a parameter to the svchost.exe command. An example would be:
C:\WINDOWS\system32\svchost.exe -k DcomLaunch
In the above command line, the svchost process will look up the ServiceDLL associated with the service name from the DcomLaunch group and load it.
This can be confusing, so let's use an example. There is a Windows service called Distributed Link Tracking Client which has a service name TrkWks. If we examine the table above, we can see that the TrkWks service is part of the netsvcs group. If we look at the Registry key for this service we see that it's ServiceDLL is %SystemRoot%\system32\trkwks.dll. Therefore, using this information and what we learned above, we know that the executable command for the TrkWks service must be:
C:\WINDOWS\system32\svchost.exe -k netsvcs
When the TrkWks service is started Windows will check to see if there is a SVCHOST process for the netsvcs group already created. If not it will create an instance of one to handle services in the netsvcs group. The SVCHOST process for netsvcs will then start the service by executing the %SystemRoot%\system32\trkwks.dll. Once the DLL has been loaded by SVCHOST the service will then be in a started state.
Now that you understand what SVCHOST.EXE is and how it manages certain Windows services, seeing multiple instances in your process list should no longer be a mystery or a concern. It is not uncommon to see numerous SVCHOST entries, sometimes upwards to 8 or 9 entries, running on your computer. If you are concerned with what is running under these processes, simply use the steps described above to examine their services. If you are unsure what a particular service does and need help, feel free to ask any question you may have in of our Windows forums.
A common misconception when removing malware from a computer is that the only places an infection will start from are the entries enumerated by HijackThis. For the most part these entries are the most common, but that is not always the case. Lately, more infections have been installing part of themselves as a service. Some examples are Ssearch.biz and Home Search Assistant.
It’s a scary world that we live in. Posting to social media is a choice that some people make. Even if they are comfortable with the privacy settings they have configured for their account, there is little stopping someone from resharing the post to a much wider audience.
When the content of that post is something inconsequential, it has little impact. As an individual, you can choose what you post about yourself, knowing that it might reach a wider audience. If somebody else posts something embarrassing about you, or simply more than you would rather share, it seems unfair. That same consideration does not seem to be granted to children.
Things that hit the Internet have a way of sticking around. It is not unreasonable to say that a Facebook post or viral video on YouTube could last until the focus of the post becomes a victim of bullying in grade school. It might take that long for the child to be aware of their unwanted fame but parents might find out much sooner. Kaspersky had an excellent article on tips for parents following viral videos. Close comments to prevent ill-intentioned trolls, set privacy settings to your intended audience only, and think of the consequences.
The problem is not limited to videos. German police issued an appeal to parents to stop posting photos of their children. Facebook may implement facial recognition to warn people when they are posting photos of minors publicly.
Many people were upset when VTech was compromised, releasing the names, email addresses, passwords, and home addresses of almost 5 million parents and over 200,000 children’s first names, genders, and birthdays. All of this information helps attackers pull off scams. With believable information, more people are likely to be victims. While people were upset that VTech’s Kid Connect service was compromised, many post that same information publicly to social media.
All of this oversharing has gone too far. Privacy settings are complicated but necessary to understand if you are going to use social media. Oversharing might allow someone to create a complete profile on a victim. If we thought the popular security question asking your mother’s maiden name was a weakness for our generation, the next generation is going to be completely known and transparent. Review privacy settings and avoid sharing more information than is necessary. A photo post wishing happy birthday could reveal name, birthday, and appearance.
That is why I say that a child’s privacy should be treated like their credit. Most minors do not have a credit history. Experian states that if a child has a credit report, one of three things happened:
You have applied for credit in their names and the applications were approved. You have added them as authorized users or joint account holders on one or more of your accounts. Or, someone has fraudulently used their information to apply for credit and they are already identity theft victims.
Unlike the free annual credit report, there is no free privacy report. If everything goes as planned, a person has their own credit to make or ruin. Likewise, it should be up to them whether they shed or hold onto their privacy. They won’t have anything to blame you for and can decide for themselves what level of exposure they would want. This also might mean standing up to grandparents and aunts/uncles about sharing so much information but it’s your job to protect your kids, right? | <urn:uuid:ee11ee3f-4fa3-4523-89e5-a3adfa863ecd> | CC-MAIN-2017-09 | https://www.404techsupport.com/2015/12/childs-privacy-credit/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00064-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966941 | 687 | 2.625 | 3 |
The title of this post seems to be lost on some who are responsible for security architecture. One of my reflections on this past summer is that not everyone is aware of the difference between weaker and stronger forms of multifactor authentication.
You have likely read about multifactor authentication, have used it with your social networking websites, or maybe you have used a form of multifactor authentication in a corporate environment. This is all very good news. The bell has tolled for single-factor username-and-password schemes and people are starting to realize that this old stalwart of authentication needs to be retired as soon as possible.
Keylogging, man-in-the-middle attacks and social engineering techniques leveraged by cunning identity thieves are in the news every day. The time has come for multifactor authentication.
Why? It makes the job of a malicious hacker more difficult. As with all attacks, malicious hackers are looking for ways to steal your identity. Nobody in the cybersecurity business is getting fired right now for suggesting that multifactor authentication should be used in their enterprise. It’s a good idea that has reached the executive ranks.
But without understanding the offensive side of the security equation, some on the defensive side of cybersecurity have forgotten that not all multifactor authentication techniques are equal.
Simply put, it's smart to choose a multifactor authentication method that matches the risks.
This summer I spoke to security architects in large enterprises in both North America and Europe and their job was to protect one of three things: money, privacy or critical infrastructure. Some of these professionals were planning to employ SMS-based multifactor authentication. This is where users log in to a website and are challenged to enter a code that is sent to their mobile devices via text message (SMS).
SMS-based multifactor authentication is better than single-factor username-and-password authentication. These professionals had every reason to be glad to be working on these projects. But I challenged some of them to explain to me why they did not choose a stronger form of multifactor authentication.
“What’s wrong with SMS?” they asked. What bothered me was not that they were employing SMS, but that they did not know the weaknesses.
In addition, I witnessed a demonstration at the Def Con 21 conference in Las Vegas this year where SMS messages were being intercepted — by a Femtocell device hacked by ethical researchers — and projected onto a screen. This was a friendly environment and nobody was hurt, but it laid bare the weakness of non-encrypted messages like SMS.
There are other forms of multifactor authentication that are much stronger than SMS, and even easier for the end-user. An example includes innovative virtual smart credentials embedded onto mobile devices. The chain of communication is encrypted, and doesn’t require the user to type a code. It’s not often that better security can also mean a better user experience.
Your money and privacy are important to you. Before you log in to a bank, conduct a transaction with your government, or turn on a pump at a critical infrastructure plant, you should consider that there are malicious individuals or groups out there who strive to obtain your identity for illegal gain. Making it more difficult for the bad guy means choosing a method of authentication that does not easily give away your identity.
SMS multifactor authentication is a step above username-and-password solutions, but if what you are protecting is important to you, there are stronger methods. | <urn:uuid:ab093ed5-5b8d-4199-9f4c-a58e98132bb4> | CC-MAIN-2017-09 | https://www.entrust.com/multifactor-authentication-techniques-equal/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00185-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.960862 | 713 | 2.703125 | 3 |
The new iPhones have an added capability that's of particular interest to scientists: A barometer.
The barometer capability wasn't added to help scientists, though. It sounds strange, but a barometer can help improve GPS results to better pinpoint a user's location. Android has supported barometric readers for a while, but not all Android phone makers have opted to include barometers in their phones.
Improved location readings are useful in new kinds of apps that Apple wants to support, particularly around health trackers.
But there's another reason that the barometers are interesting. It's because scientists hope to use them to crowdsource data so that they can do a better job at predicting the weather.
Researchers are already using data collected from an Android app that measures barometric pressure. Jacob Sheehy and Phil Jones built the app, called PressureNet, and have been sharing data collected from it with scientists. They also let other developers build their technology into third party apps as a way to further distribute the technology and collect more data.
They launched PressureNet in 2012 and have had 95,000 downloads but only about 22,000 people actively use it, Sheehy said. However, including third party apps, like Beautiful Widgets, around 300,000 phones are capable of feeding pressure data to PressureNet.
So far around 300 people have signed up to use the data from the app, 100 of whom are researchers. But only around 10 or so are active, Sheehy said.
One of those active researchers is Cliff Mass, an atmospheric scientist at the University of Washington. He's collecting data from both PressureNet, which provides around 90% of the data he gets from smartphones, and OpenSignal, the developer of another app that collects pressure data. He's getting around 115,000 pressure observations per hour.
"We need millions of observations per hour over the U.S. to do the job," he wrote in a blog post about his work.
He said the phone-based collection of data might help meteorologists do a much better job predicting weather, including severe incidents like thunderstorms, that may happen in the coming hours. "To forecast fine-scale weather features (like thunderstorms), you need a fine-scale description of the atmosphere, and the current observational network is often insufficient," he wrote. "I believe that dense pressure observations could radically improve weather prediction, and early numerical experiments support this claim."
Mass was excited to discover that the new iPhones will have barometers since having more phones with this capability in the market could help him collect the volume of data he needs.
Users will first have to download an app. Sheehy said that he's working on an iPhone app and an SDK so that others can build his technology into their apps.
Mass has high hopes for broad adoption. He's reached out to Google about building technology into Android that would help capture pressure data from all phones. While a number of Google engineers have been "supportive," Google doesn't appear ready to enable such collection, he said.
Mass also notes that if a very popular app, like the one from the Weather Channel, built this capability in, far more data collection could happen.
Until then, Sheehy hopes that inviting iPhone users to get the app might significantly grow the user base. So far, it's been tough for people to share the app with friends since existing users are limited to Android and to only certain Android phones. At some point, presumably all iPhones will have barometers, adding a large base of potential users.
In addition, there's just something about Apple. "Apple has a way of making things mainstream, so I expect to see a much fuller and more competitive landscape now that Apple is here," Sheehy said.
This story, "How the new iPhones could help scientists predict the weather" was originally published by CITEworld. | <urn:uuid:7840a72c-b11b-4d4e-a572-4ee5588a18b3> | CC-MAIN-2017-09 | http://www.itworld.com/article/2694837/mobile/how-the-new-iphones-could-help-scientists-predict-the-weather.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00237-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967889 | 789 | 2.984375 | 3 |
IBM on Wednesday said it was establishing a consortium with the European Union and universities to research new cloud-computing models to reduce the cost of hosting and maintaining Internet-based services.
The consortium will undertake research that could lead to the development of new computer science models that bring together managed Internet-based services from diverse hardware and software environments in a flexible cloud environment, IBM said in a statement.
The new design and deployment models could help cut costs compared with conventional models, which are complex and require significant time and cost to maintain, IBM said. The current systems are not flexible and need to be manually customized for services to communicate and work together. The researchers hope to establish a framework to cut down the design and deployment time for such services by hosting them in a central cloud environment.
The researchers will undertake a project called Artifact-Centric Service Interoperation (ACSI), based on the concept of interoperation hubs that IBM Research introduced last year. These hubs provide cloud-based environments in which flexible Internet-based software and services can easily be created and deployed. Customers would pay for service integration and for the hosted services depending on data stored and transactions completed. Consortium partners will develop services and applications for the project, IBM said.
IBM was not immediately able to comment on whether technologies derived from the project will be put to immediate use.
The universities and organizations involved in the project include Sapienza Universita di Roma, Italy; Free University of Bozen-Bolzano, Italy; Imperial College, United Kingdom; Technische Universiteit Eindhoven, Netherlands; the University of Tartu in Estonia; and Collibra NV in Belgium.
What you should know about the Internet Standards process.
By Pete Loshin
All about Internet Standards
Some of the solutions that researchers and developers have come up with since 1969 to do interoperable internetworking have been quite clever; others have been pretty simple. All of those that are considered Internet standards are published in a series of documents called Requests for Comments, or RFCs. Though these are the most "famous" Internet documents, they are far from the only ones.
The first and possibly most important thing to remember about Internet standards is that while all Internet standards are documented in RFCs, not all RFCs document Internet standards. Only a relative few RFCs document actual Internet standards; many of the rest document specifications that are on the standards track, meaning they are at some point in the process that takes a specification and turns it into a standard. Non-standard, non-standards track RFCs may be published for information purposes only, or may document experimental protocols, or may simply be termed historical documents because their contents are now deemed obsolete (this is the RFC "state" which we'll come back to later).
There are several important published document series related to Internet standards and practices. They include:
- RFCs: These are Requests for Comments, an archival document series. RFCs never change. They are intended to be always available, though RFC status can change over time as a specification moves from being a proposed standard to a draft standard to an Internet standard to an historical RFC.
- STDs: The STD (for "standard") series of documents represents Internet standards. A particular STD may consist of one or more RFCs that define how a particular thing must be done to be considered an Internet standard. An STD may consist of a single document, which is the same as some single RFC. However, the STD number stays the same if that RFC is deprecated and replaced by a newer RFC, but the document to which the STD points changes to the newer RFC.
- FYIs: This series consists of "for your information" documents. According to RFC 1150, "F.Y.I. Introduction to the F.Y.I. Notes", the series is designed to give Internet users information about Internet topics, including answers to frequently asked questions and explanations of why things are the way they are on the Internet.
- BCPs: The Best Current Practices series is defined in RFC 1818, "Best Current Practices", which says that BCP documents describe current practices for the Internet community. They provide a conduit through which the IETF can distribute information that has the "IETF stamp of approval" on it but that does not have to go through the arduous process of becoming a standard. BCPs may also cover meta-issues, such as describing the process by which standards are created (see below or RFC 2026 for more on the Internet standards process).
- Others: There are a bunch of different documents that, over time, have been treated with more or less respect. This includes RTRs (RARE Technical Reports), IENs (Internet Engineering Notes), and others. We won't be covering these, as they rarely come up in discussions of current issues. Also, while the STD, FYI, and BCP document series contain RFCs, these other documents are not necessarily RFCs.
Part 2: RFCs and Internet Drafts
RFCs and Internet-Drafts
Some readers may wonder why Internet-Drafts (I-Ds) are not included in the list above with all the rest, but I-Ds are quite distinct from RFCs. For one thing, anyone can write and submit an I-D; RFCs are published (from I-Ds) only after an I-D has been through a sequence of edits and comments. For another thing, I-Ds expire six months from the time they are published. They are considered works in progress, and each one is supposed to state explicitly that the document must not be cited by other works. Where the RFC series is archival, I-Ds are ephemeral working documents that expire if no one is interested enough in them to move them forward through the standards process.
This is a critical distinction. Networking product vendors often claim that their product, protocol, service, or technology has been given some kind of certification by the IETF because they have submitted an I-D. Nothing could be further from the truth (though even publication as an RFC may mean little if it is published as an Informational RFC).
I-Ds become RFCs only after stringent review by the appropriate body (as we'll see in Part 4). Some more differences:
- RFCs are numbered, I-Ds are not (they are given filenames, by which they are usually referenced). RFC numbers never change. RFC 822 will always be the specification for Internet message format, written in 1982. If substantial errors are found in an RFC, a new RFC may have to be written, submitted, and approved; you can't just go back and make edits to an RFC.
- RFCs are given a "state" or maturity level (where they are on the standards track, or some other indicator) as well as a "status" that indicates the protocol's requirement level. We'll come back to these topics in the next section. I-Ds, on the other hand, are just I-Ds. The authors may make suggestions about what kind of RFC the draft should eventually become, but if nothing happens after six months, the I-D just expires and is supposed to simply vanish.
- RFCs are usually the product of IETF working groups, though I-Ds can come from anywhere and anyone.
Part 3: RFC states and status
RFC State and Status
RFCs can have a state, meaning what kind of an RFC it is; and a status, meaning whether or not the protocol specified in the RFC should be implemented (or how it should be implemented). Valid RFC states include:
- Standard Protocol
- These are Internet Standards with a capital "S", which means that the IESG has approved it as a standard. If you are going to do what the protocol in the RFC does, you have to do it the way the RFC says to do it. Very few RFCs represent full Internet Standards.
- Draft Standard Protocol
- Draft standards are usually already widely implemented, and are under active consideration by the IESG for approval. A draft standard is quite likely to eventually become an Internet Standard, but is also likely to require some modification (based on feedback from implementers as well as from the standards bodies) and the authors are supposed to be prepared to deal with that (by making the changes that have to be made).
- Proposed Standard Protocol
- Proposed standards are being proposed for consideration by the IETF in the future. A proposed standard protocol must be implemented and deployed, so it can be tested and evaluated, to be given proposed standard state. Proposed standards almost always get revised (sometimes revised significantly) before advancing along the standards track.
- Experimental Protocol
- Experimental RFCs describe protocols that are not intended for general use, and that are, well, considered experimental. In other words, don't try this at home.
- Informational Protocol
- Informational RFCs are often published without the intention of putting the protocol on the standards track, but rather because they provide useful information for the Internet community. For example, the Network File System (NFS) protocol was published as an Informational RFC so that implementers other than Sun Microsystems could build NFS clients and servers.
- Historical Protocol
- Though most of the other designations are included on the first page of the RFC, an RFC that was once on the standards track can be redefined as historical if the protocol is no longer relevant, was never accepted, or was proved flawed in some way.
The status of a particular protocol relates to how necessary it is to implement. The status levels include:
- Required Protocol
- A required protocol must be implemented on all systems.
- Recommended Protocol
- All systems should implement a recommended protocol, and should probably have a very good reason not to implement it if they don't. It's not really optional, but it's not entirely required either.
- Elective Protocol
- A system may choose to implement an elective protocol. But if it does implement the protocol, the system has to implement it exactly as defined in the specification.
- Limited Use Protocol
- Probably not a good idea to implement this type of protocol, because it is either experimental, limited in scope or function, or no longer relevant.
- Not Recommended Protocol
- Not recommended for general use. In other words, there's probably no good reason for you to implement this protocol.
Part 4: Turning I-Ds into Standards
Turning I-Ds into Standards
You can't tell the players without a scorecard, and there are a number of different players in the Internet standards game. Before you can truly understand how the process works, it helps to know who is involved.
It might be nice if there were an orderly org chart that laid out the different entities involved in the standards process. On the other hand, the standards process is an organic, human one that sometimes, over the years, adapts to market or political forces. The list below gives an idea of the entities involved.
- Internet Society (ISOC)
- ISOC is the umbrella organization to all Internet standard activity. Positioned as the professional organization for the Internet and TCP/IP networking, ISOC sponsors conferences, newsletters, and other activities pertaining to the Internet.
- Internet Architecture Board (IAB)
- The IAB was first formed in 1983, when it was known as the Internet Activities Board, and then reconstituted as a component of ISOC, the Internet Architecture Board, in 1992. Its early history is documented in RFC 1160 and its current charter in RFC 1601. The IAB chooses the steering groups' members and provides oversight to the Internet standards process, publishes RFCs and assigns Internet-related numbers.
- Internet Engineering Task Force (IETF)
- Though often portrayed as a very formal entity, the IETF consists of anyone who shows up (either in person or by mailing list) and participates in IETF activities. IETF activities are organized by areas (active IETF areas are listed at the IETF website), and within each area are more focused working groups. Each area has one or two area directors, and each working group has one or two chairs as well as an area advisor; these individuals guide the work of the groups.
- Internet Engineering Steering Group (IESG)
- The IESG consists of the IETF area directors and the IETF chair, and this is the body that has final say over whether a specification or protocol becomes a standard or not.
- Internet Corporation for Assigned Names and Numbers (ICANN)
- ICANN is the controversial new entity with shaky finances that was formed last year to take over the functions of the RFC Editor and the Internet Assigned Numbers Authority (IANA). Both those functions had previously been carried out by the late Jon Postel, whose untimely death highlighted the need for a new structure to handle the publication of RFCs as well as maintaining lists of protocol and address numbers that have been assigned or reserved for all the different mechanisms, specifications, and protocols defined by the IETF.
The Internet Research Task Force (IRTF) and Internet Research Steering Group (IRSG) fulfill similar functions for long-term planning and research, but the IRTF is not generally an open organization as the IETF is, and these two entities have less immediate impact on Internet issues, so they generally perform their functions in the background.
RFC 2026, "The Internet Standards Process - Revision 3," documents the process by which the Internet community standardizes processes and protocols. On its face, the process is simple: a group or individual submits their draft for publication as an Internet-Draft. This is the first step. At this point, the document is publicly posted on the Internet and a notification of its publication is posted to the IETF-announce mailing list (IETF mailing lists are archived at the IETF website). Most I-Ds don't progress beyond this point.
Assuming that there is enough interest in the draft to generate discussion, the authors may be called upon to incorporate edits. Once there is consensus among those who are working on the draft (usually work is done in working groups, much of the work taking place in the context of the working group mailing list), a "Last Call" will be issued for further comments on the draft. Any further comments may be incorporated into the draft after the Last Call period (usually some number of weeks), at which point the draft can be submitted to the IESG for approval and publication as an RFC.
Of course, the draft may be published as an experimental or informational RFC; but if it makes it onto the standards track, it starts out as Proposed Standard. Over time, the specification may advance along the standards track (depending on whether it is accepted by the community and implemented, how well it works, and whether or not something better comes along).
A standards track specification may need to go through a revision process as it progresses. Thus, the same specification may be rewritten several times over a period of years before it becomes an Internet standard. There are only about 50 full Internet standards; most of the protocols that we take for granted as being "standards," including HTTP, DHCP, MIME, and many others, are actually either Proposed Standards or Draft Standards. Lynn Wheeler maintains a web page that lists all current RFCs along with their status. It is an interesting and instructive list.
Few people really understand the distinctions between different types of Internet standard (and non-standard) documents, not so much because the concepts are so complicated but rather because they are relatively obscure. And networking vendors often misrepresent standards activities when they announce them. Armed with the information in this article, you can easily determine whether the latest and greatest protocol just released by Novell, Microsoft, or Cisco is actually a new Internet standard or just documented in yet another Internet-Draft.
Part 5: Finding RFCs
The IETF publication mechanism does not provide the best interface or search engine for locating RFCs. However, it should be considered canonical. This list is not comprehensive, as there are probably scores if not hundreds of websites and FTP servers serving some or all RFCs. However, these are good places to go to when you need to locate a particular RFC or to find out more about what might be in an RFC or I-D.
- Internet Standards Archive. This is a good easy site, and they've got good search facilities for RFCs and I-Ds. A good "go-to" site for RFCs.
- Lynn Wheeler's RFC Index. This is another excellent resource if you're trying to figure out what's current, what is a standard, and what is not.
- The NORMOS Standards Repository. Another good site, it has particularly good search capabilities; very flexible. It also returns all the hits (unlike the Internet Standards Archive above).
- Invisible Worlds RFC Land. This seems to be a pretty cool site. Carl Malamud and others had a neat idea about XML-tagging RFCs. There's a lot of graphics and very involved programming underneath the website, so I'd like it better if it were simpler without all that stuff, and they still need to finish the XML-ification (at least, that's how it seems), but this site is recommended as well.
- The RFC Editor Page. This is the official place, and there's lots of good information here as well.
- The RFC Editor's Search through the RFC Database Page. This used to be nothing more than a simple listing, but has become virtually overnight one of the best resources on the web for RFCs. It is canonical, and you can download the whole RFC database from here too.
Pete Loshin (email@example.com) began using the Internet as a TCP/IP networking engineer in 1988, and began writing about it in 1994. He runs the website Internet-Standard.com where you can find out more about Internet standards. | <urn:uuid:b12d8111-01cd-4bca-a0b5-2fcdb010367a> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/616051/What-you-should-know-about-the-Internet-Standards-process.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00413-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951322 | 3,384 | 2.625 | 3 |
The very first road to the various app stores from Apple and Google was paved with native code. If you wanted to write for iOS, you learned Objective-C. If you wanted to tackle Android, Java was the only way. Similar issues popped up with all the other smaller players in the smartphone market.
Then some clever developers came to a realization: All the smartphones offered a nice option for displaying HTML in a rectangle on the screen. You have to write a bit of native code that pops up this rectangle in the native language, but everything inside the rectangle is controlled by the same languages that control the browser.
That started changing several years ago. Apple relented and recognized that HTML was not dangerous. Then the hardware got faster, smoothing over many glitches. Today, some of the HTML-based apps I've been writing perform just as well as native apps -- and they're much easier to port.
PhoneGap began as an open source project before it was absorbed by Adobe. There's still an open source version called Cordova available from the Apache Foundation and a very similar version called PhoneGap that's available under an open source license (ASF).
The principle difference is that Adobe offers a smooth Web service that turns your HTML into apps. You write the HTML, and Adobe's cloud turns it into something that runs on iOS, Android, Windows Phone, BlackBerry 5/6/7, and webOS. There's a free version that lets you build an unlimited number of open source apps, but only public apps. Professional developers can pay $10 a month for unlimited private apps.
Of course you can do all this yourself. You can fire up Xcode, Android Studio, or the tools for the other smartphones and create each on your own. PhoneGap is really six or more projects for different platforms that all implement more or less the same API. Your code should come close to running the same on all the machines, although issues often arise due to differences in hardware.
After building apps for the iPhone and Android, I can recommend the build process. Just downloading all the tools takes a long time. The job of building code has migrated to the Web, and Adobe is offering one of the first concrete and compelling tools. You put in one chunk of code for your browser and out come six different apps that run on six different platforms. That's amazing.
Though the entire process takes plenty of weight off our shoulders, it is neither as simple nor as perfect as it could be. The platforms have plenty of details that need to be filled out endlessly. The documentation, while good, can't begin to offer enough detail for every possibility.
One of the trickier issues involves creating the digital signature. Apple, Android, and BlackBerry all ask the developer to "sign" the code, essentially acting like the signature an artist applies to a masterpiece or the President creates during a fancy signing ceremony in the Rose Garden. While there's always a big gap between the symbolism and reality, there's no doubt of the legal and emotional power behind the digital signature.
Adobe asks you to upload the private keys and the passwords to its cloud. This may appear as a service, but it gives Adobe the power to create anything it wants and distribute it any way it likes. Would the company act upon this power? I'm sure the public answer approximates the word "never," but who knows about others poking around Adobe's infrastructure? What if Adobe employs someone like Ed Snowden with the ability to read files at will and impersonate others? That person could create extra apps and distribute them easily.
Adobe is not alone here. You court this potential security hole with AppGyver and Icenium as well, and in any case you can work around it. Last week I had to sign a new app, and the safest way was for me to download the complete source code, build it from scratch, then let Apple's built-in signing tool handle it automatically. The mathematics don't require a cloud of servers. Anyone can sign any digital file with the algorithms. But using Apple's tool seems to be the safest path through the system.
Adobe's Build tool also offers one other nice feature: The binary wrapper for your app can also look for new versions of the software on startup, something that Adobe calls "Hydration." This allows you to push new builds to your users without going through the standard update mechanism.
Adobe sells the Build service as part of the Creative Cloud, its latest plan to bundle all of its applications for one monthly fee that tops out at $75. There's also a free plan for testing out the service that offers one "private app" and an unlimited number of open, public apps. Separate paid programs begin at $10 a month and offer many more private apps with controlled access.
If imitation is the sincerest form of flattery, the folks at AppGyver are clearly infatuated with PhoneGap. They've taken much of the core from the open source project but have added their own build infrastructure and one very useful feature that may prove irresistible to developers.
The interface is a bit different. Whereas the standard way to use PhoneGap is with the customary developer tools like Xcode, AppGyver runs from the command line and piggybacks on many of the tools developed for Node.js. Installing the software requires running the Node package manager and Python. While AppGyver apparently works with Windows and Cygwin, I was up and running on my Mac within seconds of starting. AppGyver is geared for Linux and Unix, and everything is ready to go on your Mac because it's a Unix box underneath.
When all the command-line typing is done, you're still playing with the code in your browser. Safari does a credible job of emulating and debugging the kind of HTML that runs in PhoneGap/Cordova. I've found a few inconsistencies over the years, but not many. You write your code in your favorite editor, then you deploy it. I started out debugging in Safari, then switched to the built-in simulator. Safari offers the kind of step-by-step debugging that's often necessary, while the Xcode simulator works more for double-checking.
There were some glitches -- or perhaps I should call them overly earnest suggestions. My builds would often fail because some SCSS file was missing. The code ran fine in Cordova with Xcode and in Safari -- neither batted an eye. But AppGyver wouldn't move forward without cleaning up that issue.
My favorite part of the entire AppGyver process is the way you can deploy to your smartphone. When you first deploy, AppGyver creates a QR code with the URL. AppGyver also gives away a set of free apps that can interpret these bar codes and use them to download the latest version of the HTML. All of a sudden, your iPhone will reach out and suck up the latest version of your program and run it in AppGyver's shell.
It's impossible to praise this feature too loudly. I don't know how many times I've lost a day or two of development because apps will only run on iOS devices that have permission from Apple's secret bunker. One guy wrote me saying the software wouldn't work on his phone -- it turned out he'd upgraded a week before. The old UUID in the certificate chain was worthless now, and everything had to be redone. It's not exactly right to say that nothing happens in the iOS world without a developer asking "mother may I" of Apple, but it's a close approximation. AppGyver's solution is a godsend for developers.
The AppGyver system avoids the endless clicking that gets in the way of real debugging and real quality assurance with real users. Apple's tools insist you can't have more than 100 beta testers no matter what. The AppGyver app has already been approved by the App Store, and it can download the latest version of your app when anyone points the camera at the bar code. Others can debug your code, and it's much simpler. This is real innovation.
I used to see the value in this several years ago. Some of my HTML apps were slow at times, especially when I filled up the RAM with baseball statistics. But this effect has been much less noticeable on the newer smartphones. The bigger memories and faster chips do a better job of swapping out the HTML pages. For that reason, I didn't see or feel much difference when using Steroids. This might be quite different with your app; I've noticed that the smartphones handle code in widely varying ways.
There are other parts to the AppGyver world. Steroids works with AppGyver's Cloud Services that handle the building and distribution of your app. When you're just debugging, the code flows through the cloud to your iPhone or Android. When you're ready to submit it to the stores, it will build the code for you -- if you upload your private key for creating the digital signature.
Prototyper, a neat tool still listed as beta, tries to make app creation as easy as uploading images, then dragging and dropping links between them. It works, but only for the simplest ideas. After a few minutes, I wanted to seize control again and write text with an editor. It may be, however, a good tool to give to the boss for sketching out a prototype. If anything, it will help the boss understand how much work the programmers really do.
AppGyver doesn't charge directly for Steroids or its Cloud Services at this time. The company funds itself through support fees and white-label development. The AppGyver folks are experts at building, and I'm sure that sharing the tools with the world will help add polish. Prototyper has a free plan and a small monthly fee that starts at $9 and goes up to the coy listing that says "ask for price."
Telerik is offering a complete collection of tools to turn your ideas into an app. You can write your code in your browser, host it in Telerik's cloud, then let the cloud build it into a completed app. All of this is centered around Apache Cordova at the core.
The most significant parts of the offering are the IDEs called Mist (browser) and Graphite (Windows), along with a new extension for Visual Studio I didn't try. Mist and Graphite seem functionally equivalent to me, and I wasn't surprised to find that the projects I created in Mist started appearing in Graphite. Both offer a screen split between a file navigator and an editor. The editor can toggle between a text-based HTML editor and a visual tool for dragging and dropping widgets.
There were some glitches. The editor wouldn't work with several views, claiming they weren't proper HTML5. The complaints kept coming even when I deleted all the various DIVs inside. Sometimes it was simpler to work with the HTML instead of the designer.
I also found myself defaulting to the built-in debugger in the browser. Firebug and Safari's debugger are incredible, and it will be some time before anything will be as good as them.
The main difference between the Icenium tools is access to the hardware. The Windows IDE (Graphite) can access the hardware through the USB port, whereas the browser-based IDE (Mist) can't. The tools seem to be evolving. They make it easy to drag widgets into place, but you still need to read the HTML and think about the structure. I found I had to remember what was going on in the HTML layer to understand how to put together all the widgets correctly.
Data Center Consolidation
What is data center consolidation?
Data center consolidation is the process of optimizing IT expenditure through more efficient use of information technology, using methods such as server virtualization, storage virtualization, and cloud computing. It helps optimize operating expenditure by making optimal use of data center resources.
The Macronix group figured out how to use heat to repair the insulating layers of the flash chip, which degrade with each erasure. Researchers have known that this method works; previous attempts heated the whole chip to 250 degrees C (482°F) for several hours. The Macronix advance uses itty-bitty heaters, derived from the ones they build for phase change memory, that heat small groups of flash pages to 500°C. Macronix also discovered that the elevated temperature speeds up erasures, which wasn't predicted by the materials science geeks. (Before you attempt to revive an old SSD or CF card in the pizza oven, note that the solder holding the components of an SSD together melts at about 185°C.)
Macronix hasn't announced any product using the technology.
There's certainly some appeal to the idea of resetting the write endurance odometers after 50 or 100 write/erase cycles with built-in heaters for SSDs based on TLC or even QLC (Quad Level Cell, which is flash that stores 4 bits per cell). However, I don't think flash's limited write endurance is that big a problem. Instead, our management processes need to account for the fact that SSDs wear out.
Many people think SSDs just up and stop working, like a dead hard drive, when the 10,000th write/erase cycle completes. That's not true. While SSDs occasionally fail without warning (just like everything else), those failures aren't due to write exhaustion.
The flash controllers in each SSD monitor how often each page is erased, and distribute the wear as evenly as possible across all their flash. Array controllers and host OSes can use SMART (Self-Monitoring, Analysis and Reporting Technology) to check the status of parameter 231 SSD Life Left, which will report what percentage of the SSD's rated life remains. If customers would accept it, array vendors could stop using expensive SLC SSDs, which can be written to as fast as they accept data, and start using MLC flash, which should last for five years. MLC flash should satisfy the performance needs of 80% of array vendors' customers; the others, who need SLC, could get new SSDs shipped to arrive 60 days before the old ones reach the end of their rated life.
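As a rough illustration of that kind of monitoring, the sketch below shells out to smartctl from the smartmontools package and prints the attribute line with ID 231 for a drive at /dev/sda. The device path, the need for administrative rights, and whether a given SSD even reports attribute 231 (some vendors use a different ID or name) are all assumptions rather than guarantees.

import subprocess

# Dump the SMART attribute table for the drive (typically needs root/admin rights).
output = subprocess.run(["smartctl", "-A", "/dev/sda"],
                        capture_output=True, text=True).stdout

for line in output.splitlines():
    fields = line.split()
    if fields and fields[0] == "231":  # attribute ID 231, e.g. "SSD Life Left"
        print(line)  # the normalized VALUE column reports remaining rated life
        break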
Of course, the flash in an SSD doesn't self-destruct on erase 10,001, although at least one controller vendor allows SSD makers to switch the device to read-only when a threshold is reached. Ten-thousand cycles is just the point where the flash has degraded to where the flash manufacturer doesn't want to guarantee it will work. As the flash insulating layers break down, individual cells get stuck and will no longer hold data properly. At some point after 10,000 cycles--and there's no knowing if it's 10,317 or 30,000--there will be too many broken cells on a given page for the controller to be able to correct, and the controller will mark that page as bad. Once too many pages go bad, the SSD will not have any place left to write new data. But this is a gradual, monitor-able degradation, not a fatal failure with data loss.
We should treat SSDs like the timing belts in our cars. They're just parts we replace every 60,000 miles. We know when 60,000 miles is coming, and we can plan for it. | <urn:uuid:06b0a44e-360f-4201-8b1b-47477b059ad9> | CC-MAIN-2017-09 | http://www.networkcomputing.com/storage/hot-flash-researchers-use-heat-counter-nand-flash-wear-n-tear/448330696?cid=sbx_nwc_related_commentary_default_private_cloud_tech_center&itc=sbx_nwc_related_commentary_default_private_cloud_tech_center&piddl_msgorder=asc | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00105-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944906 | 706 | 2.53125 | 3 |
This course is training for basic, intermediate, and advanced features of Microsoft Office Excel 2010 software. Excel is a spreadsheet program in the Microsoft Office system. You can use Excel to create and format workbooks (a collection of spreadsheets) in order to analyze data and make more informed business decisions. Specifically, you can use Excel to track data, build models for analyzing data, write formulas to perform calculations on that data, pivot the data in numerous ways, and present data in a variety of professional looking charts.
Module 1: Introduction
1.1 Course Outline
1.2 Introducing Excel 2010
1.3 The Excel Interface
1.4 Keyboard Shortcuts
1.5 Section Review
Module 2: Basic File Commands and Operations
2.1 Creating, Saving and Closing Workbooks
2.2 Personalizing Files and Opening Workbooks
2.3 Viewing Existing Workbooks and Applying Templates
2.4 Keyboard Shortcuts
2.5 Section Review
Module 3: Creating, Managing and Navigating the Worksheets
3.1 Creating and Managing Worksheets
3.2 Navigating the Worksheets
3.3 Keyboard Shortcuts
3.4 Section Review
Module 4: Entering and Managing Worksheet Data
4.1 Worksheet Basics and Cell Range Selection
4.2 Entering Cell Content and Multiple Cells
4.3 AutoContent and Undo, Redo and Repeat
4.4 Updating and Clearing Cell Content
4.5 Inserting and Deleting, Rows and Columns
4.6 Copying, Cutting, Pasting and Moving Contents
4.7 Keyboard Shortcuts
4.8 Section Review
Module 5: Formatting Cells and Worksheets
5.1 Formatting Cells and Applying Formats
5.3 Merging Cells and Cell Styles
5.5 Apply and Modify Formats
5.6 Using Table Features
5.7 Pivot Tables
5.8 Manipulating data within the pivot tables
5.9 Keyboard Shortcuts
5.10 Section Review
Module 6: Applying Formulas and Functions
6.1 Creating Formulas
6.2 Using Cell References
6.3 Managing and Updating Formulas
6.4 Creating Functions
6.5 Conditional Statements
6.6 Error Messages
6.7 Keyboard Shortcuts
6.8 Section Review
Module 7: Analyzing and Organizing Data
7.1 Find and Replace
7.4 Conditional Formatting and Keyboard Shortcuts
7.5 Section Review
Module 8: Naming and Hyperlinks
8.1 Naming Cells and Ranges
8.3 Section Review
Module 9: Displaying Data Visually Using Charts
9.2 Layout Chart Element Options and Format
9.4 Keyboard shortcuts
9.5 Section Review
Module 10: Preparing to Print and Printing
10.1 Preparing to Print with Page Layouts
10.2 Section Review
Module 11: Share Worksheet Data with Other Users
11.1 Sharing a Document and Managing Comments
11.2 Section Review
Module 12: Including Illustrations and Graphics in a Workbook
12.1 Inserting and Formatting Pictures
12.2 Inserting and Formatting Clip Art
12.3 Inserting and Formatting Shapes, Word Art and Text Boxes
12.4 Inserting and Formatting Smart Art
12.5 Keyboard Shortcuts
12.6 Section Review
Module 13: Customize the Excel Interface
13.1 Section Review
13.2 Course Review
With ITU's e-learning system, certification has never been simpler! You can be starting your IT career or taking your current IT skills to the next level in just a few short weeks. Our award winning learning system gives you all of the benefits of a live class at just a fraction of the cost. We’re so confident that our materials will produce results; we guarantee you’ll get certified on your FIRST attempt or your money back!
ITU’s courses include:
ITU uses only the finest instructors in the IT industry. They have a minimum of 15 years of real-world experience and are subject matter experts in their fields. Unlike a live class, you can fast-forward, repeat or rewind all your lectures. This creates a personal learning experience and gives you all the benefit of hands-on training with the flexibility of doing it around your schedule 24/7.
Our courseware includes instructor-led demonstrations and visual presentations that allow students to develop their skills based on real world scenarios explained by the instructor. ITU always focuses on real world scenarios and skill-set development.
ITU’s custom practice exams prepare you for your exams differently and more effectively than the traditional exam preps on the market. You will have practice quizzes after each module to ensure you are confident on the topic you have completed before proceeding.
This will allow you to gauge your effectiveness before moving to the next module in your course. ITU Courses also include practice exams designed to replicate and mirror the environment in the testing center. These exams are on average 100 questions to ensure you are 100% prepared before taking your certification exam.
ITU has designed a world class Learning Management System (LMS) This system allows you to interact and collaborate with other students and ITU employees, form study groups, engage in discussions in our NOW@ Forums, rate and “like” different courses and stay up to date with all the latest industry knowledge through our forums, student contributions and announcement features. This LMS is unmatched in the industry and makes learning fun and enjoyable.
ITU knows that education is not a one size fits all approach. Students learn in different ways through different tools. That is why we provide Flash Cards and Education Games throughout our courses. This will allow you to train in ways that keep you engaged and focused. Each course will have dozens of Flash Cards so you can sharpen your skill-sets throughout your training as well as educational games designed to make sure your retention level of the materials is extremely high.
ITU’s self-paced training programs are designed in a modular fashion to allow you the flexibility to work with expert level instruction anytime 24/7. All courses are arranged in defined sections with navigation controls allowing you to control the pace of your training. This allows students to learn at their own pace around their schedule. | <urn:uuid:a129403a-40b4-4a97-9453-682a961b08ff> | CC-MAIN-2017-09 | https://www.ituonline.com/course/excel-2010 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00401-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.868858 | 1,354 | 3.09375 | 3 |
A team of researchers from Imperial College London have developed an eye-tracking device that lets you control a computer, and not just control it: you can play games, read e-mails, and even browse the web.
While eye-tracking technology is not new, this device is exciting for one good reason: it's cheap to build. While similar technology can be prohibitively expensive for those who really need it, this new device can be built for less than $60 and is made out of two video game console cameras.
To prove the device's effectiveness, the team had able-bodied subjects use it to play the all-time favorite game of Pong. As you all know, Pong requires quite the sleight of hand, and the subjects were able to get some respectable scores using nothing but their eyes to move the paddle. Impressive.
The eye-tracking system can be used to control almost anything on a computer, and can connect to any Windows or Linux PC via Wi-Fi or USB. You can use it to browse the Web, read files, and even write e-mails.
One ingenious aspect of this device--aside from the affordable price, of course--is its ability to detect mouse "clicks." Many similar devices struggle to distinguish between involuntary eye movements and the voluntary movements meant to emulate clicks. In order to click something, all you have to do is wink, and since this is something we usually don't do by mistake, it should work perfectly.
In the long run, the device could be used to detect not only where a person is looking, but how far away they are looking, enabling people with disabilities to control a wheelchair or a prosthetic with their eyes alone.
This story, "Play Pong and read emails with your eyes using this $60 device" was originally published by PCWorld. | <urn:uuid:10ae48c4-d5d1-4bd6-86f8-ebbe2022a1c4> | CC-MAIN-2017-09 | http://www.itworld.com/article/2723568/consumerization/play-pong-and-read-emails-with-your-eyes-using-this--60-device.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00097-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953172 | 435 | 2.640625 | 3 |
It may be one of the oldest, most basic components of software code, but the scientists at the Defense Advanced Research Projects Agency want to develop what they call revolutionary technologies for analyzing, identifying, and slicing binary executable components.
According to DARPA, the Department of Defense has critical applications that were developed for older operating system versions and must be ported to future versions. In many cases the application source code is no longer accessible, forcing these applications to keep running on insecure and outdated configurations and impacting day-to-day operations. It is necessary to identify and extract functional components within this software for reuse in new applications.
DARPA defines binary executable components as a fully encapsulated set of subroutines, data structures, imported APIs, objects, and global variables that accomplishes a particular function.
DARPA says its Binary Executable Transforms (BET) program is focused on binary executables, not source code, and seeks to overcome the current limitations of existing binary analysis and program slicing techniques. Many program slicing techniques use source code, not machine code, to slice functional components due to the lack of correctness in binary analysis and disassembly.
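To make "disassembly" concrete, the sketch below uses the open-source Capstone engine to turn a few bytes of x86-64 machine code back into readable instructions. It is purely illustrative and not part of BET; the byte string is a made-up snippet chosen only for the example.

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    # A tiny, hypothetical function: push rbp; mov rbp, rsp; xor eax, eax; ret
    CODE = b"\x55\x48\x89\xe5\x31\xc0\xc3"

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(CODE, 0x1000):
        print("0x%x\t%s\t%s" % (insn.address, insn.mnemonic, insn.op_str))

Recovering instructions like this is the comparatively easy part; correctly identifying and slicing self-contained functional components out of a whole binary is where existing techniques fall short.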
That's where the BET program comes in. Specifically, BET is seeking innovative research in:
- Automatically analyzing and identifying binary executable functional components.
- Automatically slicing and extracting identified binary functional components into reusable programming modules, including defined inputs and outputs.
- Combining static and dynamic binary analysis to increase understanding and function of binary executables.
- Exploring formal verification methods to prove functional component properties.
- Developing intermediate representation language to support program slicing.
- Developing core technology to enable exploration and research for the BET program.
BET intends to generate novel research, publications, and prototype code to seed future programs requiring foundational technology in binary program analysis. The goal of this program is not to build systems or transitionable technology but to perform research that will eventually help the Department of Defense build such systems, DARPA stated.
"To bound the research problem, experimental binaries and approaches should be limited to x86 Windows or Linux operating systems. Approaches should be capable of processing either PE or ELF formats without artifacts or information dependencies particular to a specific binary format. Other formats may be considered but will require compelling rationale from the performer," DARPA said.
DARPA said it anticipates multiple awards, not to exceed $250,000 per phase/technical area, for this research announcement.
Web developer Elliot Kember questioned Google’s security practices after showing that anyone with physical access to a computer has immediate access to the passwords saved in Google Chrome, which can easily be toggled to plain text. Someone can simply go to the URL chrome://settings/passwords or visit a user’s password page in the browser Settings menu to easily view the data. There is no master password or even a generic prompt – essentially, there is no added security for the passwords.
The main concern that Kember raises is the fact that the mass market doesn’t expect it to be that easy for others to get to their data. In his blog post, he calls for Google to either clarify the security policy so users can make a more informed decision, or to add a master password option (as Mozilla Firefox has done).
This “flaw” in Google Chrome is old news to many. However, the fact that Chrome is now one of the three most widely-used browsers in the world means that more and more of the general population is utilizing Chrome and saving their data to the browser, with little information regarding how that data is protected.
Ultimately, the most secure way to store your data is to not store it in a browser at all, where there are minimal security options and a host of possible threats. By storing your data in a password manager, you’re adding at least one authentication layer with your master password, not to mention the encryption technology built into the software itself.
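To make the "master password plus encryption" idea concrete, the sketch below shows how a password manager might stretch a master password into an encryption key using PBKDF2. It is an illustration only, not LastPass's actual implementation; the iteration count and variable names are assumptions for the example.

    import hashlib
    import os

    def derive_key(master_password, salt, iterations=100000):
        # Stretch the master password into a 256-bit key with PBKDF2-HMAC-SHA256.
        return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"), salt, iterations)

    salt = os.urandom(16)                                  # stored alongside the encrypted vault
    key = derive_key("correct horse battery staple", salt)
    print(key.hex())

Because the key is derived locally from something only you know, the vault stays encrypted even if the stored data is copied off your machine.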
There is also the added benefit of utilizing multifactor authentication and other features to control where and how your data can be accessed. These features include the ability to restrict logins to specific countries or to enable master password reprompts on more sensitive logins. It also ensures that should one computer or browser crash, or be lost or stolen, your data remains securely accessible on your other devices.
While we agree it would be wonderful if Chrome would increase their security options or offer better warnings for users, Chrome users can be proactive today by downloading a password manager like LastPass and migrating their data out of their browsers. LastPass will even help you with that process by automatically importing your passwords for you as you get started – so don’t wait until it’s too late.
Were you aware of this shortcoming in Google Chrome? What other steps are you taking to protect your data? | <urn:uuid:4229395e-46c5-4b1c-ad9a-c6214f69da0e> | CC-MAIN-2017-09 | https://blog.lastpass.com/2013/08/storing-passwords-in-your-browser-time-to-stop.html/?showComment=1375999900830 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00449-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.930588 | 487 | 2.765625 | 3 |
The U.S. government believes the Internet of Things (IoT) has enormous economic potential across all industries. Its machine-to-machine technologies can reduce automobile-related injuries, usher in an era of precise weather forecasting and automate all types of processes.
But what impact will IoT have on jobs? Will it create more than it destroys? And what happens to all the data devices generate?
With those kinds of issues at stake, the U.S. Department of Commerce is now seeking public comment on the "benefits, challenges and potential roles for the government in fostering the advancement of the Internet of Things." There are 28 questions, and multiple sub-parts to some questions. It's a long list.
The Commerce Department began accepting comments Friday, opening a comment period that lasts until 5 p.m. ET on May 23. The government plans to make the responses -- likely to run into the thousands -- public, resulting in the nation's single largest knowledge dump about the future of technology and where Americans think it should go.
The focus on IoT is deceptively broad. Any IoT discussion will likely bring in all its related technologies and processes: robotics, automation on every level, widespread use of artificial intelligence tools, and the collection of incalculable amounts of data about every aspect of life.
In sum, the government wants to know how IoT will impact life, jobs, security and privacy.
Many of the questions are broad, such as:
- Are the challenges and opportunities arising from IoT similar to those that governments and societies have previously addressed with existing technologies, or are they different, and if so, how?
- What are the most significant new opportunities and/or benefits created by IoT, be they technological, policy, or economic?
- And what technological issues may hinder the development of IoT, if any?
The government's goal is to map out its policy role, including research, economic development, standards and security and privacy.
The U.S. can influence standards, set rules on security and the privacy of data and influence the market through its purchasing power. "It would be good to have a clear policy on IoT from one of the biggest buying centers in the world," said Alfonso Velosa, an analyst at Gartner.
Data ownership is another problem waiting to be solved. For instance, a carmaker sells vehicles to a car rental firm. The connected vehicles today can send information back to the auto maker, which may use it for vehicle maintenance. But that data is valuable for competitive and monetary reasons. This is data a car maker could sell to another party, perhaps an insurer. Should it be allowed to?
"Right now we don't have any rules about how that data is managed," said Velosa. The government can also help set standards and rules governing security at the device, communications and cloud level.
Some of the security rules the government needs to set are obvious, particularly around the ability of devices to spy on people, said Frank Gillett, an analyst at Forrester.
But the government needs to think about security and privacy rules now because they "are hard to undo later," said Gillett.
Joshua New, a policy analyst at the Center for Data Innovation, a Washington-based research group, said there are already bipartisan efforts in Congress to try to develop a national IoT plan.
Indeed, in January, Reps. Suzan DelBene (D-Wash.), and Darrell Issa (R-Calif.), launched the Congressional Caucus on the Internet of Things. It has two broad goals: to educate lawmakers about IoT and develop a policy role. In the Senate, lawmakers have their own bill, the DIGIT Act (Developing Innovation and Growing the Internet of Things), which would create a national working group to develop IoT policy recommendations.
There is a lot the government can do, said New. For instance, it can bring together cities, public transit agencies and tech firms and help broker agreements on deploying IoT-based technologies. This government involvement could create markets for vendors, encouraging research and investment, he said.
The government will take the public comment data and issue a "green paper," which is the name for a tentative government report, not an official statement of policy. (That will come in a subsequent "white paper.")
While this is a big project to undertake in the remaining months of Obama administration, considering the bipartisan IoT activities in Congress and widespread interest in the area, "this issue is here to stay," said New.
This story, "Feds seek public input on the future of IoT" was originally published by Computerworld. | <urn:uuid:990bc8a6-f9b8-4616-afb7-a2bb9b2cc1ff> | CC-MAIN-2017-09 | http://www.itnews.com/article/3060754/internet-of-things/feds-seek-public-input-on-the-future-of-iot.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00149-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953369 | 945 | 2.625 | 3 |
One of the easiest ways to disrupt any organization's online presence is through Distributed Denial of Service (DDoS) attacks. Unlike hacking or infiltrating a network, DDoS attacks simply overwhelm a site with meaningless traffic, causing it to become unreachable. This sort of cyberattack has been made famous by large hacking collectives such as Anonymous, and 2013 was the worst year on record for DDoS attacks, according to a new report.
With the financial sector estimating that each DDoS attack costs at least $100,000, protecting against them has become more important than ever before, especially as more business transactions begin to rely on the Internet.
Prolexic Technologies, one of the leading providers of DDoS protection services, has released a report detailing some of the major DDoS trends it saw during 2013. Not only have the attacks become more common, Prolexic found, but they are now harder to defend against due to more sophisticated malware.
"Prolexic noted a clear evolution in the strategies and tactics malicious actors embraced over the past 12 months," the report said. "The tools used by malicious actors in 2013 and the tactics they adopted changed considerably, reflecting the ongoing evolution of the (DDoS) threat."
Unlike in the years prior to 2013, mobile devices are now being used to carry out attacks, making it even easier for malicious groups to take down a Web site.
"Although the use of mobile devices in these attacks is still minimal, it is expected to grow alongside the adoption of smartphones around the world," the report said.
Just as with actual hacks, many of the DDoS attacks have been coming from Asian countries, according to Prolexic's data. It has long been asserted that the Chinese government has been behind various attacks but with the prevalence of computers within China, no one is entirely sure who is behind many of the DDoS attacks.
No matter what sector a business is in, protecting against DDoS attacks is important. Now that these attacks are becoming more common, even large organizations and government-funded institutions have been taken offline for hours.
According to a report from Forrester Research, an attack "can last anywhere from hours to days, depending on how long it takes the victim to mitigate the traffic and how long the attacker can keep blasting the traffic at the victim's site and network."
Along with the troublesome downtime is an even more significant amount of lost money, especially with larger businesses, Forrester said.
"The estimated financial impact is $2.1 million dollars lost for every 4 hours down and $27 million for a 24-hour outage," the Forrester report said.
Before attackers began to adopt advanced techniques, individuals with knowledge of the network would be able to mitigate a DDoS attack by filtering out any request that appeared to be fake. Now, on-site and cloud-based services are usually required to prevent or quickly stop an attack, as they can be consistently updated and include multiple layers of protection. Unfortunately, in most scenarios, even those systems are unable to completely prevent attacks, but when money is at stake, less downtime is still beneficial. | <urn:uuid:a16ffa2c-0d3f-436f-aa55-a1cb0db2c9b2> | CC-MAIN-2017-09 | http://www.cio-today.com/article/index.php?story_id=91273 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00325-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964248 | 637 | 2.546875 | 3 |
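That kind of per-request filtering often starts with something as simple as rate limiting individual clients. The sketch below is a conceptual token-bucket limiter in Python; the rate and burst values are arbitrary assumptions, and real DDoS mitigation happens upstream at far larger scale, but the idea of throttling clients that send too many requests is the same.

    import time
    from collections import defaultdict

    RATE = 5.0    # tokens added per second, per client (assumed value)
    BURST = 10.0  # maximum bucket size (assumed value)

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow_request(client_ip):
        # Refill the client's bucket based on elapsed time, then spend one token per request.
        bucket = buckets[client_ip]
        now = time.monotonic()
        bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
        bucket["last"] = now
        if bucket["tokens"] >= 1.0:
            bucket["tokens"] -= 1.0
            return True
        return False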
Why we care about file systems
Computer platform advocacy can bubble up in the strangest places. In a recent interview at a conference in Australia, Linux creator Linus Torvalds got the Macintosh community in an uproar when he described Mac OS X's file system as "complete and utter crap, which is scary."
What did he mean? What is a "file system" anyway, and why would we care why one is better than another? At first glance, it might seem that file systems are boring technical widgetry that would never impact our lives directly, but in fact, the humble file system has a huge influence on how we use and interact with computers.
This article will start off by defining what a file system is and what it does. Then we'll take a look back at the history of how various file systems evolved and why new ones were introduced. Finally we'll take a brief glance into our temporal vortex and see how file systems might change in the future. We'll start by looking at the file systems of the past, then we'll look at file systems used by individual operating systems before looking at what the future may hold.
What is a file system?
Briefly put, a file system is a clearly-defined method that the computer's operating system uses to store, catalog, and retrieve files. Files are central to everything we use a computer for: all applications, images, movies, and documents are files, and they all need to be stored somewhere. For most computers, this place is the hard disk drive, but files can exist on all sorts of media: flash drives, CD and DVD discs, or even tape backup systems.
File systems need to keep track of not only the bits that make up the file itself and where they are logically placed on the hard drive, but also store information about the file. The most important thing it has to store is the file's name. Without the name it will be nearly impossible for the humans to find the file again. Also, the file system has to know how to organize files in a hierarchy, again for the benefit of those pesky humans. This hierarchy is usually called a directory. The last thing the file system has to worry about is metadata.
Metadata literally means "data about data" and that's exactly what it is. While metadata may sound relatively recent and modern, all file systems right from the very beginning had to store at least some metadata along with the file and file name. One important bit of metadata is the file's modification date—not always necessary for the computer, but again important for those humans to know so that they can be sure they are working on the latest version of a file. A bit of metadata that is unimportant to people—but crucial to the computer—is the exact physical location (or locations) of the file on the storage device.
Other examples of metadata include attributes, such as hidden or read-only, that the operating system uses to decide how to display the file and who gets to modify it. Multiuser operating systems store file permissions as metadata. Modern file systems go absolutely nuts with metadata, adding all sorts of crazy attributes that can be tailored for individual types of files: artist and album names for music files, or tags for photos that make them easier to sort later.
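Most programming languages expose this metadata directly. As a small illustration, the Python snippet below reads a file's size, modification date, and permission bits; the file name is just a placeholder.

    import os
    import stat
    import time

    info = os.stat("example.txt")                        # placeholder path
    print("size (bytes):", info.st_size)
    print("last modified:", time.ctime(info.st_mtime))
    print("permissions:", stat.filemode(info.st_mode))   # e.g. '-rw-r--r--'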
Advanced file system features
As operating systems have matured, more and more features have been added to their file systems. More metadata options are one such improvement, but there have been others, such as the ability to index files for faster searches, new storage designs that reduce file fragmentation, and more robust error-correction abilities. One of the biggest advances in file systems has been the addition of journaling, which keeps a log of changes that the computer is about to make to each file. This means that if the computer crashes or the power goes out halfway through the file operation, it will be able to check the log and either finish or abandon the operation quickly without corrupting the file. This makes restarting the computer much faster, as the operating system doesn't have to scan the entire file system to find out if anything is out of sync. | <urn:uuid:5ab6be7d-e27e-492f-b03b-cb57b1237b9c> | CC-MAIN-2017-09 | https://arstechnica.com/gadgets/2008/03/past-present-future-file-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00025-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959846 | 842 | 2.875 | 3 |
- A11yComponentActivationEvent: Represents an activation event from an assistive technology.
- A11yComponentActivationType: Provides an enumeration describing the different types of activations that can be performed through accessibility.
- A11yMode: A set of modes used to specify how a control and its subtree are exposed to assistive technologies.
- A11yRole: A set of roles that can be used on accessibility objects for use with assistive technologies.
- A11yState: Represents the different accessible states that specify the state of an accessible control through its accessibility object.
- A11yStateHolder: Holds the state of an accessibility object.
- A11yValueAdjustmentType: Represents different ways a value can be adjusted.
- AbstractA11yObject: Defines a control's accessibility properties.
- AbstractA11ySpecialization: Class defining an abstract accessibility specialization.
- ComponentA11ySpecialization: Class defining a "component" accessibility specialization.
- CustomA11yObject: Accessibility object that can be used to implement custom accessibility behavior.
- ValueA11ySpecialization: Class defining a "Value" accessibility specialization.
Two-factor authentication, also known as 2FA, is an additional piece of information that is used to log into a service. Normally people just input a username and password. But if the password is easy to guess, or has been stolen, the entire account could be compromised.
Two-factor authentication is increasingly popular as it helps add an additional layer of security to your accounts. It tries to ensure that only you can log into your account. You have probably been using two-factor authentication without realizing it. For example, if you want to reset your password on a particular site, you’re sometimes asked for your mother’s maiden name, or the name of your first pet. The idea behind this is that someone might know your password, but they won’t know your own personal information.
So when you enter only your username and password, that is single-factor authentication. Two-factor authentication, meanwhile, requires a user to input credentials from two of the following three categories before being able to access an account. These are:
- Something you know – This could be a PIN code, password or a pattern
- Something you have – such as an ATM card, smartphone, or fob
- Something you are – such as a fingerprint, iris scan or voice recognition print
If you take online banking as an example, people often have a security token that they insert their card into and enter their PIN. This then generates a code that is entered alongside their username and password to prove that the person trying to log in has their bank card present.
Many social media services also have this process. You can update your settings so that every time you try to login, you are sent a code. This could either be to your email, or your phone. You then enter this code to complete the login process.
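Authenticator apps generate these codes locally rather than having them sent by text or e-mail, typically using the time-based one-time password (TOTP) scheme. The sketch below is a minimal, illustrative implementation; the secret is a made-up example value, not one you should reuse.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, digits=6, period=30):
        # Time-based one-time password (RFC 6238) using HMAC-SHA1.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        msg = struct.pack(">Q", counter)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # example secret; prints a six-digit code

Both your device and the service derive the same short-lived code from a shared secret and the current time, so the code proves "something you have" without ever transmitting the secret itself.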
It is generally accepted that using two-factor authentication is a good idea if it is offered. It might delay the speed of your login, but that’s a small price to pay when the alternative is someone stealing your personal information, or logging into your account and pretending to be you. | <urn:uuid:ec317d86-0368-4dac-a476-3e573d9a5108> | CC-MAIN-2017-09 | https://www.justaskgemalto.com/us/what-is-two-factor-authentication-or-2fa-and-how-does-it-work/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00321-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961933 | 431 | 3.859375 | 4 |
Four years and $50 million after enacting the Statewide Land Information System in Wisconsin, the state Legislature asked to see some tangible benefits from the enterprise. In response, the State Land Information Board pointed to Winnebago County, one of the first to develop a GIS-based land information system (LIS). In the process of developing the countywide system, Winnebago created a floodplain-boundary profile far more accurate than the existing Flood Insurance Rate Maps (FIRMs) required by the Federal Emergency Management Agency (FEMA). FIRMs are used by the county and by mortgage lenders to identify buildings within the floodplain. Owners of such buildings must conform to floodplain zoning ordinances and carry expensive flood insurance. The new profile shows that more than 2,000 buildings on the old FIRM are actually out of the floodplain.
FEMA approval of the new Winnebago County floodplain-boundary profile will mean millions in flood-insurance savings and increases in property values for the affected homeowners and businesses. It will eliminate the need for property owners incorrectly shown on existing FIRMs as being in the floodplain to go through the expensive process of proving they are not. It will also result in the adoption by FEMA of more accurate FIRMs.
Winnebago's LIS is the result of a four-year, coordinated effort by seven municipalities to modernize land- and infrastructure-management needs of city and county agencies, and enable government, utilities, and private enterprise to share specific data resources via an open software environment. The system is expected to increase government efficiency and public service by eliminating duplication of land-record operations and standardizing data storage management. Also it will enable county zoning to expedite evaluation of land-use restrictions, and give taxpayers access to property-tax information via the Web.
One-Third in Floodplains
Winnebago County is in east-central Wisconsin, along the western shore of Lake Winnebago, the largest lake in the state. According to 1998 Census estimates, the population is 154,000, mainly centered around Oshkosh, the county seat, and near the towns of Neenah and Menasha. Nearly a third of the county's 500 square miles -- including Lake Winnebago, two smaller lakes, the Fox and the Wolf rivers, and numerous streams -- are floodplains. Rain and snow together annually average 30 inches, and in the past, heavy rains have caused flooding in certain areas of the county.
Until development of the Winnebago LIS, federal and state law required the county to base floodplain zoning ordinances on FIRMs produced by FEMA. Where and how new structures could be built depended on whether the FIRM showed the building to be in or out of a floodplain. Mortgage lenders used FIRMs to require flood insurance on buildings in flood-prone areas, to reduce the need for disaster assistance when devastating floods occurs. However, County Zoning Administrator Robert Braun said the Zoning Office is now using the new map developed by the Wisconsin Department of Natural Resources (WDNR) to determine the location of buildings, relative to a floodplain.
The Problem with FIRMs
Braun pointed out that the old FEMA maps were drawn at a scale that makes locating a property virtually impossible. "There's no way to get parcel specific at 1:24,000 [1 inch = 2,000 feet]; the width of a line on a FIRM can represent as much as 50 to 100 feet, depending on how thick the ink was when it came out of the pen."
Ben Niemann, professor of urban and regional planning at the Land Information and Computer Graphics Facility at the University of Wisconsin, Madison, agreed. "FEMA uses 1:24,000 quadrangles to draw the delineation -- you're talking about contour intervals of 10 feet. It is very difficult to draw a floodplain using that scale of map, especially in Winnebago County, where the topography is so flat. Although the margin of error can be quite large, the FIRM is still the determinant of whether or not a building is in the floodplain," he said. "A resident whose building was obviously on top of a hill might still have to buy floodplain insurance. The only way out of the flood-insurance program is to have a surveyor actually survey in the flood elevation relative to the building site, and that's expensive, about $5,000. Winnebago County knew that. That's why they went for a higher-resolution database with two-foot contour intervals. [1 inch = 400 feet] The LIS map is much more indicative of where the flood might actually go."
Potential Cost Savings
Braun explained that with the LIS, it is possible to zoom in to one specific parcel, and through the use of background aerial photography or on-screen data layers, see the relationship between the flood plain and whatever structures happen to be on the property. "For example, by looking at the digital information, insurance companies can now accurately determine whether or not structures on the property are within the floodplain. Whereas on the old Z-fold maps, it was, here's this great blob and we have no accurate way to scale things off, so we're just going to say, 'Yeah, OK.' Now we can say, 'Here's the area on the property that's not in the floodplain. If you want to build somewhat cheaper and not be subject to the basement restrictions, etc., build in this area, and you won't have to comply with floodplain requirements.' There's a potential cost savings here for home- owners and businesses."
Charting the Waters
Approving floodplain mapping is WDNR's responsibility. The agency also reviews new engineering studies for floodplain mapping, provides technical support to the counties, and assists them in integrating floodplain mapping into their LIS. Floodplain Engineer and GIS Specialist Alan Lulloff said the old floodplain-boundary profile was drawn using 10-foot contours. "That eliminated any possibility of the old profile matching the new two-foot contour data in the county's LIS." Lulloff then requested and received a $20,000 grant from FEMA to redelineate the floodplain boundaries.
In drawing the new floodplain profile, Lulloff brought established flood elevations into the grid package in ArcInfo to establish the projected water level of a "100-year flood," and create a representation of the flood surface. He then compared the flood surface to the LIS topography to determine which land points in a 100-year flood would be dry and which would be wet. The "wet" areas were then converted to polygons that outlined the new floodplain boundary. The project was completed in little over a year and a half.
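Although the county's work was done in ArcInfo, the underlying wet/dry comparison is easy to sketch with open-source tools. The example below is purely illustrative: the file names are hypothetical, and the two rasters are assumed to share the same grid. It flags every cell where the projected flood surface sits above the ground and converts the wet area into polygons.

    import rasterio
    from rasterio.features import shapes

    with rasterio.open("flood_surface.tif") as f, rasterio.open("terrain.tif") as t:
        flood = f.read(1)
        ground = t.read(1)
        transform = t.transform

    wet = flood > ground   # True wherever a 100-year flood would cover the ground
    wet_polygons = [geom for geom, _ in shapes(wet.astype("uint8"), mask=wet, transform=transform)]
    print(len(wet_polygons), "floodplain polygon(s) delineated")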
Although WDNR has received concurrence from FEMA to use the revised floodplain map for zoning purposes, Lulloff stressed that formal approval from FEMA is pending revision of the flood insurance-rate map and publication of a new one.
The impact of the redelineation is tangible; of the 5,700 buildings shown on the original FIRMs to be in a floodplain, the new map shows 2,400 of them to be outside, and 1,300 that were thought to be safe are actually inside the floodplain. FEMA's acceptance of the new floodplain delineation will mean an increase in property values for those outside, plus a significant flood-insurance savings. According to the Planning and Zoning Office, the cost of flood insurance over the life of a 30-year mortgage is about $12,000. For owners of the 2,400 structures, FEMA approval will mean a savings of $28.8 million. Owners of 1,300 homes may see a drop the value of their properties, but they will now have something they do not currently have, flood-insurance protection.
"Modern information technology," Niemann said, "is enabling government to spread the cost of flood insurance more equitably. It's a way of government being more responsive to the needs of real people. If you don't need flood insurance, why should you be paying for it? On the other hand, data shows that if you're in a flood-prone area, you'll likely be flooded once in 30 years. This would suggest you do want insurance."
Niemann added that FEMA is in the process of putting together a nationwide program to do what they have done in Winnebago County, support redelineation of floodplain profiles. "As urbanization has expanded outward, FEMA is finding that their maps are inadequate to deal with these problems. They're talking real big money."
What it Takes
Braun said redelineating the floodplain was time-intensive in terms of setting up the data and doing the modeling based on contours. "I don't know that it is something that a specific municipality would have the time or personnel to do. It needs to be coordinated, let's say through a state planning agency with computer facilities. Certainly, the private sector could provide this kind of assistance. As long as things are done within FEMA parameters, everyone will end up with a better product."
Bill McGarigle is a writer, specializing in communications and information technology. He is based in Santa Cruz, Calif. E-mail him. | <urn:uuid:7c280610-7901-4c5b-9dbd-0b4898283470> | CC-MAIN-2017-09 | http://www.govtech.com/magazines/gt/FIRM-Support-for-Accurate-Maps-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00141-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948079 | 1,925 | 2.59375 | 3 |
Cloud Computing: Hybrid Clouds
It is true that cloud computing is getting noisier, and among clouds, the hybrid cloud is making the most noise! In defining "hybrid," dictionaries mention the offspring of two animals and similar meanings. Going further, they use the word heterogeneous: different in origin and composition, two different types of components doing the same function. Hybrid clouds, as we know, describe the combined use (benefits and problems) of private cloud and public cloud. Are there two different types of components in public and private clouds? Are they heterogeneous in nature? Leaving those questions to geeks (and the meanings of words to etymologists), let us see what the hybrid cloud offers to businesses and enterprises.
Public and Private: The clouds that we know, such as Amazon Web Services (AWS) EC2, RackSpace CloudServers, FileServers, etc., are public clouds. A private cloud is one either built within the enterprise (internal) or offered as a dedicated service by a provider. It is usually referred to as being within the firewall: dedicated to one enterprise and not shared with other enterprises. These are deployment models of cloud; the underlying architecture may be the same.
Hybrid: As technology blogs now suggest, the most likely future scenario is that every enterprise will end up with a hybrid cloud, making use of both an internal cloud and a public cloud, preferably in tandem, picking whichever suits the requirement. Enterprises are looking for the right technology starting now, so the onus is on cloud service vendors to gear their service models and architectures toward the hybrid model. Similarly, enterprises should shape their long-term cloud strategy keeping in mind that, in the future, anything and everything should work with anything and everything, with little or no engineering effort.
In the future, every IT resource available in the cloud globally will be at our disposal for on-demand usage!
By Glenn Blake | <urn:uuid:f5cc9d4d-f4eb-4610-aaac-7f87d2e4ac42> | CC-MAIN-2017-09 | https://cloudtweaks.com/2010/10/practically-speaking-about-cloud-computing-hybrid-clouds/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00193-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.926456 | 392 | 2.515625 | 3 |
When using the Internet most people connect to web sites, ftp servers or other Internet servers by connecting to a domain name, as in www.bleepingcomputer.com. Internet applications, though, do not communicate via domain names, but rather using IP addresses, such as 192.168.1.1. Therefore when you type a domain name in your program that you wish to connect to, your application must first convert it to an IP address that it will use to connect to.
The way these hostnames are resolved to their mapped IP addresses is called Domain Name Resolution. On almost all operating systems, whether they be Apple, Linux, Unix, Netware, or Windows, the majority of resolutions from domain names to IP addresses are done through a procedure called DNS.
What is DNS
DNS stands for Domain Name System and is the standard domain name resolution service used on the Internet. Whenever a device connects to another device on the Internet it needs to connect to it via the IP address of the remote device. In order to get that IP address, DNS is used to resolve that domain name to its mapped IP address. This is done by the device querying its configured DNS Servers and asking that server what the IP address is for that particular domain name. The DNS server will then query other servers on the Internet that know the correct information for that domain name, and then return to the device the IP address. The device will then open a connection directly to the IP address and perform the desired operation.
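In code, this lookup usually happens behind a single call that hands the work to the operating system's resolver. A minimal Python illustration:

    import socket

    # Ask the operating system's resolver to turn a domain name into an IP address.
    # The resolver may consult local configuration before querying the configured DNS servers.
    ip = socket.gethostbyname("www.bleepingcomputer.com")
    print(ip)   # prints whatever IPv4 address the resolver returned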
If you would like a more detailed explanation of the Domain Name System you can find it here: The Domain Name System
Enter the Hosts File
There is another way to resolve domain names without using the Domain Name System, and that is by using your HOSTS file. Almost every operating system that communicates via TCP/IP, the standard of communication on the Internet, has a file called the HOSTS file. This file allows you to create mappings between domain names and IP addresses.
The HOSTS file is a text file that contains IP addresses, each followed by at least one space and then a domain name, with each entry on its own line. For example, imagine that we wanted to make it so that if you typed in www.google.com, instead of going to Google you would go to www.yahoo.com. In order to do this you would need to find out one of the IP addresses of Yahoo and map www.google.com to that IP address.
One of the IP addresses for Yahoo is 184.108.40.206. If we wanted to map Google to that IP address we would add an entry into our HOSTS file as follows:
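184.108.40.206    www.google.com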
NOTE: When inputting entries in the hosts file, there must be at least one space between the IP address and the domain name. You should not use any web notations such as \, /, or http://. You can disable a specific entry by putting a # sign in front of it.
You may be wondering why this would work, as we said previously that when you need to resolve a domain name to an IP address the device will use its configured DNS servers. Normally this is true, but on most operating systems the default configuration is that any mappings contained in the HOSTS file override any information that would be retrieved from a DNS server. In fact, if there is a mapping for a domain name in a hosts file, then your computer will not even bother querying the DNS servers that are authoritative for that domain, but will instead read the IP address directly from the HOSTS file. It is also important to note that when you add entries to your HOSTS file they automatically start working. There is no need to reboot or enter another command to start using the entries in the HOSTS file.
An example HOSTS file can be found here: HOSTS
Please note that there are ways to change the order that your computer performs Domain Name Resolution. If there are problems with HOSTS file not working you may want to read this article that goes into much greater detail on Domain Name Resolution on various operating systems:
For reference the HOSTS file is located in the following locations for the listed operating systems:
- Windows NT/2000/XP Pro: c:\winnt\system32\drivers\etc\hosts or c:\windows\system32\drivers\etc\hosts
- Windows XP Home: c:\windows\system32\drivers\etc\hosts
- Apple: System Folder:Preferences and in the System Folder itself.
On Windows machines you may not already have a hosts file. If this is the case, there will most likely be a sample hosts file called hosts.sam that you can rename to hosts and use as you wish. You can edit this file from the cmd prompt using Edit, with Notepad on Windows, or with VI on Unix/Linux; really, any text editor can open and modify the HOSTS file. It is also recommended that, if you use this file, you make periodic backups of it by copying it to another name. Some people recommend that you make this file read-only so that it will be harder for a malicious program to modify (there are hijackers known to do exactly that), but hijackers such as CoolWebSearch add entries to the file regardless of whether or not it is read-only. Therefore you should not think that making your HOSTS file read-only will keep it safe from modification.
Why would I want to use a HOSTS file
There are a variety reasons as to why you would want to use a HOSTS file and we will discuss a few examples of them so you can see the versatility of the little file called the HOSTS file.
Network Testing - I manage a large Internet data center, and many times we need to set up test machines or development servers for our customers' applications. When connecting to these development or test machines, you can use the HOSTS file to test them as if they were the real thing and not a development server. As an example, let's say you had a domain name for a development computer called development.mydomain.com. When testing this server, you want to make sure it operates correctly when people reference it by the true web server domain name, www.mydomain.com. However, if you changed www.mydomain.com on the DNS server to point to the development server, everyone on the Internet would connect to that server instead of the real production server. This is where the HOSTS file comes in. You just need to add an entry to the HOSTS file on the computers you will be testing with that maps www.mydomain.com to the IP address of the development server, so that the change is local to the testing machines and not the entire Internet. Now when you connect to www.mydomain.com from a computer with the modified HOSTS file, you are really connecting to the development machine, but it appears to the applications you are using that you are connecting to www.mydomain.com.
Potentially Increase Browsing Speed - By adding IP address mappings to sites you use a lot into your HOSTS file you can potentially increase the speed of your browsing. This is because your computer no longer has to ask a DNS server for the IP address and wait to receive it's response, but instead can quickly query a local file. Keep in mind that this method is not advised as there is no guarantee that the IP address you have for that domain name will always stay the same. Therefore if the web site owner decides to change their IP address you will no longer be able to connect.
Block Spyware/Ad Networks - This is becoming a very popular reason to use the HOSTS file. By adding large lists of known ad network and spyware sites to your hosts file and mapping the domain names to 127.0.0.1, an IP address that always points back to your own machine, you block these sites from being reached at all. This has two benefits: your browsing can speed up, as you no longer have to wait while ads download from ad network sites, and your browsing will be more secure, as you will not be able to reach known malicious sites.
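For example, entries like the following, where the domain names are made-up placeholders rather than real ad networks, send any request for those hosts straight back to your own machine, where nothing answers:

127.0.0.1    ads.example.com
127.0.0.1    banners.example.net
127.0.0.1    tracker.example.org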
NOTE: It is important to note that there have been complaints of system slowdowns when using a large hosts file. This is usually fixed by turning off and disabling the DNS Client in your Services control panel under Administrative Tools. The DNS client caches previous DNS requests in memory to supposedly speed this process up, but it also reads the entire HOSTS file into that cache as well which can cause a slowdown. This service is unnecessary and can be disabled.
There are ready-made HOSTS files that you can download which contain large lists of known ad servers, banner sites, sites that give tracking cookies, sites that contain web bugs, and sites that infect you with hijackers. Listed below are web sites that produce these types of hosts files:
hpguru's HOSTS File can be found here: http://www.hosts-file.net/
The MVPS Host File can be found at: http://www.mvps.org.
Hosts File Project can be found here: http://remember.mine.nu/
If you choose to download these files, please backup your original by renaming it to hosts.orig and saving the downloaded HOSTS file in its place. Using a HOSTS file such as these is highly recommended to protect your computer.
Utilities for your HOSTS file
If you do not plan on modifying your HOSTS file much and plan on using it occasionally for testing purposes, then the basic text editors like VI, Notepad, and Edit are more than adequate for managing your HOSTS file. If on the other hand you plan on using the HOSTS file extensively to block ads/spyware or for other reasons, then there are two tools that may be of use to you.
eDexter - When you block ads on web sites using a HOSTS file, there tend to be empty boxes on the web site you are visiting where the ad would normally have appeared. If this bothers you, you can use the program eDexter to fill in the space with an image from your local machine, such as a clear image or any other one for that matter. This removes the empty boxes and is quick because the replacement image is loaded off of your hard drive.
Hostess - Hostess is an application that is used to maintain and organize your HOSTS file. This program will read your HOSTS file and organize the entries contained in it into a database. You can then use this database to scan for duplicates and to manage the entries. It is a program that is definitely worth checking out if you plan on using the HOSTS file extensively.
As you can see the HOSTS file is a powerful tool if you understand how to use it. You should now know how to use the HOSTS file to manipulate Domain Name Resolution to suit your needs. It is also important that you use its ability to block malicious programs as discussed above to make your computing environment more secure.
As always if you have any comments, questions or suggestions about this tutorial please do not hesitate to tell us in the computer help forums.
04/09/04: Added information about hpguru's host file and http://remember.mine.nu/. Warned about potential slowdowns caused by large hosts files and how to fix them. Updated the advice that making the hosts file read-only may not stop hijackers from changing it. Added info about the Hostess host file manager. Thanks to CalamityKen.
SSL 101: A Guide to Fundamental Web Site Security
The internet is a wealth of information for users and gives them the ability to shop, email, work, and everything in between. However, as more users leverage the web, more cybercriminals do as well, making security a top concern.
Inside this guide, discover why Secure Sockets Layer (SSL) technology is the backbone of your web protection as you explore what it does, how it works, and how it can help you build credibility online. | <urn:uuid:21ab185f-1b1f-4471-a359-adacffbb04f5> | CC-MAIN-2017-09 | http://www.bitpipe.com/detail/RES/1354911158_639.html?asrc=RSS_BP_TERM | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00013-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.918226 | 109 | 2.609375 | 3 |
In the 10 Kentucky counties surrounding the Blue Grass Army Depot -- where chemical weapons, such as mustard gas, VX nerve agent and GB (sarin), dating back to the 1940s are stored until their destruction -- officials responsible for emergency preparedness once relied on enlarged and laminated county highway maps to prepare for a possible release incident.
Using a phased approach, the Kentucky Chemical Stockpile Emergency Preparedness Program (KY CSEPP) affordably implemented a GIS that allows officers to quickly and easily obtain information such as evacuation routes and nearby medical facilities.
In late 2001, with an investment of less than $40,000 from FEMA, officials from the KY CSEPP, part of the Kentucky Division of Emergency Management, started developing computerized maps of the almost 3,000 square-mile area around the depot.
Like many small- to moderate-sized communities, the KY CSEPP had little if any digital data and limited funding, so it chose to build its GIS in four phases.
In the first phase, the KY CSEPP chose PlanGraphics, a geospatial consulting firm based in Frankfort, Ky., to build the GIS, and then acquired computer and printing hardware, and ESRI's ArcView GIS software.
The KY CSEPP then asked PlanGraphics to assess data availability and gather necessary geographic data from various sources to develop base maps for each county.
The individual counties' thematic, one-meter aerial photos, 10-meter SPOT satellite imagery and topographic base maps were assembled into a consistent regional base. Fortunately the Kentucky Office of Geographic Information already had most of the raster data PlanGraphics needed to prepare imagery and topographic base maps.
The last step of phase one was a comprehensive survey conducted by PlanGraphics to determine if digital data on important resources like schools, hospitals, shelters and public safety assets existed in the 10 individual counties.
To no one's surprise, the data was virtually nonexistent, especially in the region's predominantly rural counties.
Pooling the Data
Data is always the most important -- and almost always the most expensive -- component of any GIS development project, and the KY CSEPP project didn't disprove those time-tested facts.
During the second phase, the KY CSEPP used PlanGraphics to design a detailed database for 65 non-base-map layers in 10 categories including evacuation origins, such as schools; public safety, medical and evacuation resources; utility and transportation networks; and political and administrative boundaries.
If an incident occurs, officials must decide to shelter residents in place or evacuate a potentially large number of people to safe shelter sites. Therefore, the KY CSEPP needs many detailed data sets.
As PlanGraphics began building the KY CSEPP's GIS, it uncovered several novel data sources. The Kentucky Board of Medical Licensure keeps records of all licensed physicians and physician assistants, which were obtained and used to populate attribute databases. They also helped locate approximately 2,000 MDs and physician assistants on the GIS maps.
Similar data sets were found for pharmacies, veterinarians, daycare centers, assisted living and long-term care centers, and group foster-care facilities, among others. A lot of necessary information had to be converted from hard copy.
Because the state didn't have a master address database for its 120 counties, the KY CSEPP purchased addressing software from Geographic Data Technology Inc. (GDT) to geo-code locations with situs addresses.
In addition, county CSEPP and local public safety staff identified known locations on one-meter digital ortho photography for digitizing, and provided GPS coordinates for a number of other features, such as emergency landing zones.
Local public safety and emergency management personnel were incredibly knowledgeable and helpful in building the GIS database. Local participation in the development effort is essential to build ownership and trust among individuals who will provide future data updates and use the GIS locally.
After collecting and processing the data, and populating the GIS database, PlanGraphics developed an ArcView Project -- a file for organizing work -- that gave users an interface designed specifically for non-GIS professionals.
This GIS had to be user-friendly. The two opening application screens let users pick the type of data and counties to display by answering several basic questions and clicking a few buttons.
Expanding the GIS
As we gained experience and became comfortable with the GIS, we wanted to use it to do more, including making it available to others.
For the third phase, PlanGraphics added customized functionality to the ArcView Project and developed a secure Internet site for access to the GIS from the field, the Blue Grass Army Depot, 10 county emergency operations centers, FEMA Region IV in Atlanta, and the U.S. Army's CSEPP Office in Washington, D.C.
PlanGraphics was asked to augment the ArcView Project to enable the KY CSEPP to more quickly conduct analyses typically performed for planning and used during incident response. First, a customized search capability was developed to search the database by facility name, type or location within a boundary feature, such as immediate response zones, cities, counties or census tracts.
Second, several different buffering routines that find features, such as highways and rivers, by proximity were programmed. Finally the ability to search for an XY location, buffer the coordinate position, and find and identify features within the selected area was added.
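The buffering logic itself is conceptually simple. As an illustration only (the coordinates, distance, and facility names below are made up, and the actual system was built on ESRI's ArcView/ArcIMS tools), the sketch uses the open-source Shapely library to buffer an XY position and test which features fall inside it.

    from shapely.geometry import Point

    incident = Point(5482100, 3761550)        # hypothetical XY location in projected map units (feet)
    search_area = incident.buffer(5280)       # one-mile buffer, assuming units of feet

    facilities = {                            # stand-ins for features from the GIS database
        "county EOC": Point(5480900, 3760800),
        "high school shelter": Point(5499300, 3770100),
    }

    for name, location in facilities.items():
        if search_area.contains(location):
            print(name, "is inside the search area")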
The GIS was so popular the KY CSEPP was producing and sending out dozens of maps each week. Sometimes they even went to non-CSEPP agencies that just wanted access to the map information.
The KY CSEPP intended to install the GIS mapping project in all 10 counties from the start, as well as in other state and federal agency locations, so due to cost, ESRI's ArcIMS solution was chosen. This solution would make the data and maps available over the Internet. Using ArcIMS also kept the GIS database synchronized.
After purchasing and installing a dedicated server and the ArcIMS software for the KY CSEPP, PlanGraphics converted the ArcView Project to an Internet mapping application that provides all the functionality that was previously developed.
The secure application is being placed online and tested by all users before full rollout. PlanGraphics also is preparing a database maintenance strategy for keeping the GIS current.
The KY CSEPP has moved well beyond black-and-white highway maps, but there is still much more to accomplish.
The next step will be to integrate the GIS with functions such as chemical sensor monitoring, plume modeling, evacuation routing, alert call notification and traffic video surveillance.
PlanGraphics developed an interoperability architecture called STEPs (Spatial Templates for Emergency Preparedness), and we are jointly pursuing funding for it.
Users can just as easily apply the GIS to other types of emergency preparedness and response hazards. And even if the KY CSEPP mapping project were to progress no further, it has already dramatically improved planning for and responding to an incident at the depot, however unlikely one may be.
Bill Hilling is the planning project supervisor for the Kentucky Division of Emergency Management's Chemical Stockpile Emergency Preparedness Program. | <urn:uuid:2abce51e-ccd7-4880-be97-6f1a7ecac5fb> | CC-MAIN-2017-09 | http://www.govtech.com/public-safety/Disaster-Planning.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00189-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95219 | 1,483 | 3.234375 | 3 |
DARPA looking to give UAVs better 'hand-eye' coordination
Researchers funded by the Defense Department are hoping to give unmanned aerial vehicles finer visual depth perception and capabilities that enable them to pick materials up and move them around.
The Defense Advanced Research Projects Agency (DARPA) on Dec. 6 detailed some of the research being conducted to give unmanned aerial vehicles (UAVs) more precise autonomous payload placement capabilities.
Researchers funded by the agency successfully tested a vision-driven robotic-arm payload emplacement using Santa Clara, CA-based MLB Company’s V-Bat “tail-sitter” UAV. The vehicle, said DARPA, is capable of both hover and wing-borne flight, making possible the delivery and precision emplacement of a payload.
Key to the capabilities is a special extendable six-foot robotic arm attached to the UAV that can grab and carry up to a one-pound payload.
The research team designed and developed a low-cost vision system that estimates the target’s position relative to the hovering vehicle in real time, said DARPA. This vision system also enables the UAV to search and find the target for the emplacement autonomously and then perform the action, it said.
The capability paves the way for a number of applications for precise long-range delivery of small payloads into difficult-to-reach environments, according to the agency.
“Our goal with the UAV payload emplacement demonstration was to show we could quickly develop and integrate the right technology to make this work,” said Dan Patt, DARPA program manager. “The success of the demonstration further enables the capabilities of future autonomous aerial vehicles.”
During the technology demonstration, DARPA said the MLB-built V-Bat successfully demonstrated:
- A newly developed stereo vision system that tracks the emplacement target and the motion of the robotic arm. The vision system, coupled with the Global Positioning System, controls the arm and the V-Bat during emplacement.
- Control logic to maneuver the vehicle and direct the robotic arm to accurately engage the emplacement target.
- Vehicle stability with the arm extended six feet with a one-pound payload.
- Autonomous search and detection of the emplacement target and autonomously emplaced a one-pound payload. | <urn:uuid:2b984144-2d3f-4772-b961-e0cd9eb86e6b> | CC-MAIN-2017-09 | http://gsnmagazine.com/node/27986?c=disaster_preparedness_emergency_response | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00541-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.895452 | 519 | 2.84375 | 3 |
It's not often that one gets a chance to attend a demonstration of a new method of human-computer interaction. Since I was too young to witness the development of the command line in the 1950s or the modern graphical user interface at Xerox PARC in the 1970s, it was a genuine thrill to visit Microsoft's campus for a personal demo of "surface computing." While future computer historians are unlikely to view this technology as being anywhere near as groundbreaking as the CLI or GUI, the multi-touch interface nonetheless serves as an innovative way of interacting with the personal computer.
Microsoft Surface has taken many years to come to fruition. The original idea was developed in 2001 by employees at Microsoft Research and Microsoft Hardware, and it was nurtured towards reality by a team that included architect Nigel Keam. Not content with merely coming up with a new idea, the Surface team is committed to actually releasing it to the commercial market as early as the end of 2007. From there, the team hopes that the product will make its way from retail and commercial establishments to the home, in much the same manner as large-screen plasma displays have migrated out of the stadium and into the living room over the past few years.
Microsoft began the Surface project back in 2001, after the idea had already been proposed by employees in the Microsoft Research division. For many years the work was hidden under a non-disclosure agreement. Keam mentioned that, although necessary, the NDA made it frustrating when Microsoft scheduled the official Surface announcement just days after Apple announced the iPhone. While both projects employ touch-sensitive screens with multi-touch capability, they are very different from each other, and the development timelines clearly show that neither was "copied" from the other. As Keam put it: "I only wish I could work that fast!"
Beyond creating the hardware, however, the Microsoft Surface team has identified several different scenarios where the device could be used in retail and commercial environments, and it has developed demonstration software that shows off the potential of the system. Microsoft has partnered with several retail and entertainment companies and will be co-developing applications customized for these environments.
Let's take a look.
Senior marketing director Mark Bolger models Surface
Essentially, Microsoft Surface is a computer embedded in a medium-sized table, with a large, flat display on top that is touch-sensitive. The software reacts to the touch of any object, including human fingers, and can track the presence and movement of many different objects at the same time. In addition to sensing touch, the Microsoft Surface unit can detect objects that are labeled with small "domino" stickers, and in the future, it will identify devices via radio-frequency identification (RFID) tags.
The demonstration unit I used was housed in an attractive glass table about three feet high, with a solid base that hides a fairly standard computer equipped with an Intel Core 2 Duo processor, an AMI BIOS, 2 GB of RAM, and Windows Vista. The team lead would not divulge which graphics card was inside, but said it was a moderately powerful graphics card from either AMD/ATI or NVIDIA.
The display screen is a 4:3 rear-projected DLP display measuring 30 inches diagonally. The screen resolution is a relatively modest 1024x768, but the touch detection system had an effective resolution of 1280x960. Unlike the screen resolution, which for the time being is constant, the touch resolution varies according to the size of the screen used—it is designed to work at a resolution of 48 dots per inch. The top layer also works as a diffuser, making the display clearly visible at any angle.
Unlike most touch screens, Surface does not use heat or pressure sensors to indicate when someone has touched the screen. Instead, five tiny cameras take snapshots of the surface many times a second, similar to how an optical mouse works, but on a larger scale. This allows Surface to capture many simultaneous touches and makes it easier to track movement, although the disadvantage is that the system cannot (at the moment) sense pressure.
Five cameras mounted beneath the table read objects and touches on the acrylic surface above, which is flooded with near-infrared light to make such touches easier to pick out. The cameras can read a nearly infinite number of simultaneous touches and are limited only by processing power. Right now, Surface is optimized for 52 touches, or enough for four people to use all 10 fingers at once and still have 12 objects sitting on the table. (For more on the camera system and hardware, check out our launch coverage of the system).
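Microsoft has not published the Surface vision pipeline, so the following is only a generic sketch of how simultaneous touches can be pulled out of a near-infrared camera frame using OpenCV-style blob detection; the threshold and area limits are invented for illustration.

```python
# Generic multi-touch blob detection sketch (not Microsoft's actual
# pipeline). Assumes a grayscale frame from a near-infrared camera in
# which fingertips show up as bright spots; thresholds are invented.
import cv2

def detect_touches(ir_frame, min_area=30, max_area=1500):
    blurred = cv2.GaussianBlur(ir_frame, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        area = cv2.contourArea(c)
        if min_area <= area <= max_area:      # ignore noise and large objects
            m = cv2.moments(c)
            touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"], area))
    return touches  # one (x, y, area) tuple per finger or tagged object
```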
The unit is rugged and designed to take all kinds of abuse. Senior director of marketing Mark Bolger demonstrated this quite dramatically by slamming his hand onto the top of the screen as hard as he could—it made a loud thump, but the unit itself didn't move. The screen is also water resistant. At an earlier demonstration, a skeptical reporter tested this by pouring his drink all over the device. Microsoft has designed the unit to put up with this kind of punishment because it envisions Surface being used in environments such as restaurants where hard impacts and spills are always on the menu.
The choice of a 4:3 screen was, according to Nigel Keam, mostly a function of the availability of light engines (projectors) when the project began. Testing and user feedback have shown that the 4:3 ratio works well, and the addition of a slight amount of extra acrylic on each side leaves the table looking like it has normal dimensions.
Built-in wireless and Bluetooth round out the hardware capabilities of Surface. A Bluetooth keyboard with a built-in trackpad is available to diagnose problems with the unit, although for regular use it is not required. | <urn:uuid:a839c71a-bd7c-4a7d-93ff-6507b6aae08f> | CC-MAIN-2017-09 | https://arstechnica.com/information-technology/2007/09/surface/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00541-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958804 | 1,167 | 2.5625 | 3 |
January 22, 2010
-- As the Linear Tape-Open (LTO) Program announces licensing details for the next generation of 3TB LTO 5 tapes, IBM and FujiFilm are unveiling new technology that makes it possible to hold up to 35TB of uncompressed data on a single tape cartridge.
The world record breakthrough was made possible by an improvement in the precision of controlling the position of the read-write heads, according to IBM. The pinpoint control yields better than a 25-fold increase in the number of tracks that can be squeezed onto the half-inch-wide tape.
The scientists have also developed new detection methods to increase the accuracy of reading the tiny magnetic bits, an advance that increases the linear recording density by more than 50%.
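As a rough sanity check (my own arithmetic, not IBM's published figures), the two density gains can be compounded against the 800 GB native capacity of an LTO-4 cartridge:

```python
# Back-of-the-envelope arithmetic, not IBM data. 800 GB is the native
# (uncompressed) capacity of an LTO-4 cartridge; the gains are the
# figures quoted above.
lto4_native_tb = 0.8        # 800 GB per cartridge, uncompressed
track_gain = 25             # ">25-fold increase" in track density
linear_gain = 1.5           # ">50%" higher linear recording density

projected_tb = lto4_native_tb * track_gain * linear_gain
print(f"Projected cartridge capacity: ~{projected_tb:.0f} TB")  # ~30 TB
# Roughly the same ballpark as the 35 TB demonstration; the remaining gap
# would have to come from other media and head improvements.
```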
The tape also uses a new, low-friction read-write head developed by IBM Research.
IBM claims the demonstration (shown in an IBM Research video) was performed at product-level tape speeds (2 meters per second) and achieved error rates that are correctable using standard error-correction techniques to meet IBM's performance specification for its LTO Generation 4 products.
Tape still has the advantage over hard disk drives (HDDs) when it comes to cost. IBM claims today's tape systems cost one-fifth to one-tenth the price of disk-based storage systems, not to mention the power savings associated with magnetic storage.
The concept of storing that much data on a single tape may stave off the "tape is dead" argument for another decade – at least, that's what IBM is hoping.
posted by: Kevin Komiega | <urn:uuid:2b4a8495-ea83-4e84-9bcb-9c3f0d495538> | CC-MAIN-2017-09 | http://www.infostor.com/index/blogs_new/kevin_komiega_storage_blog/blogs/infostor/kevin_komiega_storage_blog/post987_6262930.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00241-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938643 | 328 | 2.546875 | 3 |
The Network is the Database: Integrating Widely Dispersed Big Data with Data Virtualization
Originally published January 14, 2014
Introduction
Almost 30 years ago, in 1984, John Gage of Sun Microsystems (acquired by Oracle in 2010) coined the phrase "The Network is the Computer." He was right then, and he is even more right today. Nowadays, application processing is highly distributed over countless machines connected by a network. The boundaries between computers have completely blurred. We run applications that seamlessly invoke application logic on other machines.
But it's not only application processing that is scattered across many computers; the same can be said for data. More and more digitized data is entered, collected and stored in a distributed fashion. It's stored in cloud applications, in outsourced ERP systems, on remote websites and so on. In addition, external data is available from government, social media and news websites, and the number of valuable open data sources is staggering. The network is not only the computer anymore; the network has become the database as well.
This dispersion of data is a fact. Still, data has to be integrated to become valuable for an organization. For a long time, the traditional solution for data integration has been to copy the data to a centralized site such as the data warehouse. However, data volumes are increasing (and not only because of the popularity of big data systems). The consequence is that, more and more often, data has become too big to move (for performance, latency or financial reasons) and has to stay where it's entered. For integration, instead of moving the data to the query processing (as in data warehouse systems), query processing must be moved to the data sources.
This article explains the problem of centralized consolidation of data and describes how data virtualization helps to turn the network into a database using on-demand integration. It also explains the importance of distributed data virtualization for operating efficiently in today's highly networked environment.
A Short History Lesson
Once upon a time, all the digitized data of an enterprise was stored on a small number of disks managed by a few machines, all standing in the same computer room. Specialists in white coats monitored these machines and were responsible for making backups of the valuable data. It's very likely that all the users were in the same building as well, accessing the data through monochrome monitors. The network that was used to move data between the machines was referred to as the sneakernet.
Then the time came when users started to roam the planet, and machines residing in different buildings were connected with real networks. Compared to today, these first generations of networks were just plain slow. For example, in the 1970s, Bob Metcalfe (co-inventor of Ethernet) built a high-speed network interface between MIT and ARPANET. This network supported a dazzling network bandwidth of 100 Kbps. Compare that with today's 100 Gigabit Ethernet that offers a million times more bandwidth. In an optimized network environment, one terabyte of data can now be transferred within 80 seconds. This would have taken 2.5 years in the 1970s.
Because users were working on remote sites, accessing data involved transmitting data back and forth, and that was slow. The vendors of database servers tried to solve this problem by developing distributed database servers in the 1980s. By applying replication and partitioning techniques, data was moved closer to the users to minimize network delay. With replication, data is copied to the nodes on the network where users are requesting data. To keep replicas up to date, distributed database servers support complex and innovative replication mechanisms.
Nowadays, it's no longer the computing room where new data is entered. Data is entered, collected and stored everywhere. Examples include:
Distributed collection: Websites running in the cloud collect millions of weblog records indicating visitor behavior. Factories operating worldwide run high-tech machines generating massive amounts of sensor data. Mobile devices collect data on application usage and track geographical locations.
But it's not only that data is stored in a distributed fashion; data entry is distributed as well. Employees, customers and suppliers all enter data via the Internet, using their own machines at home, on their mobile devices and so on. Data entry has never been more dispersed.
To summarize this short history lesson, in the beginning data and users were centralized. Next, data stayed centralized, and users became distributed. Now data and users are both highly distributed.
The Need to Integrate Distributed Data Remains
As described, there are many good reasons why data entry and data storage are dispersed. Still, data has to be integrated, and for many different reasons.
Is Centralization the Answer to Data Integration?
For the last twenty years, the most popular solution for integrating data has been the data warehouse. In most data warehouse systems, data from multiple sources is physically moved to and consolidated in one big database (one site). Here, the data is integrated, standardized and cleansed, and made available for reporting and analytics.
This centralization and consolidation of data makes a lot of sense from the perspective of the need to integrate data. And if there is not too much data, it's technically feasible. But can we keep doing this? Can we keep moving and copying data, especially in this era of big data? It looks as if the answer is going to be no, and for some organizations it's already a no: the approach is expensive, increasingly hard to do technically, and it may clash with regulations.
Data Virtualization to the Rescue – Moving Processing to the Data
But how can all the distributed data be integrated without copying it first to a centralized data store, such as a data warehouse? Data virtualization technology offers a solution. In a nutshell, data virtualization makes a heterogeneous set of data sources look like one logical database to the users and applications. These data sources don't have to be stored locally; they can be anywhere.
Data virtualization technology is designed and optimized to integrate data live. There is no need to physically store all the integrated data centrally. It's only when data from several different sources is requested by users that it's integrated, but not before that. In other words, data virtualization supports integration on demand.
Because data virtualization servers retrieve data from other systems, they must understand networks. They must know how to efficiently transmit data over the network to the server where the integration on demand takes place. For example, to minimize network traffic, mature data virtualization servers deploy so-called push-down techniques. If a user asks for a small portion of a table, only that portion of the data is extracted by the data virtualization server from the data source and not the entire table. The query is "pushed down" to the data source instead of requesting the entire table.
Push down allows a data virtualization server to move the processing to the data instead of moving the data to the processing. In the latter case, all the data is transmitted to the data virtualization server that subsequently executes the request. Especially if big data sets are used, this approach would be slow because of the amount of network traffic involved. A preferred approach is to ship the query to the data source, and transmit only relevant data back to the data virtualization server.
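A toy illustration of the difference between the two plans is sketched below, with an in-memory SQLite database standing in for a remote data source; the VirtualTable wrapper and its method names are invented for the example.

```python
# Toy illustration of predicate push-down, with an in-memory SQLite
# database standing in for a remote source. The VirtualTable class and
# its method names are invented for this example.
import sqlite3

class VirtualTable:
    def __init__(self, conn, table):
        self.conn, self.table = conn, table

    def fetch_all_then_filter(self, predicate):
        # Naive plan: move the data to the processing.
        rows = self.conn.execute(f"SELECT * FROM {self.table}").fetchall()
        return [r for r in rows if predicate(r)]     # every row crosses the wire

    def push_down(self, where_clause, params):
        # Preferred plan: move the processing to the data.
        sql = f"SELECT * FROM {self.table} WHERE {where_clause}"
        return self.conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EU", 90.0), (2, "US", 40.0), (3, "EU", 15.0)])

vt = VirtualTable(conn, "orders")
print(vt.fetch_all_then_filter(lambda r: r[1] == "EU"))  # ships all rows first
print(vt.push_down("region = ?", ("EU",)))               # ships only EU rows
```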
The Need for Distributed Data Virtualization – Moving Processing Closer to the Data
Moving processing to the data is a powerful feature to optimize network traffic, but it's not sufficient for the distributed data world of tomorrow. Imagine that a data virtualization server runs on one server and all the requests for data are first moved to that central server, queries are sent to all the data sources, answers are transmitted back, and all the data is integrated and returned to all the users. This centralized processing of requests can be highly inefficient. It would be like a worldwide operating parcel service where all the parcels are first shipped to Denver, and from there to the destination address. If a specific parcel has to be shipped from New York to San Francisco, then this is not a bad solution. However, a parcel from New York to Boston is going to take an unnecessarily long time because of this detour via Denver. Or what about a parcel that must be shipped from Berlin, Germany, to London, UK? That parcel is going to make a long journey via Denver before it arrives in London.
Besides this inefficiency aspect, it's not recommended to have one data virtualization server because it lowers availability. If that server crashes, no one can get to the data anymore. It would be like the parcel service in a situation where the airport in Denver is closed because of bad weather conditions.
To address the new data integration workload, it's important that data virtualization servers support a highly distributed architecture. Each node in the network where queries originate and data sources reside should run a version of the data virtualization server for processing these requests. Each node of the data virtualization server that receives user requests should know where the requested data resides, and must push the request to the relevant data virtualization server. Multiple data virtualization servers work together to execute the request. The effect is that when no remote data is requested, no shipping of data and requests will take place.
This is only possible if a data virtualization server is knowledgeable about network aspects, such as the fastest network route, the cheapest route, how to transmit data efficiently, the optimal packet size, and so on. Just as they must know how to optimize database access, data virtualization servers must also know how to optimize network traffic. It requires a close marriage of the network and data virtualization.
Note that this requirement to distribute data virtualization processing over countless nodes is not very different from the data processing architectures of NoSQL systems.
The Network is the Database
Data and data entry are more and more distributed over the network, and over time this will only escalate. The time when all the data was stored together is forever gone. Sun Microsystems' tagline once was "The Network is the Computer." In this era, in which data is entered and stored everywhere, in which the users who access the data can be everywhere, and in which big data systems are being developed, an analogous statement can be made:
The network is the database.
If the network is the database, copying all the data to one centralized node for integration purposes is expensive, almost technically undoable, and may clash with regulations. With its integration-on-demand approach, data virtualization technology offers a more suitable way to integrate all this widely dispersed data. Data virtualization will be the key instrument for integrating widely dispersed big data and turning "the network" into "the database." A requirement will be that data virtualization servers have a highly decentralized architecture and are extremely network-aware.
BeyeNETWORK™ is a trademark of Powell Media, LLC | <urn:uuid:2d0c2257-ea16-444f-b86b-e5aab59e43ae> | CC-MAIN-2017-09 | http://www.b-eye-network.com/print/17223 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00010-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.928098 | 2,255 | 2.90625 | 3 |
Using Instant Messaging as a Support Resource
Once a toy for Internet users, instant messaging is gaining acceptance in the workplace. The future of IM will go far beyond the consumer desktop.
In this article, we'll look at instant messaging (IM) and its growing use in the workplace. What started out as a toy for the Internet is growing in popularity among business users. Many valid applications for this technology exist in the workplace.
How It Works
IM is an Internet technology that lets you send and receive text messages, voice messages, file attachments, and other data instantly over the Internet. E-mail is not an instant technology because it sends messages through a server that stores the items until the user retrieves them. Messages arrive in real time using IM because both parties are constantly connected to the network.
When you log on to an IM service, the software informs a server that you are online and ready to receive messages. In order to send messages to another user, you select that person's name from a contact list you've built. You then enter your message and click Send. Depending on which service you use, the server either directly relays the message to the recipient or facilitates a direct connection between you and the recipient.
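The following is a deliberately simplified, in-memory sketch of that centralized model: a presence server records who is online and relays messages between them. Real IM services add network transport, authentication and offline handling; all class and method names here are invented.

```python
# Deliberately simplified, in-memory model of a centralized IM service.
# Real systems use sockets, authentication and offline message handling.
class PresenceServer:
    def __init__(self):
        self.online = {}                    # screen name -> client object

    def login(self, client):
        self.online[client.name] = client   # "I am online and ready"

    def relay(self, sender, recipient, text):
        target = self.online.get(recipient)
        if target is None:
            return False                    # recipient is not logged on
        target.receive(sender, text)
        return True

class Client:
    def __init__(self, name, server):
        self.name, self.server = name, server
        server.login(self)

    def send(self, recipient, text):
        return self.server.relay(self.name, recipient, text)

    def receive(self, sender, text):
        print(f"[{self.name}] message from {sender}: {text}")

server = PresenceServer()
alice, bob = Client("alice", server), Client("bob", server)
alice.send("bob", "Are you there?")         # delivered via the central server
```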
There are three methods that IM services use to deliver messages: centralized network, peer-to-peer connection, or a combination of both:
- Centralized network--Connects users to each other through a series of servers that form a large network. When a message is sent, servers find the recipient's PC and route the message through the network until it reaches its destination. MSN Messenger uses this method.
- Peer-to-peer--Uses a central server to keep track of who is online. Once you log on, the server sends you the IP addresses of everyone on your contact list who is currently logged on. By doing this, messages are sent directly to the recipient without involving a server. This method is faster for sending large files and graphics. ICQ uses this method.
- Combination--Uses a centralized network of servers for sending text messages, but establishes a peer-to-peer connection for sending large files and graphics. AIM uses this method. | <urn:uuid:2e9a5e01-bc25-4860-95cd-c1b9cc6b6b61> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/netsysm/article.php/624591/Using-Instant-Messaging-as-a-Support-Resource.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00010-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.918337 | 450 | 2.78125 | 3 |
Tapping Into A Homemade Android Army
Black Hat speaker will detail how security researchers can expedite their work across numerous Android devices at once.
In the Android development world, fragmentation has been the bane of the typical app coder's existence for as long as the platform has been running devices. With so many different devices to account for, it's difficult to troubleshoot and ensure apps run uniformly across them. That same frustration is actually amplified in mobile security research, because as white hat hackers dive under the hood of Android devices they find that not only do different devices behave differently, but sometimes even devices advertised under the same name may sport different processors and totally different architectures.
"Each device is kind of like a unique snowflake," says Joshua Drake, director of research science at Accuvant Labs. "Even if we both had a Samsung Galaxy S3, and, say, you had one from Verizon and I had one unlocked, those phones are almost completely different on the inside. Samsung makes the processor for the unlocked one, and Qualcomm's processor runs Verizon's. That core of a change will change a lot of things."
Consequently, understanding how certain vulnerabilities may cut across devices and manufacturers becomes a very difficult nut to crack -- or, at the very least, requires a long nut-cracking process. However, at Black Hat USA next month Drake plans to help the security community save time and focus on finding bugs and reaching other important security conclusions by building what he terms a homemade "Android Army." His talk will discuss how a simple hardware hack, combined with an open-source toolkit he's been refining, can make it easier for researchers to scale their exploration across many different devices at once.
Drake came up with the idea as he was writing and researching the Android Hacker's Handbook. As he explains, the typical way a researcher interacts with an Android device is through the device hooked up via USB and the Android Debug Bridge (ADB) running on a PC.
"That tool works fine, but it is not really designed to be one where you're operating on lots of devices," he says. "I thought to myself: Wouldn't it be great if I could somehow have ADB but add in this extra layer of something that will run across a whole bunch of devices?"
And so, Drake figured out the most expeditious way to chain multiple USB hubs together to get dozens of devices running on a PC at once, and started working on the scripts that would eventually make up what he calls the Android Cluster Toolkit. Already available as an open-source project, the toolkit makes it easier not only for the user to identify devices hooked into a computer by human-friendly names rather than long serial codes, but also to run commands on multiple devices at once. Drake says he personally has built up a cluster of about 55 devices but that it is possible for a researcher to cram up to 127 devices at once on a single PC's root USB hub.
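The Android Cluster Toolkit itself is published on Google Code; the snippet below is not that code, just a small hypothetical sketch of the core idea of enumerating attached devices with adb and fanning a single command out to each of them.

```python
# Hypothetical sketch of the core idea, not the Android Cluster Toolkit
# itself: list attached devices with "adb devices" and run one command
# on every device in turn.
import subprocess

def list_devices():
    out = subprocess.run(["adb", "devices"], capture_output=True,
                         text=True, check=True).stdout
    serials = []
    for line in out.splitlines()[1:]:       # skip "List of devices attached"
        parts = line.split()
        if len(parts) == 2 and parts[1] == "device":
            serials.append(parts[0])
    return serials

def run_on_all(*adb_args):
    results = {}
    for serial in list_devices():
        proc = subprocess.run(["adb", "-s", serial, *adb_args],
                              capture_output=True, text=True)
        results[serial] = proc.stdout.strip()
    return results

# e.g. grab the Android version string from every attached device
for serial, version in run_on_all("shell", "getprop",
                                  "ro.build.version.release").items():
    print(f"{serial}: {version}")
```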
"It can be helpful, not just if you are auditing and looking through some source code and trying to connect that to real devices, but also if there has been a vulnerability that's already been identified and disclosed -- then you can quickly get an idea of which devices out there that are actually affected. Most of what the software part of this toolkit was designed to do was to help me find a way to type less and get more done."
Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. View Full Bio | <urn:uuid:8fa7254d-d0a7-4b16-ad10-cee1f0f179f8> | CC-MAIN-2017-09 | http://www.darkreading.com/mobile/tapping-into-a-homemade-android-army/d/d-id/1297309?_mc=RSS_DR_EDT | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00114-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964881 | 741 | 2.625 | 3 |
The realization that paper-based passports can be too easily altered or falsified is driving a worldwide move to electronic passports. In fact, there were more than 300,000 lost or stolen passports in the United States in recent years.
After the terrorist attacks on the United States on September 11, 2001, Congress legislated that all countries participating in the Visa Waiver Program with the United States must issue passports with integrated circuits (chips) to add digital security features that prevent counterfeiting and positively confirm the bearer of the passport with a biometric, such as a digital copy of the photograph printed in the passport.
Want to learn more about how the epassport protects your privacy and safety? Read JustAskGemalto.com’s Tutorial on the U.S. Electronic Passport. | <urn:uuid:8f774da4-e25b-4164-81eb-40a38dcd128a> | CC-MAIN-2017-09 | https://www.justaskgemalto.com/us/why-us-issuing-epassports/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00114-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.936433 | 163 | 2.78125 | 3 |
Making accessories that tie into an iOS device's Dock connector is an expensive proposition: it requires getting certain components from Apple and applying for a costly "Made for iPhone" (or iPod or iPad) license. However, it is possible to use the headphone jack for two-way data communication with an iPhone and also to power small electronic circuits. A group of students and faculty from the University of Michigan's Electrical Engineering and Computer Science Department have developed a small device it calls the "HiJack" to make sensing peripherals easily accessible to those on a tight budget.
Project HiJack is a hardware and software platform for enabling communication between a small, low-power peripheral and an iDevice. The system uses a 22kHz audio signal, which is converted into 7.4mW of power at 47 percent efficiency. That power runs a TI MSP430 microcontroller as well as any attached electronics, and allows the HiJack to communicate with an iOS application. The components to build a HiJack cost as little as $2.34 in significant quantities.
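The HiJack firmware and iOS code live on Google Code rather than being reproduced here; as a small illustration of the power-over-audio idea, the sketch below synthesizes a continuous 22 kHz tone of the kind an app could play out of the headphone jack. Only the 22 kHz figure comes from the project; the sample rate, amplitude and file name are illustrative.

```python
# Sketch: synthesize a continuous 22 kHz sine tone of the kind an app
# could play out of the headphone jack to power a peripheral. Sample
# rate, amplitude and file name are illustrative, not project values.
import wave
import numpy as np

RATE = 48000      # samples per second (comfortably above 2 x 22 kHz)
FREQ = 22000      # power/carrier tone frequency in Hz
SECONDS = 5

t = np.arange(RATE * SECONDS) / RATE
samples = (0.9 * np.sin(2 * np.pi * FREQ * t) * 32767).astype(np.int16)

with wave.open("hijack_power_tone.wav", "wb") as wav:
    wav.setnchannels(1)     # mono: one channel drives the power harvester
    wav.setsampwidth(2)     # 16-bit samples
    wav.setframerate(RATE)
    wav.writeframes(samples.tobytes())
print("wrote", len(samples), "samples")
```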
Other peripherals have used a similar technique to draw power and communicate with mobile devices. Mobile payment processing startup Square uses a small device to read the magnetic stripes on credit cards—it is powered by the headphone jack on an iPhone or other mobile device. There is also a low-power FM transmitter for some iPod models that is powered by the headphone jack.
The team behind Project HiJack envisions users building low-cost sensing and data acquisition systems for student and laboratory use. So far, it has built an EKG interface, soil moisture sensor, an integrated prototype with temperature/humidity sensors, PIR motion sensor, and potentiometer, and a version with a breadboard for prototyping new sensor applications.
Schematics for the HiJack board, as well as source code to enable communication via the audio port, are available on Google Code so that anyone with some soldering skills and the wherewithal can build a HiJack for his or her own use. Currently, software exists to work on iOS, but the hardware design should work with nearly any mobile device that has a combination headphone/microphone jack. The team plans to build APIs to enable the HiJack to work on Android and Windows Phone 7 in the future.
There is a way to get a HiJack for an iOS device without making one yourself, though: the team is putting 20 prebuilt and assembled HiJack boards up for grabs to those who submit a proposal for how they would use it. If your proposal is selected, you have to agree to two conditions: release any code for your project as open source, and let the team document your project on its website. | <urn:uuid:69e09d76-bdff-4849-8165-756166c2a8ce> | CC-MAIN-2017-09 | https://arstechnica.com/apple/2011/01/project-hijack-uses-iphone-audio-jack-to-make-cheap-sensors/?comments=1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00358-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944275 | 548 | 2.953125 | 3 |
How Michigan set the pace for state public safety networking
- By Patrick Marshall
- Jul 03, 2014
First of two parts.
Whether it was airliners bringing down the Twin Towers or Hurricane Katrina slamming into New Orleans, disasters over the past 15 years have demonstrated both the importance and vulnerability of public safety communications.
Emergency personnel responding to the collapse of the Twin Towers were severely hampered by overloaded radio channels and incompatible communications equipment.
And according to Eddie Compass, the New Orleans police superintendent during the chaos that followed Hurricane Katrina, his department had no communications at all for days, a lack he described as nearly as catastrophic as running out of ammunition.
In contrast, when a blackout struck on August 14, 2003, disrupting power in the Northeastern and Midwestern United States, public safety personnel in Michigan barely noticed any impact on communications.
"All of our [transmission] sites have redundant power, with generator power as well as commercial power," said Bradley Stoddard, director of Michigan's Public Safety Communications System (MPSCS). "Many of our sites in Southeast Michigan lost commercial power and kicked over to generator power. End users on the network had no idea. There was no loss of communications whatsoever."
Michigan, in fact, has long been a leader in developing public safety communications systems. Stoddard attributes that success to the state’s ability to keep an eye on economies of scale and adhere to standards that paved the way for expansion of shared public safety networking across the state.
“It really started in 1928,” Stoddard said. “The city of Detroit had the first public safety radio communications in the United States and, I would venture, probably the world.”
In 1928, of course, state of the art meant one-way radio communications from the police station to the patrol car. Nevertheless, according to Stoddard, Michigan’s state police force was so impressed by what it saw in Detroit that it pushed for similar capabilities.
Michigan again led the way in the 1940s, being one of the first states to install two-way radio communications in patrol cars. "At that time," noted Stoddard, "mobile radios were very large, as were the base stations."
While radio communications equipment gradually became smaller, lighter and better performing, the system set up in the 1940s remained fundamentally unchanged until the mid-1980s, when state police, noting the increasing mobility of criminals, wanted troopers to have the ability to communicate statewide, instead of just within jurisdictions.
And state officials didn't want a system that just connected state police to each other, Stoddard said. They envisioned a network that local police as well as other agencies at state and local levels could join.
Shared communications services
"The governor's office saw that the state police had a radio system, the Department of Natural Resources had a radio system and the Department of Transportation had a radio system," Stoddard said. "It became an issue of economies of scale. Why does everyone need to have their own radio system? Why don't we build one new system that provides statewide capability and then collapse those systems and bring the state agencies together?"
The challenge was that across jurisdictions agencies were using different, and in many cases not interoperable, equipment. It wasn't until 1989 that a coalition of federal agencies and public-safety professional associations established Project 25, a set of standards for digital radio equipment that made a statewide system feasible.
The Project 25 suite of standards involves digital land mobile radio (LMR) services for local, state and federal public safety agencies. In such systems, radios can communicate in analog mode with legacy radios and in either digital or analog mode with other P25 radios.
"By the mid-1990s, the RFPs went out for the system," Stoddard said. As a result, Michigan state agencies were ahead of the game when the events of September 11, 2001, occurred, thanks to its network of microwave radio transmission stations.
That piqued interest in the legislature to determine if there would be opportunities for local public safety to leverage the same statewide radio system that state agencies had access to, according to Stoddard.
Between 2002 and 2014, as agencies and local jurisdictions replaced equipment with Project 25-compliant systems, the statewide digital voice IP system grew to cover 57,000 square miles of Michigan using 244 microwave transmission towers across the entire state.
"In 2002 we had 152 agencies, both state and local, utilizing the system and roughly about 11,000 radios," Stoddard said. "Today we have 1,460 agencies representing local, state, federal, tribal and private, and roughly 67,000 radios. So just in a dozen years we have seen monumental growth."
Next: Tech decisions driving Michigan’s public safety expansion | <urn:uuid:99396f2a-eef1-4546-b0d1-f2814ea9fcc8> | CC-MAIN-2017-09 | https://gcn.com/articles/2014/07/03/michigan-public-safety-network.aspx?admgarea=TC_STATELOCAL | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00358-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966319 | 985 | 2.59375 | 3 |
Protocol breaks and content checking are technologies used under the hood. These technologies relate to the principal information security objectives, and ultimately how confidential information is protected using data diodes.
When protecting an isolated network against outsider attacks, there are a number of objectives and technologies that are commonly used. Objectives typically boil down to C.I.A.: confidentiality, integrity and availability.
The best possible technology for confidentiality is the unidirectional network connection by means of a data diode. However, there is a lot of technology relating to data diodes that impacts integrity and availability. In particular, protocol breaks and content checking have a subtle relation to these objectives.
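In software terms, the sending side of such a unidirectional link behaves like fire-and-forget datagrams with no acknowledgements, which is why redundancy is typically layered on top. The sketch below is purely conceptual; the address and repeat factor are invented.

```python
# Conceptual sketch of the sending side of a unidirectional link:
# datagrams are pushed toward the receiver with no return channel, so
# each frame is repeated to compensate for the missing acknowledgements.
# The address and repeat factor are invented.
import socket

DIODE_RX = ("192.0.2.10", 5005)   # receiver on the protected side
REPEATS = 3                       # crude redundancy instead of ACKs

def send_oneway(payload: bytes):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for seq in range(REPEATS):
            sock.sendto(seq.to_bytes(1, "big") + payload, DIODE_RX)
    finally:
        sock.close()
    # Note: no recv() anywhere -- the hardware has no return path.

send_oneway(b"sensor-reading:42")
```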
This briefing paper explains how data diodes are used to protect confidential information. | <urn:uuid:a2ed059d-aef3-4662-b879-571768a00200> | CC-MAIN-2017-09 | https://www.fox-it.com/en/insights/paper/protecting-confidential-information-using-data-diodes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00358-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.89308 | 153 | 2.890625 | 3 |
What is it?
XSL (Extensible Stylesheet Language) is a way of transforming and formatting XML documents.
Without a stylesheet, a processor would not know how to render the content of an XML document except as an undifferentiated stream of characters, according to the World Wide Web Consortium (W3C).
Cascading Style Sheets (CSS) can describe how XML documents should be displayed, although CSS is primarily intended for HTML. XSL is purpose-designed for XML and is far more sophisticated. It can, for example, be used to transform XML data into HTML/CSS documents.
Far from replacing CSS, XSL builds upon and complements it. The two languages can be used together, and both use the same underlying formatting model, so designers have access to the same formatting features in both languages.
Where did it originate?
XSL began as an initiative to bring publishing functionality to XML. The working group included representatives from IBM, Microsoft and the University of Edinburgh. As well as CSS, XSL's heritage includes the ISO-standard Document Style Semantics and Specification Language (DSSSL). XSL became a W3C recommendation in 2001.
What's it for?
The XSL specification is in two parts: a language for transforming XML documents - XSLT - and an XML vocabulary for specifying formatting semantics - XSL Formatting Objects (XSL-FO).
One use of XSL is to define how an XML file should be displayed by transforming it into a format recognisable to a browser, such as HTML. Each XML element is transformed into an HTML element. However, XSL does far more than simple formatting: it can also manipulate, evaluate, add or remove elements, and reassemble the information in the XML source document.
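A minimal example of such a transformation is sketched below using the third-party lxml library as the XSLT processor; the XML document and stylesheet are invented for illustration.

```python
# Minimal XSLT example using the third-party lxml library. The XML
# document and stylesheet are invented for illustration.
from lxml import etree

xml_doc = etree.XML(
    "<books>"
    "<book><title>XSLT Basics</title><price>25</price></book>"
    "<book><title>XML in Practice</title><price>30</price></book>"
    "</books>")

xslt_doc = etree.XML("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/books">
    <html><body><ul>
      <xsl:for-each select="book">
        <li><xsl:value-of select="title"/>: <xsl:value-of select="price"/></li>
      </xsl:for-each>
    </ul></body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt_doc)
print(str(transform(xml_doc)))   # each <book> element becomes an HTML <li>
```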
What makes it special?
CSS was designed for the needs of browsers and to be easy for browser manufacturers to implement. XSL is a more complex proposition and for this reason browser suppliers - Microsoft with Internet Explorer 5, for example - have not always kept up.
How difficult is it to master?
XSL should be an easy progression for people with XML skills, as it uses XML syntax. But it may be more challenging for people coming from a C or Java programming background.
Where is it used?
As well as transforming web development, XSL was intended from the outset to be used by print publishers. It handles all modern (and some ancient) alphabets, including Braille.
What systems does it run on?
XSL is supplier- and platform-neutral, but some implementations are more neutral than others. XSL-supporting browsers include Firefox, Mozilla, and Netscape.
What's coming up?
The W3C's XSL Working Group has started work on version 2.0 of XSL-FO.
There are many free XSL tutorials. Try, for example, the W3C site or the Cover Pages. Many other sites deal in detail with the day to day problems of working with XSL, or explore new ways of using it. IBM's Developerworks is one such site, and publisher O'Reilly and Associates has a daunting array of articles on the subject, as well as XSL books.
Rates of pay
XSL is used with all mainstream development skills - Active Server Pages, Visual Basic, Java, Perl and other scripting languages. Roles range from web designers to consultants in City firms. The range of wages varies accordingly.
Vote now at: www.computerweekly.com/ITgreats | <urn:uuid:0f86d262-f037-4597-88fa-01b34b144310> | CC-MAIN-2017-09 | http://www.computerweekly.com/news/2240078456/Hot-skills-XSL-Take-a-more-sophisticated-approach-to-style-sheets | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00058-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.910882 | 817 | 3.921875 | 4 |
VoIP networks are very popular these days. Supporting communication between traditional PBXs, Cisco IP phones, the analog PSTN, and analog telephones, all over an IP network, requires quite a number of protocols. Some are signaling protocols (for instance, MGCP, H.323, SIP, H.248, and SCCP) used to set up, maintain, and tear down a call. Other protocols deal with the actual voice packets rather than signaling information (for example, RTP, SRTP, and RTCP). Some of the most common VoIP protocols are shown and described here. | <urn:uuid:91f6d256-3893-4abc-8b69-e6a431de14db> | CC-MAIN-2017-09 | https://howdoesinternetwork.com/tag/voice-protocols | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00586-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.916589 | 129 | 2.71875 | 3 |
I am sure you had a good time thinking about the discussion captured in the last blog post (/blogs/2013/08/16/networking-beyond-tcp/).
So we know the problem well, and we understand that trying to change the basis of networking is not going to be acceptable in any way. Several researchers have worked together on this issue and finally came up with the proposal of MPTCP – Multipath TCP.
Let us go back to the basics of TCP: the connection establishment process is called the three-way handshake. Host A sends a packet with the SYN flag to Host B, which responds with a packet containing the SYN and ACK flags. Finally, Host A sends a packet with the ACK flag, which marks the connection as established and ready for data transfer. So the basics remain the same, and MPTCP utilizes an unused portion of the TCP packet called the options field. In an MPTCP session, Host A will add the MP_CAPABLE option to the SYN packet. If the receiving Host B supports MPTCP, then it will add the same MP_CAPABLE option to the SYN-ACK response packet. The final ACK from Host A will also contain the MP_CAPABLE option, establishing the Multipath TCP session between Host A and Host B. There are many other TCP extensions that use the options field as well, so this is nothing new to the TCP world.
So far nothing is different, other than Hosts A and B knowing they have formed an MPTCP session. Now Host A recognizes another interface to connect to Host B and initiates a new TCP connection with a different source address and a different port. It is a normal TCP connection, but in the MPTCP world it is called a sub-flow between Host A and B. For the MPTCP stack to recognize that this is a sub-flow of an existing MPTCP session, Host A will initiate the SYN packet with the MP_JOIN option. This option also includes information on which MPTCP session it should join. So the MPTCP session will recognize this new connection as a sub-flow of the existing session and add it. But if it were that simple, an attacker could easily forge a TCP connection to join someone else's MPTCP session. The researchers have taken care of this issue by ensuring a cryptographic key exchange between the two hosts while they exchange MPTCP options. With the help of this cryptographic validation it is ensured that the new connection belongs to the same host and can be allowed to join the existing MPTCP session.
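Real MPTCP implementations do this inside the kernel's TCP stack, but the token and HMAC arithmetic can be sketched in a few lines. The following is a simplified illustration loosely following the RFC 6824 scheme, not actual protocol code.

```python
# Simplified illustration, loosely following RFC 6824; real MPTCP lives
# in the kernel TCP stack. Only the token and HMAC arithmetic is shown.
import hashlib, hmac, os

def token(key: bytes) -> bytes:
    # The connection token is the most significant 32 bits of SHA-1(key).
    return hashlib.sha1(key).digest()[:4]

def join_hmac(key_1, key_2, nonce_1, nonce_2) -> bytes:
    # Each side proves knowledge of both MP_CAPABLE keys by HMAC-ing the
    # nonces exchanged during the MP_JOIN handshake (truncated here).
    return hmac.new(key_1 + key_2, nonce_1 + nonce_2, hashlib.sha1).digest()[:8]

# Keys exchanged in the initial MP_CAPABLE handshake
key_a, key_b = os.urandom(8), os.urandom(8)
# Nonces exchanged in the MP_JOIN SYN / SYN-ACK of the new sub-flow
r_a, r_b = os.urandom(4), os.urandom(4)

# Host A asks to join the session identified by Host B's token ...
print("session token:", token(key_b).hex())
# ... and Host B accepts the sub-flow only if A's HMAC over the nonces
# verifies against the keys it already holds.
mac_from_a = join_hmac(key_a, key_b, r_a, r_b)
assert hmac.compare_digest(mac_from_a, join_hmac(key_a, key_b, r_a, r_b))
```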
Similarly, multiple sub-flows can be created and joined to the MPTCP session, and once the session has multiple sub-flows it is entirely up to Hosts A and B to decide which flow should be used for data transfer at a given point in time. Each TCP connection maintains its own sequence and acknowledgment space and ensures it reliably transfers the data. At the MPTCP layer, the session has its own receive window, which is spread across all the sub-flows. It also has its own sequence space to keep track of how the data received from multiple sub-flows fits into the overall session stream. In the overall stack, the MPTCP layer sits between the application and the individual TCP sub-flows.
Interestingly, MPTCP offers a lot more flexibility, as the sub-flows are not dependent on each other. Sub-flows can join and exit the MPTCP session at any point in time without impacting the whole session. This is where it fits perfectly into mobile use cases, where devices disconnect and reconnect based on availability of the network as they move. The application layer will not even notice that sub-flows are joining and exiting at will. The MPTCP session will ensure that the application has connectivity and a smooth data transfer rate between the two hosts.
Great stuff and this is what we built into NetScaler stack for ensuring that we can work with Mobile clients efficiently and ensure better end user experience. Stay tuned for next blog with details around our implementation… | <urn:uuid:c246466a-5317-4db8-8e35-e14ced4116be> | CC-MAIN-2017-09 | https://www.citrix.com/blogs/2013/08/23/networking-beyond-tcp-the-mptcp-way/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00286-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9498 | 807 | 2.53125 | 3 |
Normally, I don't cover vulnerabilities that users can do little or nothing to prevent, but two newly detailed flaws affecting hundreds of millions of Android, iOS and Apple products probably deserve special exceptions.
The first is a zero-day bug in iOS and OS X that allows the theft of both Keychain (Apple's password management system) and app passwords. The flaw, first revealed in an academic paper (PDF) released by researchers from Indiana University, Peking University and the Georgia Institute of Technology, involves a vulnerability in Apple's latest operating system versions that enables an app approved for download in the App Store to gain unauthorized access to other apps' sensitive data.
“More specifically, we found that the inter-app interaction services, including the keychain…can be exploited…to steal such confidential information as the passwords for iCloud, email and bank, and the secret token of Evernote,” the researchers wrote.
The team said they tested their findings by circumventing the restrictive security checks of the App Store, and that their attack apps were approved by the App Store in January 2015. According to the researchers, more than 88 percent of apps were "completely exposed" to the attack.
News of the research was first reported by The Register, which said that Apple was initially notified in October 2014 and that in February 2015 the company asked researchers to hold off disclosure for six months.
“The team was able to raid banking credentials from Google Chrome on the latest Mac OS X 10.10.3, using a sandboxed app to steal the system’s keychain and secret iCloud tokens, and passwords from password vaults,” The Register wrote. “Google’s Chromium security team was more responsive and removed Keychain integration for Chrome noting that it could likely not be solved at the application level. AgileBits, owner of popular software 1Password, said it could not find a way to ward off the attacks or make the malware ‘work harder’ some four months after disclosure.”
A story at 9to5mac.com suggests the malware the researchers created to run their experiments can’t directly access existing keychain entries, but instead does so indirectly by forcing users to log in manually and then capturing those credentials in a newly-created entry.
“For now, the best advice would appear to be cautious in downloading apps from unknown developers – even from the iOS and Mac App Stores – and to be alert to any occasion where you are asked to login manually when that login is usually done by Keychain,” 9to5’s Ben Lovejoy writes.
SAMSUNG KEYBOARD FLAW
Separately, researchers at mobile security firm NowSecure disclosed they’d found a serious vulnerability in a third-party keyboard app that is pre-installed on more than 600 million Samsung mobile devices — including the recently released Galaxy S6 — that allows attackers to remotely access resources like GPS, camera and microphone, secretly install malicious apps, eavesdrop on incoming/outgoing messages or voice calls, and access pictures and text messages on vulnerable devices. Continue reading → | <urn:uuid:d3fadc3c-e7ba-4396-82c7-81d12c27a21f> | CC-MAIN-2017-09 | https://krebsonsecurity.com/tag/1password/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00462-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945988 | 651 | 2.546875 | 3 |
NASA Mars Rover Curiosity Gets Software Update
By CIOinsight | Posted 08-13-2012
NASA's Mars rover Curiosity is spending its first weekend on Mars getting a software update to give it pointers on how to drive on the Red Planet.
NASA officials said the update prepares the rover for some of the tasks it must perform going forward, including driving and using its strong robotic arm.
NASA said Curiosity's "brain transplant" began on Aug. 10 and will be completed on Aug. 13. The upgrade will install a new version of software on both of the rover's redundant main computers. This software for Mars surface operations was uploaded to the rover's memory during the Mars Science Laboratory spacecraft's flight from Earth, NASA said.
"We designed the mission from the start to be able to upgrade the software as needed for different phases of the mission," said Ben Cichy of NASA's Jet Propulsion Laboratory in Pasadena, Calif., chief software engineer for the Mars Science Laboratory mission, in a statement. "The flight software version Curiosity currently is using was really focused on landing the vehicle. It includes many capabilities we just don't need any more. It gives us basic capabilities for operating the rover on the surface, but we have planned all along to switch over after landing to a version of flight software that is really optimized for surface operations."
For instance, a key capability in the new software is image processing to check for obstacles. This allows for longer drives by giving the rover more autonomy to identify and avoid potential hazards and drive along a safe path the rover identifies for itself. Other new capabilities facilitate use of the tools at the end of the rover's robotic arm.
Meanwhile, as Curiosity completes its software transition, the mission's science team is continuing to analyze images that the rover has taken of its surroundings inside Gale Crater. Gale Crater is the landing site for Curiosity. Researchers are discussing which features in the scene to investigate after a few weeks of initial checkouts and observations to assess equipment on the rover and characteristics of the landing site.
The Mars Science Laboratory spacecraft delivered Curiosity to its target area on Mars at 10:31:45 p.m. PDT on Aug. 5 (1:31:45 a.m. EDT on Aug. 6), which includes the 13.8 minutes needed for confirmation of the touchdown to be radioed to Earth at the speed of light, NASA said.
Curiosity carries 10 science instruments with a total mass 15 times as large as the science payloads on NASA's Mars rovers Spirit and Opportunity, NASA said. Some of the tools, such as a laser-firing instrument for checking rocks' elemental composition from a distance, are the first of their kind on Mars. Curiosity will use a drill and scoop, which are located at the end of its robotic arm, to gather soil and powdered samples of rock interiors, then sieve and parcel out these samples into the rover's analytical laboratory instruments.
Moreover, to handle this science toolkit, Curiosity is twice as long and five times as heavy as Spirit or Opportunity, NASA officials said. The Gale Crater landing site at 4.59 degrees south, 137.44 degrees east, places the rover within driving distance of layers of the crater's interior mountain. Observations from orbit have identified clay and sulfate minerals in the lower layers, indicating a wet history, NASA said.
To read the original eWeek article, click here: NASA Mars Rover Curiosity Gets Software Update | <urn:uuid:ca912b97-76de-491f-82a8-c46231fbb4f9> | CC-MAIN-2017-09 | http://www.cioinsight.com/print/c/a/Latest-News/NASA-Mars-Rover-Curiosity-Gets-Software-Update-428095 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00054-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940506 | 703 | 2.65625 | 3 |
I was recently reading Michio Kaku's book, "Physics of the Impossible", which discusses a lot of technologies that we have become used to seeing in modern fiction, including: time travel; matter transportation; faster-than-light travel and UFOs. It's an interesting read for anyone who has any interest in physics.

One subject that struck me as interesting was about the future of microelectronics. We are used to chip manufacturers being able to pack more and more transistors into the same tiny space, and to clock devices ever faster. However, some physicists now think that this age will be over in the next 20 years. Photolithography (using UV light to etch circuits) creates feature sizes down to 50nm, so a transistor 50nm wide is the smallest you can make with this technique. I understood that you can make smaller features using electrons, but 50nm is already quite small; this is only 200 atoms of silicon. You can see that we are reaching some fundamental physical limits. The powerful CPUs we make today create vast amounts of waste heat that does not get used to produce useful work, and the heat problem becomes worse as you wind up the clock speed.

But if you can't add ever more cores to today's CPUs to make them faster, then what is the way forward? Software certainly has a role to play here, as more careful construction of algorithms has the potential to make some processes hundreds of times faster. But developments in software have not kept pace with hardware engineering: although we have new programming languages and better operating systems than 20 years ago, writing code to efficiently use the massively parallel hardware we have built has been a slow process. Perhaps a slowdown in the development speed of silicon chips is just what the software industry and software sciences need as an incentive to make things better? | <urn:uuid:1b7e6c13-1134-4d3a-9d2e-4500b8ac3f4b> | CC-MAIN-2017-09 | http://www.dialogic.com/den/developers/b/developers-blog/archive/2008/12/01/the-new-age-of-software.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00582-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967415 | 363 | 3.453125 | 3 |
Some European Union regulators reportedly are concerned that major Internet companies such as Google and Facebook gain an unfair competitive advantage from the detailed consumer data they hold, since other companies can never hope to amass anywhere near as much of it. In addition, some regulators worry that with less competition, these data-rich companies will disregard their customers’ privacy preferences and become more invasive. Not only are these regulators wrong, but by mistakenly classifying big data as anti-competitive and anti-consumer, they risk driving European companies away from the most productive uses of data, which would harm the competitiveness of European businesses and limit the potential consumer benefits.
Unlike most other business inputs, data is not a finite resource. One company’s use of consumer data does not preclude another company from developing tools to collect and use the same or similar information. One company cannot feasibly monopolize consumer data since consumers freely enter into a wide range of product and service agreements that allow myriad other companies access to their data. For example, Facebook’s data set is undoubtedly robust, yet many other organizations, including other social networks, dating sites, insurance companies, retailers and universities, build tools to collect similar data. Thanks to the Internet, barriers to collecting data have never been lower.
While some companies certainly have a head start on data, an advantage with data is no different than an advantage with any other key business input. Just as companies are not anti-competitive because they invest heavily in research and development, companies are not anti-competitive because they invest heavily in collecting data. Neither does having more data than anyone else ensure that market leaders will remain dominant. Many companies can successfully compete with larger rivals as long as they have a critical mass of data. For example, Pandora, which started in the United States in 2000, surely had large stores of consumer data by the time Spotify, which started in Sweden in 2006, entered the U.S. market. While both music streaming services offer paid, ad-free subscriptions, the large majority of their listeners opt to use the free, ad-supported versions that utilize user data to target advertisements. If it were true that large, more established Internet companies have an unfair advantage over the competition, then logic would dictate that Spotify should fail. To the contrary, though Pandora is still the largest player in the music streaming market in the United States, Spotify has been able to rapidly and consistently grow its user base.
Finally, there is little reason to believe that data-rich companies will weaken their privacy policies and handle consumer data less responsibly as they gain market share. All companies, regardless of the level of competition they face, risk reputational damage by abusing their customers’ trust. Furthermore, larger companies attract a proportionately large share of public and regulatory scrutiny. One needs only to observe the flurry of news stories that follow every change Google or Facebook makes to its privacy policies to be sure that no anti-consumer subversion would go unnoticed.
The reason Facebook enjoys such a large user base is because consumers find value in participating in the largest social network. Restricting the size of the network — or circumscribing the company’s ability to leverage user-generated data to offer more services — would necessarily reduce the value consumers receive. If regulators in the EU are concerned that data gives certain companies an unfair competitive edge, they should encourage a more robust data economy so that new entrants and incumbents both can easily collect and share data. Strict regulations that prevent companies from exchanging data could create the exact anti-competitive problem that regulators are trying to avoid — because startups are better able to compete with their larger rivals if they can easily collect and buy data. Regulators in the EU should be careful not to craft policies based on a misunderstanding of the role of data in business, as these policies would adversely impact European companies, reduce value for consumers, and hinder the advancement of data-driven innovation around the world.
Joshua New is a policy analyst with the Center for Data Innovation, a U.S.-based public policy think tank.
This story, "EU regulators misunderstand big data" was originally published by Computerworld. | <urn:uuid:1c80d47a-c378-4065-a13f-5f2f046a29f8> | CC-MAIN-2017-09 | http://www.itnews.com/article/2926722/big-data/eu-regulators-misunderstand-big-data.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00106-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961194 | 835 | 2.703125 | 3 |
Exploring LTE: Architecture and Interfaces (e) - Video
- Course Length:
- 1 hour of eLearning
NOTE: While you can purchase this course on any device, currently you can only run the course on your desktop or laptop.
Long Term Evolution (LTE) is explicitly designed to deliver high-speed, high quality services to mobile subscribers. In order to achieve this, the LTE network architecture introduces a number of new network nodes and interfaces to implement the necessary functionality and manage the exchange of packets between mobile devices and external packet data networks. This self-paced eLearning class discusses the overarching goals of LTE networks and then defines the unique network functions needed to achieve those goals. The course then describes the key interfaces between these functions, with particular emphasis on the LTE air interface, as well as the underlying protocols carried over these interfaces. Frequent interactions are used to ensure student comprehension of the essential technologies used in all LTE networks.
This course is intended for a technical audience looking for a detailed understanding of the important nodes, functions, and interfaces found in a typical LTE network.
After completing this course, the student will be able to:
• Discuss the rationale behind the 4G LTE network architecture
• Describe the critical network functions required in every LTE network
• Describe other nodes and functions typically found in large commercial wireless networks
• Identify the key interfaces between LTE nodes and the protocols carried over each interface
• Define EPS bearers and describe their role in supporting user services
• Explain the structure and functions of the LTE air interface
1. What is LTE?
1.1. 4G LTE
1.2. Packet data networks
2. LTE Network Nodes and Functions
2.1. E-UTRAN and EPC
3. Other Network Functions
4. LTE Network Interfaces and Protocols
4.1. Internet Protocol (IP)
4.2. S1-MME and S1-U
5. EPC Bearers
5.1. Default bearers
5.2. Dedicated bearers
6. LTE Air Interface
6.1. Air interface physical structure
6.2. OFDMA and SC-FDMA
6.3. Air interface physical channels
6.4. Uu protocol stack
• Welcome to LTE (eLearning)
• LTE-SAE Evolved Packet Core (EPC) Overview (eLearning)
• LTE Air Interface Signaling Overview (eLearning)
Create a flexible eLearning plan to purchase eLearning courses for one or more individuals, where course prices are discounted dependent on the number of courses purchased. | <urn:uuid:d9226c4a-7278-435b-a70e-827bbcc3f6d9> | CC-MAIN-2017-09 | https://www.awardsolutions.com/portal/elearning/exploring-lte-architecture-and-interfaces-e-h5v | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00106-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.810817 | 554 | 2.640625 | 3 |
Traceroute can be used to show you how a site is physically connected to the Internet. Along the way you will also gain an understanding of how networks inter-connect.
- MANY traceroute servers such as: Princeton, Telstra
- airport code database.
Other related pages: Determining Source of a Web Page, Reading a URL, Domain Names, Whois.
In the WHOIS Tutorial, you can determine who the registered owner of a domain name is. Knowing who owns the domain may not satisfy your curiosity. You may also be interested in where the Web server is located, and how it is connected to the Internet. There is a network utility called traceroute which is often used to troubleshoot network connections. In a Unix or Windows environment, traceroute can be used to determine the specific network route taken from your workstation to reach a specific remote host (DOS command: tracert sitename.com). Fortunately, there are many Unix systems on the Internet that allow us to originate a traceroute from their location to any other location that you specify.
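If you want to capture the same kind of hop-by-hop output programmatically (for example, to log the route to a host over time), one option is to drive the system's own traceroute tool from a short script. This is just an illustrative sketch; it assumes Python 3 and that traceroute (or tracert on Windows) is installed, and the target host is only an example.

```python
import platform
import subprocess

def trace(host):
    """Run the platform's traceroute tool and print each hop as it is reported."""
    cmd = ["tracert", host] if platform.system() == "Windows" else ["traceroute", host]
    with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
        for line in proc.stdout:
            print(line.rstrip())

trace("www.whitehouse.gov")
```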
Look at this sample traceroute from www.boardwatch.com to www.whitehouse.gov:
In this example, Boardwatch gets its connectivity from "esoft.com", which gets its connectivity from "coop.net", which is connected to BBNplanet (a backbone provider). BBNplanet interconnects with another backbone provider (PSI) through the MAE-West connection point. It then appears that the Whitehouse.gov Web site is connected through PSI near Virginia. Recognize that an organization's website may not be located at the organization; it may be hosted someplace else. A more accurate approach to determine the location of the organization might be to do a traceroute to the organization's mailhost or proxy server.
Tips on reading the traceroute results:
You can perform traceroutes from various sites via traceroute.org, opus1, Geektools, or Network-tools.com. There is also a world-wide map of traceroute servers, and a Multiple traceroute service that initiates simultaneous traceroutes from servers around the world. Visual Route plots your traced route over a geographic map; you can try a Java-based version of Visual Route online through this web page: http://visualroute.datametrics.com/. Pingplotter is another tool which also includes monitoring capabilities (to prove to your ISP that the connection does have problems).
For more information see CAIDA Animations for traceroute,
Final Thoughts. Recognize that a website can be hosted anywhere. Organizations make the decision on where they want to host a web site. Sometimes they have the connection, facilities and personnel to host the web site at their own facility, but it is also common for an organization to host their site at a commercial web hosting facility.
Contact me at 703-729-1757 or Russ
If you use email, put "internet training" in the subject of the email.
Copyright © Information Navigators | <urn:uuid:944f81b4-60a6-4670-a153-7abd6e12a94d> | CC-MAIN-2017-09 | http://navigators.com/traceroute.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00458-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.900597 | 653 | 2.6875 | 3 |
The U.S. National Security Agency has reportedly developed technology to commandeer computers even when they are off the Internet, and security experts warn it's only a matter of time before similar tools become part of cybercriminals' toolbox.
Since 2008, the NSA has used radio technology to send and receive data to compromise nearly 100,000 systems in a number of overseas targets, the New York Times reported Wednesday. While based on technology that has been around for decades, the level of sophistication of the tools developed by the NSA is impressive, experts say.
"The latest technology that they're using now, and the components that they're using, is infinitely more complex and is well-suited for the mission," Drew Porter, senior security analyst, for consulting firm Bishop Fox, said.
The NSA reportedly developed a radio transmitter/receiver that could fit in a USB plug or embedded in a laptop as a tiny circuit board. The technology can then move data over a secret radio channel back and forth with a relay station small enough to fit in an oversize briefcase and located as far as eight miles away.
To compromise the system the NSA would have to surreptitiously plant the technology using a spy, computer manufacturer or unwitting user, The Times said. Some of the tools used in such an operation were listed in an NSA catalog published last month by German newsmagazine Der Spiegel.
Such technology is far more advanced than the malware hackers use today to infect corporate networks to steal sensitive documents that can be sold on the black market or handed to government agencies.
As long as these techniques are effective, the majority of run-of-the-mill cybercriminals will use them, instead of going through the trouble of using something as complex as the NSA technology.
"They would go after another target before going to this length," Sean Sullivan, security adviser at F-Secure, said.
However, the technology and techniques used by spy agencies today will make their way into the criminal underground eventually.
"Within the intelligence community, it's known that the technology they are developing today is probably going to be used for corporate espionage or cybercrime down the road," Porter said.
In the case of the NSA's radio technology, successful cybercrime organizations would have the money to build the equipment or hire someone else to do it, if the technology's use would be highly profitable.
"Looking at a piece of equipment that costs $100,000 may be a lot for the average person, but if it means you can make $1 million off of it or $500 million for stealing (intellectual property), then it's definitely an investment (cybercriminals) would be willing to make," Porter said.
Some NSA technology has already found its way into criminal circles. The agency reportedly developed malware used to destroy centrifuges in Iranian nuclear facilities in 2010, The Times reported. The NSA used its radio technology for two years to gather information on the facilities in preparation for the attack.
Due to a technical error, the malware, later called Stuxnet, was discovered on the Internet and dissected by security researchers.
NSA gadgets that seem to come from a James Bond film will take years to find their way into criminal circles. Therefore, companies should focus today on keeping up with the less dramatic updates that occur regularly to hackers' malware and exploit kits.
"More sophisticated spying techniques and malicious attacks continue to be developed and organizations need to re-examine their critical applications and security processes to ensure that sensitive information and systems are protected," Sam Erdheim, senior security strategist for network security company AlgoSec, said. | <urn:uuid:2e3b6f1d-9aff-439c-a064-8224e3554329> | CC-MAIN-2017-09 | http://www.csoonline.com/article/2134297/malware-cybercrime/nsa-hacking-tools-will-find-their-way-to-criminals-eventually.html?source=rss_cso_exclude_net_net | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00458-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.963517 | 740 | 2.59375 | 3 |
Kelman: The impact of pay disparities
- By Steve Kelman
- Jul 24, 2008
Every once in a while you read something that uses a small set of concepts to explain a lot about the world. John Donahue’s new book, “The Warping of Government Work” (Harvard University Press), is an example. (In the interest of full disclosure, I must say that Donahue is a Harvard colleague and a friend.)
The book is organized around a simple idea. In the past 20 years, a major change has occurred in the distribution of earnings in the private sector.
With growing demand for knowledge-intensive services, incomes for highly educated, highly skilled people have taken off. At the same time, globalization has brought many unskilled people from developing countries into the international market, and competition has caused incomes for people at the middle and bottom to stagnate.
However, this change has not occurred in the public sector. Because of the strength of public-sector unions (at the bottom) and public hostility to high salaries for government employees (at the top), the government’s wage structure is now far more egalitarian than its private-sector counterpart. That means blue-collar government jobs pay noticeably more than comparable ones in the private sector. For example, in 1970, the pay for postal employees was 10 percent higher than for high school-educated men in general; by 2000, it was 60 percent higher.
That trend also means that professional, highly skilled jobs in government pay noticeably less. In 2003, the average salary for the top 10 percent of information technology employees was 27 percent less in government than in the private sector. For top executives, the gap is much larger.
As a result, for those at the bottom, government jobs are a safe harbor from the turmoil facing unskilled workers in the rest of the economy, and those workers will fight hard to prevent changes in their work conditions. For those at the top, such jobs are a backwater, unattractive to the best and brightest.
Donahue uses that observation to explain many of the ills facing government. To protect their safe harbor, employees create strong unions, which act to inhibit changes that would allow agencies to better serve the people. Because government is a backwater for high-end employees, its effectiveness in handling complex tasks is reduced. It often inappropriately outsources jobs for which contracts are hard to manage or that involve core governmental competencies because pay scales make it impossible to hire the talent government needs.
Donahue realizes the problems the separate government world has created. But changing the government’s egalitarian wage structure is difficult.
Maybe we can take some small steps?
Kelman is professor of public management at Harvard University’s Kennedy School of Government and former administrator of the Office of Federal Procurement Policy. Connect with him on Twitter: @kelmansteve | <urn:uuid:1bd013ef-c212-4fe8-8f8e-6479d3dc1826> | CC-MAIN-2017-09 | https://fcw.com/articles/2008/07/24/kelman-the-impact-of-pay-disparities.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00402-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.955567 | 635 | 2.609375 | 3 |
Through the use of inter-host mirroring and replication they can still provide many of the key features of virtualization, but there are some problems: You need a complete second copy of a virtual machine (VM) on another host, you are limited to only that second host for failover or migration (unless you make multiple copies), and there is CPU consumption required of the second target VM. Essentially, you double your VM count and the resources those VMs require. In a resource-constrained environment, this could be a problem.
Vendors are trying to deliver other solutions that keep the cost, simplicity, and performance advantages of local storage solutions but that still provide VM flexibility and efficiency. One approach is the SAN-Less SAN.
[ For more on shared vs. local storage, see Is Shared Storage's Price Premium Worth It? ]
The SAN-Less SAN is actually another form of shared storage, but the storage is in the physical hosts of the virtual infrastructure instead of on a dedicated shared storage system. Each host is equipped with hard drives or Flash SSD storage, and as data is being stored it is written across each host in the infrastructure--similar to how data is written across the nodes of a scale-out storage cluster.
Redundancy is achieved by using a RAID-like data striping technique so that failure of one host or the drive of one host does not crash the entire infrastructure. As in traditional RAID, the redundancy is provided without requiring a full second copy of data. Also, it is not uncommon for the disks in each node to themselves be RAIDed via a RAID card inside the server.
This technique of striping data across physical hosts provides the VM flexibility. All the hosts can get to the VM images, so a VM can be migrated in real time to any host.
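As a rough sketch of how such a placement could be modelled (a hypothetical illustration, not any vendor's implementation; the host names, chunk size and single XOR parity chunk per stripe are all assumptions), each stripe's data chunks land on different hosts while a rotating host holds the parity chunk, so the loss of any one host leaves enough information to rebuild its chunks:

```python
def xor_chunks(chunks):
    """XOR a list of byte strings together (shorter chunks are zero-padded)."""
    out = bytearray(max(len(c) for c in chunks))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

def stripe_with_parity(data, hosts, chunk_size=4096):
    """Place a VM image across hosts, RAID-5 style: N-1 data chunks plus
    one XOR parity chunk per stripe, with the parity host rotating."""
    n = len(hosts)
    placement = {h: [] for h in hosts}
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for s in range(0, len(chunks), n - 1):
        stripe = chunks[s:s + n - 1]
        parity_host = hosts[(s // (n - 1)) % n]     # rotate parity across hosts
        data_hosts = [h for h in hosts if h != parity_host]
        for host, c in zip(data_hosts, stripe):
            placement[host].append(c)
        placement[parity_host].append(xor_chunks(stripe))
    return placement

layout = stripe_with_parity(b"x" * 100_000, ["host-a", "host-b", "host-c", "host-d"])
```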
One downside of the SAN-Less SAN approach is that you lose the performance advantage of pure local storage since parts of the data must be pulled from the other hosts. From a performance perspective, you have essentially created a SAN.
As discussed in my article, Building The SAN-Less Data Center, some vendors are merging features of local storage with this SAN-Less technique to bring the best of both worlds. These vendors are keeping a copy of each VM data local to the host on which it is installed in addition to replicating the VM’s data across the host nodes. The value of this technique is that the VM gets local performance until it needs to be migrated. A second step in migration allows the newly migrated VM to have its data rebuilt on its new host, restoring performance. This is especially intriguing if the local data is PCIe Solid State Disk.
Of course, nothing is perfect, and the network that interconnects these hosts must be well designed. There is also some host resource consumption as the software that runs the data replication on each host does its work. However, that consumption should not be as significant as a host loaded down with target VMs in the mirroring/replication example discussed in my last column. Finally, the type of hard disks and solid state disks used in the hosts in a SAN-Less SAN must also be carefully considered.
Despite the advantages of local storage and SAN-Less SANs, shared storage is far from dead. In my next column, I will look at local storage vs. SANs.
Even small IT shops can now afford thin provisioning, performance acceleration, replication, and other features to boost utilization and improve disaster recovery. Also in the new, all-digital Store More special issue of InformationWeek SMB: Don't be fooled by the Oracle's recent Xsigo buy. (Free registration required.) | <urn:uuid:0716f587-094f-4024-a699-fa3f4aa23440> | CC-MAIN-2017-09 | http://www.networkcomputing.com/storage/storage-only-looks-san/908683278?piddl_msgorder=asc | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00578-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.939629 | 747 | 2.578125 | 3 |
Upon first glance, a utility meter might seem like the furthest thing from a security threat than you could imagine. After all, what harm could come from a device that measures the amount of electricity or gas your building consumes?
The reality is, however, that in today’s ultra-connected world, this type of naive thinking could actually lead to a serious disaster. That’s because utility meters, like the majority of other devices currently on the market, are all ultimately linked to a much larger global network.
Unfortunately, many of these devices are rife with security loopholes that can be exploited by criminals and used to facilitate much larger attacks on utility companies, government, healthcare systems and other critical infrastructure.
While a single device such as a utility meter, machine or access point might not seem like much of a security concern, a criminal who gains illegal control of it can get a foot in the door to a much larger operation and compromise critical infrastructure systems such as a power grid, potentially leading to loss of life or cascading system failures.
One type of security measure that every organization should be leveraging as protection against network device-related espionage is the public key certificate, more commonly referred to as a digital certificate.
Not easily cloned, a digital certificate is a strong identity that uniquely identifies the device. The certificates help protect the identities of computers, machines or devices that interact with critical infrastructure, cloud-based services, mobile platforms and network infrastructure. This prevents a third party from using a spoofed identity to manipulate the network and gain access to it.
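As a rough illustration of how that identity check works in practice, the short sketch below verifies that a device certificate really was signed by a trusted issuer. It is a hypothetical example, not Entrust's implementation: the file names are invented, it assumes an RSA-signed certificate and the third-party Python cryptography package, and a real deployment would also check validity dates, revocation status and the full certificate chain.

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

with open("issuer_cert.pem", "rb") as f:      # the issuing authority we trust
    issuer_cert = x509.load_pem_x509_certificate(f.read())
with open("device_cert.pem", "rb") as f:      # the identity presented by the device
    device_cert = x509.load_pem_x509_certificate(f.read())

# Verify the issuer's signature over the device certificate body.
# Raises an exception if the certificate was not issued by this authority.
issuer_cert.public_key().verify(
    device_cert.signature,
    device_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    device_cert.signature_hash_algorithm,
)
print("Device certificate was signed by the trusted issuer")
```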
Digital certificates are issued by an independent third-party certification authority (CA) for the purpose of providing an independent source of public key authentication. For more information on how you can leverage the protection of digital certificates in your enterprise, please visit www.entrust.com/enterprise. | <urn:uuid:c74634c5-b74f-478d-9244-4058b88a1e7a> | CC-MAIN-2017-09 | https://www.entrust.com/digital-certificates-strengthening-security-enterprise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00578-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948313 | 380 | 2.5625 | 3 |
Exploit:Java/Majava.A identifies malicious files that exploit vulnerabilities in the Java Runtime Environment (JRE).
Java exploits typically target known vulnerabilities in the Java Runtime Environment; to prevent successful exploitation, please ensure you install the latest updates available for Java and/or remove any old, unnecessary installations.
To ensure you have the recommended version of Java installed on your system, please refer to the vendor's Verify Java version page.
F-Secure Anti-Virus will automatically clean the relevant files.
Suspect A False Positive?
If you suspect a file has been wrongly identified by this detection (that is, it is a False Positive), you may elect to submit a sample of the file to our Labs for further analysis via:
Exploit:Java/Majava.A is a Generic Detection that identifies exploit files used to target and exploit vulnerabilities in the Java Runtime Environment (JRE).
If successfully used, exploits can provide an attacker with a wide range of possible actions, from viewing data on a restricted-user database to almost complete control of a compromised system.
The exploit files may be delivered by other malware, such as the Blackhole exploit kit.
For more information, please see: | <urn:uuid:d7f2cc51-a52b-4a9d-8c1e-f627d13bdeb4> | CC-MAIN-2017-09 | https://www.f-secure.com/v-descs/exploit_java_majava_a.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00278-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.833207 | 249 | 2.53125 | 3 |
An international team of researchers claim to have uncovered a way to enhance padding oracle attacks against cryptographic hardware such as RSA SecurID 800 authentication tokens to enable hackers to access encryption keys.
However, an executive with EMC's RSA security division dismissed the attack strategy as impractical.
Padding oracle attacks attempt to trick an oracle, such as a server, into leaking data about whether the padding of an encrypted message is correct. The research, which will be presented at the Crypto 2012 conference in Santa Barbara, Calif., in August, builds on previous research into attacks on the PKCS#1 v1.5 encryption standard.
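To see why a simple yes/no padding check can leak secrets, it helps to look at what such an oracle tests. The sketch below is a simplified, hypothetical illustration, not the researchers' code: under PKCS#1 v1.5, a decrypted encryption block must begin with the bytes 0x00 0x02, contain at least eight non-zero padding bytes, and then a zero byte separating the padding from the message. Bleichenbacher's attack works by submitting many modified ciphertexts and using only this true/false answer to gradually narrow down the plaintext.

```python
def pkcs1_v15_padding_ok(decrypted: bytes, key_len: int) -> bool:
    """Return True if a decrypted block is valid PKCS#1 v1.5 encryption padding.
    Leaking just this boolean to an attacker is what creates a padding oracle."""
    if len(decrypted) != key_len:
        return False
    if decrypted[0] != 0x00 or decrypted[1] != 0x02:
        return False
    try:
        separator = decrypted.index(0x00, 2)   # first zero byte after the header
    except ValueError:
        return False
    return separator >= 10                     # at least 8 bytes of random padding
```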
According to a paper released by the team, their modified version of the Bleichenbacher RSA PKCS#1v1.5 attack in many cases allows the "million message attack" to be carried out with a few tens of thousands of messages or even fewer.
"We have implemented and tested this and the Vaudenay CBC attack on a variety of contemporary cryptographic hardware, enabling us to determine the value of encrypted keys under import," the researchers wrote. "We have shown that the way the C UnwrapKey command from the PKCS#11 standard is implemented on many devices gives rise to an especially powerful error oracle that further reduces the complexity of the Bleichenbacher attack. In the worst case, we found devices for which our algorithm requires a median of only 3,800 oracle calls to determine the value of the imported key. Vulnerable devices include eID cards, smartcards and USB tokens."
Other devices affected by the attacks include Siemens CardOS and Aladdin eTokenPro. The attack comes with some caveats. For one, it does not reveal the private half of the key used for encryption. The attacks also do not reveal the seed values used to generate one-time passwords on RSA tokens.
In a FAQ on the paper, the team explained that their modified Bleichenbacher attack reveals plaintext that in the context of the PKCS#11 UnwrapKey command is a symmetric key. The same attack can also be used to forge a signature, they said.
"The Vaudenay CBC attack may reveal either a symmetric key or a private RSA key if it has been exported from a device under a symmetric cipher like AES using CBC_PAD," the researchers wrote in the FAQ.
For their part, EMC's RSA security division was critical of the paper.
"The vulnerability outlined by the researchers makes it possible (however unlikely) that an attacker with access to the user s smartcard device and the user s smartcard PIN could gain access to a symmetric key or other encrypted data sent to the smartcard," blogged Sam Curry, chief technology officer of RSA's Identity and Data Protection unit. "It does not, however, allow an attacker to compromise private keys stored on the smartcard. Repeat, it does not allow an attacker to compromise private keys stored on the smartcard."
"This is not a useful attack," he continued. "The researchers engaged in an academic exercise to point out a specific vulnerability in the protocol, but an attack requires access to the RSA SecurID 800 smartcard (for example, inserted into a compromised machine) and the user s smartcard PIN. If the attacker has the smart card and PIN, there is no need to perform any attack, so this research adds little additional value as a security finding.
An RSA spokesperson told eWEEK that since 2002, RSA has cautioned customers to discontinue using PKCS#1v1.5 in favor of the more secure PKCS#1 v2.0 standard. Curry advised organizations to use PKCS#1 v 2.0 with Optimal Asymmetric Encryption Padding (OAEP) in applications that require encryption. | <urn:uuid:5e072f56-b057-46a3-b7b3-f303b72eb28d> | CC-MAIN-2017-09 | http://www.cioinsight.com/security/rsa-dismisses-researchers-securid-attack-claims | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00454-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925249 | 778 | 2.5625 | 3 |
The dozens of deaths that marred the recent Nigerian elections would be considered shocking by the standards of most developed nations. Compared to past elections, however, the violence this time around was limited, and many observers say social media and technology such as biometric card readers played a big role in minimizing conflict.
Online services are credited with keeping people informed during the runup to the elections, promoting the feeling they could communicate and express their views without resorting to violence, and other technology helped to ensure cheating would be kept to a minimum. Nigeria’s experience suggests that tech can play a role in reducing election-related violence in other countries.
The presidential and parliamentary elections were the most peaceful in Nigeria since the nation embraced democracy in 1999. The winner of the presidential election, former military leader Muhammadu Buhari of the All Progressives Congress, will officially take over from incumbent Goodluck Jonathan, of the People’s Democratic Party, at the end of May. It’s the first time a sitting Nigerian president has lost a bid for re-election.
“I do believe that the capacity for social media to connect and inform helped Nigeria conduct a free and fair election and helped to keep violence to a minimum,” said Michael Best, an associate professor at the Georgia Institute of Technology in Atlanta, via email. With Thomas Smyth from Sassafras Tech Collective, a worker-owned tech co-op in Ann Arbor, Michigan, Best published a qualitative dual case study, “Tweet to Trust: Social Media and Elections in West Africa,” about social media use during the general elections in Nigeria and Liberia in 2011.
Nigeria, in West Africa, is the continent’s most populous nation and has 82 million users of GSM-based mobile phones. A recent Mobile Africa 2015 study conducted in Kenya, Nigeria, Uganda, Ghana, and South Africa by GeoPoll and World Wide Worx indicates that Internet access via phones is on the rise in Africa, especially for Facebook use, which stands out as the most common phone activity among the countries surveyed.
While Facebook was the most visible platform for sharing views and information during the Nigerian electoral season, several election-related Twitter handles were created, including hashtags like #NigeriaDecides, which later became #NigeriaHasDecided.
Google, meanwhile, created a site as a one-stop resource, containing voting information and news relating to the elections, offering content including videos and other digital media for view on desktops, tablets and mobile phones.
Elections in Africa have always been tumultuous. Almost a thousand people died in the post-election period after Nigeria’s last general elections in 2011; about 3,000 people died in the Ivory Coast elections in 2010, for which its former president is still on trial; and Kenyan President Uhuru Kenyatta is answering to charges involving the 2007 post-election crisis during which hundreds died.
This year, while the election-related death toll in Nigeria was nowhere near that of four years ago, the country was far from tranquil during election season. Violence caused the government to postpone the presidential election from February to March 28. Elections for senators and representatives were held two weeks later.
The National Human Rights Commission reported that in February it had “received reports of and documented over 60 separate incidents of election-related violence from 22 states spread across the six geo-political zones of Nigeria, resulting in which 58 persons have so far been killed and many more injured.” In addition, various reports at the time put the death toll due to attacks by radical Islamic group Boko Haram at 39, though there was no direct link to the elections.
There have not been official reports on election-related death since March, though the All Progressives Congress (APC) claimed earlier this month that 55 of its members had been killed in election violence before the Rivers state governorship election.
There is a limit to what can be achieved through social media and other technology, observers acknowledge.
“Of course, these technologies are not silver bullets nor do they always contribute to positive elements within a democracy,” noted Georgia Tech’s Best. But during the recent Nigerian elections, “our experience monitoring social media over our media aggregation platform, named ‘Aggie,’ demonstrated the power of these technologies can be used for good.”
Other experts agree. Through social media and mobile phone usage, a “new type of engagement and advocacy became possible in Nigeria,” said Adeola Oyinlade, a Nigerian lawyer and human rights expert.
Very practical information was shared via social media, Oyinlade noted. For example, card readers were provided by Nigeria’s Independent National Electoral Commission (INEC) to read identity cards issued to ensure, among other things, that people could not vote under assumed names. Some people had trouble scanning their cards in the readers, and learned through social media that a seal on the cards had to be removed so they could be read properly, Oyinlade said.
Such use of technology augurs well for other countries as well, Oyinlade said.
“People from African countries going to polls this year can ride upon innovative mobile technological advancement and the efficacy of social media to launch a bottom-up popularization of political participation among people and expand the frontiers of democracy,” Oyinlade said.
African countries holding elections later this year include West African nations Benin, the Ivory Coast, Guinea and Burkina Faso. To be effective, however, technology needs to be coupled with government policies promoting the flow of information, observers point out.
In the Nigerian elections, “I do believe that the use of technology played a major role and will continue to do so,” said Nnenna Nwakanma, Africa regional coordinator of the World Wide Web Foundation, which promotes affordable and uncensored access to the Internet. “However, for information to openly flow, there are policy underpinnings. Nigeria as a country has a FOI (Freedom of Information Act) and INEC practiced open data. In the case of other West African countries, these policy framings are missing and we may be hoping too much in expecting a Nigerian scenario.”
Nigeria itself may continue to develop technology to promote peaceful elections. The Nigerian Society of Engineers, for example, has called for the deployment of electronic voting systems using software developed locally by NigComSat, the country’s satellite communications agency.
Electronic voting systems, however, are not a panacea, Georgia Tech’s Best noted.
“Electronic voting systems can be beneficial if correctly designed and deployed but too often they are actually detrimental due to lack of smart engineering and weak deployments,” Best said. “Across many parts of the United States, for instance, badly designed e-voting machines have actually reduced the transparency and accountability of that nation’s elections,” Best noted. Critics of voting machines in the U.S. blasted the lack of a voter-verified paper audit trail in various electronic systems.
Technology can not be expected to resolve all election-related problems, for any country. Nigeria’s experience over the last few months, though, shows that social media and other technology can help light the way toward a more peaceful, democratic future for developing nations. | <urn:uuid:cbc14548-8fba-4312-9c42-d354c34e633a> | CC-MAIN-2017-09 | http://www.itnews.com/article/2916836/social-media-helps-curb-nigerian-election-deathtoll-paving-future-path.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00630-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.965901 | 1,515 | 2.578125 | 3 |
A Collaboration: Network Coding + Reliability
In 2009, Robert Calderbank, then a professor at Princeton, received an Air Force grant to help solve a difficult communications problem: how to make wireless networks more reliable and thus avoid the communications failures that can result from packet loss. For the Air Force, such failures, occurring during critical situations, have the potential to be life-threatening.
Packet loss has long been a big problem on wireless networks where interference, physical obstacles, and distance between network devices all contribute to the problem. A second problem is the limited capacity of wireless networks and the resulting inefficiencies. Over the years, solving the problems of reliability and efficiency has been the focus of much effort and research.
Calderbank himself had long been investigating the problem. A past vice president of AT&T Research with a background in computational mathematics, he focused in particular on coding theory, both to correct for noise (which can interfere with packet delivery) and to compress data for greater efficiency. He was also aware of how others were approaching the same problems.
The problem of reliability
The reliability problem exists because wireless networks are inherently lossy and constrained. In case of packet loss, TCP (the transport protocol that ensures a packet makes it to its destination) depends on end-to-end retransmission to recover lost packets. While this mechanism is sufficient for wired networks where there is little packet loss, it is not optimal for wireless networks, which are lossy.
When wireless networks came into use, TCP’s inability to operate efficiently under significant packet loss was a big problem, and end-to-end retransmission (where the packet is re-sent from the source), TCP’s sole mechanism for recovering a lost packet, can result in substantial reduction in throughput.
TCP is ill-suited for lossy wireless networks in another way: TCP interprets all packet loss as a sign of congestion—whether or not the cause is overburdened links or errors in the channel—and will reduce the transmission rate. While this response is appropriate for reducing congestion, it doesn’t solve the problem of wireless error-caused packet loss and in fact may result in links that sit underutilized while TCP attempts to alleviate non-existent congestion.
One way to avoid the inefficiency of underutilized links is to implement Explicit Congestion Notification (ECN) in TCP. ECN, which K.K. Ramakrishnan of AT&T Research has long proposed (K.K. Ramakrishnan, Raj Jain, "A Binary Feedback Scheme for Congestion Avoidance in Computer Networks with a Connectionless Network Layer") may be used to disambiguate between congestion-caused packet loss and wireless channel errors. Any packet loss not correlated with ECN could therefore be interpreted as due to wireless channel errors.
The possibilities of network coding
The problem of inefficiency is also addressed by network coding, a new way of forwarding packets that was developed independently over the last decade. In network coding, intermediate nodes within the network combine several packets into one coded packet so more packets can simultaneously use the same link without competing for resources. (Packets are combined using XOR or other linear operations; see side bar.)
The coded packet can then be decoded by the receiving node with the help of additional packets or side information. In a wired network, this information is relayed to receiving nodes in separate packets routed over less-congested paths in the network.
In wireless networks, this side information comes for free, thanks to the broadcast and overhearing capabilities inherent in a wireless medium. Because nodes can overhear one another’s transmissions, they can exchange decoding information without additional overhead.
Since intermediate nodes must know what packets the receiving nodes have overheard to know what new packet to send, receiving nodes must communicate their buffer state upstream to access nodes, often by appending this information on their own transmissions. This need to constantly communicate with other nodes adds both communication and computational overhead to the network coding approach.
But the advantage gained is that packets from different flows, which used to compete for scarce resources in bottleneck links, can now share and better utilize the same resources.
In an environment as resource-constrained as wireless networks, this sharing of resources is important.
Coding packets from different flows is known as inter-session coding, while coding packets within the same flow is known as intra-session coding. Inter-session coding improves efficiency (different flows can share the same bottlenecked links). Intra-session coding can add redundancy to a flow (by adding linear combinations of packets) and thus improve reliability in the presence of loss.
Forming a MURI
Calderbank saw network coding and the gain in efficiency as a way to build a better network for the Air Force. But network coding is still a relatively new approach with solutions that are yet to be widely tested in practice; many aspects still need to be worked out. Implementing network coding would require expertise in networking, information theory, algorithms, and network protocols.
Calderbank in late 2009 formed a Multidiscipline University Research Initiative (MURI) to assemble the needed expertise. A MURI is a well-established framework for university collaborations with guidelines for sharing resources and funding.
"Our project is organized around the idea that transformational change in network management will require extraordinary interdisciplinary breadth," Calderbank said at the time, "in particular the infusion of fundamentally new mathematical ideas from algebraic topology and compressive sensing.”
For networking and application of network coding, he approached three researchers all working independently on different problems associated with network coding on wireless networks: Christina Fragouli, Suhas Diggavi, and Athina Markopoulou, faculty at l’École Polytechnique Fédéral de Lausanne, University of California at Los Angeles, and University of California at Irvine, respectively.
Fragouli and Diggavi were mainly working on theoretical problems and algorithms while Markopoulou along with her graduate student Hulya Seferoglu (also at UC Irvine) was focused more on the practical matter of implementing network coding and integrating it with TCP and other protocols. Specifically, Markopoulou and Seferoglu’s prior work studied inter-session network coding and its cross-layer optimization with TCP ("Network Coding-Aware Queue Management for TCP Flows over Coded Wireless Networks").
With a team in place for the network coding component, the issue of reliability remained, and for this Calderbank arranged for MURI members to attend a kickoff meeting in 2009, held at AT&T Research (Florham Park, NJ) where reliability for wireless networks is very much a practical engineering problem. From AT&T researchers working on the problem, MURI members heard first hand about the latest network research and which methods had the best chance to be deployed in the near future. (It was a homecoming of sorts. Several MURI members had at one time worked at AT&T Research. )
The meeting was followed by a workshop at UCLA in January 2010; this meeting focused specifically on network coding. Also attending was K.K. Ramakrishnan of AT&T Research.
Ramakrishnan had been working to improve the reliability of IP protocols over wireless networks. This work was in collaboration with researchers from the Rensselaer Polytechnic Institute (RPI)—located in Troy, NY—set up through AT&T Research’s VURI program (Virtual University Research Initiative), which facilitates collaborations between AT&T researchers and universities.
(For AT&T Research, collaborations with universities play an important role, since students have the time and inclination to fully and deeply investigate fundamental problems. University collaborations enable AT&T to expand research efforts, while students are given practical problems to solve—and often a ready-made thesis topic—along with the chance to work with experts in their field. Working with AT&T offers one more advantage, access to the tremendous amounts of network data maintained by AT&T.)
The collaboration between Ramakrishnan and RPI, in place since 2005, had been looking to ensure reliability through redundancy by appending extra packets to each transmission. Each redundant packet contains enough information to replace any one lost data packet. The mechanism employed is forward error correction (FEC) using Reed-Solomon to encode information from a fixed-length block of packets.
In FEC, redundant packets can take the place of any lost packets
If a loss occurs in the block, the receiving node uses one of the redundant packets to reconstruct the lost information in the decoding process. Therefore, the loss of a data packet doesn’t matter if there is a redundant packet to replace it. It’s the total number of packets received that counts; if a receiving node expects eight packets and receives six data packets and two redundant packets, it’s received the requisite eight.
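As a toy illustration of the idea (using a single XOR parity packet purely to keep the sketch short, rather than the Reed-Solomon coding described above), the following shows how one redundant packet can stand in for any one lost data packet in a block:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(block):
    """One redundant packet: the XOR of every data packet in the block."""
    parity = bytes(len(block[0]))
    for pkt in block:
        parity = xor_bytes(parity, pkt)
    return parity

def recover_missing(received, parity):
    """Rebuild the single packet marked as None from the survivors plus parity."""
    rebuilt = parity
    for pkt in received:
        if pkt is not None:
            rebuilt = xor_bytes(rebuilt, pkt)
    return rebuilt

block = [b"pkt-1111", b"pkt-2222", b"pkt-3333"]
parity = make_parity(block)
assert recover_missing([b"pkt-1111", None, b"pkt-3333"], parity) == b"pkt-2222"
```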
Since the coding and decoding is done at the end nodes (unlike network coding where the coding is done at intermediate nodes within the network), FEC is sometimes referred to as source coding.
Redundant packets add overhead (there’s more to transmit after all) but because FEC adds reliability, there are far fewer retransmissions. Ramakrishnan and his RPI collaborators further increase efficiency by “right-sizing,” or varying, the amount of redundancy depending on the reliability of the link, using more redundancy for unreliable links and less for reliable ones.
The theoretical meets the practical
Calderbank’s hunch was that Ramakrishnan’s practical work on transport reliability would complement Markopoulou’s work on network coding for better efficiency, and that combining the two methods would yield a more reliable and efficient network.
Seferoglu and Markopoulou at UC Irvine had looked at the interaction of network coding with TCP flows, but had not evaluated adding redundancy. But the team realized that the combination of network coding and packet-level FEC gracefully solved two problems at the same time: first, it reduced loss while improving the efficiency so critical to wireless channels, and second, it simplified network coding by making it unnecessary for nodes to constantly track which packets were transmitted, which were overheard, and by which nodes. When network coding is combined with packet-level FEC, all that is needed to determine the amount of redundancy is a simple percentage (of packets lost).
With the plan set, and with Ramakrishnan agreeing to advise the MURI on the FEC scheme and network architecture, work began in earnest in April 2010 and Seferoglu began working full time on the project.
Progress to date
The initial months were spent resolving the inconsistencies that inevitably occur when combining two methods, each separately evolved.
One of the first was how to handle the burstiness of TCP flows when inter-session network coding depends on a similar number of similar-sized packets from different flows. This was resolved by modifying active-queue management schemes in a way that works best in conjunction with TCP congestion control and wireless network coding, building on prior work at UC Irvine (see here).
Most of the work integrating TCP and network coding was done in the summer of 2010, when Seferoglu worked at AT&T Research under the supervision of Ramakrishnan.
The issue of loss was also especially complicated in network coding because loss affects not only the direct links but the overhearing links as well (overhearing depends on good, error-free links). Because the performance of network coding declines on lossy networks, decisions have to be made on what percentage and which flows should be coded together (inter-session network coding) and how much redundancy (in this case in the form of intra-session network coding) is required.
Working through these and other problems resulted in a novel unifying scheme, I2NC, which builds on top of one-hop constructive network coding (COPE) and combines inter-session coding with intra-session redundancy. The team also designed a thin layer between TCP and I2NC, which codes/transmits and acknowledges/decodes packets at the sender and receiver nodes in such a way as to make I2NC operations transparent to the TCP protocol. The benefits of I2NC include: bandwidth efficiency (thanks to inter-session coding); resilience to loss (thanks to intra-session redundancy); and reduced protocol overhead (setting the nodes free from the need to communicate with one another and exchange information about which packets they have overheard). A paper on I2NC ("Inter- and Intra-Session Network Coding") has just been accepted at the IEEE Infocom 2011 conference.
The next step is implementation, and with an offer of help from the Air Force Office of Scientific Research (AFOSR)—specifically the Operations Integration branch at Rome, NY—the MURI team has just started implementation at the AFOSR Emulab testbed.
The next steps
The collaboration is now approaching its one-year mark, and the indications are that network coding with the FEC redundancy will provide both efficiency and reliability in wireless networks, and in a simpler way than was previously thought.
A real test will come as the team begins implementing I2NC on Android smartphones to determine whether I2NC is feasible on devices with limited resources, something that would be very hard to do when nodes were required to track overheard packets.
Certainly implementing network coding on smart phones was not foreseen at the beginning of the project, and it was only by collaboratively fusing two different methods that the necessary gains in efficiency were achieved.
Calderbank is pleased with the way the collaboration is going:
"When we held our workshop at UCLA we discovered two different approaches to improving the rate and resilience of Air Force communications and we started to explore whether the benefits were additive. I am delighted to see that they are additive and that collaboration with AT&T Labs is accelerating the transfer of technology to the Air Force.”
XORing two packets
Access nodes and other network devices see a packet as a string of 0s and 1s. In network coding, the bit strings of two packets are combined using the exclusive OR logical operation, or XOR (symbol ⊕).
XORing assigns a “1” if two bits are different, “0” if the same.
The idea for bit-wise XORing packets in this way was proposed in the paper XORs in the Air: Practical Wireless Network Coding. It works like this:
An access node looks at its input queue to find similar-sized packets going to nearby destinations. The packets may be from different sessions.
The access node opens the packets and XORs the two packets’ bit strings to form a new coded packet:
The packet is decoded at the receiving node using information overheard from nearby nodes.
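A minimal sketch of the whole exchange, with made-up packet contents: the relay XORs two equal-length packets into a single coded transmission, and each receiver strips out the packet it has already overheard to recover the one addressed to it.

```python
def xor_packets(p1: bytes, p2: bytes) -> bytes:
    """Combine two equal-length packets into one coded packet."""
    return bytes(a ^ b for a, b in zip(p1, p2))

pkt_a = b"payload for node A"      # heading to node A
pkt_b = b"payload for node B"      # heading to node B

coded = xor_packets(pkt_a, pkt_b)  # one transmission instead of two

# Node A overheard pkt_b earlier, so it can strip it back out:
assert xor_packets(coded, pkt_b) == pkt_a
# Node B overheard pkt_a, so it recovers its own packet the same way:
assert xor_packets(coded, pkt_a) == pkt_b
```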
XORing is the simplest, not the only, method to combine packets (linear combinations are also used) | <urn:uuid:25a544e9-19f4-41de-a4c1-63fe05db976b> | CC-MAIN-2017-09 | http://www.research.att.com/articles/featured_stories/2010_10_slider-stories/201010_collaboration.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00274-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946745 | 3,087 | 2.734375 | 3 |
This week HP Labs distinguished technologist, Parthasarathy Ranganathan, told Stacey Higginbotham that we are working our way from the information age to the insight age. For this shift to happen, however, computing architectures will need to keep pace with analytics, handling storage and massive processing in far more efficient ways.
Processing, storing and analyzing vast amounts of data is getting cheaper, which is certainly a good thing since technology delivers an ever-expanding assortment of new devices and instruments. The problem is, simply throwing more processing power at big data problems isn’t sustainable as it entails a race to pack more transistors on to already energy-hungry chips.
Since the pace of data gathering is outpacing processing capabilities, some are looking to companies like Intel with its 3-D transistor advancement. As Higginbotham notes, however, while this “is cool, it only gets us so far in cramming more transistors on a chip and reducing the energy level needed. For example, a 22 nanometer chip using the 3-D transistor structure consumes about 50 percent less energy than the current generation Intel chip, but less than an Intel chip using the older architecture would at 22 nanometers…And when we’re talking about adding a billion more people to the web, or transitioning to the next generation of supercomputing, a 50 percent reduction in energy consumption on the CPU is only going to get us so far.”
With that in mind, she points to the fact that the DoD estimates that powering an exascale supercomputer would require not one, but two complete power plants. She also slips in the aside that “this is why the folks at ARM think they have an opportunity and why the use of GPUs in high performance computing is on the rise.” | <urn:uuid:d4a02f76-decd-4c7a-8df4-f1b3f07061b1> | CC-MAIN-2017-09 | https://www.hpcwire.com/2011/05/16/enter_the_insight_age/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00274-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.932934 | 372 | 2.640625 | 3 |
A teenager in the Washington suburb of Reston, Va., is seeking help from Twitter and Facebook to track how people are talking about the epidemic of online bullying.
Viraj Puri, 13, has already built a simple heat map that offers a daily catalog of when Tweeters use the terms “bully” or “bullying.” He’s working with researchers at Georgetown University and the University of Wisconsin to expand the heat map to examine when the words are used in a positive and negative light. With help from Facebook and Twitter, Puri said, he and the researchers could fine tune that data to better track the national conversation about bullying.
“We know we can do this, it’s just a matter of gaining access to the data, which would cost hundreds of thousands of dollars without help from Twitter and Facebook,” Puri said.
Data from the improved heat map could be used by policy makers to target areas where bullying is especially prominent, Puri said, and parents could look at it when considering a move just as they’d look at the quality of a school district.
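The tally behind such a heat map is simple to sketch. The fragment below is only an illustration of counting daily keyword mentions from (date, text) records; the record format and keyword list are assumptions, not Puri's actual pipeline.

```python
# Illustrative sketch: count posts per day that mention "bully" or "bullying".
from collections import Counter
from datetime import date

KEYWORDS = ("bully", "bullying")

def daily_mentions(posts):
    """posts: iterable of (date, text) pairs -> Counter of qualifying posts per day."""
    counts = Counter()
    for day, text in posts:
        lowered = text.lower()
        if any(keyword in lowered for keyword in KEYWORDS):
            counts[day] += 1
    return counts

sample = [
    (date(2014, 1, 20), "Stop bullying in schools"),
    (date(2014, 1, 20), "Great game last night"),
    (date(2014, 1, 21), "He was called a bully again"),
]
print(daily_mentions(sample))  # one qualifying post on each of the two days
```

Adding the positive/negative distinction the researchers want would mean running a sentiment classifier over each matching post before counting it, which is where large-scale access to the raw Twitter and Facebook data becomes necessary.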
Twitter, where most posts are public by default, sells much of its data to researchers and companies looking to track their reach. Facebook, where most individual posts are restricted to some extent, is much more guarded about its data.
Puri edits the teen bullying blog Bullyvention and has been recognized by the Congressional Anti-Bullying Caucus, chaired by Rep. Mike Honda, D-Calif., a Japanese-American who was placed in an internment camp during World War II.
It’s difficult to track conversations about bullying because the word is often used generically online, Puri said.
“The media world use the term bully and bullying extensively to describe Chris Christie’s actions in New Jersey,” Puri said. “As you can imagine, that has nothing to do with how a 13 year old feels about being bullied at school or on the playground.” | <urn:uuid:913753e9-297b-4244-8967-e8a0b01825aa> | CC-MAIN-2017-09 | http://www.nextgov.com/emerging-tech/emerging-tech-blog/2014/01/teen-uses-data-track-conversation-about-bullying/77102/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00450-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.962836 | 418 | 3.046875 | 3 |
The basic premise of this document is simple: to explain why distributed transactional databases are the Holy Grail of database management systems (DBMS).
The promise of these systems is to provide on-demand capacity, continuous availability and geographically distributed operations. Achieving that with traditional database architectures, however, requires substantial trade-offs in terms of overall effort, cost, time to deployment and ongoing administration. Despite those trade-offs, the traditional offerings have dominated the industry for decades, forcing compromises from start to finish – from initial application development through ongoing maintenance and administration.
The three traditional architectures, and NuoDB's alternative, are:
- Shared-Disk Databases
- Shared-Nothing Databases
- Synchronous Commit (Replication) Databases
- New DDC Architecture Offers Comprehensive Solution | <urn:uuid:f79aeedb-3eae-4b48-a60e-bb3184510c4b> | CC-MAIN-2017-09 | http://www.dbta.com/DBTA-Downloads/WhitePapers/What-is-a-Distributed-Database-And-Why-Do-You-Need-One-4418.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171166.18/warc/CC-MAIN-20170219104611-00626-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.885117 | 154 | 2.609375 | 3 |
It beggars belief, but it appears that Elon Musk's Hyperloop is actually going to be built. The first test track will only be five miles long, and it won't operate at the supersonic speeds that Musk envisioned, but still, it's coming—Musk's "cross between a Concorde, railgun, and an air hockey table" really is coming.
Back in January, Elon Musk said that he planned to build a Hyperloop test track "soon" and that Texas was "the leading candidate." Curiously, nothing more has been said by Musk on the matter since. Then, in February, Hyperloop Transport Technologies (HTT)—an organisation that is unaffiliated with Musk—said that it had struck a deal to build a five-mile Hyperloop in California.
HTT is a research company that was founded soon after Musk's original Hyperloop thesis was published in 2013. The structure of HTT is somewhat interesting: it has employees, but it also uses crowdsourced engineering talent from across the US that is being paid in stock options. The CEO is a guy called Dirk Ahlborn, who founded JumpStartFund—an online platform that helps crowd-powered projects get built; basically, he took his own service and used it to build HTT.
Back in February, HTT said that it would attempt to raise $100 million through an initial public offering to fund construction of the five-mile (8km) test track. Agreements have been secured for a test track to be built near Quay Valley in California, in between San Francisco and Los Angeles. Construction is scheduled to begin in 2016 and complete in 2017.
A Hyperloop track consists of two tubes, affixed to above-ground pylons. Inside the tubes are pods, which can contain humans, livestock, cargo, etc. The tubes are partially evacuated by vacuum pumps, which in turn reduces drag and allows the pods to move at high speeds without consuming too much energy. (Elon Musk suggested that the power requirements would be so low that the Hyperloop could be powered by solar panels on the topside of the tube, though it's unlikely that HTT will go that way with the test track.)
Propulsion is provided via linear induction: magnets on the outside of the pod and the inside of the tube repel each other, pushing the pod forward. (That's the railgun bit.) To reduce rolling resistance, each pod has an air compressor that takes air from the front and ejects it through holes in the bottom. (That's the air hockey table bit.) For more technical details, see our original story on Musk's Hyperloop proposal.
Speaking to National Geographic, Ahlborn gave a few more details about HTT's deployment of Musk's Hyperloop tech. There will be a variety of different pods, travelling at speeds ranging from 200 to 300 miles per hour (320 to 480 km/h). “Maybe in one capsule, people would like to feel the speed a bit more and then for the 80-year-old, it’s a little softer and slower," Ahlborn said.
These speeds are far short of Musk's proposed 760mph, of course, but still a lot faster than existing US railways—and really, that seems to be the main point of Hyperloop in the first place. California is currently planning to build a high-speed rail link between Los Angeles and San Francisco at a cost of around $70 billion (~£45 billion). In Musk's original thesis, he postulated that a Hyperloop run between the two cities would only cost between $6 and $10 billion. HTT says it can't quite hit Musk's estimate, but that it could do it for around $16 billion, which is still pretty good.
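Some rough arithmetic puts those speeds in context. Assuming a roughly 380-mile Los Angeles to San Francisco route (an approximation; the actual alignment is not specified), the travel times work out as follows.

```python
# Back-of-the-envelope travel times for an assumed ~380-mile LA-SF route.
# Real trips would add time for acceleration, deceleration and stops.
route_miles = 380
for label, mph in [("HTT pods (slow)", 200), ("HTT pods (fast)", 300), ("Musk's proposal", 760)]:
    minutes = route_miles / mph * 60
    print(f"{label}: {minutes:.0f} minutes at {mph} mph")
# ~114 and ~76 minutes for the HTT pods, ~30 minutes at Musk's 760 mph
```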
Obviously, big questions remain about Hyperloop. Will the pylons that carry the tubes be able to withstand California's rather large and regularly occurring earthquakes? At such high speeds, the tubes will need to follow mostly straight paths, otherwise passengers will be subjected to stomach-churning forces—and in the US, there are lots of mountains, hills, and other topological quixotics that will be hard to build around. What happens if someone shoots a hole in a partially evacuated tube, anyway?
Many of those questions will hopefully be answered by the test track, though not all. "Unfortunately for us, it’s impossible to test everything out on a small scale,” Ahlborn said to National Geographic. To test whether Hyperloop can actually go supersonic, a longer track will be needed—and that's when things start to get difficult. Raising $100 million is one thing; raising $1 billion for a relatively unknown and immature technology is another. Ahlborn said that the first long Hyperloop might be built outside the US, perhaps in Singapore or Dubai, where there's a lot of money and "less politics."
Following "extensive safety testing," passengers may be allowed to ride HTT's Hyperloop test track in 2018.
This post originated on Ars Technica UK | <urn:uuid:9ad5017f-baa2-4bf1-8950-81862160bfaf> | CC-MAIN-2017-09 | https://arstechnica.com/cars/2015/06/elon-musks-hyperloop-is-actually-being-built-in-california-next-year/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00502-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966354 | 1,045 | 2.640625 | 3 |
NIST guide explains cloud in plain terms
The National Institute of Standards and Technology has unveiled a guide that explains cloud technologies in “plain terms” to federal agencies and provides recommendations for IT decision-makers, reports Camille Tuutti in Federal Computer Week.
The newly released Special Publication 800-146, "Cloud Computing Synopsis and Recommendations," repeats the NIST-established definition of cloud computing, describes cloud computing benefits and open issues, and gives insight into various cloud technologies. It also provides guidelines and recommendations on how organizations should weigh the opportunities and risks of cloud computing.
Also aimed at helping federal information systems professionals make better-informed decisions around cloud computing, the guidance gives general how-tos in five areas: management, data governance, security and reliability, virtual machines, and software and applications.
To read Tuutti's full report, click here.
Connect with the GCN staff on Twitter @GCNtech. | <urn:uuid:ce93722f-9abd-423c-a8fb-60275ca1cdf0> | CC-MAIN-2017-09 | https://gcn.com/articles/2012/05/30/agg-fcw-nist-plain-language-cloud-guidance.aspx?admgarea=TC_BigData | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00622-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.868604 | 193 | 2.8125 | 3 |
The big problem facing supercomputing is that the firms that could benefit most from the technology aren't using it. It is a dilemma.
Supercomputer-based visualization and simulation tools could allow a company to create, test and prototype products in virtual environments. Couple this virtualization capability with a 3-D printer, and a company would revolutionize its manufacturing.
But licensing fees for the software needed to simulate wind tunnels, ovens, welds and other processes are expensive, and the tools require large multicore systems and skilled engineers to use them.
One possible solution: taking an HPC process and converting it into an app.
This is how it might work: A manufacturer designing a part to reduce drag on an 18-wheel truck could upload a CAD file, plug in some parameters, hit start and let it use 128 cores of the Ohio Supercomputer Center's (OSC) 8,500 core system. The cost would likely be anywhere from $200 to $500 for a 6,000 CPU hour run, or about 48 hours, to simulate the process and package the results up in a report.
Testing that 18-wheeler in a physical wind tunnel could cost as much as $100,000.
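The quoted figures hang together. A quick back-of-the-envelope check (rates derived only from the numbers above, not from OSC's actual price list) shows how cheap the per-CPU-hour cost is compared with a physical test.

```python
# Sanity check on the quoted figures: 128 cores running ~48 hours is roughly a
# 6,000 CPU-hour job, and $200-$500 for the run implies only a few cents per
# CPU-hour -- versus up to $100,000 for a physical wind-tunnel test.
cores, wall_hours = 128, 48
cpu_hours = cores * wall_hours            # 6144, i.e. "about 6,000 CPU hours"
for price in (200, 500):
    print(f"${price} run -> ${price / cpu_hours:.3f} per CPU-hour")
```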
Alan Chalker, the director of the OSC's AweSim program, uses that example to explain what his organization is trying to do. The new group has some $6.5 million from government and private groups, including consumer products giant Procter & Gamble, to find ways to bring HPC to manufacturers via an app store.
The app store is slated to open at the end of the first quarter of next year, with one app and several tools that have been ported for the Web. The plan is to eventually spin-off AweSim into a private firm, and populate the app store with thousands of apps.
Tom Lange, director of modeling and simulation in P&G's corporate R&D group, said he hopes that AweSim's tools will be used for the company's supply chain.
The software industry model is based on selling licenses, which for an HPC application can cost $50,000 a year, said Lange. That price is well out of the reach of small manufacturers interested in fixing just one problem. "What they really want is an app," he said.
Lange said P&G has worked with supply chain partners on HPC issues, but it can be difficult because of the complexities of the relationship.
"The small supplier doesn't want to be beholden to P&G," said Lange. "They have an independent business and they want to be independent and they should be."
That's one of the reasons he likes AweSim.
AweSim will use some open source HPC tools in its apps, and are also working on agreements with major HPC software vendors to make parts of their tools available through an app.
Chalker said software vendors are interested in working with AweSim because it's a way to get to a market that's inaccessible today. The vendors could get some licensing fees for an app and a potential customer for larger, more expensive apps in the future.
AweSim is an outgrowth of the Blue Collar Computing initiative that started at OSC in the mid-2000s with goals similar to AweSim's. But that program required that users purchase a lot of costly consulting work. The app store's approach is to minimize cost, and the need for consulting help, as much as possible.
Chalker has a half dozen apps already built, including one used in the truck example. The OSC is building a software development kit to make it possible for others to build them as well. One goal is to eventually enable other supercomputing centers to provide compute capacity for the apps.
AweSim will charge users a fixed rate for CPUs, covering just the costs, and will provide consulting expertise where it is needed. Consulting fees may raise the bill for users, but Chalker said it usually wouldn't be more than a few thousand dollars, a lot less than hiring a full-time computer scientist.
The AweSim team expects that many app users, a mechanical engineer for instance, will know enough to work with an app without the help of a computational fluid dynamics expert.
Lange says that manufacturers understand that producing domestically rather than overseas requires making products better, being innovative and not wasting resources. "You have to be committed to innovate what you make, and you have to commit to innovating how you make it," said Lange, who sees HPC as a path to get there.
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed . His email address is firstname.lastname@example.org.
Read more about applications in Computerworld's Applications Topic Center.
This story, "Here comes a supercomputing app store" was originally published by Computerworld. | <urn:uuid:8238c432-0208-4b68-a253-a0bfb1a132cf> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2172304/applications/here-comes-a-supercomputing-app-store.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00198-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966968 | 1,068 | 2.84375 | 3 |
A leading UK professor expects a battle to ensue for the future of travel…with driverless cars leading the way.
Professor John Miles, of the department of engineering at University of Cambridge, was speaking at the Internet of Things Forum in Cambridge yesterday, where he gave a detailed overview of the opportunity for driverless cars as we move to a future likely to be dominated by the shared economy.
In his presentation, entitled “The near future for connected transport…from self-driving cars to the Hyperloop”, the professor outlined opportunity for cars to do more than they are currently, saying that increasing the capacity and complexity of cars could lead to less traffic and smarter travel.
He said that there are currently 224,000 miles of UK road network, but only 10,000 miles of railway network, something he says shows that roads remain a “powerful, existing asset”.
“Maybe we’re too quick to rubbish the car…maybe we should observe what we’ve got here, and ask ourselves if we should be concentrating on making them even better.”
And despite many ‘demonizing’ the car over continually congested roads and high emissions, he said it is invariably cheaper than competitors like the rail and bus, both of which have faults when it comes to capacity, usage and cost.
For example, he compared the M1 with the corresponding railway lines and found that the car-based option costs around £30 million per mile in each direction on the M1, while the railway costs around £50 million per mile in the same area. Furthermore, cars and railways deliver roughly the same number of people (9,000-10,000) in this area, while Miles says that rail upgrades can be expensive.
Miles instead pushes for more ‘headroom’ of strategic road network. He says while capacity is an issue (roads can’t be built quick enough), there is more to be done to reduce minor incidents, and increase lane occupancy.
He believes that current minor incidents (80 percent of which are apparently caused by driver inattention) account for around 30 percent of congestion on all roads. He adds that having cars travel closer together could ultimately lead to four lanes rather than three, representing a capacity increase of 33 percent.
“If we could increase lane occupancy, we could increase the number of people moving down those roads, without any increase in [financial] output.”
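A standard traffic-flow approximation, not Miles's own model and with purely illustrative numbers, shows why tighter headways translate into more vehicles per lane per hour.

```python
# Rough lane throughput: vehicles/hour ~= speed / (vehicle length + headway gap).
# Closely coordinated self-driving cars could safely run with a smaller gap.
def vehicles_per_hour(speed_kmh: float, vehicle_m: float, gap_m: float) -> float:
    speed_m_per_hour = speed_kmh * 1000
    return speed_m_per_hour / (vehicle_m + gap_m)

human_driven = vehicles_per_hour(speed_kmh=100, vehicle_m=4.5, gap_m=55)  # ~2-second gap
automated = vehicles_per_hour(speed_kmh=100, vehicle_m=4.5, gap_m=25)     # tighter platoon
print(f"{human_driven:.0f} vs {automated:.0f} vehicles/hour per lane")
# roughly 1700 vs 3400 -- the kind of headroom Miles describes
```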
“What we need to do is to fill the vehicles we have, not just have big empty vehicles driving around because they are deemed to be ‘good’. What we need is scalable bus…but if I was being cynical I’d say that this is the car.”
He believes in on-demand systems – perhaps part of the shared economy trail-blazed by Uber and Airbnb – and urges us to move away from ‘yesterday’s thinking’ of fixed travel for a fixed group at a fixed moment in time.
The future, he says, is all about spontaneous on-demanding booking service, cloud-based booking and billing, and yet vehicles which still maintain a comfortable and reliable journey that is guaranteed to arrive within a set time-frame.
He says driverless cars are the driver for this and uses his previous example of the M1 to show that the car-based model should in future be able to deliver “six or seven times the amount we can do on the train.”
“This is why we should be interested in self-driving vehicles; it’s a very big step forward.”
Driverless cars improve road capacity
This capacity matching is already being pushed by the UK government and a number of academics. For example, he describes L-SATS as perhaps the closest thing to ‘last mile’ automated devices, with these currently being tested in Milton Keynes and Cambridge too.
There are other examples of driverless cars and other automated vehicles; the Bullet is an electric driverless 120mph vehicle where vehicles couple together (not wholly dissimilar to those imagined in Tom Cruise’s Total Recall), while the Mercedes F015 Luxury in Motion concept car was seen by IoB at MWC.
“This is about all convenience and facility for the user, and its provided by the self-driving car. It’s a whole new dimension to travel, a dimension where we don’t mind being stuck in traffic because we’ve got better things to do. And most of the time we’re not stuck in traffic because the roads are optimised.”
Yet he later suggested, after a question from the audience, that self-driving cars will also always have manual modes for the person to take-over.
“You don’t need to force anybody to do anything; if you want to drive your car you will be able to.”
Yet he tempered his praise for driverless cars by suggesting that they could well face a battle against Elon Musk's (and Tesla's) next great invention – the Hyperloop, a 700mph subsonic train that is aiming to take passengers from London to Birmingham in 12 minutes.
The Hyperloop is in essence a futuristic train that Musk calls “a cross between a Concorde, a railgun and an air hockey table”. It’s based on the very high speed transit (VHST) system proposed in 1972 which combines a magnetic levitation train and a low pressure transit tube. Musk has likened it to a vacuum tube system in a building used to move documents from place to place.
Musk has previously said that all Tesla cars will be autonomous by 2018.
“Hyperloop is a fantastic idea; we’ve done some work on it,” added Miles. | <urn:uuid:0de21779-fc5f-45cb-aed5-95384c7f351d> | CC-MAIN-2017-09 | https://internetofbusiness.com/driverless-cars-a-step-forward-for-smart-travel/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00494-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961992 | 1,205 | 2.53125 | 3 |
This week's 'minutephysics' video tackles the issue of trying to identify "the fourth dimension", why dimensions exist, and why we can't say which one is the first, second, third or fourth. It's often assumed that the fourth dimension is time, but it's more that we live in a three-dimensional world with a fourth "time dimension". Things get funky and fuzzy from there:
However, we do know that there's a "third" dimension, because that's where Homer Simpson went in this famous Simpsons clip: (forgive the Spanish dubbing; Fox still won't let people post Simpsons clips on YouTube)
And finally, I do know that there's a "fifth dimension", as witnessed here:
Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+. | <urn:uuid:1ef7754b-3689-485b-8371-588ed6ba84b3> | CC-MAIN-2017-09 | http://www.itworld.com/article/2728996/virtualization/science-monday--there-s-no--fourth--dimension.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00018-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957075 | 206 | 2.640625 | 3 |
The electronic voting system that has been used in Estonia since 2005 cannot guarantee fair elections because of fundamental security weaknesses and poor operational procedures, according to an international team of security and Internet voting researchers.
The analysis performed by the team's members, some of whom acted as observers during 2013 local elections in Estonia, revealed that sophisticated attackers, like those employed by nation states, could easily compromise the integrity of the country's Internet voting system and influence the election outcome, often without a trace.
The team chose to analyze the Estonian system because Estonia has one of the highest rates of Internet voting participation in the world -- over 21 percent of the total number of votes during the last local election were cast through the electronic voting system.
During their observation of the local elections and by later watching the procedural videos released by the Estonian election authority, the researchers identified a large number of poor security practices that ranged from election officials inputting sensitive passwords and PINs while being filmed to system administrators downloading critical applications over insecure connections and using personal computers to deploy servers and build the client software distributed to voters.
The researchers also used open-source code released by the Estonian government to replicate the electronic voting system in their laboratory and then devised several practical server-side and client-side attacks against it.
To use the Estonian system, voters insert their electronic national ID card into a card reader attached to their computers and use the PINs associated with their ID cards to cast their votes through a special application. The researchers developed malware that can record the PIN numbers and later change the votes while the ID cards are attached to voters' computer for different operations.
The malware can be deployed in different ways, including through online exploits, through existing infections or through man-in-the-middle attacks during the download process. Attackers could also maliciously alter the voting software itself during the build process, if it's created on a personal computer instead of in a controlled environment, the researchers said Monday during a press conference about their findings in Tallinn, Estonia.
The system uses a vote confirmation procedure based on QR codes than need to be scanned by users with their mobile phones after casting their votes. However, a compromised voting application can potentially alter votes and QR codes in real time, meaning this additional verification system can't protect users from sophisticated attackers, the researchers said.
Such false verification attacks have been used in the real world against online banking users, so they're not just theoretical and could easily be applied to Internet voting, they said.
To compromise the electronic voting servers, attackers could either exploit vulnerabilities over the Internet or could target the people responsible for deploying the servers by first infecting their computers and then altering the server software. Because of the lack of security checks and control, a malicious insider could also carry out such attacks, the researchers said.
The research team included J. Alex Halderman, a computer science professor at the University of Michigan who studied electronic voting systems in different countries around the world; Maggie MacAlpine, an advisor on post-election audits in the U.S.; Harri Hursti, a Finnish independent security researcher known for previously demonstrating a successful attack against a Diebold voting machine; Jason Kitcat, who previously led an investigation into electronic voting in the UK for the Open Rights Group, a digital rights organization; and Travis Finkenauer and Drew Springall, two PhD students at the University of Michigan.
"There are so many attack vectors by which you could dirty the machines used to set up the elections that we believe this to be a very credible and viable attack; and we have photographic evidence on our website showing a personal computer with links to poker sites being used to set up the critical election systems [in Estonia]," Kitcat said.
The Estonian election officials should improve their operational procedures, but "we've also shown fundamental flaws in the architecture of the system, which means that we can steal votes remotely from voters' computers and those flaws cannot be fixed quickly or easily," he said.
The researchers said they notified the Estonian National Electoral Committee, as well as political parties, academics and media organizations in Estonia of their findings at the same time on Saturday. The research was presented in greater detail Monday during a press conference and a full report will be made available on a website that also contains other supporting material, including videos and photos.
The Estonian National Electoral Committee declined to comment until it reviews the full report.
The researchers believe the Estonian Internet voting system should be discontinued before the upcoming European Parliament elections on May 25. More generally they believe that building a secure and accurate electronic voting system is not possible with the current technology when taking sophisticated attackers like nation states into consideration. | <urn:uuid:7b928901-0e91-4186-926c-47786fdc8fa6> | CC-MAIN-2017-09 | http://www.cio.com/article/2376362/security0/estonian-electronic-voting-system-vulnerable-to-attacks--researchers-say.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00546-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.96154 | 951 | 2.578125 | 3 |
Between spam, chain emails and the sheer volume of information that passes through many inboxes, email has lost much of the luster it once possessed in the days of America Online and CompuServe.
Now there's something else that's unappealing. Foreign governments targeting consumers' email inboxes, according to a new warning message being issued by Google.
Thousands of users received a warning, viewed in a Gmail inbox, on a Google home page or in the Google Chrome browser, that read “Your account could be at risk of state-sponsored attacks.” Google first created the warning message in June, but it appears to be picking up steam. The emails blocked by Google's new filter may contain links to malicious websites designed to steal personal information or implant malware, or they may contain malicious attachments.
Google has said they will not share how they know that certain attacks are state-sponsored, because it's a matter of security. Mike Wiacek, a manager on Google’s information security team, said that Google saw an increase in state-sponsored activity coming from several different countries in the Middle East, which he declined to name specifically, The New York Times reported.
While Google is refusing to point the finger at any particular nation of origin, the questionable practice of secretly monitoring the populace with software disguised as a crime-fighting tool was recently uncovered by security researchers studying Iran, Qatar, the United Arab Emirates and Bahrain. Not coincidentally, Iran recently ranked worst in the world for Internet freedom, according to a Freedom House report. As a region, the report rated the Middle East as "two percent" free when it comes to the Internet.
Several American banks were hit by cyberattacks last week that reportedly came from the Middle East, The Times reported.
If President Obama's rumored cybersecurity executive order ever comes to fruition, it could prove good publicity for his administration as the issue is now being illuminated to the public in more tangible ways. Congress has yet to make significant progress on drafting legislation protecting national infrastructure from foreign cyberattacks. | <urn:uuid:46543477-ecdd-4e92-afc1-079227e7f590> | CC-MAIN-2017-09 | http://www.govtech.com/security/Google-Warns-Users-of-Middle-Eastern-Cyberattacks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00366-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95362 | 416 | 2.5625 | 3 |
Future smartphones could gain numerous benefits from algorithms to fight interference, developed by a little-known startup in Lawrence, Kansas, that last week drew closer to implementing the technology in devices.
The algorithms are intended to more efficiently cancel out interference among different radios built into the same phone, potentially giving users longer battery life or fewer dropped calls. The same technology might also be used to counter other types of interference, including military jamming and power-transmission signals that hurt powerline networks.
The source of these signal-filtering algorithms isn't a wireless-industry giant but Avatekh, a 2-year-old research company that's working with Kansas State University's Electronics Design Laboratory. They announced last week they have received a National Science Foundation grant to implement and test the technology in hardware.
Radio-frequency interference can hold back network performance, and one source of it is the multiple radios found in many devices. Most smartphones are equipped with Wi-Fi, Bluetooth, GPS and other radio technologies, all of which may be transmitting or receiving signals at the same time in close proximity. Even if they use different bands, the radios in a device may interfere with each other if they're very close together, according to Alexei Nikitin, Avatekh's founder and chief science officer.
Overcoming this interference requires detecting and cancelling out the offending signals. This is often done through digital processing after the signals have been captured and digitized, Nikitin said.
Radio waves traveling through the air or within a device are analog until they're converted into digital signals. Instead of trying to fix the interference after that conversion, Avatekh's algorithms deal with the analog signals directly. This allows them to reduce some forms of interference that can't be corrected at all in the digital realm, and it also eases the computing and energy load on the device. Though there are analog filters in phones already, the new algorithms can outdo them in solving some types of interference that come from other radios, Nikitin said.
A key advantage of Avatekh's algorithms is that they can analyze and mitigate the internal interference in real time, said Tim Sobering, an electrical engineer at Kansas State who is helping to bring the technology onto hardware boards for testing and development. In the digital realm, this kind of work has been limited to a non-real-time process because it requires a huge amount of processing power and energy, Sobering said.
The higher efficiency of the analog algorithms could mean longer battery life. In addition, clearing up interference can raise the signal-to-noise ratio, making networking easier in multiple ways, Nikitin said: Depending on the situation, less noise may make a phone's useful range wider, let a mobile operator get more service out of the same spectrum, and give subscribers a higher data rate than they would otherwise get.
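The article gives no concrete figures, but the general relationship is captured by the Shannon capacity formula: for a fixed bandwidth, a higher signal-to-noise ratio means a higher achievable data rate. The numbers below are illustrative assumptions, not Avatekh measurements.

```python
# Generic illustration: Shannon capacity C = B * log2(1 + SNR) shows how cutting
# interference (raising SNR) lifts the achievable rate on the same channel.
import math

def capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

before = capacity_mbps(bandwidth_mhz=20, snr_db=10)  # interference-limited link
after = capacity_mbps(bandwidth_mhz=20, snr_db=15)   # after 5 dB of mitigation
print(f"{before:.0f} Mbps -> {after:.0f} Mbps on the same 20 MHz channel")
```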
Beyond the wireless world, better interference mitigation could be a boon to powerline broadband, which has to share wires with electric current. The electricity running through powerlines can interfere with the data signals being sent through them, so the network can't run as fast as it might. The Avatekh algorithms could help to mitigate interference from that current, Nikitin said.
In a similar way, they could help to combat interference on copper wires used for DSL (digital subscriber line), clearing the way for faster DSL speeds, he said. Because they combat interference, the algorithms might also be useful for overcoming intentional jamming. Avatekh is starting to talk with some defense companies about this, Nikitin said.
Avatekh has patented the algorithms but is just beginning to test them out in hardware through the project with Kansas State, Nikitin said. It may be three to five years before they're implemented in commercially available devices, he said.
"It's pretty wet around the ears as far as actually putting it on a chip," he said.
Nikitin came to Kansas from the Soviet Union and earned a doctorate in physics from the University of Kansas in Lawrence. Avatekh was founded in 2011. Nikitin has been working on the signal-filtering idea for about 15 years but only recently got to the stage of implementing the algorithms in hardware. For help, he turned to Sobering, whom he already knew, even though Kansas State is a rival to Nikitin's alma mater.
Launching his company so far from the traditional centers of the tech industry was a brave move.
"I had some pressure to go to the Valley, but for various reasons, I kind of decided to undertake the insane task of trying to start a high-tech startup in Lawrence, Kansas," Nikitin said. | <urn:uuid:7aa11f74-dc6e-4e96-afae-994b56d54232> | CC-MAIN-2017-09 | http://www.itworld.com/article/2708087/mobile/smartphone-interference-tackled-by-kansas-startup.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00066-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95764 | 961 | 2.875 | 3 |
Companies could face massive fines in 25 European Union countries if they mishandle citizens' personal information, under a new privacy law due to take effect in 2018.
New age restrictions will mean no more Facebook or other social media for European pre-teens.
Today, fines for violations of EU data protection rules are typically limited to a few tens of thousands of euros, or hundreds of thousands in exceptional cases. That's hardly enough to upset companies such as Facebook or Google, which both reported billions of dollars in net income last year.
From 2018, though, data protection authorities will be able to impose fines of up to 4 percent of a company's worldwide revenue for breaches of the new privacy rules approved by the European Parliament on Thursday afternoon. For Google, the fine itself could now be in the billions of dollars.
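The order of magnitude is easy to see. Taking Google/Alphabet's worldwide revenue as roughly $75 billion (an approximation used purely for illustration), the 4 percent ceiling lands in the billions.

```python
# Rough illustration of the new fine ceiling (4% of worldwide revenue).
# The revenue figure is an approximation, used only to show the order of magnitude.
worldwide_revenue_usd = 75e9          # ~$75 billion (assumed)
max_fine = 0.04 * worldwide_revenue_usd
print(f"Maximum GDPR fine: ${max_fine / 1e9:.1f} billion")  # ~$3 billion
```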
The new General Data Protection Regulation (GDPR) also enshrines and extends the "right to be forgotten" created by a ruling of the Court of Justice of the EU in 2014. Where the court merely ordered search engines to make it difficult to discover certain kinds of personal information on request from the subject, the new regulation will enable EU citizens to request that companies entirely delete data concerning them.
Exceptions allow companies to retain data for historical, statistical, scientific, and public health purposes, to exercise their right to freedom of expression, or where required by law or to fulfill a contract.
Citizens also gain the right to move their data from one company to another -- so switching email providers will be easier -- and rules on obtaining consent to collect personal information are reinforced. Pre-checked boxes or systems that require people to opt out of data collection will no longer be allowed.
Jan Philipp Albrecht, Parliament's rapporteur for the new law, said the GDPR represents four years' work by legislators.
It replaces the 1995 Data Protection Directive, introduced years before companies such as Google and Facebook were even founded. Directives are first transposed into national law, often resulting in variations in rules between countries, whereas EU regulations such as the GDPR are directly applicable in the EU member states.
The new rules, then, should be uniform throughout the EU and adapted to the Internet age, making it simpler for companies operating across European borders, online and off, to comply.
There are a couple of glitches in this perfect picture, though.
Three states, Denmark, Ireland and the U.K., have negotiated exemptions from EU home affairs and justice legislation, so the new rules will apply only partially in the U.K. and Ireland, while Denmark has six months to decide whether to adopt the new rules or reject them in their entirety.
Other national variations will exist in rules governing the age at which children can consent to the storage of their personal information: It will range from 13 to 16 years depending on countries' existing legislation. Whatever the country, though, it will mean no Facebook or other social media accounts for pre-teens across Europe.
The second glitch is that the GDPR doesn't cover all kinds of data: Another piece of legislation, the 2002 e-privacy directive, covers information exchanged through electronic communications services such as fixed and mobile phone networks, and there are inconsistencies between that directive and the new data protection rules. The European Commission is aware of this, and on Monday opened a three-month public consultation on how this needs to change.
The GSM Association, a trade body for mobile networks, welcomed the arrival of the new rules and called on the Commission to use the consultation to address the inconsistencies between the GDPR and the existing e-privacy directive.
"Consumers should be able to enjoy consistent privacy standards and experiences, irrespective of the technologies, infrastructure, business models and data flows involved or where a company may be located," said GSMA Chief Regulatory Officer John Giusti.
He cautioned that too much privacy would be bad for business: "The right balance needs to be struck between protecting confidentiality of communications and fostering a market where innovation and investment will flourish."
John Higgins, director-general of IT industry lobby group Digital Europe, also warned that privacy has a cost.
"While we continue to believe that the final text fails to strike the right balance between protecting citizens' fundamental rights to privacy and the ability for businesses in Europe to become more competitive, it is now time to be pragmatic," he said via email.
National differences in implementation are also a danger for those doing business entirely online, and threaten the EU's plans for a digital single market.
"If Europe fails to properly implement the GDPR across all 28 EU Member States, this could render the digital single market incoherent," he said.
Joe McNamee, executive director of campaign group European Digital Rights (EDRi), said the business lobby had already removed much of what legislators put in the original data protection package, but "the essence" had been saved.
Approval of the GDPR makes a moving target of EU data protection law for officials working on the Privacy Shield, a legal mechanism allowing companies to guarantee compliance with EU privacy rules when exporting citizens' personal information to the U.S. for processing.
On Wednesday EU data protection authorities called for a revision mechanism to be added to the draft Privacy Shield agreement to take into account future rules changes, including those now due to take effect in 2018. | <urn:uuid:bdd44982-eb75-40b7-96ac-913af2e1df3b> | CC-MAIN-2017-09 | http://www.itnews.com/article/3056700/eu-gives-companies-two-years-to-comply-with-sweeping-new-privacy-laws.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00242-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940807 | 1,087 | 2.53125 | 3 |
Demonetisation of High Denomination Notes in India
Demonetisation of currency is a radical monetary step in which a currency unit's status as legal tender is declared invalid. This is usually done whenever there is a change of national currency, replacing the old unit with a new one. Such a step, for example, was taken when the European Monetary Union nations decided to adopt the Euro as their currency. However, the old currencies were allowed to be converted into Euros for a period of time in order to ensure a smooth transition through demonetization. Zimbabwe, Fiji, Singapore and the Philippines were other countries to have opted for currency demonetization.
India's currency notes are printed by Bharatiya Reserve Bank Note Mudran Private Limited at Mysuru and Salboni (near Mednapore in West Bengal), and by the Security Printing and Minting Corporation of India Limited at its Currency Note Press in Nashik and Bank Note Press in Dewas.
India Security Press, Nashik, and Security Printing Press, Hyderabad, print non-currency material such as passports, visas, IT refund orders, identity cards for higher officials, and warrants, among other things.
Demonetisation in India
In India's case, the move has been taken to curb the menace of black money and fake notes by reducing the amount of cash available in the system. It is also interesting to note that this was not the first time the Government of India has gone for the demonetization of high-value currency. It was first implemented in 1946, when the Reserve Bank of India demonetized the then-circulating ₹1,000 and ₹10,000 notes. The government then introduced higher denomination banknotes of ₹1,000, ₹5,000 and ₹10,000 in a fresh avatar eight years later in 1954, before the Morarji Desai government demonetized these notes in 1978. The government's move to demonetize, even then, was to tackle the issue of the black money economy, which was quite substantial at that point of time. The move was enacted under the High Denomination Bank Note (Demonetisation) Act, 1978. Under the law, all 'high denomination bank notes' ceased to be legal tender after January 16, 1978. People who possessed these notes were given till January 24 the same year, a week's time, to exchange the high denomination bank notes.
In a recent move, Prime Minister Narendra Modi hit hard at corrupt bureaucrats, politicians, the business class, terrorist groups, smugglers, drug traffickers, hawala traders, black marketers and many others engaged in unlawful activities by announcing on November 8, 2016 that ₹500 and ₹1,000 currency notes would no longer be legal tender. People were given the chance either to exchange the old currency for new through banks and post offices or to deposit it in their accounts up to December 30, 2016, with a rider that any deposit of more than ₹2.50 lakh would be scrutinized by the income tax authorities. All notes in the lower denominations of ₹100, ₹50, ₹20, ₹10, ₹5, ₹2, and ₹1 and all coins continued to be valid, and new notes of ₹2,000 and ₹500 were introduced. There was no change in any other form of payment, be it cheque, DD, or payment via credit or debit cards. The step was aimed at curbing the 'disease' of corruption and black money, which had taken deep roots.
The main difference between previous drives of demonetization and the current one is that currency of higher denomination was barely in circulation in 1946 or 1978, unlike the ₹500 and ₹1,000 notes in 2016. There is a world of difference between the demonetization of 1978 and now. The middle class in 1978 was not only sparse but also lived mostly within a modest income bracket, and a large section of this class had no access to high denomination currency notes, which began at ₹1,000 and went up to ₹10,000. Today, by contrast, much of the currency in circulation is in ₹500 and ₹1,000 notes, and even common people with meagre incomes possess them.
According to the Reserve Bank of India's (RBI) annual report, 2016, the total value of currency in circulation is ₹16.4 lakh crore. Of this, 38.6 percent or ₹6.3 lakh crore is in the form of ₹1,000 notes. Another 47.8 percent or ₹7.8 lakh crore is in the form of ₹500 notes. This means that over 86 percent of Indian currency will be withdrawn and needs to be exchanged before it can be used.
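Those figures are internally consistent, as a quick check shows (values in lakh crore rupees, taken from the RBI numbers above).

```python
# Quick check of the RBI figures quoted above (values in lakh crore rupees).
total = 16.4
notes_1000 = 6.3   # 38.6% of circulation by value
notes_500 = 7.8    # 47.8% of circulation by value
withdrawn_share = (notes_1000 + notes_500) / total
print(f"{withdrawn_share:.1%} of currency value withdrawn")  # ~86%
```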
Objectives of Demonetisation
Officially, the current demonetization has been undertaken with the following objectives:
- to track fake currency
- to cut off the supply lines of money, arms and ammunition that fund terror
- to transform the Indian economy into a cashless economy
- to bring tax evasion to halt
- to unearth and curb the black money
- to curb illegal and unethical business activities such as black marketing, food adulteration, the marketing of spurious goods, human trafficking, and the smuggling of gold and drugs
Benefits of Demonetisation
Looking ahead, the best course of action would be to consolidate on the benefits of demonetization. On the currency management front, regular upgrades of currency notes along with a complete withdrawal of the old series may have been good practice in the past. But this may not be required now thanks to the high-tech features built into the new notes. Still, periodic disruption will help fight counterfeiting and hoarding.
Impact of Demonetisation on Economy
In a bid to clamp down on black money, the government withdrew ₹500 and ₹1,000 notes from circulation with immediate effect. Reserve Bank of India (RBI) data suggests that ₹500 and ₹1,000 notes made up 86.4% of the total value of notes in circulation as of March 31, 2016, amounting to ₹14 trillion. The growth rates in these notes were 76% and 109% respectively over the last five years, versus overall currency in circulation going up by 40%, points out a Citigroup note. Experts visualized the following impact of demonetization on the economy in the near future:
- While citizens will be inconvenienced in the short term, this is a big medium-term positive in the government's effort to crack down on black money and corruption.
- In the short run, the GDP growth rate may decline by 0.1-0.2 percentage points, but in the long run the net impact of demonetization will be positive.
- As the old currency notes are deposited with banks, bank deposit growth will witness a pickup and currency in circulation will moderate – a positive for banking sector liquidity.
- As rural households open new bank accounts to deposit old notes, this may also end up giving a boost to the government's financial inclusion thrust.
- Since black money played a role in real estate transactions, this crackdown is very likely to hurt the real estate market, which is already reeling under high inventory in top tier cities such as Mumbai and Delhi.
- As some of the black money is brought under legitimate channels, the government's tax revenue collection will get a boost.
- There could be an immediate increase in bank deposits if some of the holders of these old notes decide to deposit them rather than exchanging them for new notes. Currency in circulation might decline substantially if heightened security forces people with unaccounted for cash to not exchange/deposit. The base money goes down in that case but the increase in money multiplier (because of a higher deposit to currency ratio) might mitigate the impact on overall money supply.
- If money supply declines temporarily because of these measures, then assuming no immediate change in the velocity of circulation, we could either see some deflationary tendencies or a lowering of real demand (economic activity). The impact could differ across sectors – deflationary in some, contractionary in others. This is a short-term risk for the economy.
- The move generally bodes well for the inflation outlook since black money was associated with higher inflation. However, it is likely to hurt near-term consumption demand.
Impact of Demonetisation on Black Money
Dr. Bibek Debroy, a member of NITI Aayog, categorized black money as 'single black' and 'double black'. 'Single black' is money where the underlying activity is not illegal but the person holding it has not paid taxes, while 'double black' is money generated through illegal activities such as crime, trafficking and extortion.
According to Dr. Debroy, out of ₹14 lakh crore, currency in the form of ₹500 and ₹1000, probably something like ₹4 lakh crore, may be black money. He explains,” Let's assume that out of ₹4 lakh crores, ₹2.5 lakh crore is single black and ₹1.5 lakh crore is double black. So, ₹1.5 lakh crore is roughly 10 percent of ₹14 lakh crores. This is completely destroyed. On Single black - ₹2.5 lakh crore – I think a large part of it will come back into the system. Of the remaining ₹10 lakh crore, ₹8 lakh crore is just probably transaction-related. This cash temporarily goes out of the system, but it eventually comes back into the system. The remaining ₹2 lakh crore is what the people were sitting on. This is not illegal. This ₹2 lakh crore is unproductive for the people holding on to it and for the system. This comes into the system. In the short-term of course there is impact – macro and sectoral impact. But, in the slightly medium-term, several things happen through RBI and outside RBI. The government and the banking system have more resources. The government can spend this extra money on various public goods and services including infrastructure, and the lenders can lend more. So, wealth is transferred from relatively rich to relatively poor in the process”.
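Restating the arithmetic in Dr. Debroy's illustration (all figures in lakh crore rupees, taken from the quote above) makes the breakdown easier to follow.

```python
# Dr. Debroy's illustrative breakdown of the Rs 14 lakh crore held in Rs 500/1,000
# notes, restated. All figures come from his quote; the split is his assumption.
total_high_denomination = 14.0
double_black = 1.5    # assumed destroyed ("roughly 10 percent" in the quote)
single_black = 2.5    # untaxed but legal income, expected to largely return
transactional = 8.0   # ordinary transaction cash, returns to the system
idle_savings = 2.0    # legal cash people were "sitting on"
assert double_black + single_black + transactional + idle_savings == total_high_denomination
print(f"Destroyed share: {double_black / total_high_denomination:.1%}")  # ~10.7%
```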
| <urn:uuid:53cb2f3a-e38a-4c7b-9f00-cb0e5319ee8e> | CC-MAIN-2017-09 | http://www.knowledgepublisher.com/article-1303.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00363-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952636 | 2,235 | 2.9375 | 3 |
Miles south of Chicago, amid the wind-swept flatlands of central Illinois, is the home of perhaps the world’s next fastest supercomputer. The National Center for Supercomputing Applications (NCSA) at the University of Illinois in Urbana-Champaign, which is co-developing the Blue Waters supercomputer, is today at the forefront of petascale computing. But 25 years ago, when the Center revved up its first machine, the computing world looked much different.
This article looks back at the highlights in NCSA’s 25-year history, which illustrate well how far computing technology has come in such a short span of time and the various innovations that supercomputers have made possible that we now take for granted. These highlights are based on the slideshow posted on NCSA’s website. (In the interest of full disclosure, ICC works with NCSA on the Dark Energy Survey, so we’re a little biased).
The first supercomputer at NCSA, which went operational in 1986, was a dual-processor machine that performed at about 400 megaflops. In comparison, the upcoming Blue Waters supercomputer will have 300,000 CPUs and a peak performance of 10 petaflops (that’s 25 million times faster than NCSA’s first supercomputer).
In 1998, NCSA came out with its first “cluster”, which connected 128 workstations together and was known as the NT Supercluster. This aggregation of towers looks somewhat comical today, and it wasn’t long before rack servers replaced these bulky form factors.
In 1993, NCSA launched the Mosaic web browser, which was the first popular and intuitive graphical web browser. Mosaic was the direct precursor to Netscape Navigator, which itself was developed by former NCSA employees. Since NCSA doubtless received federal grant money during this time, could the development of Mosaic be the origin of Al Gore’s misquoted “claim” that he helped invent the internet? (For the real origin, check out this article on snopes.com)
From 1989 until the present, NCSA has used supercomputers to develop visualization technology that we have now grown accustomed to when watching IMAX movies or TV documentaries. In 1991, an Illinois researcher named Sever Tipei showed that the patterns in music can be replicated by a computer to produce unique musical compositions (perhaps, as I suspect, some pop music hits coming out today are also being conceived by machines).
Beyond entertainment, the supercomputers at NCSA have helped expand humanity’s toolbox. NCSA cooperates with both industry (through partnerships with aeronautical, pharmaceutical and manufacturing companies) and the academic community (not only science but humanities departments are utilizing computers to further their work) to offer practical approaches to solving contemporary problems.
In 2007, scientist Carlos Simmerling used an NCSA supercomputer to digitally simulate how HIV protease, a molecule that helps form the HIV virus, functions. These simulations revealed how new medications could work to combat HIV. The year before, Klaus Schulten used NCSA’s SGI Altix supercomputer to create the first simulation of a complete life form down to the atomic level.
Supercomputers are also transforming how people learn about the world. NCSA teamed up with other university departments to create the Institute for Chemistry Literacy, which in 2009 reported in The Journal of Mathematics and Science that its training program for science educators in rural Illinois yielded a marked improvement in chemistry content knowledge in students of participating teachers. My favorite education-related project in NCSA is called the Papers of Abraham Lincoln, which will eventually scan and make available online all the writings of our 16th president (including a plot on Google Maps of the locations from where they were written and The Lincoln Log, a day-to-day account of President Lincoln’s life – riveting stuff for a history buff like myself).
So it’s been a busy 25 years for the National Center for Supercomputing Applications, and the future looks even brighter. Last year, the Director of NCSA testified before Congress on the need for federal support of HPC, especially now that international competition in this field is heating up.
Future innovation will require more and more industries to use high-performance computers to supplement human brainpower, much like machines of steel and steam increased our motive powers during the Industrial Revolution. Supercomputing centers like the one at the University of Illinois are helping to make these technologies available not only to leaders of industry and top-notch researchers, but also to underprivileged communities and the general public around the world.
Whether it’s experimenting with building a cluster out of video-gaming consoles (like NCSA did in 2003 with 70 Playstation 2s) or building the most powerful supercomputer on earth, let’s hope the next quarter-century of supercomputing will be as filled with invention and pragmatic progress as the last. | <urn:uuid:426125ad-8743-4878-9537-8bed73000621> | CC-MAIN-2017-09 | http://www.icc-usa.com/insights/national-center-for-supercomputing-applications-turns-25/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00587-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942769 | 1,029 | 3.4375 | 3 |
ISO 27001 risk assessment & treatment – 6 basic steps
Risk assessment (often called risk analysis) is probably the most complex part of ISO 27001 implementation; but at the same time risk assessment (and treatment) is the most important step at the beginning of your information security project – it sets the foundations for information security in your company.
The question is – why is it so important? The answer is quite simple although not understood by many people: the main philosophy of ISO 27001 is to find out which incidents could occur (i.e. assess the risks) and then find the most appropriate ways to avoid such incidents (i.e. treat the risks). Not only this, you also have to assess the importance of each risk so that you can focus on the most important ones.
Although risk assessment and treatment (together: risk management) is a complex job, it is very often unnecessarily mystified. These 6 basic steps will shed light on what you have to do:
1. Risk assessment methodology
This is the first step on your voyage through risk management. You need to define rules on how you are going to perform the risk management because you want your whole organization to do it the same way – the biggest problem with risk assessment happens if different parts of the organization perform it in a different way. Therefore, you need to define whether you want qualitative or quantitative risk assessment, which scales you will use for qualitative assessment, what will be the acceptable level of risk, etc.
2. Risk assessment implementation
Once you know the rules, you can start finding out which potential problems could happen to you – you need to list all your assets, then threats and vulnerabilities related to those assets, assess the impact and likelihood for each combination of assets/threats/vulnerabilities and finally calculate the level of risk.
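To make the calculation step concrete, here is a minimal sketch of a qualitative risk calculation. The 1-5 scales, the example register entries and the acceptance threshold are illustrative assumptions only; ISO 27001 leaves the choice of scales and thresholds to the methodology you defined in step 1.

    # Minimal qualitative risk calculation: risk = impact x likelihood.
    # Scales (1-5), register entries and the threshold are example values only.
    ACCEPTABLE_RISK = 8  # anything above this is treated as an unacceptable risk

    # Each row: (asset, threat, vulnerability, impact 1-5, likelihood 1-5)
    register = [
        ("Customer database", "SQL injection", "Unpatched web application", 5, 3),
        ("Laptop fleet", "Theft", "No disk encryption", 4, 2),
        ("Office printer", "Malware", "Default admin password", 2, 2),
    ]

    for asset, threat, vuln, impact, likelihood in register:
        risk = impact * likelihood
        decision = "TREAT" if risk > ACCEPTABLE_RISK else "accept"
        print(f"{asset:18} | {threat:14} | {vuln:26} | risk = {risk:2} -> {decision}")

The same structure scales to a full asset register kept in a spreadsheet or a tool; the point is that the methodology, not the tooling, determines the numbers.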
In my experience, companies are usually aware of only 30% of their risks. Therefore, you’ll probably find this kind of exercise quite revealing – when you are finished you’ll start to appreciate the effort you’ve made.
3. Risk treatment implementation
Of course, not all risks are created equal – you have to focus on the most important ones, so-called ‘unacceptable risks’.
There are four options you can choose from to mitigate each unacceptable risk:
- Apply security controls from Annex A to decrease the risks – see this article ISO 27001 Annex A controls.
- Transfer the risk to another party – e.g. to an insurance company by buying an insurance policy.
- Avoid the risk by stopping an activity that is too risky, or by doing it in a completely different fashion.
- Accept the risk – if, for instance, the cost of mitigating that risk would be higher than the damage itself.
This is where you need to get creative – how to decrease the risks with minimum investment. It would be the easiest if your budget was unlimited, but that is never going to happen. And I must tell you that unfortunately your management is right – it is possible to achieve the same result with less money – you only need to figure out how.
4. ISMS Risk Assessment Report
Unlike the previous steps, this one is quite boring – you need to document everything you've done so far. This is not only for the auditors; you may also want to check these results yourself in a year or two.
5. Statement of Applicability
This document actually shows the security profile of your company – based on the results of the risk treatment you need to list all the controls you have implemented, why you have implemented them and how. This document is also very important because the certification auditor will use it as the main guideline for the audit.
For details about this document, see article The importance of Statement of Applicability for ISO 27001.
6. Risk Treatment Plan
This is the step where you have to move from theory to practice. Let’s be frank – all up to now this whole risk management job was purely theoretical, but now it’s time to show some concrete results.
This is the purpose of the Risk Treatment Plan – to define exactly who is going to implement each control, in which timeframe, with which budget, etc. I would prefer to call this document an 'Implementation Plan' or 'Action Plan', but let's stick to the terminology used in ISO 27001.
Once you've written this document, it is crucial to get your management's approval, because it will take considerable time and effort (and money) to implement all the controls that you have planned here. Without management's commitment, you won't get any of those resources.
And this is it – you’ve started your journey from not knowing how to setup your information security all the way to having a very clear picture of what you need to implement. The point is – ISO 27001 forces you to make this journey in a systematic way.
P.S. ISO 27005 – how can it help you?
ISO/IEC 27005 is a standard dedicated solely to information security risk management – it is very helpful if you want to get a deeper insight into information security risk assessment and treatment – that is, if you want to work as a consultant or perhaps as an information security / risk manager on a permanent basis. However, if you’re just looking to do risk assessment once a year, that standard is probably not necessary for you.
Learn about the details of the risk management process in this free ISO 27001 Foundations Online Course.
RISK ASSESSMENT DOCUMENTS
The ISO 27001 Risk Assessment Toolkit contains all the document templates needed to implement risk assessment and treatment.
RISK ASSESSMENT TRAINING
The ISO 27001 Foundations Course is a free online training that explains step-by-
step how to perform risk assessment and treatment. | <urn:uuid:6506b8fc-7d9d-4153-a416-a96ca6436136> | CC-MAIN-2017-09 | https://advisera.com/27001academy/knowledgebase/iso-27001-risk-assessment-treatment-6-basic-steps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00111-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940906 | 1,228 | 2.640625 | 3 |
Security researchers at Kaspersky have come across a piece of cross-platform malware that is capable of running on Windows, Mac and Linux.
The malware is written entirely in Java. Even the exploit used to deliver it is a well-known Java exploit (CVE-2013-2465), which makes the campaign completely cross-platform.
Once the bot has infected a system, it copies itself into the user's home directory and adds itself to the list of autostart programs to ensure it is executed whenever the user reboots the system.
Once configuration is done, the malware generates a unique identifier and reports it to its master. The cyber criminals then communicate with the bot over the IRC protocol.
The main purpose of the bot appears to be participating in distributed denial-of-service (DDoS) attacks. The attacker can instruct the bot to attack a specific address and can specify a duration for the attack.
The malware uses a few techniques to make analysis and detection more difficult. It is protected with the Zelix KlassMaster obfuscator, which not only obfuscates the byte code but also encrypts string constants.
All machines running Java 7 update 21 and earlier versions are likely to be vulnerable to this attack. | <urn:uuid:d2a4053d-c390-4816-9656-4039cb2dcf07> | CC-MAIN-2017-09 | http://www.ehackingnews.com/search/label/Cross-platform%20malware | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00231-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.894903 | 248 | 2.59375 | 3 |
Data encryption could help enterprises protect their sensitive information against mass surveillance by governments, as well as guard against unauthorized access by ill-intended third parties, but the correct implementation and use of data encryption technologies is not an easy task, according to security experts.
Encryption could limit the ability of law enforcement and intelligence agencies to access data without the knowledge of its owner as it travels over the public Internet or by forcing third-party service providers like hosting or cloud vendors to hand it over under a gag order. However, in order for this to work the data needs to be encrypted at all times, while in transit, while in use and while at rest on servers.
The recent media reports about the electronic surveillance programs run by the U.S. National Security Agency (NSA) have raised privacy concerns among Internet users, civil rights activists and politicians not only in the U.S., but also in Europe, Australia and elsewhere.
While there are still unanswered questions about the methods used by the NSA to collect data as part of its recently exposed Prism program, the information leaked to the media suggests that electronic communications have been gathered en masse for years from Microsoft, Yahoo, Google, AOL, Facebook, PalTalk, Skype, Apple and YouTube.
Some of these companies have already denied that the NSA has direct access to their servers or that they were even aware of this surveillance program before it was mentioned in the press. However, the possibility of the NSA having access, directly or indirectly, to the data stored on servers that belong to U.S. service providers is bound to raise data security concerns within organizations that moved or are considering moving their systems and applications into the cloud.
In general, encryption technologies can be used to limit the scope of data collection by government agencies, according to security experts. Even if governments do have the legal avenues to force companies to decrypt and provide access to their data by using national security orders, subpoenas or other methods, at the very least the use of encryption can allow companies to know when their data is being targeted, they said.
"While all reputable companies will want to comply with the laws of the states in which they do business, encryption can give them full visibility into what is being monitored so that they can be a willing and active partner in government investigations," said Mark Bower, vice president of product management at data protection vendor Voltage Security, via email. "Encryption can mean the difference between full visibility into lawful intercepts, and learning about their data being intercepted by the next big leak in the media."
Encryption is likely to be most effective against upstream data collection efforts, said Matthew Green, a cryptographer and research professor at the Johns Hopkins University Information Security Institute in Baltimore, via email.
The challenge is what kind of encryption to use, Green said. SSL is the most common way to protect data transmitted over the wire and the protocol is actually fairly strong, but SSL keys are relatively small and it's not outside the realm of possibility that an organization like the NSA could obtain these keys at some point, he said.
There is already evidence that the NSA is performing upstream traffic interception on the networks of high-level ISPs that operate Internet backbone infrastructure, as shown by the case of Room 641A, an NSA Internet traffic interception facility located in an AT&T building in San Francisco that was exposed in 2006.
"We have no idea what the NSA can do," Green said. "However it's reasonable to assume that even if they can break modern encryption schemes -- a pretty big assumption -- it's going to be pretty expensive for them to do so. That rules out massive non-targeted eavesdropping on encrypted connections."
The feasibility of breaking SSL encryption is also determined by the different configurations in which the protocol can be used. For example, the Diffie-Hellman -- DHE and ECDHE -- configurations of SSL are much more difficult to tap than the RSA configuration, Green said.
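As a quick, practical way to see which key-exchange configuration a given server actually negotiates, the Python standard library can report the cipher suite of a TLS connection. The hostname below is a placeholder, and this is only a rough illustration of the DHE/ECDHE-versus-RSA distinction mentioned above, not a complete security assessment.

    # Report the negotiated cipher suite and whether it uses ephemeral key exchange.
    import socket
    import ssl

    hostname = "example.com"   # placeholder
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            name, protocol, bits = tls.cipher()
            print(f"cipher: {name}, protocol: {protocol}, secret bits: {bits}")
            # ECDHE/DHE suites provide forward secrecy; plain RSA key exchange does not.
            print("ephemeral key exchange:", name.startswith(("ECDHE", "DHE")))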
In order for encryption to completely prevent unwanted surveillance, the data must be encrypted throughout its life, said Dwayne Melancon, chief technology officer of IT security firm Tripwire, via email. "If it is in the clear at any point (at rest, in use, or in motion) it could potentially be accessed by others without credentials."
This means that data needs to remain encrypted not only as it travels across the global Internet and passes through routers and servers in different jurisdictions, but also while it's used in real time by applications, as well as when stored for backup purposes.
Ensuring that the private keys used to encrypt the data remain secret at all times is paramount. That's not easy to do when running live applications and hosting databases on cloud servers or when relying on other cloud services.
"If an organization relies on the cloud service provider [CSP] for encryption, the CSP holds the encryption keys," said Steve Weis, chief technology officer at PrivateCore, a company that develops technology for encrypting data during program execution, via email. "The organization has no knowledge or control when someone lawfully attempts to access encrypted data. The organization is blind."
Companies should adopt a "trust no one" model for the management of encryption keys, Melancon said. Private keys should not be shared with anyone else, especially third-party service providers, he said.
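One practical expression of that "trust no one" model is to encrypt data on systems you control before it ever reaches a third-party provider, so the provider only stores ciphertext. The sketch below uses the Fernet construction from the widely used Python cryptography package purely as an illustration; the sample data is made up, and a production design would still need real key management (hardware security modules, key rotation, access control) rather than a key sitting in a variable.

    # Illustrative client-side encryption: the key never leaves your own systems,
    # so a cloud provider storing the ciphertext cannot read the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this in your own key vault, never with the data
    f = Fernet(key)

    plaintext = b"quarterly financials - internal only"   # made-up sample data
    ciphertext = f.encrypt(plaintext)                      # safe to hand to a third party

    # Later, on a trusted system that holds the key:
    assert f.decrypt(ciphertext) == plaintext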
Even though there are technologies available that can enable the safe use of encryption when cloud servers are involved, getting everything right and ensuring that there are no errors in the overall implementation can require a lot of resources.
"It can be done, but it takes a lot of forethought, a lot of effort, and the use of true end-to-end encryption will increase your costs," Melancon said. "It may also require you to rewrite applications, or switch providers in order to handle all aspects of end-to-end encryption."
When considering that NSA's primary mission is the gathering of foreign intelligence, companies that are not based in the U.S. should probably be even more concerned about the recent revelations regarding the agency's surveillance efforts.
"If you're a European company dealing in sensitive corporate data, I think you'd be crazy to use a U.S. cloud service," Green said. However, that won't stop companies from doing it, he said.
"A big part of the political scandal in the USA right now is the fact that the NSA is spying on Americans," said Zooko Wilcox-O'Hearn, co-founder of the Tahoe-LAFS project, a distributed, fault-tolerant and encrypted cloud storage system. "However, absent evidence to the contrary, I would assume that the NSA is at least as effective at spying on data in European and other locales as in American locales."
That said, Wilcox-O'Hearn believes that companies should also be concerned about other actors spying on them. Those could include law enforcement, military and intelligence organizations from other countries, as well as organized crime gangs or corrupt employees of telecommunication companies and ISPs, he said.
Banks and other financial organizations, as well as companies from the telecommunications industry, that handle very sensitive data usually prefer to keep it on their servers, under their control, primarily because they need to meet regulatory compliance and can't perform security audits in the cloud, said Sergiu Zaharia, the chief operations officer at Romania-based security consultancy firm iSEC.
Such organizations use encryption to secure the traffic between their different branch offices or between customers and their publicly accessible services, but very few of them encrypt data as it travels through their internal networks, between their own servers, at least in Romania, he said.
Other companies, like small online retailers, that choose to use cloud servers to run applications and store customer data don't care too much about encryption or if they do encrypt the data, they don't care if the service provider has access to their encryption keys because they usually don't perform an advanced enough risk analysis, he said.
"All our customers have highlighted their concern with security issues, especially when it comes to services hosted in a third party location," said Dragos Manac, CEO of Appnor MSP, a provider of managed dedicated servers and cloud computing with infrastructure in both Europe and the U.S., via email. "The current Prism scandal is a major blow for governments, but it also hurts service providers."
As far as government surveillance is concerned, service providers are caught between a rock and a hard place, he said. "Not helping the authorities means you are violating the law. Helping them means you may be violating someone's rights."
There is no reason to believe that the NSA, or anyone else, can crack strong encryption algorithms that have been studied and vetted by scientists, Wilcox-O'Hearn said. "On the other hand, it is easy for a programmer or service provider to implement them incorrectly or for a user to use them incorrectly, in which case it would be possible for anyone who had access to the network traffic to read the data," he said. | <urn:uuid:81c29c30-19cf-4659-88f2-b13cadc2739e> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2497772/security0/spy-proof-enterprise-encryption-is-possible--but-daunting.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00583-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967538 | 1,856 | 2.828125 | 3 |
Desperate search for power points could be over
In the future you may not have to hunt for a power plug or external power source to charge your laptop, as scientists have developed a new material that can turn the laptop's casing into its battery.
Researchers from Vanderbilt University’s Nanomaterials and Energy Devices Laboratory have developed a supercapacitor that can store electricity by assembling electrically charged ions on the surface of a porous material, instead of storing it in chemical reactions as in batteries.
The wafer-shaped material was developed by Andrew Westover, a graduate student at the university, and Cary Pint, an assistant professor of mechanical engineering.
Pint said, "These devices demonstrate – for the first time as far as we can tell – that it is possible to create materials that can store and discharge significant amounts of electricity while they are subject to realistic static loads and dynamic forces, such as vibrations or impacts."
"Andrew has managed to make our dream of structural energy storage materials into a reality," Pint added.
The material can store energy as well as withstand static and dynamic mechanical stresses. It can store and release electrical charge while subject to stresses or pressures up to 44 psi and vibrational accelerations over 80 g, far greater than the forces acting on turbine blades in a jet engine.
The duo has developed the material by using ion-conducting polymers infiltrated into nanoporous silicon that is etched directly into bulk conductive silicon.
The device platform is claimed to maintain energy densities of about 10 Wh/kg with a Coulombic efficiency of 98% under exposure to tensile stresses of over 300 kPa and vibratory accelerations of 80 g.
The researchers also claim that the structurally integrated energy-storage material can be used across renewable energy systems, transportation systems and mobile electronics, among others.
The breakthrough could lead to a laptop charged by its casing, a car powered by energy stored in its chassis, or a smart home where the drywall and siding store the electricity that powers the lights and appliances.
"Battery performance metrics change when you’re putting energy storage into heavy materials that are already needed for structural integrity," Pint added.
"Supercapacitors store ten times less energy than current lithium-ion batteries, but they can last a thousand times longer. That means they are better suited for structural applications."
"It doesn’t make sense to develop materials to build a home, car chassis, or aerospace vehicle if you have to replace them every few years because they go dead."
The device is built from silicon electrodes that have been chemically treated to form nanoscale pores on their inner surfaces and then coated with a protective, ultrathin graphene-like layer of carbon.
A polymer film is sandwiched between the two electrodes which acts as a reservoir of charged ions, similar to the role of the electrolyte paste in a battery.
When the electrodes are pressed, the polymer flows into the tiny pores, similar to melted cheese into bread. | <urn:uuid:b1782f9e-35d3-4634-8bb6-b3f85861a039> | CC-MAIN-2017-09 | http://www.cbronline.com/news/mobility/devices/future-laptops-could-be-charged-by-its-casing-4278312 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00107-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945103 | 610 | 3.71875 | 4 |
What is OTDR?
OTDR, short for Optical Time Domain Reflectometer, is an instrument used for testing the integrity of fiber optic cables. The basic functions of an OTDR are to verify splice loss, measure length and find faults. In addition, an OTDR is commonly used to create a “picture” of a fiber optic cable when it is newly installed; if problems arise later, a second trace can be compared with the original, because analyzing an OTDR trace is much easier when documentation of the cable's original state exists. Using an OTDR, we can get a full report of losses and reflective events (connectors and mechanical splices) tied to distance or geographical position along an optical fiber link. As a result, OTDRs are widely used in cable network testing today. (Figure 1.)
Figure 1. Measurement in cable installations
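The arithmetic behind those measurements is straightforward: the instrument times how long a launched pulse takes to return and converts that round-trip time into distance using the fiber's group index, and it reads event loss as the drop in backscatter level across an event. The sketch below is a simplified illustration with assumed values (a group index of 1.468 and made-up backscatter readings), not the algorithm of any particular instrument.

    # Simplified OTDR arithmetic (illustrative values, not a real trace).
    C = 299_792_458        # speed of light in vacuum, m/s
    GROUP_INDEX = 1.468    # assumed value, typical for standard single-mode fiber

    def event_distance_m(round_trip_seconds):
        """Distance to an event from the round-trip time of the pulse."""
        return (C * round_trip_seconds) / (2 * GROUP_INDEX)

    def event_loss_db(level_before_db, level_after_db):
        """Loss across an event as the drop in backscatter level, in dB."""
        return level_before_db - level_after_db

    # A reflection arriving 49 microseconds after launch sits about 5 km down the fiber.
    print(f"distance: {event_distance_m(49e-6) / 1000:.2f} km")
    print(f"splice loss: {event_loss_db(-50.1, -50.4):.2f} dB")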
When do we need an OTDR?
When installing an outside plant network, such as a long-distance network or a large campus LAN with splices between cables, we need an OTDR to check that the fiber optic cables and splices are good. An OTDR can be used to locate a break or similar problem in a cable run, or to take a paper copy of the OTDR trace of the fiber optic cables before turning an installation over to a customer. That trace gives us a permanent record of the state of the cable at a point in time, which shows where responsibility for any later damage lies and helps installers analyze issues when cables have been damaged or altered after installation. Sometimes an OTDR test is also required as a condition of system acceptance to meet customers' demands.
However, as impressive as OTDRs are, they have limitations: limited distance resolution makes them very hard to use in a LAN or building environment, where cables are usually only a few hundred feet long.
Figure 2. PON Power Meter
YOKOGAWA AQ1200A MFT-OTDR Handheld Optical Test Tool on sale in Fiberstore
Fiberstore specializes in supplying a full line of fiber optic components and network equipment. For OTDRs, we provide a one-year warranty on both domestic and foreign brand products. YOKOGAWA, a famous brand of testing and measurement tools, is on sale at Fiberstore at a very reasonable price. The AQ1200 (Figure 3.) is the newest addition to Yokogawa's OTDR product family, offering an even smaller and lighter alternative to the AQ7275 models. The AQ1200A (1310/1550 nm) is a standard model with the same wavelengths used for communication services.
Figure 3. YOKOGAWA AQ1200 MFT-OTDR Handheld Optical Fiber Test Tool
Excellent Functionality and Operability:
Figure 4. Features of AQ1200
Fiberstore supplies OTDR of famous brands, such as JDSU MTS series, EXFO FTB series, YOKOGAWA AQ series and so on. In addition, OEM portable and handheld OTDRs (manufactured by Fiberstore) are also available. All OTDRs are saled with a very reasonable price and warrenty for one year. Want to know more, please click here! | <urn:uuid:cecf7639-c126-4ebc-8bf5-ec6569d333e0> | CC-MAIN-2017-09 | http://www.fs.com/blog/multifunctional-handheld-otdr-good-helper-for-your-cable-network-testing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00107-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.924688 | 687 | 2.71875 | 3 |
Attackers have long targeted application vulnerabilities in order to breach systems and steal data, but recently they’ve been skipping a step and going directly after the tools developers use to actually build those applications.
Consider the news that broke earlier this year that entailed how the CIA allegedly attempted to compromise Apple’s development software Xcode. Such a breach could mean that every app developed with the development environment would, in turn, contain malware that would enable its creators to spy and snoop on people who installed those apps, as The Intercept reported in the story The CIA Campaign To Steal Secrets. “The security researchers also claimed they had created a modified version of Apple's proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool,“
To be sure, infecting the tools developers use, in order to compromise the apps they ultimately ship, makes for a very juicy target for attackers as well as a dangerous and significant threat to enterprises. Consider the brute force-attacks that targeted the popular source code repository GitHub in 2013, after numerous accounts had been compromised, GitHub banned what it considers weak passwords and implemented rate limiting for logon attempts.
That GitHub attack and the attack on Xcode aren’t isolated incidents. Just last week Apple acknowledged that its App Store endured a significant breach involving thousands of apps. The compromise was made possible when Chinese developers downloaded counterfeit copies of Xcode that were tainted with malware dubbed XcodeGhost. XcodeGhost compromises the Xcode integrated development environment in such a way that apps created with that version of Xcode would comprise subsequently developed apps. While Apple removed the infected apps, more than 4,000 tainted apps have been estimated to have made it into the App Store. Also, in 2013, Apple’s Dev Center was taken down for an extended period with many developers reporting that Apple forced their passwords to be reset.
Strategist with IT risk management firm CBI, J. Wolfgang Goerlich, explains why the recent spate of attacks on Apple’s development tools are notable. “The number of OS X computers continues to raise in the enterprise environment. Few organizations are considering Macs [from a security perspective] as the numbers have long been small and most [security] controls are Windows-based,” he says.
“These types of attacks - infecting the compiler - used to be considered a potential threat by high security governmental organizations. You would be considered paranoid to present such a scenario as something that could impact the general public. And yet here we are,” says Yossi Naar, co-founder of Cybereason, a provider of breach detection software.
If these types of two-stage attacks are no longer threats only to the paranoid, and enterprise development environments are targeted, what does this mean for enterprises trying to ensure they are developing and deploying secure applications.
“From a development perspective, the best practices in continuous integration and deployment would have prevented the attack [against Apple’s App Store],” says Goerlich.
Chris Camejo, director of threat and vulnerability analysis for NTT Com Security, would agree. “This should be obvious, but developers (and anyone else for that matter) should only use software from trusted sources like a vendor’s website or official app store, or verify that software packages they’ve downloaded haven’t been tampered with by verifying the software’s digital signatures when available,” says Camejo.
Sri Ramanathan, CTO of mobile app development platform Kony, says the same holds true for open source software. “To protect developers, enterprises need to ensure that any software used has been vetted and certified as safe for use. Vigilance must be maintained on open source software modules in particular,” he says. When it comes to Kony’s development environment, Ramanathan says that Kony developers working on a product cannot use open source unless its specifically approved, and that every piece of software is statically and dynamically scanned prior to and after being approved for use.
“We also use a battery of internal and external pen tests to periodically certify all our runtimes. And we ensure that any open source software we use originates from a vibrant trusted community, and is actively supported, does not have too many known security issues (known issues can and should be mitigated) and is well documented,” Ramanathan explains.
For enterprises, it’s important developers and the software development chain be protected like any other users and assets, perhaps more so in many instances. “For other tool chains, particularly open-source, it is important to verify the authenticity of the software before you use it. Most open-source projects provide cryptographic hashes that you can use to verify the authenticity of downloaded software,” says Bobby Kuzma, CISSP, systems engineer, at Core Security. “Treating build servers as secure systems, with advanced security controls, similar to what should be used when dealing with sensitive cryptographic materials will help gain control against this type of threat," Kuzma adds.
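A minimal version of the hash check Kuzma describes needs nothing more than the standard library: compute the SHA-256 digest of the file you downloaded and compare it against the checksum the project publishes. The file name and expected digest below are placeholders.

    # Verify a downloaded package against a project's published SHA-256 checksum.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    downloaded = "toolchain-1.2.3.tar.gz"                      # placeholder file name
    published = "<checksum from the project's release page>"   # placeholder value

    if sha256_of(downloaded) == published:
        print("checksum matches - proceed")
    else:
        print("checksum MISMATCH - do not install")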
Good advice for any development team. And enterprises need to make certain developers work in a clean environment using separate systems for development from those used in building apps, adds Goerlich. “The build machine is then kept in a secure hardened state, with the compiling automated. Even if the developers download malicious code such as XcodeGhost on their computers, the build computer is kept clean and what is submitted to the App Store is protected,” he says.
“For enterprises, a strong network security management program that monitors for malware connecting out to command-and-control computers is the first line of defense when identifying attacks like XcodeGhost,” Goerlich adds.
This story, "Developers find themselves in hackers’ crosshairs" was originally published by CSO. | <urn:uuid:cbf85037-7fea-445f-b8fb-c7c776dde7f2> | CC-MAIN-2017-09 | http://www.itnews.com/article/2987237/application-security/developers-find-themselves-in-hackers-crosshairs.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00459-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949006 | 1,219 | 2.765625 | 3 |
Cloud Computing: Google Maps Imagery Charts Japanese Earthquake, Tsunami Devastation
The 8.9 magnitude earthquake that rocked northern Japan March 11 triggered tsunamis that rippled across the Pacific Ocean, causing devastation in Japan and damaging harbor facilities as far away as Hawaii and the coast of California. Google March 11 responded by putting together one of its crisis response centers, including real-time updates for the disaster, a person locator, emergency lines, a disaster bulletin board and train information to help people evacuate. Google over the weekend donated $250,000 to organizations in Japan that are working on relief and recovery efforts and added donation information to the Website. Google March 12 began loading Picasa Web Albums with Google Maps content from satellite imagery providers to show areas affected most by the disaster. "We hope this new updated satellite imagery is valuable for them as well as everyone else following this situation to help illustrate the extent of the damage," the company explained. This eWEEK slide show highlights some of the before and after pictures of areas in Japan most heavily damaged by the earthquake and tsunami. | <urn:uuid:437ad775-9e52-4a8c-83a3-939760ab1bdb> | CC-MAIN-2017-09 | http://www.eweek.com/c/a/Cloud-Computing/Google-Maps-Imagery-Charts-Japanese-Earthquake-Tsunami-Devastation-103142 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00579-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943256 | 222 | 2.71875 | 3 |
Sensor array maps a first responder's location, movements
The human brain is amazing, but it has its limitations. One of the biggest is that it’s difficult for two humans to accurately share data. To tell another human what we know, we have to write it down in painstaking detail or carefully explain it orally. Those who have perfected those skills become successful politicians, writers or entertainers. But even those people probably don’t come close to perfectly replicating their ideas inside the minds of others.
That's just a fact of life. But if you happen to be an emergency responder, or a soldier, that little limitation can get you or other people killed.
Say you’re a firefighter entering a burning building looking for survivors, or a soldier trying to clear an area of enemies. Someone else may have gone into that same building last year, or last week, or five minutes ago. But the information about the various twists and turns is stored in that other person's brain, which doesn’t help you very much.
But help may have arrived. The scientists at the Massachusetts Institute of Technology have built an automatic building mapping computer that could make sharing positional data a lot easier.
As a person wearing the sensor array walks through a building, lasers scan the various distances between the mapper and the walls. Altimeter and barometer sensors estimate and track height, and cameras take snapshots so that they can be compared to the map the computer is drawing. The map is relayed to observers viewing it on a laptop.
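As one small illustration of the height-tracking piece, a barometer-based altitude estimate typically comes from the standard barometric formula, which converts a pressure reading into height relative to a reference pressure. The sketch below uses common international-standard-atmosphere constants; it is a generic textbook calculation, not the MIT team's actual algorithm.

    # Generic barometric altitude estimate (international standard atmosphere constants).
    # Illustrative only; not the MIT prototype's implementation.
    def pressure_to_altitude_m(pressure_hpa, reference_hpa=1013.25):
        """Approximate height in metres above the reference pressure level."""
        return 44330.0 * (1.0 - (pressure_hpa / reference_hpa) ** (1.0 / 5.255))

    # Climbing one floor (roughly 4 m) drops the pressure by about 0.5 hPa near sea level.
    print(pressure_to_altitude_m(1013.25))   # ~0 m at the reference pressure
    print(pressure_to_altitude_m(1012.75))   # ~4 m higher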
Amazingly enough, the device is built largely from off-the-shelf parts, such as the camera from an Xbox 360 Kinect sensor.
The experiment, which worked from the premise of a situation involving hazardous materials, expands on previous work done with robots. It currently allows the person wearing the sensor to press a button noting an area of interest, but the team expects to be able to add voice or text tags to the map.
Although still in prototype phase, it seems to work well. You can see the system in action below.
For this technology to go mainstream, the sensor would likely need to get a bit smaller. And some work on the end-user interface would also be needed so that the data can be easily shared in the field. But this is a promising development that could end up saving lives.
Posted by John Breeden II on Sep 26, 2012 at 9:39 AM | <urn:uuid:26f91b05-3b7b-41cb-930e-796de45ca99a> | CC-MAIN-2017-09 | https://gcn.com/blogs/emerging-tech/2012/09/mit-first-responder-mapping-sensor-array.aspx?admgarea=TC_HLS | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00579-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94971 | 503 | 3.359375 | 3 |
Wearable devices could be key to improving health, caring for patients with chronic diseases and understanding the impact of treatments. But there’s one snag: how do you get people to wear them?
For all the hype around smartwatches and fitness bands, not everyone wants to walk around with a computer strapped to their body. Studies have shown that people who buy wearables often wear them for a few months and then toss them in a drawer and forget about them.
For projects like President Barack Obama’s Precision Medicine Initiative, that’s a problem. So health experts got together at a conference in Boston Monday to decide what to do about it.
Getting people to use wearable health devices requires them to deliver meaningful, real-time information that takes into account people’s lifestyle and circumstances, the experts said. It sounds obvious, but it’s the type of thing the tech industry has a knack of forgetting.
If health devices don’t show value on a daily basis, people will reject them, said Maribeth Gandy Coleman, a researcher from the Georgia Institute of Technology.
For some, that value is about more than helping them manage a chronic illness. Patients want to understand the data and be able to use it to make positive changes in their health, she said.
Coleman spoke on a panel looking at how the National Institute of Health can maintain user engagement as it develops a medical research group of 1 million volunteers for the Precision Medicine Initiative, which aims to gather environmental, genetic and lifestyle information to understand how individual traits affect how people respond to treatments.
People have only so much room on their bodies for wearables, Coleman noted. They’ll avoid devices that don’t mesh with their daily routines or that make them socially or physically uncomfortable. And they’re unlikely to wear a device that display health data where others can see it, like on a large wrist display.
Apps that require people to constantly enter data probably won’t go over well either, she said.
And an app that encourages low-income families to eat more healthily needs to be realistic about the choices available to them.
“They don’t need an app that tells them to drink kale smoothies” if they live in a neighborhood where affordable, fresh fruits and vegetables aren’t available.
Apps that work the best “nudge” people to make lifestyle changes instead of “swooping in” and pointing out poor choices they’ve made, she said.
Social engagement through chat groups had a positive effect, but only a small number of people opt to use those platforms, said Bonnie Spring, director of the Center for Behavior and Health at Northwestern University’s Institute for Public Health and Medicine.
Spring helped build an app to help people lose weight and recruited eight people to use it. The results were mixed, with only half of those people using the app’s social support component, which allowed them to use a chat room to discuss their weight loss efforts.
Spring then expanded the project to a larger group. Only 11 percent of participants used the chat room function, but if a person felt like they had made a friend through the app, they continued to use it six months later.
Allowing people to provide input about what value they’d like an app to provide can help retain users, she said.
Northwestern is creating an app for its students aimed at promoting cardiovascular health. But students wouldn't be interested in an app that tells them to eat vegetables and get more exercise. Instead, Spring asked students about their academic, personal and professional goals. Those points will be incorporated into the app, and students will be told how a healthy lifestyle can help them reach those goals.
People want information they can use, and that information may be unique to each person, Spring said. | <urn:uuid:202facfd-d652-4f48-8a75-35af664d4899> | CC-MAIN-2017-09 | http://www.itnews.com/article/2953395/to-see-benefits-heath-wearables-must-keep-people-engaged.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00631-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.956325 | 798 | 2.796875 | 3 |
The Latin American biopesticides market was estimated at USD 144.71 million in 2015 and is projected to reach USD XX million by 2021, at a CAGR of XX% during the forecast period from 2016 to 2021. Biopesticides offer a unique and innovative approach to the management of agricultural pests, using formulated microbial agents as the active ingredient. Microbes that have been used in this approach include fungi, bacteria, viruses and nematodes. Each microbial biopesticide is unique, not only in its organism or active ingredient but also in its host, the environment in which it is applied, and the economics of production and control.
In Latin America, the pesticides market, covering both synthetic pesticides and biopesticides, is witnessing steady growth, and the key market drivers for the industry in the region are the adoption of herbicide-tolerant crops, increasing crop area and rising yields of agricultural produce. However, strict regulations imposed by the US EPA and the EU on pesticide residue limits for food crops will restrict the use of synthetic pesticides and increase demand for biopesticides in the region. While the prevalence of chemical or synthetic pesticides in Latin America will continue, human, animal and environmental health concerns will play a key role in driving growth for biopesticides. Emerging economies of Latin America are likely to take the lead in the adoption of both pesticides and biopesticides. Principal factors driving this include greater adoption of biopesticides in place of traditional chemical-based pesticides, as a consequence of increasing efficacy and enhanced consumer confidence in their performance.
Segmentation in this report categorizes biopesticides as bioherbicides, bioinsecticides, biofungicides and other biopesticides. By application area, biopesticide demand has been analyzed in terms of crop-based applications (including grains & cereals, oilseeds and fruits & vegetables) and non-crop-based applications (including turf & ornamental grass and other non-crop-based uses). Bioherbicides form the largest segment and biofungicides are the fastest growing.
Many countries, such as Argentina, have been at the forefront of introducing regulations aimed at minimizing the use of chemical pesticides within municipal limits, which are expected to provide the necessary momentum for biopesticides. Biological control agents (BCAs) or Biopesticides account for a small share of registered pesticides in Brazil because the market for unregistered BCAs is much higher.
The Biopesticides market, is witnessing a surge in corporate activity with several agrochemical companies entering into the agricultural biologicals sector either through dedicated R&D investment, licensing deals, partnerships, and mergers and acquisitions. The analysis of major companies in the Biopesticides industry has taken into account strategy adopted, financial revenues and the latest developments in the market. Some of the leading players covered include Bayer CropScience, BASF, Marrone Bio Innovations, De Sangosse and Valent Biosciences.
Key Deliverables in the Study | <urn:uuid:2cb9dd8b-2aaa-45b2-ab45-712779f8f1c8> | CC-MAIN-2017-09 | https://www.mordorintelligence.com/industry-reports/latin-american-biopesticides-market-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00631-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938264 | 620 | 2.71875 | 3 |
GCN LAB IMPRESSIONS
To study mental illness, scientists drive a computer crazy
Neural network claims responsibility for terrorist bombing after scientists simulate high levels of dopamine
- By Greg Crowe
- May 12, 2011
Computers are crazy.
That is what most of us will assert from time to time, usually when our computer-related tasks become the most frustrating. But as we all know, and will admit once we’ve calmed down, computers only try to do what they are told, and any apparent insanity is merely the product of environmental factors or bad data.
Or perhaps they can only attribute the problem to … human error. By the way, if a computer ever actually says this to you, get out of there quick. It's probably the second-worst thing a computer can say to you, right after, "Shall we play a game?"
Earlier this week a group of scientists in the Psychiatry Department at Yale University’s School of Medicine made some computers crazy, on purpose.
To better understand how schizophrenia affects the human brain, they designed a digital neural network and simulated excessive levels of dopamine by accelerating its learning process.
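The researchers' network is far more elaborate than anything shown here, but the basic intuition that "too much learning" destabilizes a system can be seen in a toy example: raise the learning rate of ordinary gradient descent far enough and the model diverges instead of converging. The sketch below is purely illustrative and has no connection to the Yale team's actual code.

    # Toy illustration: an excessive learning rate makes gradient descent diverge.
    # Purely illustrative; not the model used in the Yale study.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w

    def final_error(learning_rate, steps=50):
        w = np.zeros(3)
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= learning_rate * grad
        return np.linalg.norm(w - true_w)

    print("modest learning rate:   ", final_error(0.05))  # error shrinks toward zero
    print("excessive learning rate:", final_error(1.5))   # error explodes instead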
What they got were responses that had all sorts of disassociations and delusions, symptoms that occur in schizophrenic people. The neural network made up all sorts of stories of which it was the star. Once, the computer even claimed responsibility for a terrorist bombing, which we assume the government is investigating just on general principle.
The abstract describing the experiment explains their motivations for all this.
The research could lead to huge advances in finding a cure for the disease, or at least better understanding how our brains work. We wish them all the best, and hope that the computer does not get around to actually planning terrorist attacks — which would make a great B-movie plot, though we’ve all probably seen that one a bunch of times before.
Understand, we do not mean to make light of schizophrenia or the serious work of the researchers in understanding it. But computers are another matter. And humans making a computer crazy does have a certain man-bites-dog appeal. So, in the spirit of advancing medical science at the expense of computers, here are some ideas on other ways to drive a computer loopy.
1) You could make it manic-depressive by making all the transistors in its CPU bipolar ones. (Sorry – I couldn’t help it, read my previous article.)
2) You could run up to it and yell, “This statement is false!” This one always works in science fiction, and fiction would never lie to us, right?
3) Have it run Windows Vista. What? Too soon?
Think you can come up with a better way to drive a computer crazy? Let us know and we might just test it out in the lab.
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:46fade54-b841-4983-99c9-e8ae8ebde707> | CC-MAIN-2017-09 | https://gcn.com/articles/2011/05/12/scientists-drive-computer-crazy-to-study-schizophrenia.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00155-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.955102 | 610 | 2.640625 | 3 |
Heating Up the Exascale Race by Staying Cool
January 26, 2017 Ben Cotton
High performance computing is a hot field, and not just in the sense that it gets a lot of attention. The hardware necessary to perform the countless simulations performed every day consumes a lot of power, which is largely turned into heat. How to handle all of that heat is a subject that is always on the mind of facilities managers. If the thermal energy is not moved elsewhere in short order, the delicate electronics that comprise the modern computer will cease to function.
The computer room air handler (CRAH) is the usual approach. Chilled water chills the air, which is forced through the room at large. Cold air blows into the front of the racks and hot air comes out the back, where it is sent back to be chilled. This method works well enough in many use cases, although it can lead to unhappy operations staff if someone forgets their jacket. Of course, the power necessary to provide chilled water to the CRAH is in addition to what is needed to operate the compute hardware itself. This has lead some large sites to investigate alternative means of cooling.
Hyperscalers like Google and Facebook have taken to building datacenters near the Arctic Circle, giving them a year-round supply of very cold air. The Norwegian company Green Mountain built a datacenter in a cave, and used water from a nearby fjord for cooling. Using free resources is compelling, but geography limits the available locations.
Some sites choose to bring the coolant closer to the source. Active chilled water doors draw the exhaust air across water-filled coils, removing the heat before it enters the “hot” aisle. Coolant may even be brought into direct contact with components, or the entire machine may be immersed into a thermally-conductive liquid. The Cray-2 was one of the first to spend life in the bathtub, but Cray and others have used immersion cooling since. Immersion cooling allows for higher overclocking, but the challenges of building the tubs and repairing oil-covered hardware make it unappealing for all but rare cases.
However the heat is removed, the trend toward denser systems means more heat per unit of server volume, and thus the push toward more efficient methods of handling the heat. Of course, the only thing better than efficiently removing heat is not producing heat in the first place. Since June 2013, the efficiency of the top system on the Green500 list has roughly tripled (see Figure 1). This is good news both for the immediate term and for future exascale goals.
The top of the Green500 list has shown approximately linear efficiency improvements. If that holds, the most efficient supercomputers will have the efficiency to hit an exaFLOP with 20-30 megawatts by the end of 2018 (see Figure 2). Of course, it’s not clear that the efficiency of the Green500 leader could be maintained at a higher total performance. It’s telling that the same system has not topped the Green500 and Top500 lists simultaneously. Furthermore, expecting a linear rate of change to hold forever is ultimately a losing proposition. The question is when, not if, progress slows.
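The arithmetic linking efficiency to power budget is simple: a 1-exaflops machine running at E gigaflops per watt draws 10^9 / E watts. The short sketch below runs that conversion for a few efficiency points; the Green500 figures used are rounded, illustrative values rather than exact list entries.

    # Back-of-the-envelope: megawatts needed to sustain 1 exaflops at a given efficiency.
    def megawatts_for_exaflops(gflops_per_watt):
        return 1e9 / gflops_per_watt / 1e6   # 1 exaflops = 10^9 gigaflops

    # Roughly 3 GFLOPS/W topped the Green500 in mid-2013 and roughly 9.5 in late 2016
    # (rounded); the 20-30 MW exascale target requires about 33-50 GFLOPS/W.
    for efficiency in (3.2, 9.5, 33.3, 50.0):
        print(f"{efficiency:5.1f} GFLOPS/W -> {megawatts_for_exaflops(efficiency):7.1f} MW")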
With a four or five year gap between the desired efficiency threshold and the target date for a U.S. exascale system, the power goal seems realistic. Current Top500 leader Sunway TaihuLight also holds the number four spot on the Green500 list (and was in third place on the June 2016 list). This suggests that the more efficient machines no longer need to be less powerful. If this trend continues, the first exascale system may come in under the power budget.
Meanwhile, researchers are actively working to reduce the heat produced by electrical components. At the International Electron Devices Meeting in December, a team from Purdue University and Korea Institute of Science and Technology presented a paper that describes their research into reducing the heat of floating body transistors. Most notably, they found that a tradeoff between electrostatic control and heat outflow is not intrinsic – changes in device design can reduce the self-heating.
Materials used to manufacture the transistor make a difference. Switching from silicon dioxide to aluminum oxide decreased the channel temperature by 50-70% in the study done by the Purdue/KIST team. Other research suggests Germanium-based transistors may reduce heat compared to Silicon. This harkens back to the early days of the transistor, when purifying Silicon was too expensive. These days, Silicon is cheaper and Germanium is in demand for photonics and photovoltaic applications, so it’s not clear that manufacturers will see an economic incentive to make the switch in large quantities.
Changes in physical design have smaller, but meaningful improvements as well. For example, the Purdue/KIST team found that increasing the size of the drain pad in the transistor can lower the self-heating, but also more evenly distribute the heat generated. Since uneven distribution of heat can impact the performance and longevity of electronic elements, this has double benefit at extremely large scales.
Delivering an exascale system will not be an easy goal. As we wrote in September:
We expect more exascale projects and more delays as the engineering challenges mount. But we also think that compromises will be made in the power consumption and thermals to get workable systems that do truly fantastic things with modeling and simulation.
Reducing and managing the heat generated by the systems is only one part of a very large puzzle. But the trends in the Green500 list and work being done at the transistor level to reduce self-heating are encouraging. If today’s research can make it into production by 2023, the target can be met. By combining better performance per watt with reduced-heat components, power and cooling do not have to be a roadblock on the way to exascale. | <urn:uuid:c6276162-b2dd-4955-b5f8-3caaa5d1ac34> | CC-MAIN-2017-09 | https://www.nextplatform.com/2017/01/26/heating-exascale-race-staying-cool/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00331-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945556 | 1,218 | 2.6875 | 3 |
The event would certainly be momentous for the space exploration world - the first spacecraft to actually leave our solar system - but NASA says that, despite reports to the contrary, its Voyager 1 has not left our realm -- just yet, that is.
"The Voyager team is aware of reports today that NASA's Voyager 1 has left the solar system," said Edward Stone, Voyager project scientist based at the California Institute of Technology, Pasadena, Calif. "It is the consensus of the Voyager science team that Voyager 1 has not yet left the solar system or reached interstellar space. In December 2012, the Voyager science team reported that Voyager 1 is within a new region called 'the magnetic highway' where energetic particles changed dramatically. A change in the direction of the magnetic field is the last critical indicator of reaching interstellar space, and that change of direction has not yet been observed."
[RELATED: What is so infinitely cool about Mars?]
The reports come from research published online in the journal Geophysical Research Letters (GRL). According to reports the debate on whether or not the craft has left the solar system revolves around what data the system is sending back about its surroundings. How those data are interpreted to be precise.
From a BBC report on the research: "Voyager has been detecting a rise in the number of high-energy particles, or cosmic rays, coming towards it from interstellar space, while at the same time recording a decline in the intensity of energetic particles coming from behind, from our Sun. A big change occurred on 25 August last year, which the GRL paper's authors say was like a 'heliocliff'. 'Within just a few days, the heliospheric intensity of trapped radiation decreased, and the cosmic ray intensity went up as you would expect if it exited the heliosphere,' explained Prof Bill Webber from New Mexico State University in Las Cruces." Prof Webber acknowledges there is an ongoing debate about the probe's status.
NASA said that from January 2009 to January 2012 there had been a gradual increase of about 25% in the amount of galactic cosmic rays Voyager was encountering, but that beginning on May 7, 2012, the cosmic ray hits had increased five percent in a week and nine percent in a month.
"The latest data indicate that we are clearly in a new region where things are changing more quickly. It is very exciting. We are approaching the solar system's frontier," said Ed Stone, Voyager project scientist at the California Institute of Technology said at the time. "The laws of physics say that someday Voyager will become the first human-made object to enter interstellar space, but we still do not know exactly when that someday will be."
From NASA last summer: "This marked increase is one of a triad of data sets which need to make significant swings of the needle to indicate a new era in space exploration. The second important measure from the spacecraft's two telescopes is the intensity of energetic particles generated inside the heliosphere, the bubble of charged particles the sun blows around itself. While there has been a slow decline in the measurements of these energetic particles, they have not dropped off precipitously, which could be expected when Voyager breaks through the solar boundary. The final data set that Voyager scientists believe will reveal a major change is the measurement in the direction of the magnetic field lines surrounding the spacecraft. While Voyager is still within the heliosphere, these field lines run east-west. When it passes into interstellar space, the team expects Voyager will find that the magnetic field lines orient in a more north-south direction. Such analysis will take weeks, and the Voyager team is currently crunching the numbers of its latest data set."
Last June, NASA said that the boundary between interstellar space and the bubble of charged particles the sun blows around itself is likely between 10 and 14 billion miles (16 to 23 billion kilometers) from the sun, with a best estimate of approximately 11 billion miles (18 billion kilometers). Since Voyager 1 has crossed that threshold, it could cross into interstellar space at any time.
A fully functioning quantum computer is still twelve years off, according to Intel, but the company is already plowing research funding into the field.
On Thursday, Intel promised to fund QuTech, a research unit at the Technical University of Delft in the Netherlands, to the tune of US$50 million over 10 years, and to provide additional staff and equipment to support its work.
QuTech hopes the partnership will allow it to combine its theoretical work on quantum computing with Intel's manufacturing expertise to produce quantum computing devices on a larger scale.
Quantum computers are composed of qubits that can take on multiple values simultaneously, unlike the bits stored and processed in traditional computers, which are either 0s or 1s. This multiplicity of values makes quantum computing, at least in theory, highly useful for parallel computing problems such as financial analysis, molecular modelling or decryption.
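One way to see why that multiplicity matters, and why such machines are so hard to emulate on conventional hardware, is to note that a register of n qubits is described by 2^n complex amplitudes. The toy C sketch below is purely illustrative (it has no connection to Intel's or QuTech's work); it simply shows how quickly that state outgrows classical memory:

```c
#include <complex.h>
#include <stdio.h>

/* Classically simulating an n-qubit register means storing 2^n complex
 * amplitudes, so every additional qubit doubles the memory required.   */
int main(void)
{
    int sizes[] = { 20, 30, 40, 50 };

    for (int i = 0; i < 4; i++) {
        int n = sizes[i];
        unsigned long long amplitudes = 1ULL << n;
        double gigabytes = (double)amplitudes * sizeof(double complex) / 1e9;
        printf("%2d qubits -> %llu amplitudes (~%g GB of state)\n",
               n, amplitudes, gigabytes);
    }
    return 0;
}
```

A quantum processor holds that state natively, which is where the hoped-for speedups on parallel problems such as financial analysis and molecular modelling come from.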
There are a number of practical problems to be overcome before such computers become more than lab curiosities, including building them at scale, and dealing with cooling. Today's quantum computing systems contain only a few qubits, but systems will have to have thousands of qubits to be really useful. And cooling them takes more than a big fan or a cold aisle in the data center: To reveal the quantum behavior of the materials they are made from, qubits need to be cooled to within a few degrees of absolute zero -- to around -270C.
Intel chose to work with QuTech because of its long experience in the field, and particularly for its work on the interconnects used to link parts of quantum computers together, CEO Brian Krzanich said in an open letter Thursday.
The company is not the only one taking a commercial interest in quantum computing research. IBM is investing too.
In April, IBM researchers said they had designed a square qubit that would allow many qubits to be built on a single chip, and had also developed two ways to detect quantum errors, first steps on the way to the construction of error-correcting quantum systems.
But not everyone is optimistic about the effects quantum computing will have on society. Its ability to perform many calculations in parallel could allow it to crack in a matter of seconds encryption systems that would otherwise resist years of attack from supercomputers. It's a vision some are referring to as the cryptocalypse, or the total end of trust on the Internet, and it has prompted the NSA, for one, to prepare a move to "quantum-resistant algorithms in the not-too-distant future." | <urn:uuid:ec026002-f9af-4d1c-9224-5fe5e9e4c338> | CC-MAIN-2017-09 | http://www.itnews.com/article/2979729/intel-promises-50m-for-quantum-computing-research.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00451-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.955984 | 515 | 2.796875 | 3 |
The U.S. Army is exploring the use of drones to deliver supplies to soldiers on the battlefield, a potentially game-changing use for an emerging technology that until now has been mostly identified with the future delivery of household items.
Currently, supplies are mostly transported in road convoys that are vulnerable to attack because they travel along known supply routes. Drones would take supplies into the air, make it possible to modify supply routes and, perhaps most importantly, take soldiers out of high-risk situations.
"When we use autonomous air transport, we create a lot of dilemmas for adversaries, because we're not limited to a ground route," said Larry Perecko, branch chief for Science and Technology at the U.S. Army's Combined Arms Support Command (CASCOM) Sustainment Battle Lab in Fort Lee, Virginia.
There are other advantages, too.
Perecko spoke with soldiers who told of their frustration delivering supplies in mountainous terrain, like that in Afghanistan. Often drivers could see the delivery location, but it would take eight hours of driving to get there because of the slow going on treacherous mountain roads.
He said inspiration came from the U.S. Marines, which successfully used an unmanned KMAX helicopter to deliver 2 million kilograms of supplies to units in Afghanistan.
CASCOM had been concentrating on automating road convoys, but "as we looked at the Marines, we asked, 'Why not by air?'" Perecko said.
And so the Sustainment Aerial Mobility Vehicle project was born.
It turns out the Army Research Lab had already been looking at similar technology: a hoverbike produced by the U.K.'s Malloy Aeronautics. Originally envisioned as a tool to transport troops on a battlefield, it also has an unmanned variant called the Marshall Drone.
That technology is now being explored as part of the project. One of the project's goals is making a drone capable of piloted or remote operation. It would have a 200-kilometer range, a 70-kilometer-per-hour cruising speed and a 350-kilogram payload capacity.
But those specifications could change as the Army reassesses and refines its requirements.
In November last year, engineers from Malloy Aeronautics traveled to Fort Lee to demonstrate a one-third-scale version of the drone.
"It's still a work in progress," Perecko said. "They showed how you would program it and how it would execute a mission."
It's not just the Army that is working on such projects. Sikorsky has been working on an unmanned version of its UH-60 Blackhawk helicopter for autonomous cargo missions.
"We've done a couple of experiments to date and we’re getting pretty good results from those experiments," Perecko said. "We think it has a lot of promise. The technology is on the right path." | <urn:uuid:86010322-1bb0-4286-98bd-08f9f63071af> | CC-MAIN-2017-09 | http://www.itnews.com/article/3035534/move-over-amazon-the-us-military-is-also-developing-a-delivery-drone.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00327-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.96359 | 593 | 3.09375 | 3 |
As if tracking down bugs in a complex application isn't difficult enough, programmers now must worry about a newly emerging and potentially dangerous trap, one in which a program compiler simply eliminates chunks of code it doesn't understand, often without alerting the programmer of the missing functionality.
The code that can lead to this behavior is called optimization-unstable code, or "unstable code," though it is more of a problem with how compilers optimize code, rather than the code itself, said Xi Wang, a researcher at the Massachusetts Institute of Technology. Wang discussed his team's work at the USENIX annual technical conference, being held this week in Philadelphia.
With unstable code, programs can lose functionality or even critical safety checks without the programmer's knowledge.
That this problem is only now coming to the attention of researchers may mean that many programs considered as secure, especially those written in C or other low-level system languages, may have undiscovered vulnerabilities.
The researchers have developed a new technique for finding unstable code in C and C++ programs, called Stack, that they hope compiler makers will use when updating their products.
Using Stack, the research team has found over 160 bugs in various programs due to unstable code.
They found 11 bugs in the open source Kerberos network authentication protocol, all of which were subsequently fixed by the Kerberos developers.
Stack also found 68 potential bugs in the PostgreSQL database management software. Only after they had fashioned some sample code using bugs that crashed PostgreSQL did the database's core developers remedy the issues with 29 new patches.
Unstable code may be hard to pinpoint because, to the developer, it may look, and behave, like functional code. It may also compile into a working program with no problems. Only when the compiler tries to optimize the code for better performance do the issues arise.
A compiler translates the source code of a program into machine code, using the specifications of the programming language itself. Compilers can also optimize code, or examine the code logic to look for ways it can execute more efficiently, which would improve the performance of the running program.
A compiler could, for example, drop a subroutine that is never called. But compilers could also drop code that falls outside the typical programming behavior, even if the programmer may have specific reasons for crafting the program in such a way.
For instance, a routine that guards against buffer overflows may check such a large boundary of memory beyond what is allocated for the program that the compiler may assume it is a mistake and eliminate that safety check altogether, Wang noted. The programmer would never know that the resulting program has no defense against buffer overflow attacks.
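A classic shape this problem takes is an overflow guard written with pointer arithmetic. The snippet below is a hedged illustration rather than code taken from the paper: because pointer overflow is undefined behavior in C, an optimizing compiler is entitled to assume the condition can never be true and may silently delete the branch.

```c
#include <stddef.h>

/* Intended guard: reject a request whose length would wrap the buffer
 * pointer around the end of the address space.                         */
int process(char *buf, size_t len)
{
    if (buf + len < buf) {  /* pointer overflow is undefined behavior,   */
        return -1;          /* so an optimizer may drop this whole check */
    }

    /* read_into(buf, len);  -- hypothetical work on the buffer */
    return 0;
}
```

Rewriting the guard to compare lengths directly, for example rejecting any len larger than the buffer's known size, keeps the check out of undefined-behavior territory and therefore out of the optimizer's reach.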
The research looked at 16 open source and commercial C/C++ compilers -- from companies such as Intel, IBM and Microsoft -- and found that they all dropped unstable code.
A compiler can issue warnings when it drops code, though compilers typically issue so many warnings, especially for large programs, that a notice of code being eliminated may be lost in the deluge of other largely inconsequential messages.
"I think compiler developers have known about this for years," Wang said.
Not all the blame should be placed on the compiler makers, noted Peng Wu, a researcher at Huawei America Labs who was at the presentation.
In many cases, the specification of the language itself, which the compilers are based on, does not offer any guidance on how to handle certain conditions, she noted. So each compiler handles the cases of unstable code differently.
Also, the programmer should understand the trade-offs of using optimization, Wu said. For instance, if the entire code absolutely must stay fully intact, it shouldn't be optimized, even though that means giving up the performance gains optimization can provide.
Wu noted that optimization was a chief priority for compiler makers in previous decades, when developers tried to get the best performance possible from the hardware. Over the past decade, however, more attention has been placed on finding bugs, due to the growing impact of security vulnerabilities, and so the problem of unstable code is now surfacing.
Cellular Data Bites Back at Malaria
Despite the immense complexity of predicting the spread of disease, mathematicians have developed models and formulas that predict how many people will succumb to a certain disease over time—formulas drawn from the same family of exponential models that underlie Newton's Law of Cooling and simple population growth. But in order to really get a handle on a disease, the location and trajectory of an outbreak are paramount, as demonstrated by new research in public health.
A group of researchers from the Harvard School of Public Health sought to gain insight on the spread of disease by combining big data from cell phone usage with malaria prevalence maps in order to track the movement of the disease in their paper, “Quantifying the impact of human mobility on malaria.”
Said author Caroline Buckee, an assistant professor of epidemiology at HSPH, "This is the first time that such a massive amount of cell phone data—from millions of individuals over the course of a year—has been used, together with detailed infectious disease data, to measure human mobility and understand how a disease is spreading."
The team analyzed the movement of nearly 15 million Kenyan cell phone subscribers over the course of a year (from June 2008 to June 2009) and compared it to the instances of malaria found in the country using a map provided by the Kenya Medical Research Institute and the Malaria Atlas Project. The goal was to identify both source and sink points, or where the disease originates and where the disease primarily ends up.
Not surprisingly, they found that one of the primary sources was the area near Lake Victoria, as lakes are prime breeding grounds for mosquitoes. However, according to the study, a surprisingly large portion of non-native infections ended up in Nairobi, Kenya's capital.
The researchers, using text and call information, figured out that Nairobi was a sink by mapping every journey taken by each of the nearly 15 million cell phone subscribers. 15 million people journeying over the course of a year produces quite a large dataset even before comparing it to the malaria prevalence map. Through that data it was discovered that many people who travel to mosquito hotspots like Lake Victoria or the shore originate in Nairobi and end up bringing the disease back with them.
Malaria kills one million people per year, the vast majority of which are children under the age of 5 in Sub-Saharan Africa. The disease has received worldwide attention, with organizations like Nothing But Nets raising millions of dollars for mosquito bednets to prevent carriers from infecting people during the night. This research could aid that cause by pointing to locations where nets would be more effective, and possibly even sending text alerts to people who are moving into a highly infected area. | <urn:uuid:a8529741-95fc-433c-8118-1b8a6dfe96f3> | CC-MAIN-2017-09 | https://www.datanami.com/2012/10/31/cellular_data_bites_back_at_malaria/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00323-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.956383 | 555 | 3.46875 | 3 |
Someday, will all learning be as quick and convenient as the Kung Fu lessons downloaded into Keanu Reeves' brain in The Matrix?
Researchers from Boston University and Japan's ATR Computational Neuroscience Laboratories have figured out how to use data from functional MRIs to create a method of neurofeedback that can project a pre-recorded pattern into some sections of the brain.
The resulting pattern in the brain is very similar to the same material imprinted using more conventional learning techniques, according to a paper published in the journal Science.
The authors figured out a way to effectively imprint information onto the visual cortex – information that was absorbed well enough to allow human test subjects to perform vision-oriented tasks the imprinted pattern described with more efficiency than they could manage beforehand.
Their conclusion is that it may be possible to use the approach to "teach" humans some things the same way we "teach" computers – by downloading the lesson into available storage, relying on the self-deterministic ability of the brain itself to adapt the imprinted material into a form it can use in much the same way it would if it had learned the material the old-fashioned way.
So, is it possible?
Is it realistically possible?
How hard would it be to port an app to your brain?
Consider how difficult it is to transfer not just raw data, but instructions and data from one computer to another and get the new one to perform correctly.
Data is relatively easy, which implies you might be able to transfer memories, or raw information like the names and dates in office of all the American presidents into a human brain fairly easily.
But programmatic commands? Go here. Do this. Kick Agent Smith(s). Change facial expression (it's Keanu, remember?).
No. Viruses, bad programming, misconfigured data-projection machines and all the other things that could possibly go wrong with silicon-based data and instructions can go far wronger with meat-based data and instructions.
And that's just assuming there's no problem with the receiving platform itself. Getting one computer to accurately run a set of instructions designed to run on one with different components, a different version of the operating system, different drivers, diagnostics and programmatic interfaces is almost impossible.
Usually it requires throwing out the new machine and replacing it with one almost identical to the old one. Or recoding every bit of instruction by hand so it will run on the new machine.
Or building an emulation layer so the program will think it's running on the old machine and the new machine will think the program is written for it.
Emulation works, but slows everything down and is almost always inaccurate enough to create exciting new bugs in the new system that may not be found for years.
Human brains are a lot less standardized than computer hardware. The OSes are all wildly dissimilar; the wetware comes in such a variety of configurations most can't be considered to be the same "platform" from a programming perspective.
How hard would your brain resist the implant of knowledge?
Even assuming instructions simplified enough that they won't be warped in transmission (or warp the mind trying to perceive them), there's a good chance any instructions would be rejected like a bad liver or the wrong side of a political hot-button.
Human brains obviously have an as-yet-unidentified physical characteristic that allows them to reject even obvious and well-proven ideas that conflict with more dearly held beliefs. How else can you explain all those fools who disagree with you on abortion, defense, taxes, immigration, drugs, education and whether Starbucks and Hipsters should be allowed to live peacefully in neighborhoods that are otherwise not terribly annoying.
Trying to squeeze anything into a human head is tremendously difficult, dangerous to both squeezee and squeezer and frustrating due to its short half life. Ask any teacher two days after the end of a semester, or even yourself half an hour after the end of a final exam.
Human knowledge is fleeting and ephemeral; human error lasts forever.
The only things an adult human brain can retain for the long term are those that are either false, trivial or diabolical. (How long has it been since you've heard "Tie a Yellow Ribbon 'Round the Old Oak Tree"? Still remember the tune? Can't get it out of your head? Sorry.)
Actually the technique probably will lead to some form of effective lightweight training, though not about anything involving belief or even, most likely, deep decision-making.
Remembering where you left your keys or where the light switch is in a room is easy compared to understanding calculus or, for example, Kung Fu.
Even if the ability to imprint functional information ever works, it's too much to expect it would ever work well enough to overcome the two characteristics of the human brain that have been the ultimate downfall of educators, dictators and saints throughout human history: determined ignorance among those who choose not to learn, and stubborn bloody-mindedness among those who do.
With revelations about mass surveillance in the news everywhere, an obscure feature of SSL/TLS called forward secrecy has suddenly become very interesting. So what is it, and why is it so interesting now?
Session key generation and exchange
Every SSL connection begins with a handshake, during which the parties communicate their capabilities to the other side, perform authentication, and agree on their session keys, in the process called key exchange. The session keys are used for a limited time and deleted afterwards. The goal of the key exchange phase is to enable the two parties to negotiate the keys securely, in other words, to prevent anyone else from learning these keys.
Several key exchange mechanisms exist, but, at the moment, by far the most commonly used one is based on RSA, where the server’s private key is used to protect the session keys. This is an efficient key exchange approach, but it has an important side-effect: anyone with access to a copy of the server’s private key can also uncover the session keys and thus decrypt everything.
For some, the side-effects are desirable. Many network security devices, for example, can be configured to decrypt communication (and inspect traffic) when given servers’ private keys. Without this capability, passive IDS/IPS and WAF devices have no visibility into the traffic and thus provide no protection.
In the context of mass surveillance, however, the RSA key exchange is a serious liability. Your adversaries might not have your private key today, but what they can do now is record all your encrypted traffic. Eventually, they might obtain the key in one way or another (e.g., by bribing someone, obtaining a warrant, or by breaking the key after sufficient technology advances) and, at that time, they will be able to go back in time to decrypt everything.
Diffie-Hellman key exchange
An alternative to RSA-based key exchange is to use the ephemeral Diffie-Hellman algorithm, which is slower, but generates session keys in such a way that only the two parties involved in the communication can obtain them. No one else can, even if they have access to the server's private key (1).
After the session is complete, and both parties destroy the session keys, the only way to decrypt the communication is to break the session keys themselves. This protocol feature is known as forward secrecy (2).
Now, breaking strong session keys is clearly much more difficult than obtaining servers’ private keys (especially if you can get them via a warrant). Furthermore, in order to decrypt all communication, now you can no longer compromise just one key (the server’s), but you have to compromise the session keys belonging to every individual communication session.
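The mechanism is easiest to see with deliberately tiny numbers. The sketch below illustrates only the bare Diffie-Hellman arithmetic; real TLS uses groups of 2048 bits or more, and ECDHE replaces the modular exponentiation with elliptic-curve operations:

```c
#include <stdio.h>

/* Square-and-multiply modular exponentiation for toy-sized numbers. */
static unsigned long modpow(unsigned long base, unsigned long exp,
                            unsigned long mod)
{
    unsigned long result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    unsigned long p = 23, g = 5;        /* public parameters               */
    unsigned long a = 6, b = 15;        /* ephemeral secrets, never sent   */
    unsigned long A = modpow(g, a, p);  /* the only values on the wire ... */
    unsigned long B = modpow(g, b, p);  /* ... besides p and g             */

    /* Both sides derive the same session secret; the server's long-term
     * private key never enters the calculation.                          */
    printf("client derives %lu, server derives %lu\n",
           modpow(B, a, p), modpow(A, b, p));
    return 0;
}
```

An eavesdropper who records p, g, A and B, and even later obtains the server's RSA key, still has to attack each session's discrete logarithm individually to recover its keys.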
SSL and forward secrecy
SSL supports forward secrecy using two algorithms, the standard Diffie-Hellman (DHE) and the adapted version for use with Elliptic Curve cryptography (ECDHE). Why isn’t everyone using them, then?
Assuming the interest and knowledge to deploy forward secrecy is there, two obstacles remain:
- DHE is significantly slower. For this reason, web site operators tend to disable all DHE suites in order to achieve better performance. In recent years, we’ve seen DHE fall out of fashion. Internet Explorer 9 and 10, for example, support DHE only in combination with obsolete DSA keys.
- ECDHE too is slower, but not as much as DHE. (Vincent Bernat published a blog post about the impact of ECDHE on performance, but be warned that the situation might have changed since 2011. I am planning to do my own tests soon.) However, ECDHE algorithms are relatively new and not as widely supported. For example, they were added to OpenSSL only fairly recently, in the 1.x releases.
If you’re willing to support both ECDHE and DHE, then you will probably be able to support forward secrecy with virtually all clients. But ECDHE alone is supported by all major modern browsers, which means that even with only ECDHE you might be able to cover a very large chunk of your user base. The decision what to do is entirely up to you. Google, for example, do not support any DHE suites on their main web sites.
Configuring forward secrecy
Enabling forward secrecy can be done in two steps:
1. Configure your server to actively select the most desirable suite from the list offered by SSL clients.
2. Place ECDHE and DHE suites at the top of your list. (The order is important; because ECDHE suites are faster, you want to use them whenever clients support them.)
Knowing which suites to enable and move to the top can be tricky, because not all browsers (devices) support all forward secrecy suites. At this point you may want to look for inspiration from those who are already supporting forward secrecy, for example Google.
In a nutshell, these are some of the suites you might want to enable (3) and push (close) to the top:
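As an illustrative sketch only (the exact suite names depend on your clients, your key type and your OpenSSL build), an OpenSSL-based server could wire up this preference roughly as follows, with ECDHE suites first and DHE as the fallback:

```c
#include <openssl/ssl.h>

/* Sketch: favour ECDHE, fall back to DHE, and make the server's own
 * ordering win during negotiation. The suite string is illustrative,
 * not a definitive recommendation.                                    */
int configure_forward_secrecy(SSL_CTX *ctx)
{
    const char *ciphers =
        "ECDHE-RSA-AES128-SHA:"
        "ECDHE-RSA-AES256-SHA:"
        "DHE-RSA-AES128-SHA:"
        "DHE-RSA-AES256-SHA";

    if (SSL_CTX_set_cipher_list(ctx, ciphers) != 1)
        return -1;

    SSL_CTX_set_options(ctx, SSL_OP_CIPHER_SERVER_PREFERENCE);
    return 0;
}
```

Step 1 above corresponds to SSL_OP_CIPHER_SERVER_PREFERENCE; step 2 is simply the ordering of the cipher string.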
To make this process easier, I’ve added a new feature to the SSL Labs test; this feature, tentatively called handshake simulation, understands the capabilities of major browsers and determines which suite would be negotiated with each. As a result, it also tells you if the negotiated suite supports forward secrecy.
When you get it right, the handshake simulation shows which suite would be negotiated with each major browser, and you will be rewarded with a strong forward secrecy indicator in the summary section at the top of the report.
Alternative attack vectors
Although the use of Diffie-Hellman key exchange eliminates the main attack vector, there are other actions powerful adversaries could take. For example, they could convince the server operator to simply record all session keys.
Server-side session management mechanisms could also impact forward secrecy. For performance reasons, session keys might be kept for many hours after the conversation had been terminated.
In addition, there is an alternative session management mechanism called session tickets, which uses separate encryption keys that are rarely rotated (possibly never in extreme cases). Unless you understand your session tickets implementation very well, this feature is best disabled to ensure it does not compromise forward secrecy.
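With an OpenSSL-based server, for instance, tickets can be switched off at the context level. This is a minimal sketch, assuming a reasonably recent OpenSSL:

```c
#include <openssl/ssl.h>

/* Sketch: disable TLS session tickets so that a long-lived, rarely
 * rotated ticket key cannot quietly undo forward secrecy.           */
void disable_session_tickets(SSL_CTX *ctx)
{
    SSL_CTX_set_options(ctx, SSL_OP_NO_TICKET);
}
```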
(1) Someone with access to the server’s private key can, of course, perform an active man in the middle attack and impersonate the server. However, they can do that only at the time the communication is taking place. It is not possible to pile up mountains of encrypted traffic to decrypt later.
(2) It’s also sometimes called perfect forward secrecy, but, because it is possible to uncover the communication by breaking the session keys, it’s clearly not perfect.
(3) I am assuming the most common case, that you have an RSA key (virtually everyone does). There’s a number of ECDHE suites that need to enabled if you’re using an ECDSA key. I am also ignoring GCM suites for the time being, because they are not very widely supported. I am also ignoring any potential desire to mitigate BEAST by favouring RC4, which might be impossible to do across all client devices. | <urn:uuid:59e71178-31b6-4dfc-99f3-b21298ec8c0a> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2013/06/26/ssl-labs-deploying-forward-secrecy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00375-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.931003 | 1,478 | 3.265625 | 3 |
After working on developing a self-driving car for several years, Google has built an autonomous car from the ground up -- with no steering wheel, no accelerator and no brake.
"Ever since we started the Google self-driving car project, we've been working toward the goal of vehicles that can shoulder the entire burden of driving," wrote Chris Urmson, director of Google's Self-Driving Car Project in a blog post. "Just imagine: Seniors can keep their freedom even if they can't keep their car keys. And drunk and distracted driving? History."
Google produced this video demonstrating its driverless car prototype.
He said Google is building prototypes of a new kind of self-driving car - one that can't be driven by people. That means passengers couldn't take over and drive the car if they wanted to or if the car was making them nervous.
"They won't have a steering wheel, accelerator pedal, or brake pedal... because they don't need them," wrote Urmson. "Our software and sensors do all the work. The vehicles will be very basic. We want to learn from them and adapt them as quickly as possible, but they will take you where you want to go at the push of a button. And that's an important step toward improving road safety and transforming mobility for millions of people."
It's no surprise that Google would like to take people out of the driver's seat with autonomous cars.
Last month, the company reported that it had taken on a new challenge. Google was taking its self-driving cars out on to city streets, tackling a more challenging driving environment crowded with jaywalking pedestrians, bicyclists and drivers circling in search of parking spaces.
According to Google, computers, and thus driverless cars, are better at responding to the unexpected on the road.
"As it turns out, what looks chaotic and random on a city street to the human eye is actually fairly predictable to a computer," wrote Urmson in a blog post last month. "As we've encountered thousands of different situations, we've built software models of what to expect, from the likely (a car stopping at a red light) to the unlikely (blowing through it)."
Rodney Brooks, co-founder and former CTO of iRobot and co-founder and CTO of Rethink Robotics, might agree with him. Speaking at a computer science and artificial intelligence symposium at MIT in Cambridge, Mass., today, Brooks said autonomous cars will soon be seen as elder-care robots.
"Over the next 40 years, there will be a huge growth in the number of elderly residents," he said. "We need the elder-care robots and self-driving cars... It'll make driving easier. It detects pedestrians. It has sensors up the wazoo. It will give me the dignity of having control of my life longer."
Based on Google's assertion that robotic cars are better drivers than humans, it makes sense that the company's next step was to build a car that doesn't need a human driver at the controls.
"It was inspiring to start with a blank sheet of paper and ask, "What should be different about this kind of vehicle?" Urmson wrote. "We started with the most important thing: safety. They have sensors that remove blind spots, and they can detect objects out to a distance of more than two football fields in all directions, which is especially helpful on busy streets with lots of intersections."
At this point, Google has capped the speed of its first autonomous vehicles at 25 mph. The cars also are built for testing, not luxury, so they're light on comforts, basically coming with two seats, a space for passengers' belongings, buttons to start and stop, and a screen that shows the route.
Google is set to build about 100 prototype vehicles, according to Urmson. Later this summer, the company's testers will start working with the first prototypes, which, in case of trouble, will still come equipped with manual controls.
If those tests go well, Google plans to then move on to a small pilot program, in which the cars are more widely tested on highways and city roads in California over the next few years.
"We're going to learn a lot from this experience, and if the technology develops as we hope, we'll work with partners to bring this technology into the world safely," Urmson added. "We're looking forward to learning more about what passengers want in a vehicle where their number one job is to kick back, relax, and enjoy the ride."
This story, "Google's Autonomous Car is Truly Hands-Off -- There's No Steering Wheel" was originally published by Computerworld. | <urn:uuid:4a95ba2f-027c-4907-a252-93da354c1727> | CC-MAIN-2017-09 | http://www.cio.com/article/2375930/consumer-technology/google-s-autonomous-car-is-truly-hands-off----there-s-no-steering-wheel.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00143-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.972642 | 1,070 | 2.8125 | 3 |
Deep learning efforts today are run on standard computer hardware using convolutional neural networks. Indeed the approach has proven powerful by pioneers such as Google and Microsoft. In contrast neuromorphic computing, whose spiking neuron architecture more closely mimics human brain function, has generated less enthusiasm in the deep learning community. Now, work by IBM using its TrueNorth chip as a test case may bring deep learning to neuromorphic architectures.
Writing in the Proceedings of the National Academy of Science (PNAS) in August (Convolutional networks for fast, energy-efficient neuromorphic computing), researchers from IBM Research report, “[We] demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, perform inference while preserving the hardware’s underlying energy-efficiency and high throughput.”
The impact could be significant as neuromorphic hardware and software technology have been rapidly advancing on several fronts. IBM researchers ran the datasets at between 1,200 and 2,600 frames/s while using between 25 and 275 mW (effectively >6,000 frames/s per watt). They report their approach allowed networks to be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. Basically, the new approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors.
“The new milestone provides a palpable proof of concept that the efficiency of brain-inspired computing can be merged with the effectiveness of deep learning, paving the path towards a new generation of chips and algorithms with even greater efficiency and effectiveness,” said Dharmendra Modha, chief scientist for brain-inspired computing at IBM Research-Almaden, in an interesting article by Jeremy Hsu on the IBM work posted this week on the IEEE Spectrum (IBM’s Brain-Inspired Chip Tested for Deep Learning.)
Shown here are dataset samples the researchers worked with.
As Hsu points out in the IEEE Spectrum article, “Deep-learning experts have generally viewed spiking neural networks as inefficient – at least, compared with convolutional neural networks – for the purposes of deep learning. Yann LeCun, director of AI research at Facebook and a pioneer in deep learning, previously critiqued IBM’s TrueNorth chip because it primarily supports spiking neural networks. (See IEEE Spectrum’s previous interview with LeCun on deep learning.)
“The IBM TrueNorth design may better support the goals of neuromorphic computing that focus on closely mimicking and understanding biological brains, says Zachary Chase Lipton, a deep-learning researcher in the Artificial Intelligence Group at the University of California, San Diego. By comparison, deep-learning researchers are more interested in getting practical results for AI-powered services and products.”
IBM is trying to widen that perspective. Clearly, understanding brain function better is an important element of neuromorphic computing research, but so, increasingly, is developing real-world applications. Lawrence Livermore National Laboratory has purchased a TrueNorth-based system to explore, and in Europe the Human Brain Project has opened up its two big machines, SpiNNaker at Manchester University, U.K., and BrainScaleS in Germany, to researchers to develop applications and explore neuromorphic computing.
The IBM paper authors describe the traditional deep learning challenge well: “Contemporary convolutional networks typically use high precision (32-bit) neurons and synapses to provide continuous derivatives and support small incremental changes to network state, both formally required for back-propagation-based gradient learning. In comparison, neuromorphic designs can use one-bit spikes to provide event-based computation and communication (consuming energy only when necessary) and can use low-precision synapses to co- locate memory with computation (keeping data movement local and avoiding off-chip memory bottlenecks).”
By introducing two constraints into the learning rule – binary-valued neurons with approximate derivatives and trinary-valued synapses – the researchers say it is possible to adapt backpropagation to create networks directly implementable using energy efficient neuromorphic dynamics.
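As a rough, hypothetical sketch of the first constraint (this is not code from the PNAS paper), a binary-valued neuron can spike on a simple threshold in the forward pass while training substitutes a smooth, approximate derivative so that backpropagation still has a gradient to follow:

```c
#include <math.h>

/* Forward pass: a binary-valued neuron either spikes (1) or stays silent (0). */
static inline double binary_forward(double x)
{
    return x > 0.0 ? 1.0 : 0.0;
}

/* Backward pass: the true derivative is zero almost everywhere, so training
 * substitutes the slope of a steep sigmoid as an approximate derivative,
 * letting gradients flow through the thresholded unit during backprop.      */
static inline double binary_backward(double x)
{
    double s = 1.0 / (1.0 + exp(-4.0 * x));
    return 4.0 * s * (1.0 - s);
}
```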
“For structure, typical convolutional networks place no constraints on filter sizes, whereas neuromorphic systems can take advantage of blockwise connectivity that limits filter sizes, thereby saving energy because weights can now be stored in local on-chip memory within dedicated neural cores. Here, we present a convolutional network structure that naturally maps to the efficient connection primitives used in contemporary neuromorphic systems. We enforce this connectivity constraint by partitioning filters into multiple groups and yet maintain network integration by interspersing layers whose filter support region is able to cover incoming features from many groups by using a small topographic size,” write the researchers whose project was funded by DAPRA as part of its Cortical Processor program aimed at brain-inspired AI that can recognize complex patterns and adapt to changing environments,” write the researchers.
Shown below is a figure comparing a conventional convolutional network with the TrueNorth approach.
In the IEEE article, Modha points to TrueNorth's general design as an advantage over more specialized deep-learning hardware designed to run only convolutional neural networks, because it will likely allow multiple types of AI networks to run on the same chip. He's quoted saying, "Not only is TrueNorth capable of implementing these convolutional networks, which it was not originally designed for, but it also supports a variety of connectivity patterns (feedback and lateral, as well as feed forward) and can simultaneously implement a wide range of other algorithms."
In their paper, the authors emphasize that their work demonstrates more generally that "the structural and operational differences between neuromorphic computing and deep learning are not fundamental and points to the richness of neural network constructs and the adaptability of backpropagation. This effort marks an important step toward a new generation of applications based on embedded neural networks." It's best to read the paper in full for details of the work.
Link to Paper: http://www.pnas.org/content/early/2016/09/19/1604850113.full
Link to Jeremy Hsu’s IEEE Spectrum article: http://spectrum.ieee.org/tech-talk/computing/hardware/ibms-braininspired-chip-tested-on-deep-learning
Link to related HPCwire coverage: Think Fast – Is Neuromorphic Computing Set to Leap Forward? | <urn:uuid:d3ad23ac-0206-4b12-aa10-1c344467adda> | CC-MAIN-2017-09 | https://www.hpcwire.com/2016/09/29/ibm-advances-neuromorphic-computing-deep-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00319-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.918683 | 1,364 | 3.296875 | 3 |
The article in TIME is headlined "Google's Flu Project Shows the Failings of Big Data." However, critics say the real failing here is not with big data but with Google.
The article takes issue with a project named Google Flu Trends (GFT), pioneered by the Internet search giant to produce real-time monitoring of flu cases around the world using search data the company collects. The idea was that analyzing how many people are searching for flu terms in an area can predict where there are cases of the flu.
The work was lauded in a book, "Big Data: A Revolution That Will Transform How We Live, Work and Think." Google admitted at the time that not everyone who searches for flu terms would be sick, but it said it found "a close relationship" between search terms and flu cases.
The only problem is that it didn't.
The journal Science released a report showing some flaws in GFT. Specifically, it said that GFT's predictions of flu cases were overestimated by 50% or more in some cases compared to figures produced by the federal Centers for Disease Control (CDC).
"From August 2011 to September 2013, GFT over-predicted the prevalence of the flu in 100 out 108 weeks," TIME reported. "During the peak flu season last winter, GFT would have had us believeA that 11% of the U.S. had influenza, nearly double the CDC numbers of 6%."
TIME goes on, "just because companies like Google can amass an astounding amount of information about the world doesn't mean they're always capable of processing that information to produce an accurate picture of what's going on--especially if it turns out they're gathering the wrong information."
So what do big data enthusiasts think of all this? They point to the specifics of Google's approach in critiquing GFT, not big data in general. "What happened with Google wasn't a failure of big data," says Charles Caldwell, director of solutions engineering, Logi Analytics. "It is about believing that big data can be a replacement for everything else."
It's not a surprise that a team of professional epidemiologists at the CDC will have better information about the flu than an Internet search company. For big data projects, it's about picking the right tools and having the right data. That didn't happen with the GFT, but Caldwell says that's a failure of the project, not of big data in general. "Big data needs to support human expertise, not replace it."
So is big data overhyped as being a panacea? "Absolutely," or at least the term is, says Clarke Patterson, senior director of product marketing at Cloudera, which is one of the leading companies delivering Hadoop, the big data platform, as a product. The fact of the matter is that there is a huge amount of new data that businesses and researchers have access to. But it's not just about having data, it's about knowing what to do with it and getting true insights out of it.
"Unfortunately, this transformation is in its early stages and as a result projects are going to fail (like the Google GFT example) if we get over excited about the technology alone," he says. A A
A few bad apples shouldn't spoil the bushel, says Jim Ingle, SVP at NTT Data, which consults with companies to hone a data management strategy. Many companies, he concedes, don't have the need for a big data platform. But, he says, traditional data warehousing tools are also not ideal. New data platforms allow for faster and easier access to data. "It is difficult to effectively predetermine how an organization will want to access and analyze its data over time," he says. "Flexibility and speed of data analysis is the future and big data technologies enable this regardless of whether you have massive amounts of data or not."
This story, "Big data Google-style comes under attack" was originally published by Network World. | <urn:uuid:f4b695d7-47c8-4766-b7c7-3642feaeb45c> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2489024/cloud-computing/big-data-google-style-comes-under-attack.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00371-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959455 | 880 | 2.625 | 3 |
When Tesla Motors enabled the "autopilot" feature on its cars in October, it didn't take long for video to appear of drivers trying crazy things -- none more so than jumping into the passenger seat to leave the car totally in control.
The stunts are stupid but they highlight a concern that some already have with the growing use of autonomous technologies in cars: If a vehicle is driving itself, will the driver still pay attention to the road?
The advice from Tesla is clear: "Remain engaged and aware when autosteer is enabled," and "Keep your hands on the wheel."
But that's easier said than done.
Tesla's autosteer keeps a car in its lane by detecting road markings and works even around bends in the road, so drivers are left watching the scenery pass by.
"We struggle to pay attention; we get bored," said Stephen Casner, a research psychologist at NASA who has spent years studying the effects of autopilot technology in aircraft and is now looking at cars.
Trust in technology
When Google offered its employees the chance to commute to work in a prototype self-driving car, it discovered just that. Drivers quickly stopped paying attention to the road. In one instance, a driver turned to the back seat to search for a phone charger while the car was traveling at 65 miles per hour on a freeway.
"People trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax," the company said in a recent report.
But drivers can't switch off with current automatic driving technology. While the computer does a pretty good job piloting the car, there are times when it can't figure out what's going on, for example if it loses sight of road markings, and drivers will be quickly required to take over.
Drivers are essentially being told, "We don’t want you to do this, but we want you to pretend you’re doing this," NASA's Casner said. "It’s a weird role to put us in."
When he took a drive on a freeway in an autonomous car, it didn't take him long to become comfortable with the technology. "But the one thing I realized was how vulnerable I was in that situation, should the automation suddenly need me or just quit," he said.
A recent study by the National Highway Traffic Safety Administration (NHTSA) and supported by auto makers found some drivers took as long as 17 seconds to respond to a takeover request from the autonomous control system to resume control of the car. That's time drivers don't have, especially on a fast-moving freeway.
There's plenty of evidence that drivers have trouble staying engaged when driving becomes monotonous or there's nothing for them to do.
Watch drivers at any stop light and you'll see them looking down at their smartphones, while boredom pushes some people to far riskier behavior.
In studies of college students in South Dakota, who spend hours driving conventional cars on long, straight roads with little traffic, multitasking is the norm. Without an obvious need to stay focused on the road, students start texting, studying, watching movies and even having sex, said Cindy Struckman-Johnson, a professor at the University of South Dakota.
"It's amazing there aren't more accidents," she said.
In a 2014 study, a third of men and 9 percent of women surveyed at the university reported engaging in sexual acts while driving. Almost half occurred at speeds between 61 and 80 miles per hour, and over a third reported their cars drifted into another lane, or that they ended up speeding because they weren't paying full attention.
Struckman-Johnson said she's worried that as autopilot systems get better, drivers will fall to more and more distractions and care even less about keeping an eye on the road.
A new type of accident
Autonomous technology is new enough that there don't appear to have been any major accidents yet caused by drivers not paying attention to the road. But they're probably coming.
"In the future, i think we're going to have a particular type of accident related to extreme distraction brought on by the features of autonomous car," said Struckman-Johnson. "The car becomes complicit in the accident."
But when that happens, it's important people don't overreact and conclude that self-driving cars aren't safe, NASA's Casner said. The technology promises to virtually eliminate rear-end collisions -- the most common type of car accident -- and is likely to save lives in other situations.
"There's the fear that there will be the one accident that people will react to, and even though we’ve saved thousands of lives in other categories, there'll be an overreaction because ‘A person was killed by a computer’," he said.
For now, researchers don't have a good answer as to how to keep drivers engaged, aware of traffic conditions and ready to take over after minutes or hours of not having to do anything.
Airlines have battled with the same problem and have yet to come up with an answer, Casner said, despite autopilot having been around for about 30 years. But in an aircraft, pilots often at least have a minute or more to figure out what's happening before the plane is in danger of crashing. Car drivers will often have far less time, hence the worry.
Until vehicles become truly autonomous, and drivers can switch off and not have to worry about doing anything, one answer could be to deliberately create things that the driver has to do, Casner said. But no one has figured out yet what that might be.
For now, Tesla has said it's adjusting its autopilot mode to place restrictions on when it can be used. It hasn't detailed any changes yet, but they will hopefully include a requirement to at least be in the driver's seat while the system is engaged.
"There's been some fairly crazy videos on YouTube," Elon Musk said during Tesla's recent earnings call. "This is not good." | <urn:uuid:14c264cf-ed9e-4e20-af93-8130cdb99959> | CC-MAIN-2017-09 | http://www.itnews.com/article/3012015/when-your-car-has-autopilot-will-you-be-ready-to-take-over.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00015-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.976602 | 1,260 | 2.78125 | 3 |
Are you an engineer who has dreamed of working on the International Space Station or maybe on the surface of Mars?
Your big chance could be here.
NASA announced today that it's looking for its next class of astronaut candidates. While the space agency is hoping to find scientists, medical doctors and pilots, it needs engineers, too.
"This next group of American space explorers will inspire the Mars generation to reach for new heights, and help us realize the goal of putting boot prints on the Red Planet," said NASA administrator Charles Bolden on the NASA website. "Those selected for this service will fly on U.S.-made spacecraft from American soil, advance critical science and research aboard the International Space Station, and help push the boundaries of technology in the proving ground of deep space."
The astronauts who participate in the upcoming missions can expect to launch into space on U.S.-made commercial spacecraft or on NASA's own Orion deep-space exploration vehicle.
NASA will accept astronaut applications between Dec. 14 and mid-February. Candidates will be announced in mid-2017, according to the agency. Applications can be submitted at USAjobs.gov, the federal government's job listings site.
NASA said that it will be seeking potential astronauts from what it calls a "diverse pool of U.S. citizens with a wide variety of backgrounds." Applicants do not need to be pilots, though that does help. A bachelor's degree in a STEM field (science, technology, engineering or math) is required, and an advanced degree is a plus.
Among other things, NASA is hoping that the pool of applicants will include people with backgrounds in engineering.
For instance, the basic requirements for a potential astronaut pilot include a bachelor's degree in engineering, biological science, physical science or mathematics. For mission specialists, NASA is seeking people with bachelor's degrees in engineering, biological science, physical science or mathematics.
There is no age requirement, although the average age has been 34, nor is a military background required. Candidates must also pass NASA's spaceflight physical.
This story, "Hey, all you engineers! Want to go to Mars?" was originally published by Computerworld. | <urn:uuid:48137039-03af-48eb-8ab3-163d2c15fe0d> | CC-MAIN-2017-09 | http://www.itnews.com/article/3001557/emerging-technology/hey-all-you-engineers-want-to-go-to-mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00367-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.939311 | 444 | 2.90625 | 3 |
We've all heard the controversy over whether cell phones and cell towers are safe to be around and most studies have concluded the risks are minimal to nonexistent (so far no study has shown conclusive proof that there is a serious health risk).
But what about WiFi?
US Cellular Frequency Bands [Wikipedia]
It turns out no research has been done into the effects of WiFi on humans and WiFi operates at much higher frequencies (2.4 to 2.5 GHz, 5.250 to 5.350 GHz and 5.470 to 5.725 GHz). But now, thanks to a group of Danish ninth grade students, we have some interesting evidence that garden cress seems to have a big problem with WiFi.
The students had noticed that after sleeping next to their cell phones they had difficulty concentrating the next day so they set up an experiment. According to Danish news site DR:
Six trays of [cress] seeds were placed in a room with no radiation, while six were placed in another room alongside two routers emitting roughly the same type of radiation as an ordinary mobile phone.
The results after 12 days were surprising: "the cress seeds alongside the routers did not grow at all, and some even mutated or died."
The cress seeds exposed to WiFi radiation (yuck)
Also according to DR:
Professor Olle Johanson of Karolinska Institutet in Stockholm is among those to have been impressed. Johanson considers the experiment to be ingenious and now wants to repeat it with a Belgian research colleague, Professor Marie-Claire Cammaert of the Université libre de Bruxelles.
“Within the limitations of their understanding and ability, the girls have carried out and documented a very elegant piece of work. The wealth of detail and precision is exemplary, the choice of the right cress is very intelligent, and I could go on,” said Johanson.
Of course many news services and blogs have picked this story up and run with it shouting "The sky is falling! The sky is falling!" but, somewhat obviously, as impressive as this was, it was hardly a rigorous experiment, and while it's interesting and provocative, serious research is required before anyone can say the correlation reveals causation ... in other words, correlating WiFi with the observed effect on cress doesn't mean that WiFi caused the effect, at least not without a whole heap of serious research.
As the Doubtful News blog observed:
But here's a thing ... EMFs also caused accelerated germination. So ... which is it? Always look for the counterpoint. The actual conclusion is probably way more complicated than X leads directly to Y. | <urn:uuid:f84f5a3a-c238-4da1-98d7-f0f7b92225d0> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2225989/security/wifi-might-rot-your-brain--or-kill-your-cress-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00367-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.972517 | 544 | 2.640625 | 3 |
Green Storage Technologies
In an effort to increase the energy efficiency of storage technologies, the Storage Networking Industry Association (SNIA) formed a group known as the Green Storage Initiative.
“Part of the issue for data centers in general is there’s no silver-bullet approach to dealing with energy efficiency because you have so many moving parts,” said Tom Clark, principal engineer at Brocade, who is currently on the governing board of the Green Storage Initiative. “You have servers, storage platforms, network equipment, cooling, air-conditioning, infrastructure, backup power supplies and so on.”
The group’s primary objective since its inception about two years ago has been to urge vendors and end users to go green.
“The intention is to develop methodologies and standards that would define how we measure the energy efficiency of storage technologies and encourage our vendor members to be much more cognizant of the energy consequences of what they produce in the market — as well as our end-user customers,” Clark said.
Another group within the SNIA — known as the Technical Working Group (TWG) — is in charge of the actual standards formulations and standards development. For instance, TWG hosts events called “Un-PlugFests,” in which vendors come together and try to determine the best ways to evaluate the energy draw of storage systems.
“The result is an initial power measurement specification that defines methodologies for making sure you can have apples-to-apples comparisons between products,” Clark explained.
Developing a classification scheme that can group comparable systems is a necessary step.
“One of the challenges was to create a taxonomy where we could [classify] a wide diversity of storage technologies such as disk media, disk media composed of different classes of drives [and] technologies like backup systems,” Clark explained. “The taxonomy provides a way to divide out these various categories of storage technologies and [create] classes within the categories so that you don’t end up comparing a high-end, high-availability storage system to an entry-level system.”
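To make the idea concrete, here is a minimal sketch of how such a taxonomy might be applied in code. The category names, drive-count thresholds, and capacity-per-watt metric below are invented for illustration; they are not SNIA's actual classifications or measurement methodology.

```python
# Illustrative sketch only: categories, class thresholds and the efficiency metric
# are invented for demonstration, not taken from SNIA's actual taxonomy.
from dataclasses import dataclass

@dataclass
class StorageSystem:
    name: str
    category: str      # e.g. "online", "near-online", "removable"
    max_drives: int
    watts_idle: float
    gigabytes: float

def classify(system: StorageSystem) -> str:
    """Place a system in a class within its category, so only like compares with like."""
    if system.max_drives <= 20:
        return f"{system.category}/entry"
    if system.max_drives <= 100:
        return f"{system.category}/midrange"
    return f"{system.category}/high-end"

def idle_efficiency(system: StorageSystem) -> float:
    """A crude capacity-per-watt figure; real specifications define several workloads."""
    return system.gigabytes / system.watts_idle

systems = [
    StorageSystem("ArrayA", "online", 12, 150.0, 24_000),
    StorageSystem("ArrayB", "online", 240, 1_800.0, 480_000),
]
for s in systems:
    print(s.name, classify(s), f"{idle_efficiency(s):.0f} GB/W")
```

The point of the classes is visible in the output: ArrayA lands in online/entry and ArrayB in online/high-end, so their capacity-per-watt figures would never be ranked against each other directly.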
Green storage refers to a broad spectrum of solutions ranging from sheer hardware efficiency to more application-level software, Clark explained.
“There are different technologies aside from just making more efficient power supplies and more efficient hardware design, such as storage virtualization and data deduplication, that can also reduce the amount of physical power-drawing hardware required,” he said.
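Data deduplication is the most code-like of those ideas: store each unique block once and keep references for the repeats. A toy sketch follows, with the block size and hashing choices assumed rather than taken from any particular product.

```python
# Toy illustration of block-level deduplication: identical chunks are stored once.
import hashlib

CHUNK_SIZE = 4096  # an assumed fixed block size; real systems often use variable chunks

def deduplicate(data: bytes):
    store = {}   # hash -> unique chunk actually kept on disk
    recipe = []  # ordered list of hashes needed to rebuild the original data
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

data = b"A" * 4096 * 3 + b"B" * 4096   # three identical blocks, then one different
store, recipe = deduplicate(data)
print(len(recipe), "blocks referenced,", len(store), "blocks actually stored")
# 4 blocks referenced, 2 blocks actually stored
```

Fewer stored blocks means fewer power-drawing disks, which is the energy argument Clark is making.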
– Deanna Hartley
Watson At Work
By Seth Earley | Posted 2011-05-03
Understanding the challenges and opportunities of adapting IBM's Watson computer technology for business.
IBM’s Jeopardy! champion Watson computer is a technology triumph, capable of understanding human language and broad knowledge topics – not just facts and trivia, but ambiguous language including puns, double entendres and idioms.
Big Blue has set its sights on many commercial applications for the technology in healthcare, financial services and customer service operations. But the question remains: is it practical? Does Watson embody an approach that enterprises can exploit, or learn from? How readily can a “Watson” be applied to the knowledge and content access problems of the typical enterprise?
The 25-person IBM team spent millions on research over the four years it took to develop the core technology. Few organizations have the resources that Watson required: $3 million worth of hardware (off-the-shelf servers with almost 3,000 processors and a terabyte of RAM). Additional challenges, including the nature of knowledge access, have been discussed by Watson team members.
Some principles that Watson exploited:
-- Watson used multiple algorithms to process information. These included the usual keyword-matching algorithms of run-of-the-mill search; “temporal” (time-based) reasoning that understands dates and relative time calculations; “statistical paraphrasing,” an approach to conveying the same idea using different words; “geospatial reasoning,” a way of interpreting locations and geographies; and various approaches to unstructured information processing.
-- At one level, Watson can be characterized as “semantic search” or natural-language search. That is, questions are asked in plain English rather than as a structured query, and each question is parsed into its semantic and syntactic (meaning and grammatical structure) components. The components are then processed in a number of ways by the system.
-- The system consumed 200 million pages of information for processing (“corpuses” of information), including Wikipedia, various news sources, dictionaries, thesauri, databases, taxonomies, literary works, and specialized knowledge representations called ontologies, among them two that have been developed over a number of years: WordNet and DBpedia.
What does this mean for an organization attempting to exploit this approach in order to make information easier to consume? Two major points stand out.
The first is that a core framework for structuring information is needed in order for any algorithm to make sense of data. Other than keyword matching, which parses terms and processes them against a dumb bag of words, the more complex and powerful approaches require an underlying structure to the information. These structures take the form of taxonomies and ontologies, which tell the system how concepts relate to one another. Many organizations are beginning to build these taxonomy frameworks for e-commerce, document management, intranet and knowledge base applications. The message here is not to stop those efforts in the hope that technology will obviate the need for them. Technology is getting better, but having a map of the specific and unique knowledge of the enterprise will improve the performance of search, business intelligence, and content management tools.
If you don't already use and apply enterprise taxonomies, it is important to start developing them now. While the initial time to value for siloed projects can be short, fully leveraging semantics across the enterprise can take years to refine, deploy and exploit across business units and applications. While data architects have part of the solution, semantic architects are needed to make sense of knowledge. Developing a semantic architecture will benefit the organization by making technology investments more productive, with payoffs in improved search and better reuse of intellectual assets. Taxonomies and ontologies form the foundation of knowledge systems that are finally becoming practical.
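To make the first point concrete, here is a minimal sketch of the difference between a plain bag-of-words keyword match and a search that consults a small taxonomy before matching. The documents and taxonomy entries are invented for illustration and do not represent any real enterprise vocabulary.

```python
# Hypothetical example: the documents and taxonomy below are invented to
# illustrate the idea, not drawn from any real enterprise vocabulary.

documents = {
    1: "quarterly revenue report for the storage division",
    2: "annual income statement and balance sheet",
    3: "employee onboarding checklist",
}

# A tiny taxonomy: each concept maps to the terms that express it.
taxonomy = {
    "financials": {"revenue", "income", "earnings", "balance sheet"},
}

def keyword_search(query: str) -> list[int]:
    """Plain bag-of-words matching: only literal word overlap counts."""
    terms = set(query.lower().split())
    return [doc_id for doc_id, text in documents.items()
            if terms & set(text.lower().split())]

def taxonomy_search(query: str) -> list[int]:
    """Expand query terms with taxonomy concepts before matching."""
    terms = set(query.lower().split())
    for concept, variants in taxonomy.items():
        if concept in terms:
            terms |= variants
    return [doc_id for doc_id, text in documents.items()
            if any(term in text.lower() for term in terms)]

print(keyword_search("financials"))   # [] -- no literal match anywhere
print(taxonomy_search("financials"))  # [1, 2] -- matched via the taxonomy
```

The literal keyword search returns nothing for "financials" because the word never appears in the documents; the taxonomy-aware version finds both finance documents because the structure tells it which terms express the concept.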
The second point is that Watson demonstrates key elements of solutions that do not assume users know exactly how to frame questions about what they want. As much research on search shows, users frequently ask ambiguous questions and expect precise results, so we need to build solutions that help them with their queries. These rely on the same approaches used to structure the information in the first place: the structures that the tools require to make sense of the data are the same ones that help guide users in their choices. Think of the navigation and search approaches used in e-commerce sites, where choosing color, size, brand, price and so on helps users find what they need and navigate precisely to specific information.
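A sketch of that guided, faceted style of navigation, with invented product data and facet names:

```python
# Illustrative only: product data and facet names are invented.

products = [
    {"name": "Trail Shoe", "brand": "Acme", "color": "red",  "price": 89},
    {"name": "Road Shoe",  "brand": "Acme", "color": "blue", "price": 129},
    {"name": "Court Shoe", "brand": "Zeta", "color": "red",  "price": 75},
]

def facet_counts(items, facet):
    """Show the user which choices exist and how many items each would leave."""
    counts = {}
    for item in items:
        counts[item[facet]] = counts.get(item[facet], 0) + 1
    return counts

def apply_filters(items, **filters):
    """Narrow the result set one unambiguous choice at a time."""
    return [item for item in items
            if all(item[key] == value for key, value in filters.items())]

print(facet_counts(products, "brand"))        # {'Acme': 2, 'Zeta': 1}
narrowed = apply_filters(products, brand="Acme", color="red")
print([p["name"] for p in narrowed])          # ['Trail Shoe']
```

Each facet count tells the user which choices exist and how far each would narrow the results, so they never have to guess at the exact query.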
Bottom line: tools like Watson are a great leap forward in capabilities, but there is no free lunch – Watson’s power comes from organizing content. Tools for gaining insights and finding answers will get better as time goes on, but human judgment needs to be applied to information to develop a foundation of meaning and structure.
Seth Earley, CEO of Earley & Associates, is an expert on content management and knowledge management practices.
For millennia, we surpassed the other intelligent species with which we share our planet -- dolphins, porpoises, orangutans, and the like -- in almost all skills, bar swimming and tree-climbing.
In recent years, though, our species has created new forms of intelligence, able to outperform us in other ways. One of the most famous of these artificial intelligences (AIs) is AlphaGo, developed by DeepMind. In just a few years, it has learned to play the 4,000-year-old strategy game, Go, beating two of the world's strongest players.
Other software developed by DeepMind has learned to play classic eight-bit video games, notably Breakout, in which players must use a bat to hit a ball at a wall, knocking bricks out of it. CEO Demis Hassabis is fond of saying that the software figured out how to beat the game purely from the pixels on the screen, often glossing over the fact that the company first taught it how to count and how to read the on-screen score, and gave it the explicit goal of maximizing that score. Even the smartest AIs need a few hints about our social mores.
But what else are AIs good for? Here are five tasks in which they can equal, or surpass, humans.
Building wooden block towers
AIs don't just play video games, they play with traditional toys, too. Like us, they get some of their earliest physics lessons from playing with wooden blocks, piling them up then watching them fall. Researchers at Facebook have built an AI using convolutional neural networks that can attain human-level performance at predicting how towers of blocks will fall, simply by watching films and animations of block towers standing or falling.
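Facebook's exact network is not reproduced here, but the general shape of such a model is easy to sketch: a small convolutional network that takes a rendered image of a tower and outputs the probability that it will fall. Everything below (layer sizes, the 64x64 input, the single fall/no-fall output) is assumed for illustration.

```python
# Generic sketch only: this is not Facebook's published model, just the shape of a
# small convolutional classifier that maps an image of a block tower to the
# probability that it falls. Layer sizes and input resolution are assumed.
import torch
import torch.nn as nn

class TowerStabilityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # a single logit: will the tower fall?
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TowerStabilityNet()
frames = torch.randn(8, 3, 64, 64)        # a batch of 64x64 RGB renders of towers
fall_probability = torch.sigmoid(model(frames))
print(fall_probability.shape)             # torch.Size([8, 1])
```

Trained with a binary cross-entropy loss against labels from simulations or real video, a network of roughly this shape can learn to judge stability from pixels alone.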
Lip-reading -- figuring out what someone is saying from the movement of their lips alone -- can be a useful skill if you're hard of hearing or working in a noisy environment, but it's notoriously difficult. Much of the information contained in human speech -- the position of the teeth and tongue, and whether sounds are voiced or not -- is invisible to a lip reader, whether human or AI. Nevertheless, researchers at the University of Oxford, England, have developed a system called LipNet that can lip-read short sentences with a word error rate of 6.6 percent. Three human lip-readers participating in the research had error rates between 35.5 percent and 57.3 percent.
Among the applications the researchers see for their work are silent dictation and speech recognition in noisy environments, where the visual component will add to accuracy. Since the AI's output is text, it could also find work closed-captioning TV shows for broadcast networks -- or transcribing surveillance video for security services.
AIs can be even more helpful when the audio quality is better, according to Microsoft, where researchers have been tweaking an AI-based automated speech recognition system so it performs as well as, or better than, people. Microsoft's system now has an error rate of 5.9 percent on one test set from the National Institute of Standards and Technology, the same as a service employing human transcribers that Microsoft hired, and 11.1 percent on another test, narrowly beating the humans, who scored 11.3 percent.
That's one situation where being better than a human might not be sufficient: If you were thinking you could dictate your next memo, you might find it quicker to type and correct it yourself, even with just two fingers.
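The word error rate quoted for both LipNet and Microsoft's system is simply the word-level edit distance between the system's transcript and a reference transcript, divided by the length of the reference. A minimal sketch of the calculation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1,   # insertion
                           substitution)
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))  # ~0.167
```

On this toy pair the rate is one error in six words, about 16.7 percent; the research systems above are scored the same way over thousands of utterances.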
This story was written by a human, but the next one you read might be written by an AI.
And that might not be a bad thing: MogIA, developed by the Indian company Genic, doesn't write stories, but when it predicted Donald Trump would win the U.S. presidential election, it did better than most political journalists.
When it comes to writing, AIs are particularly quick at turning structured data into words, as a Financial Times journalist found when pitted against an AI called Emma from Californian startup Stealth.
Emma filed a story on unemployment statistics just 12 minutes after the figures were released, three times faster than the FT's journalist. While the AI's copy was clear and accurate, it missed the news angle: the number of jobseekers had risen for the first time in a year. While readers want news fast, it's not for nothing that journalists are exhorted to "get it first, but first, get it right."
Putting those employment statistics in context would have required knowledge that Emma apparently didn't have, but that's not an insurmountable problem. By reading dictionaries, encyclopedias, novels, plays, and assorted reference materials, another AI, IBM's Watson, famously learned enough context to win the general knowledge quiz show Jeopardy.
After that victory, Watson went to medical school, absorbing 15 million pages of medical textbooks and academic papers about oncology. According to some reports, that has allowed Watson to diagnose cancer cases that stumped human doctors, although IBM pitches the AI as an aid to human diagnosis, not a replacement for it.
More recently, IBM has put Watson's ability to absorb huge volumes of information to work helping diagnose rare illnesses, some of which most doctors might see only a few cases of in a lifetime.
Doctors in the Centre for Undiagnosed and Rare Diseases at University Hospital Marburg, in Germany, will use this instance of Watson to help them deal with the thousands of patients referred to them each year, some of them bringing thousands of pages of medical records to be analyzed.
Authors including Vernor Vinge and Ray Kurzweil have postulated that AI technology will develop to a point, still many years off, that they call "the singularity," when the combined problem-solving capacity of the human race will be overtaken by that of artificial intelligences.
If the singularity arrives in our lifetimes, some of us might be able to thank AIs like Watson for helping keep us alive long enough to see it.
The need for a better system
Doctors' poor handwriting might be a cliché, but being able to accurately read medical records can often be a matter of life and death. The ubiquity of the personal computer has allowed the clinic to enter the digital age, and given that computers excel at managing information, the development of electronic health records (EHR) has been a no-brainer. Despite this, EHR adoption in the US and elsewhere has been slower than some might like, and at least one presidential candidate has made their widespread adoption a healthcare policy platform plank, promising widespread savings through increased efficiency.
Unlike other software markets, where a single player controls the market (such as Microsoft with Office), or where there are but a few solutions, the EHR field is one of byzantine complexity. There are dozens of different software packages and competing products. In this article, we'll look at the state of the EHR field, along with some of the benefits and problems associated with their use.
Inefficiencies in the system
Despite the US' position as the world's largest and most advanced economy, the US health care system is a model of inefficiency. Costs are more than twice those of any other nation in the Organization for Economic Co-operation and Development; the US spends more than $6,000 per patient per year. Despite this expenditure, health outcomes are, by most metrics, worse than almost every other OECD nation, whether it be life expectancy, infant mortality, years lived free of disease, and so on.
Part of this inefficiency is related to the availability of records. Currently, it's estimated that 20 percent of medical tests ordered by clinicians are repeats of previous tests, conducted because the originals have been lost. When those tests include expensive CT and MRI scans, you can see where some of those massive costs come from.
It's not just money-saving either; medical errors due to incomplete, inaccurate, or illegible records are a serious problem, and patients moving from one care provider to another can encounter problems if their records don't follow them.
To this end, a recent study by the RAND Corporation suggests that widespread adoption of EHRs could save as much as $81 billion each year, thanks to fewer redundant tests and procedures and fewer errors in treatment. But EHR adoption in the US lags behind other countries, with adoption rates by physicians' practices at less than 20 percent. By way of contrast, over 90 percent of primary care practices in Scandinavian countries have adopted EHRs.
An example of an Electronic Health Record
So, by increasing the uptake of EHRs, practices should be able to cut their costs, and do away with the mountains of paper records, along with reducing errors and duplicate tests. But even if every doctor in the land adopted EHRs tomorrow, that's no guarantee that things would magically be all right.
Illegible handwriting, digital style
Working in an office, if someone sends you a file you can't open, it's not usually a matter of life or death. On the other hand, an incompatible medical record file moves the problem of illegible handwriting into the digital age. A common complaint among doctors that Ars spoke to was that of EHR format incompatibility; it's no good having a file you can't read. Unlike productivity software, where programs with differing file formats—such as Word versus WordPerfect—get sorted out in the marketplace, with EHRs, there is a real need for common standards.
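To see why a shared schema matters, here is a hypothetical sketch of two vendors' record formats being mapped into one common structure. None of the field names or the "common" schema below correspond to a real standard such as HL7 or CCD; they exist only to show how the same facts can arrive in incompatible shapes.

```python
# Hypothetical illustration: neither vendor format nor the "common" schema below
# corresponds to a real standard such as HL7 or CCD; they only show how the same
# facts can arrive in incompatible shapes.

vendor_a_record = {"pt_name": "Jane Doe", "dob": "1970-04-02", "dx": ["E11.9"]}
vendor_b_record = {"patient": {"full_name": "Jane Doe", "birth_date": "04/02/1970"},
                   "diagnoses": "E11.9"}

def from_vendor_a(record):
    """Vendor A already uses ISO dates and a list of diagnosis codes."""
    return {"name": record["pt_name"],
            "birth_date": record["dob"],
            "diagnosis_codes": list(record["dx"])}

def from_vendor_b(record):
    """Vendor B nests the patient and writes dates as MM/DD/YYYY."""
    month, day, year = record["patient"]["birth_date"].split("/")
    return {"name": record["patient"]["full_name"],
            "birth_date": f"{year}-{month}-{day}",
            "diagnosis_codes": [record["diagnoses"]]}

common = [from_vendor_a(vendor_a_record), from_vendor_b(vendor_b_record)]
assert common[0] == common[1]  # same patient, same facts, once both speak one schema
print(common[0])
```

Every pairwise difference, down to the date format, needs a hand-written mapping like this unless vendors agree on a standard up front, which is exactly the coordination problem the bodies described below are trying to solve.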
In 2004, the US government created, via executive order, the National Coordinator for Health Information Technology within the office of the Secretary of the Department of Health and Human Services. The Office of the National Coordinator exists to provide "counsel to the Secretary of HHS and Departmental leadership for the development and nationwide implementation of an interoperable health information technology infrastructure."
Part of that job is to ensure that interoperability standards exist within the health IT industry. I spoke with Dr. John Loonsk, director of the Office of Interoperability and Standards, about some of the issues surrounding standards. Ongoing issues with competing standards in EHRs have led to the creation of the Healthcare Information Technology Standards Panel, a public-private partnership that works to harmonize standards within health IT.
In addition, another body, the Certification Commission for Healthcare Technology, provides a "seal of approval" of interoperability; solutions certified by the commission can be bought safe in the knowledge that they won't speak Greek to each other. The positives, Loonsk told Ars, will be "having EHRs that can follow the patients and can be accessible by two providers to support care is going to be helpful to improve quality of care, efficiency of care and reduce errors."
In order to help the spread of such standards among EHRs, the federal government has mandated that standards recognized by HHS have to be incorporated into federal contracts. This is designed to provide a base level of compatibility between the dozens of different solutions without dictating to the market in a way that would stifle innovation.
Dr. Loonsk acknowledged that there is still more work needed in this area; part of the challenge is that health information is a broad information space. Unlike banking, which deals in numbers, health IT involves lots of complicated concepts, and there are different ways to communicate those concepts. Your bank balance is your bank balance, but your health records need to relate what a patient is feeling, where they're feeling it, and so on.