Interest in and press around artificial intelligence (A.I.) come and go, but the reality is that we have had A.I. systems with us for quite some time. Because many of these systems are narrowly focused (and actually work), they often are not thought of as A.I.
For example, when Netflix or Amazon suggests movies or books for you, they are actually doing something quite human. They look at what you have liked in the past (evidenced by what you have viewed or purchased), find people who have similar profiles, and then suggest things that they liked that you haven’t seen yet. This, combined with knowing what you last viewed and the things that are similar to it, enables them to make recommendations for you. This is not unlike what you might do when you have two friends with a lot in common and use the likes and dislikes of one of them to figure out a gift for the other.
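That profile-matching recipe can be sketched in a few lines of user-based collaborative filtering. The ratings, names and the simple agreement-based similarity below are invented for illustration - real recommenders are far more sophisticated:

```python
# Minimal user-based collaborative filtering sketch.
# Ratings are hypothetical; 0 means "not yet seen".
ratings = {
    "alice": {"Movie A": 5, "Movie B": 4, "Movie C": 0},
    "bob":   {"Movie A": 5, "Movie B": 4, "Movie C": 5},
    "carol": {"Movie A": 1, "Movie B": 2, "Movie C": 4},
}

def similarity(u, v):
    """Fraction of jointly rated items on which two users roughly agree."""
    shared = [i for i in ratings[u] if ratings[u][i] and ratings[v][i]]
    if not shared:
        return 0.0
    agree = sum(1 for i in shared if abs(ratings[u][i] - ratings[v][i]) <= 1)
    return agree / len(shared)

def recommend(user):
    """Suggest unseen items liked by the most similar other user."""
    others = [u for u in ratings if u != user]
    best = max(others, key=lambda v: similarity(user, v))
    return [i for i, r in ratings[best].items()
            if r >= 4 and ratings[user][i] == 0]

print(recommend("alice"))  # bob is most similar to alice -> ['Movie C']
```

The same loop - find the nearest profile, borrow its likes - is what the "friend with a lot in common" analogy describes.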
Whether these recommendations are good or bad is not the point. They are aimed at mirroring the very human ability to build profiles, figure out similarities, and then make predictions about one person’s likes and dislikes based on those of someone who is similar to them. But because they are narrowly focused, we tend to forget that what they are doing is something that requires intelligence, and that occasionally they may be able to do it better than we do ourselves.
If we want to better understand where A.I. is today and the systems that are in use now, it is useful to look at the different components of A.I. and the human reasoning that it seeks to emulate.
So – what do we do that makes us smart?
Sensing, reasoning & communicating
Generally, we can break intelligence or cognition into three main categories: sensing, reasoning and communicating. Within these macro areas, we can make more fine-grained distinctions related to speech and image recognition, different flavors of reasoning (e.g., logic versus evidence-based), and the generation of language to facilitate communication. In other words, cognition breaks down to taking stuff in, thinking about it and then telling someone what you have concluded.
The research in A.I. tends to parallel these different aspects of human reasoning separately. However, most of the deployed systems that we encounter, particularly the consumer-oriented products, make use of all three of these layers.
For example, the mobile assistants that we see today - Siri, Cortana and Google Now - all make use of each of these three layers. They first capture your voice, then use speech recognition on the resulting waveform to identify the set of words you have spoken to the system. Each of these systems uses its own version of voice recognition - with Apple making use of a product built by Nuance, and both Microsoft and Google rolling out their own. It is important to understand that this does not mean that they comprehend what those words mean at this point. They simply have access to the words you have said, in the same way they would if you had typed them into your phone.
For example, they take input like the waveform below and transform it into the words “I want pizza!”
The result of this process is just a string of words. In order to make use of them, they have to reason about the words, what they mean and what you might want, and how they can help you get what you need. In this instance, doing this starts with a tiny bit of natural language processing (NLP).
Again, each of these systems has its own take on the problem, but all of them do very similar things with NLP. In this example, they might note the use of the term “pizza,” which is marked as being food, see that there is no term such as “recipe” that would indicate that the speaker wanted to know how to make the pizza, and decide that the speaker is looking for a restaurant that serves pizza.
This is fairly lightweight language processing driven by simple definitions and relationships, but the end result is that these systems now know that the speaker wants a pizza restaurant or, more precisely, can infer that the speaker wants to know where he or she can find one.
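A minimal sketch of that keyword-driven decision - the term lists here are invented, and real assistants use much richer lexicons and relationships:

```python
import re

# Hypothetical term lists; real assistants use far richer lexicons.
FOOD_TERMS = {"pizza", "sushi", "burger"}
RECIPE_TERMS = {"recipe", "make", "cook", "bake"}

def infer_intent(utterance: str) -> str:
    """Lightweight intent inference over already-recognized words."""
    words = set(re.sub(r"[^\w\s]", "", utterance.lower()).split())
    food = words & FOOD_TERMS
    if not food:
        return "unknown -> fall back to web search"
    dish = food.pop()
    if words & RECIPE_TERMS:
        return f"find_recipe({dish})"
    return f"find_restaurant({dish})"

print(infer_intent("I want pizza!"))         # -> find_restaurant(pizza)
print(infer_intent("How do I make pizza?"))  # -> find_recipe(pizza)
```

Note how the presence or absence of a single term such as "make" flips the inferred need, which is exactly the brittleness discussed later.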
This transition from sound, to words, to ideas, to actual user needs, provides these systems with what they require to now plan to satisfy those needs. In this case, the system grabs GPS info, looks up restaurants that serve pizza and ranks them by proximity, rating or price. Or if you have a history, it may want to suggest a place that you already seem to like.
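The lookup-and-rank step might look like the sketch below; the restaurants, coordinates and scoring weights are invented, and a real service would use proper geographic distance and learned preferences:

```python
import math

# Hypothetical nearby restaurants: (name, (lat, lon), rating out of 5).
restaurants = [
    ("Pizza Plaza", (37.79, -122.41), 4.5),
    ("Slice House", (37.77, -122.42), 4.8),
    ("Crust & Co",  (37.70, -122.48), 4.9),
]
user_location = (37.78, -122.42)  # from GPS

def distance(a, b):
    # Rough planar distance; adequate at city scale for ranking.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rank(candidates, here, w_dist=10.0, w_rating=1.0):
    """Lower score wins: nearby and well-rated beats far and mediocre."""
    return sorted(candidates,
                  key=lambda r: w_dist * distance(r[1], here) - w_rating * r[2])

for name, _, rating in rank(restaurants, user_location):
    print(name, rating)  # nearest good option prints first
```

Tuning the weights (or folding in your history, as the article suggests) changes which place surfaces first without changing the pipeline.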
Once all of this is done, it is a matter of organizing the results in a sentence or two – this is a process called natural language generation, or NLG. These words will then turn into sounds (speech generation).
Broad A.I., narrow A.I.
The interesting thing about these systems is their mix between broad and narrow approaches to A.I. Their input and output -- speech recognition and generation -- are fairly general, so they are all pretty good at hearing what you say and giving voice to the results.
On the other hand, each of these systems has a fairly narrow set of tasks they can perform, and the actual reasoning they do is to decide which tasks (find a restaurant or find a recipe) they can accomplish. The tasks themselves tend to be search or query-oriented, sending requests for information to different sources with different queries based on text elements grabbed from the speech. So the real smarts inside these systems is essentially answering the question, “What do you want me to do?” by identifying the terms that indicate your wishes.
These systems tend to be brittle in that they know about a small number of tasks and how to decide between them, but, as we’ve all experienced, if you ask for something outside of their expertise, they really don’t know what to do. Fortunately, when they are confused, they default to their respective search engines, which at least provide search results.
These systems are just one class of animal in the new A.I. ecosystem, but you can see how the mix of elements plays out to provide powerful services. High-end speech recognition and generation supports interaction. Simple language processing extracts terms that drive a term-based decision model, which, in turn, figures out what you have requested and thus what task to perform. And, finally, a lightweight natural language generation model is used to craft a response. Each of these is a combination of intelligent functionalities that come together to create integrated systems that can genuinely understand your needs and provide desired services.
A.I.’s capabilities around sensing, reasoning and communicating will be a dominant recurring theme that we will continue to explore. Next, I will discuss systems in which intelligence rises out of multiple, and sometimes competing, components.
This article is published as part of the IDG Contributor Network.
Instead of charging directly at the player in the craggy Martian landscape, as they might do normally, the aliens zigzag to take cover behind boulders and outcrops of rock, adjusting their approach as the hero opts for a less conspicuous route to his destination. The player changes his route again, and the aliens adapt their movements accordingly. Situations like this are becoming more common in video games as the product of a technique known as situational, or tactical, awareness. The concept is rooted in military tactics, but programmers schooled in artificial intelligence (AI) have started incorporating it into games to make enemies and other characters seem smarter. Situational awareness can play a big role in games that take place in immersive, "sandbox" environments, in which the objectives and challenges are not pre-set but rather determined by the player as he or she moves through the game. But situational awareness can be useful in any game that seeks to include intelligent beings in its cast of characters.
Advances in processing power mean the approach can allow for more realistic experiences in games such as first-person shooters and role-playing games, or RPGs. Essentially it allows characters to adapt more intelligently to moves made by the protagonist. Traditionally, the movements and behaviors of characters have been less flexible. "Where people often start with this kind of system is hard-coding some specific functions for specific kinds of cover," said Matthew Jack, founder and AI consultant at Moon Collider, an AI development company that worked on the "Crysis" series of first-person shooter games. But Jack's work, and that of his peers, is focused on a more organic, adaptive type of intelligence. One programming technique, for instance, is to build measurement systems into a game so that distances between the protagonist and other characters are constantly recalculated and analyzed, allowing characters to make a variety of decisions based on those distances. One key application of this technique is "directness." Directness is a ratio that developers can employ to control an enemy character's movement toward the protagonist, for example. The calculation looks at the distances between the enemy character, an intermediary object such as a rocky outcrop, and the protagonist.
Using those relative distances, programmers control how the enemy characters advance toward the protagonist, Jack said. Setting the directness just barely above zero, for instance, could trigger flanking behavior by a group of enemies, since they would be moving closer to the protagonist via certain intermediary points but not close enough yet to attack, Jack told an audience of gamers and programmers at the Game Developers Conference (GDC) in San Francisco. Negative directness, on the other hand, can be used for retreating or fleeing, while zigzagging could be the result of establishing a directness of 0.5, which yields the least direct points of advancing upon a target. Another AI technique based on the same ideas as directness is the "golden path" method of measuring different location points between the gamer and some end goal or destination. Enemies might traditionally be scripted to appear along the most direct route to the player's goal, since that would be the most likely path for the gamer to take. But with the golden path technique, enemies could appear on the spur of the moment if the player takes a more circuitous route. A somewhat different type of tactical awareness was discussed by Mika Vehkala, senior AI programmer at IO Interactive, developer of the first-person shooter "Hitman: Absolution."
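The directness ratio can be sketched with simple 2-D geometry. The exact formula Moon Collider uses was not given, so the version below - distance gained toward the target divided by distance traveled to the candidate point - is one plausible formulation, with invented coordinates:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def directness(enemy, point, target):
    """Distance gained toward `target` by moving to `point`, per unit
    of distance traveled. ~1.0 = straight at the target, ~0 = sideways,
    negative = falling back."""
    gained = dist(enemy, target) - dist(point, target)
    return gained / dist(enemy, point)

enemy, hero = (0.0, 0.0), (10.0, 0.0)
for point in [(5, 0), (4, 3), (0, 5), (-4, 3)]:
    # (5, 0) heads straight in; (4, 3) angles in like a flank;
    # (0, 5) is nearly sideways; (-4, 3) retreats.
    print(point, round(directness(enemy, point, hero), 2))
```

Thresholding this one number - just above zero for flanking, around 0.5 for zigzagging, negative for fleeing - is enough to produce the behaviors the article describes.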
Vehkala described a programming approach that determines the best location for enemies by looking at how "visible" any given location, or node, is to their target. As the player moves around, "it's constantly re-evaluating and seeing if there's a node with a better rating," he said. This sort of AI, however, works best in games built on static environments that do not change as much, Vehkala said. The techniques Jack described, on the other hand, are based on performing calculations and measurements as the game's obstacles and characters change. "My takeaway would be to build a language so you can iterate on your queries most rapidly and get the best results," he said.
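Vehkala's node-rating idea can be sketched as a lookup over precomputed visibility data, which suits the static environments he describes. The level layout and node names below are invented:

```python
# Static environment: each cover node precomputes which cells it can
# "see" (a hypothetical table, as might be baked offline for a level).
visibility = {
    "crate":   {"hall", "door"},
    "balcony": {"hall", "yard", "door"},
    "pillar":  {"yard"},
}

def best_node(player_cell, want_cover=True):
    """Re-evaluated whenever the player moves: prefer nodes the
    player's cell cannot see (cover), or ones it can (attack)."""
    def rating(node):
        exposed = player_cell in visibility[node]
        return (not exposed) if want_cover else exposed
    return max(visibility, key=rating)

print(best_node("hall"))                    # a node hidden from the hall
print(best_node("yard", want_cover=False))  # a node exposed to the yard
```

As the player moves from cell to cell, re-running `best_node` is the "constantly re-evaluating" loop Vehkala mentions.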
Google and NASA are continuing to test quantum computers and this week entered into a new agreement to work with a series of updated systems.
D-Wave Systems, a quantum computing company based in Burnaby, British Columbia, announced this week that it had signed a deal to install a succession of D-Wave systems at NASA’s Ames Research Center in Moffett Field, California. NASA and Google on Wednesday also confirmed the deal.
As new D-Wave quantum machines are developed, they will be successively installed at Ames for as long as the next seven years, according to the company.
In 2013, when Google announced the launch of its Quantum Artificial Intelligence Lab, the company said it would use quantum computing to solve some of the most challenging computer science problems, particularly in the area of machine learning.
“If we want to cure diseases, we need better models of how they develop,” wrote Hartmut Neven, Google’s director of engineering, at the time. “If we want to create effective environmental policies, we need better models of what’s happening to our climate. And if we want to build a more useful search engine, we need to better understand spoken questions and what’s on the web so you get the best answer.”
For the past two years, scientists at Google, NASA and the USRA have been working with a 500-qubit D-Wave Two system, which also has been installed at Ames.
“Through research at NASA Ames, we hope to demonstrate that quantum computing and quantum algorithms may someday dramatically improve our ability to solve difficult optimization problems for missions in aeronautics, Earth and space sciences, and space exploration,” said Eugene Tu, director at the Ames Research Center, in a statement. “The availability of increasingly more powerful quantum systems are key to achieving these goals, and work is now underway with D-Wave’s latest technology.”
There has been a lot of excitement around the concept of quantum computing. Computer scientists and physicists generally believe that a quantum machine could far exceed the top classic supercomputers in highly complex calculations. Quantum computers, for instance, could work on problems involving searches of large data sets or on performing massive calculations.
The difference lies in how the two different kinds of machines function. Classic computers use bits - ones and zeroes - to work through a calculation in an orderly, linear fashion. A quantum computer uses what are known as qubits, which, instead of being a one or a zero, can be both a one and a zero, allowing for a wide range of possibilities.
Because of these possibilities, a quantum machine is able to process all the options in a calculation at once, making it much faster than a classic computer.
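The "both a one and a zero" description corresponds, in the standard formalism, to a vector of amplitudes over every possible bit-string. A short classical sketch shows why the state space explodes - n qubits need 2^n amplitudes. (This merely enumerates the amplitudes on a classical machine; it does not, of course, deliver any quantum speedup.)

```python
import itertools, math

def uniform_superposition(n):
    """Amplitudes for the equal superposition over all 2**n basis
    states - what applying a Hadamard gate to each of n qubits
    produces from the all-zeros state."""
    amp = 1 / math.sqrt(2 ** n)
    return {"".join(bits): amp
            for bits in itertools.product("01", repeat=n)}

state = uniform_superposition(3)
print(len(state))  # 8 basis states for just 3 qubits
print(round(sum(a * a for a in state.values()), 10))  # probabilities sum to 1.0
```

Doubling the qubit count squares the number of basis states, which is why even a 500-qubit machine describes a state space no classical memory could enumerate.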
Some scientists are skeptical that D-Wave, or any company, has developed a working quantum computer. Some in the field say it could be 50 years before an actual quantum machine is developed.
D-Wave CEO Vern Brownell obviously disagrees and says his company is working on successive generations of quantum computers.
“The new agreement [with NASA and Google] is the largest order in D-Wave’s history, and indicative of the importance of quantum computing in its evolution toward solving problems that are difficult for even the largest supercomputers,” Brownell said in a statement this week. “We highly value the commitment that our partners have made to D-Wave and our technology, and are excited about the potential use of our systems for machine learning and complex optimization problems.”
This story, "Google, NASA using quantum computing to push A.I., machine learning" was originally published by Computerworld.
Information is the currency of business, research, and other knowledge-based industries. Storing, accessing, and protecting information is critical for organizations, and Information Technology professionals are tasked with delivering strategies and solutions. This course will cover the problems and solutions for information storage, explaining the technology employed and the different systems available. The primary initiatives of optimizing IT and changing to a services delivery model are the top-level sections, along with solutions for information storage and management. The course will explore solutions using storage technologies as implemented in products - their architectures, features, benefits and issues - with the goal of understanding a strategy to deal with information demands.
The cost for the three-day course is €2,000.
Section 1: Information Storage – Demands and Evolution
The industry demands for storing information are explored along with the challenges those demands create. There are demands to transform IT into more of a services delivery model. Meanwhile, the traditional IT environment must continue to maintain operations while optimizing investments in technology to meet ongoing needs. The competing initiatives of optimizing IT and transforming to a services delivery model will be laid out for understanding.
Section 2: Transforming IT – Optimizing IT, Private/Hybrid Clouds and IT as a Service
In addition to meeting demands in current data center environments, additional deployments of private and hybrid clouds to achieve IT as a Service (ITaaS) characteristics are underway to deliver services in an on-demand manner. The motivations, rationale, and methods for private/hybrid clouds are important to understand when creating and implementing a strategy for information storage. Deployment of private/hybrid clouds is a parallel activity to optimizing IT operations to address current and ongoing demands.
For enterprises planning to deploy private and hybrid clouds, there are many different products available and different approaches to deliver services with use of both public and private clouds. The different options and their characteristics can be confusing, with an overwhelming amount of information available. There are also more complete solutions, delivered as pre-packaged (in-a-box) products with installation and support. The offerings and their value are discussed in this section.
Section 3: Data Center Infrastructure – Integrating Solutions
Different storage technology elements are being integrated to provide solutions for storing and protecting information. Driven by improving the time to deployment, these integrations provide alternatives to the more traditional storage systems available and can be building blocks for cloud environments. This section will examine the different types of integrations including definitions of characteristics and the vendor product offerings. Virtual SANs and clustered storage are included in these discussions.
Section 4: Information Storage Technologies
Developing a strategy for employing solutions for information storage requires an understanding of the underlying storage technologies. This section will delve into those technologies to create a common level of understanding for employing solutions.
Section 5: Solid State Storage – Technology
The use of solid-state technology for storage, predominantly flash, is an inflection point in the industry. Dramatic changes in system economics, and the acceleration they bring in extracting more value from the overall environment, have changed how storage is evaluated and selected. This section will explain the technology and the new developments underway that will continue to change the storage landscape. Methods of deployment and evolving data center usage are useful in creating new strategies.
Section 6: Performance – Impact and Measurement
Understanding the factors that impact performance is important in evaluating solutions and making effective decisions for storing and managing information. How to measure performance, and how to interpret the results, is another major input for decisions. Examining storage performance and measurement, and the factors that influence them, is critical to making informed decisions.
Section 7: Block Storage – Implementation and Systems
Accessing stored information from block devices, whether SAN-attached or direct-attached, is the most basic method employed by storage systems. The different block storage systems offered for enterprises are described in this section with their architectures and capabilities, along with Evaluator Group's opinion of strengths and weaknesses.
Section 8: Storage Virtualization
Abstracting resources from multiple storage systems increases flexibility of data placement and movement, reduces the workload on administration by providing a central point of management for provisioning and control of advanced features, and applies advanced capabilities across a potentially diverse set of storage systems. This section will review the different types of storage virtualization and the product offerings. The characteristics of the systems, including strengths and weaknesses, are included in the product reviews.
Section 9: Network Attached Systems – File Access
File access to information on shared storage is primarily through Network Attached Storage systems. The approaches for NAS and the different product offerings from vendors are explained along with Evaluator Group analysis of the products.
Section 10: Object Storage Systems
Scaling to large capacities for use as content repositories or online archives is the primary target for object storage systems. Object storage with Ethernet interfaces and support for S3, Swift, and custom protocols are used for both on-premises systems and as systems in cloud environments. The differing implementations for object storage systems are covered in this section along with the major systems available.
Solutions for Information Storage
Section 11: Solutions for Archiving Information
Growing capacity demands and compliance issues are driving interest in the economies gained from archiving data. Creating an online archive requires systems and software to automate the process for effective implementations. Archiving as a practice with features and functions are discussed along with the software and hardware available for solutions.
Section 12: Information Storage and Management
There are many different points for management of information. This section will explain what those management elements are and how they relate. Specific to storage, Storage Resource Management (SRM) is used to manage across storage systems and software products to provide a consolidated view and actionable information. The different SRM solutions available will be contrasted.
Section 13: Big Data Analytics
Analyzing large amounts of data in near real-time to arrive at new insights has become very popular with the abundance of newly captured data, much of it machine-to-machine. A new discipline has arisen from this, with massive amounts of storage and processors used by data scientists. Areas such as marketing and sales have been the most visible proponents, but many others exist. This section will explain the practice and the solutions for approaches such as Hadoop, along with their storage implications, and give guidance for IT professionals who may ultimately be responsible for operation.
Class dates in Stuttgart, Germany have been announced: April 3-6, 2017.
The current Internet design - Internet protocol version 4 (IPv4) - was standardized in 1978, and has 4.3 billion unique terminations. Simply put, "We can foresee a time when the allocations of addresses will end," said Cerf, who is known for his work in helping to create what we now know as the Internet.
The number of unique Internet addresses in IPv4, he said, will run out around 2011.
Enter IPv6, which has "about 340 trillion, trillion, trillion addresses - or 3.4 x 10^38," Cerf said. This exponential number of IP addresses means more mobile devices, which helps in emergency management.
Q: How does IPv6 create more mobility?
A: IPv6 and IPv4 are both relatively easily used to dynamically create new networks. There's something called DHCP [Dynamic Host Configuration Protocol], which is a way in which you connect to the Net, and you're assigned an IP address. So building emergency networks quickly is readily done in both IPv4 and IPv6 contexts.
Mobility - movement from one IP address to another: if you disconnect from the Net, maybe your radio disconnects or you physically disconnect, and you plug in somewhere else, you often get a different IP address.
The current protocols to the Internet are not as friendly to that kind of mobility as they could be, so there's some serious technical work that could be done, and I think should be done by the research agencies and universities to make the protocols and the network more comfortable with mobility. [Disruption Tolerant Networks] DTN is an example of that - where it's assumed that things will be disrupted as opposed to being mostly connected and occasionally disrupted.
So dynamic networking and the so-called mobile access networks that self-organize are very important for emergency services, networking and sensing types of services. I think we can achieve our objectives with IPv4 or IPv6 - IPv6 just has the benefit that there is more address space available.
Q: In what other ways is IPv6 useful to the emergency management field?
A: I think there's another element here that might not be quite as obvious. In many emergency communication systems, radio compatibility is required for the different emergency responders to communicate with each other. So in the absence of compatible radios, you find voice communication is extremely awkward to impossible.
If we were to move to a voice over IP architecture then the end-to-end communication would be digitally formatted and use basic Internet protocols. We still have to deal with the gap between this frequency radio and that frequency radio, but if we build systems that will communicate at both frequencies and simply relay the packets back and forth, that's not too different than what we used to call a gateway between two networks that's pulling something out of the low-level network format, extracting an Internet packet from it, embedding it in the next network's format and forwarding it on.
The hosts, or the computers at both ends, are communicating end-to-end through IP; they're not conscious of the fact that there were several different formats in which these packets were embedded, or that there were several different frequencies that you hopped between.
So if we move to a voice over IP architecture, I think we have a huge opportunity to make the compatibility among all of the emergency services parties much easier because they'll be able to communicate with voice over IP. Of course, if they're running Internet protocols, they also have the ability to do a lot of other things together.
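The relay behavior described above can be sketched in a few lines. The two radio frame formats below are invented; the point is that the IP packet travels through the gateway untouched, so the endpoints interoperate despite incompatible radios:

```python
# Two hypothetical incompatible radio framings that both carry IP packets.
def unwrap_radio_a(frame: bytes) -> bytes:
    """Strip radio A's invented 4-byte header and 2-byte trailer."""
    return frame[4:-2]

def wrap_radio_b(packet: bytes) -> bytes:
    """Embed an IP packet in radio B's invented framing."""
    return b"RB" + len(packet).to_bytes(2, "big") + packet

def gateway(frame_from_a: bytes) -> bytes:
    """Relay: extract the IP packet from one link format and re-embed
    it in the other. The packet itself is untouched, so the end hosts
    never notice the hop."""
    return wrap_radio_b(unwrap_radio_a(frame_from_a))

voice_packet = b"\x45...IP header + RTP voice payload"
frame_a = b"RA\x00\x01" + voice_packet + b"\xff\xff"
print(gateway(frame_a)[4:] == voice_packet)  # -> True
```

This is the "gateway between two networks" pattern Cerf describes: only the link-layer wrapper changes at each hop.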
Q: Can you give some specific examples?
A: They can be collaborating, looking through Google Earth, for example, with overlays to tell them where the emergency situations are, where are the emergency workers, where are the casualties, where's the hazardous material, which way is the wind blowing, who do I have to get out of the way of this gaseous cloud?
All of that collaborative stuff works because you're in a common Internet environment. I don't mean to suggest that just putting Internet in there solves all the problems, but it creates a layer of compatibility, which today is inhibited by too much dependence on too low a level of compatibility. What we want is higher-level compatibility among all of the services, which Internet is designed to offer.
Q: And in hard-core emergency management, is that technology ready to be as bulletproof as a police radio or something along that line?
A: Well, I'd suggest to you that if the military is using this, and they are, it has to be at least as tough as the situation that the police and the fire departments are experiencing.
The International Space Station recently took a snapshot of the Korean peninsula that explicitly details the night-time power consumption of North and South Korea - North Korea is almost completely dark.
From NASA: "The darkened land appears as if it were a patch of water joining the Yellow Sea to the Sea of Japan. The capital city, Pyongyang, appears like a small island, despite a population of 3.26 million (as of 2008). The light emission from Pyongyang is equivalent to the smaller towns in South Korea. Coastlines are often very apparent in night imagery, as shown by South Korea's eastern shoreline. But the coast of North Korea is difficult to detect. These differences are illustrated in per capita power consumption in the two countries, with South Korea at 10,162 kilowatt hours and North Korea at 739 kilowatt hours."
NASA said the photo is oriented toward the north and the brightest lights are coming from Seoul. There are 25.6 million people in the Seoul metropolitan area - more than half of South Korea's citizens.
To recap, my original definition of a Virtual Private Cloud (VPC) is a method for partitioning a public computing utility such as EC2 into quarantined virtual infrastructure. A VPC may encapsulate multiple local and remote resources so that they appear as a single homogeneous computing environment, bridging the ability to securely utilize remote resources as part of a seamless global compute infrastructure. A core component of a VPC is a virtual private network (VPN) and/or a virtual LAN (VLAN) in which some of the links between nodes are encrypted and carried by virtual switches.
According to the new VPC website, the "Amazon Virtual Private Cloud (Amazon VPC) is a secure and seamless bridge between a company’s existing IT infrastructure and the AWS cloud. Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities such as security services, firewalls, and intrusion detection systems to include their AWS resources. Amazon VPC integrates today with Amazon EC2, and will integrate with other AWS services in the future."
VPC definitions and terminology aside, the new service is important for a few reasons.
1. In a sense, Amazon has now publicly admitted that private clouds do exist and that the core differentiation is isolation (what I call quarantined cloud infrastructure), be it virtual or physical.
2. Greater hybrid cloud interoperability and standardized network security. By enabling native VPN capabilities within its cloud infrastructure and command-line tools, Amazon's VPC adds a much greater ability to interoperate with existing "standardized" VPN implementations, including the:

- Ability to establish IKE Security Associations using Pre-Shared Keys (RFC 2409).
- Ability to establish IPSec Security Associations in Tunnel mode (RFC 4301).
- Ability to utilize the AES 128-bit encryption function (RFC 3602).
- Ability to utilize the SHA-1 hashing function (RFC 2404).
- Ability to utilize Diffie-Hellman Perfect Forward Secrecy in “Group 2” mode (RFC 2409).
- Ability to establish Border Gateway Protocol (BGP) peerings (RFC 4271).
- Ability to utilize IPSec Dead Peer Detection (RFC 3706).
- Ability to adjust the Maximum Segment Size of TCP packets entering the VPN tunnel (RFC 4459).
- Ability to reset the “Don’t Fragment” flag on packets (RFC 791).
- Ability to fragment IP packets prior to encryption (RFC 4459).

(Amazon also plans to support software VPNs in the near future.)

3. Further proof that Amazon is without any doubt going after the enterprise computing market, where VPN capability is arguably one of the most requested features.
4. Lastly, greater network partitioning. Using Amazon's VPC, your EC2 instances are on your network: they can access, or be accessed by, other systems on the network as if they were local. As far as you are concerned, the EC2 instances are additional local network resources; there is no NAT translation. A seamless bridge to the cloud.
In the blog post announcing the new service, I found their hybrid cloud use case particularly interesting: "Imagine the many ways that you can now combine your existing on-premise static resources with dynamic resources from the Amazon VPC. You can expand your corporate network on a permanent or temporary basis. You can get resources for short-term experiments and then leave the instances running if the experiment succeeds. You can establish instances for use as part of a DR (Disaster Recovery) effort. You can even test new applications, systems, and middleware components without disturbing your existing versions."
This was exactly the vision I outlined in my original post describing the VPC concept. I envisioned a VPC in which you are given the ability to virtualize the network, giving it particular characteristics and appearance that match the demands and requirements of a given application deployed in the cloud, regardless of whether it's local or remote. Amazon seems to realize that cloud computing isn't a big switch, where suddenly you stop using existing "private" data centers. Instead, the true opportunity for enterprise customers is a hybrid model where you use the cloud as needed, when needed, and if needed, and not a second longer than needed.
I also can't help wondering how other cloud-centric VPN providers such as CohesiveFT will respond to the rather sudden addition of VPN functionality, which in a single move makes third-party VPN software obsolete, or at the very least not nearly as useful. (I feel your pain; remember ElasticDrive?) I am also curious to see how other IaaS providers such as Rackspace respond to the move; it may or may not be in their interest to offer compatible VPC services that allow for a secure interface between cloud service providers. The jury's still out on this one.
Let me also point out that although Amazon's new VPC service does greatly improve network security, it is not a silver bullet, and the same core risks in the use of virtualization still remain. If Amazon's hypervisor is exploited, you'd never know it, and unless your data never leaves an encrypted state, it is at risk at one endpoint or another.
At Enomaly we have also been working on enhanced VPC functionality for our cloud service provider customers around the globe. For me, this move by Amazon is a great endorsement of an idea we, as well as others, have been pushing for quite a while.
On a side note, before you ask: yes, I'm just glad I bought the VirtualPrivateCloud.com/.net/.org domain names when I wrote the original post. And yes, a placeholder site and announcement is coming soon ;)
Researchers at the University of Washington and the University of California, Los Angeles have just been given the green light by the Food and Drug Administration to begin clinical trials on a "wearable" kidney dialysis machine.
Patients with kidney issues can require an outside device to clean their blood in place of their body's natural filtration system. Some form of chronic kidney disease affects one in ten American adults -- though dialysis is typically only used when a significant portion of the tissue has become non-functional.
The key issue the team wanted to solve was that dialysis machines are stationary and so require patients to remain at home or in the clinic as they undergo treatment. With sessions occurring three times each week, and for up to a few hours at a time, dialysis eats up a huge chunk of people's time and of their ability to live a normal life.
"Wearable" is often a term we use for machinery like smartwatches and fitness bands but as computing and hardware shrink, the applications in medicine grow. It's no secret that healthcare eats up far too much of our budget and time. Finding new ways to treat patients without physician involvement or ensuring people stay compliant with their treatments has been one of the targets for lowering those costs.
The idea behind the Wearable Artificial Kidney (WAK) is allowing patients to undergo treatment while going about their day -- untethering them from a stationary machine. The device is currently the size of a very large tool belt with an attached filtration system and auxiliary pumps.
"Much of it is not fundamentally different than dialysis today, just with improved technology," says Dr Jonathan Himmelfarb of the University of Washington. The WAK runs continuously on batteries and does not require attaching to an outlet or water pipes. "The biggest challenge in making portable hemodialysis is how to handle water."
Typically, dialysis requires many liters of pure water to filter a patient's blood. With the WAK, Himmelfarb says the onboard water filter allows the recycling of just a half liter of water to perform the treatment -- resolving the issue of carrying around all that heavy liquid.
The next phase of the WAK study will involve sixteen patients with the hope that at least ten complete the full clinical trial. Blood samples will be tested every 24 hours and patients will need to participate for 28 days. Like all early stage treatments, safety must be determined before any sort of plans for commercial rollout.
In April of 2012, the FDA launched its second "Innovation Pathway" program designed to fund cures and treatments for "unmet public health needs." The WAK was one of the three applications selected out of 32 submissions that focus on end stage renal disease. The other two were an implantable artificial kidney being developed at the University of California at San Francisco and an artificial on/off valve for the arteries tapped when dialysis is performed, developed by a South Carolina biotech firm.
This story, "Wearable Kidney Dialysis Machine Sent to Clinical Trials" was originally published by CITEworld. | <urn:uuid:e080f1e2-ae73-4c6a-bfd5-841e375fc9da> | CC-MAIN-2017-09 | http://www.cio.com/article/2688828/wearable-technology/wearable-kidney-dialysis-machine-sent-to-clinical-trials.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00553-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959428 | 625 | 3.296875 | 3 |
Smart cities have been a topic of great interest of late. The infrastructure and planning situation in most cities and urban areas has not been encouraging. Cities face increasing pressure to work within existing infrastructure, to cut costs, and to utilize energy sources effectively and manage resources optimally. As migration to cities continues, this challenging environment continues to intensify. However, significant steps have been taken by companies and political decision-making authorities to create "smart cities" that are connected, responsive, scalable and productive.
In addition, the increasing role and awareness of M2M/IoT solutions has influenced the decision makers, government bodies, and public authorities to invest in smart cities. All of the major technology companies such as IBM, Cisco, Oracle, Siemens, GE, Honeywell, Verizon, AT&T, etc. and political movers and shakers have increased their focus on enabling smart cities.
So what defines or constitutes a smart city? A smart city is one that utilizes concepts of the Internet of Things (IoT) and other intelligent technologies. The components of a smart city are depicted in the chart below.
Compass Intelligence recently published a study on analyzing opportunities and growth prospects in the smart cities segment. Some findings are discussed below:
As noted, the smart city is no longer just a concept; it is very much a reality. There are many success stories, including Santander, Barcelona, Beijing, Boston, Dallas, and Singapore, to name just a few. The key driving factor is the need to address increasing congestion, strain on insufficient infrastructure, and resource management challenges involving water, energy, traffic, pollution, and overall governance.
What has helped push the concept of smart cities is the rapid level of innovation. The smart cities market is no longer plagued with the constant “what if” scenarios. Ecosystem partners along with decision makers and influencers have helped make significant strides when it comes to developing tangible use cases and deployments. At every level of participation, ecosystem partners have focused resources on creating smart technologies that enable the formation of “smart cities”. These include intelligent and advanced sensor technology, smart devices such as smart metering, switches, advanced wireless communication technologies and protocols, open platforms for more applications, cloud computing, Big Data analytics, and more. As with other IoT segments, continuing advances in technology and the IoT market will further push towards the formation of smart cities globally.
What is more promising is the level of focus from companies across the ecosystem to elevate smart city offerings and foster synergetic partnerships. Smart cities is more than just a buzzword today. New partnerships within the value chain and innovative solutions are making smart cities a reality, be it through sensor networks, increasing automation and intelligence in hardware, software-as-a-service models, improved algorithms, or remote monitoring and decision-making control. These partnerships are taking the smart cities concept to the next step. All solutions are aimed at increasing efficiency and connectivity. The level of integration and connectivity needed to produce the desired smart cities implementation outcome is only possible through strong ecosystem integration. Strong partnerships will play a significant role in determining the future roadmap of smart cities. This is also a well-received way to monetize smart city investments quickly.
As always, there are two sides to any story. Smart city growth is not without its hurdles. High cost of investment and bureaucracy are amongst the main challenges impacting smart city implementations. Integrating disparate functions and components of a city is a huge investment. In addition, decision-making authorities need to overcome the bureaucracy tangle and jump through hoops to obtain permission and grants for any implementation. This has prevented cities and government bodies from making the initial high investments. However, with tangible use cases and ROI case studies, ecosystem partners are helping create awareness of the long-term benefits of smart city implementations.
Another factor to contend with is the long decision making time. There are too many stakeholders within the smart cities market. This includes the government (national, state and local), financiers (banks, venture capitalists, etc.), developers and builders, ecosystem partners, and citizens.
Integration is another key challenge within the smart cities market. Not only are there disparate segments to bring together cohesively, but one must also contend with different communication standards. For example, security systems may communicate through one protocol, such as ZigBee, while smart mobility communicates wirelessly or through some sensor network protocol. Wi-Fi is the go-to for enabling citizen apps, but the overall confusion over a cohesive solution is a challenge. There will never be a completely open or standardized protocol. The smart cities market will see companies throughout the ecosystem banding together to develop standards that span some applications or segments, but proprietary solutions will never be completely eradicated.
These challenges are not expected to be mitigated any time soon. While the force of the drivers is expected to alleviate the impact of these challenges to a great degree, they will persist. However, as mentioned above, ecosystem partnerships will help reduce the impact of the challenges greatly.
Continued focus on smart technologies and the push from ecosystem partners will continue to boost smart city deployments. There will be more focus on gathering data and gleaning valuable information regarding patterns and consumer behavior. A great deal of education and awareness-building is under way across stakeholder groups to increase investments and shorten the long decision-making cycle. Smart energy and smart governance will lead growth rates; however, mobility will offer a more tangible return on investment. Security at all segment levels will continue to increase in importance.
The ecosystem is still fragmented; however, a few companies have certainly carved out a strong market position. Companies across the ecosystem will continue to look for avenues to expand their customer base and increase revenues. The one sure way of achieving this goal is by offering an end-to-end solution or acting in the role of facilitator/consultant. Currently, no company exists whose solutions span all segments and applications. There will be increasing overlap in the roles that system integrators, hardware companies, telecom companies, and service providers play.
All said and done, the smart cities market offers great potential for growth globally, across all of its segments, applications, and technologies.
To be prepared for college and careers, students today must become adept at collaboration. Research has shown that the design of the physical space alone can improve student learning. These are two of the factors driving the concept of the collaborative classroom.
Often referred to as next-generation learning spaces or 21st century classrooms, collaborative classrooms are another tool in the educator’s arsenal to achieve their mission of improving learning outcomes. Collaborative designs are extremely flexible and fully compatible with emerging styles of learning, including personalized learning, flipped classroom, and gamification.
Collaborative Classroom Defined
The design of the collaborative classroom emphasizes group learning. Typically, tables enable small groups to sit and work together, unlike the rows of desks associated with factory-model schools of the last century. Each group has ready access to the Internet, multimedia displays and collaboration software. The group tables, shared table-top displays, and wall displays with unrestricted lines of view are the most common characteristics of the collaborative classroom.
Implementing an effective collaborative classroom also requires an instructor station, simple remote control of the technology and lighting, quiet HVAC, and configurable audio/video. Some classrooms make use of special chairs to allow students to glide in and out of groupings or to sit at elevated tables.
See Wireless Collaborative Presentation System Product Round-Up for more on the latest products supporting the collaborative classroom.
The Variety of Learning Space Designs
The collaborative classroom is but one of a number of different types of emerging specialized classroom designs. Some designs are closely related to the collaborative classroom, such as Active Learning Classrooms (ALCs). Casual areas in libraries and lounges have evolved into the Informal Learning Space. With recent advances in virtual reality from Google Cardboard, Oculus Rift, and Samsung Gear, immersive learning environments are gaining traction.
Another concept is the makerspace or hackerspace. These are areas with tools, hardware, 3D printers, electronics, and supplies where students create, invent, and learn. Some schools, like Sinclair College, have implemented a variety of specialized learning spaces or laboratories that mirror the environments their students will operate in after graduation.
Here are photos of innovative, inspiring classrooms and learning environments.
The Importance Of High Density Wi-Fi To The Collaborative Classroom
Given the need for flexibility and configurability, these classrooms must not be hamstrung by wired networking. All the devices in the classroom are moving to Wi-Fi networking. Video typically makes up a large portion of the content shared. Add to that the range of devices that students will bring into the classroom, and the need for high-performance, completely reliable Wi-Fi becomes paramount.
The network must be able to seamlessly control and monitor access, track bandwidth usage, and keep a record of application usage. Since Apple TV and Chromecast are often used with displays, the network must handle a full range of protocols, services, and standards including Airplay, Bonjour, and Miracast. In the more advanced collaborative classrooms, the screens can display images from student as well as teacher devices.
There are several ways to go about designing and implementing collaborative classrooms. Which method your school selects will depend on whether you are building new or remodeling an existing space; the size of your available budget; and the desired size of the collaborative groups. We'll drill into the specifics of collaborative space design in the next blog of this series.
For information about the collaborative presentation systems that help enable the collaborative classroom, see our Wireless Collaborative Presentation System Round-Up.
The post What’s a Collaborative Classroom and Why Is It Important? appeared first on Extreme Networks. | <urn:uuid:51cfdf5a-7fdd-4e38-8c5f-c0a4a99eb474> | CC-MAIN-2017-09 | https://content.extremenetworks.com/h/i/319064829-what-s-a-collaborative-classroom-and-why-is-it-important | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00373-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929802 | 745 | 3.390625 | 3 |
[Answer ID: 9828]
What is the difference between Backup and Replication?
Created 02/16/2011 13:00 | Updated 09/22/2011 11:52
Backup and Replication are different features.
Use the one that suits your purposes better.
Both examples below use two TeraStations of the same model; compare them to see the difference.
- What is Replication?
Replication uses two TeraStations and keeps shared folders on those two TeraStations synchronized.
I. Data is sent from the computer to the first TeraStation.
II. Each updated file is then sent from the first TeraStation to the second TeraStation.
* The replication source is copied to the replication target every time a source file is updated.
- What is Backup?
Backup copies data from the backup source to the backup target according to a schedule set on the TeraStation.
I. Data is sent from the computer to the first TeraStation.
II. The backup source is copied to the backup target according to the schedule set on the TeraStation.
- Backup and Replication comparison
* This example uses two TeraStations. See the difference and use the one that suits your purposes better.

1. Restoration time (hardware)
- Replication: When all the data is targeted for replication, it can be restored by simply modifying settings.
- Backup: When all the data is targeted for backup, it can be restored by simply modifying settings.

2. Restoration time (data)
- Replication: You have an almost instant snapshot of the latest data from before the failure. (*1)
- Backup: The data will be the last data copied from the backup source according to the backup schedule.

3. Preventing file loss by mishandling
- Replication: Not suitable (see *2).
- Backup: You can restore data from the last backup target data.

4. Transfer time from the computer to the first TeraStation
- Replication: It can be longer, depending on the replication settings.
- Backup: Backup settings do not affect the transfer time from the computer to the first TeraStation.

5. Backup time
- Replication: No backup window is needed while the replication setting is on.
- Backup: It can be long depending on the volume of data, so it is better not to run backups during working hours.

6. When to use
- Replication: When you want the latest data from before a failure, or when you don't want a long restoration time.
- Backup: When the volume of data is large, or when you want to keep a history of the data.

*1 Replication will start every time updated data is sent from the computer to the first TeraStation. When there is a problem with the first TeraStation, updated data can be lost if the data transfer from the first TeraStation to the second TeraStation has not finished.
*2 Replication is different from backup. You need a backup to prevent file loss by mishandling.
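The behavioural difference between the two features can be sketched in a few lines of Python. This is a toy model, not Buffalo's firmware: "folders" are plain dictionaries, replication is modelled as a copy on every write, and backup as a copy only when the scheduled job runs. It shows why replication alone cannot protect against mishandling.

```python
# Toy model of replication vs. backup (not the TeraStation implementation).
source = {}    # shared folder on the first TeraStation
replica = {}   # replication target on the second TeraStation
backup = {}    # backup target on the second TeraStation

def write(name, data):
    """A client writes to the first TeraStation; replication fires per update."""
    source[name] = data
    replica.clear(); replica.update(source)   # synced on every change

def mistaken_delete(name):
    """Mishandling: a file is deleted on the source."""
    del source[name]
    replica.clear(); replica.update(source)   # the mistake replicates too

def run_scheduled_backup():
    """Backup copies the source only when the schedule fires."""
    backup.clear(); backup.update(source)

write("report.txt", "v1")
run_scheduled_backup()            # nightly job captures v1
write("report.txt", "v2")         # replica now holds v2; backup still holds v1
mistaken_delete("report.txt")     # gone from the source AND the replica

assert "report.txt" not in replica     # replication cannot undo mishandling
assert backup["report.txt"] == "v1"    # backup can restore the last snapshot
```

This is exactly the point of note *2 above: the replica mirrors mistakes almost instantly, while the backup lags behind by design, so a backup is still needed even when replication is enabled.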
NASA sends HTC/Google Nexus One...into space
Launched with Android 2.1 Eclair in January 2010, the HTC-built Google Nexus One is more than two years old, but that is not stopping NASA from re-launching the smartphone... into space this time around.
Part of the PhoneSat program designed to create "small, low-cost, and easy-to-build nano-satellites", in 2013 the National Aeronautics and Space Administration will launch Google's former Android flagship smartphone into space. According to HTC, NASA will not simply unbox a Google Nexus One and strap it on a rocket, as the device has already been put through thorough testing. The smartphone's first contact with space was in 2010, when it was attached to a rocket and launched to the edge of space, recording every step of the trip.
"Why the Google Nexus One?" you may ask. According to HTC its two-year old device has 100 times more processing power than the run-of-the-mill satellite that is orbiting overhead today and incorporates the majority of features that a satellite needs such as GPS module, flexible operating system, multi-band radios, gyroscope, accelerometer and camera among other features.
The Taiwanese smartphone manufacturer also made a Jupiter reference to the recently introduced One X+, though it's unlikely to be sent into space for the next two years if the Nexus One is any indication.
This is an interesting event, which marketing-wise looks to benefit both HTC and Google. Even more impressive is that what is basically three-year-old smartphone technology is perfectly adequate for NASA.
This post has nothing to do with IT, just happened to have been a curiosity conjured during my travels up North and back down South on various IT projects.
The Earth is not a perfect sphere, it is a spheroid that bulges out at the equator – the Earth's equatorial radius is greater than the Earth's polar radius. From high-school physics we know that Potential Energy = mass x gravity x height and so it follows that we might expect the potential energy of an object on the Earth's surface (sea-level) at the equator, to be greater than the potential energy of an object on the Earth's surface closer to the poles, since we can think of sea-level at the equator as being higher (further away from the Earth's core/center of mass) than sea-level close to the poles.
Application of the Hypothesis
If I travel from London to Glasgow achieving an MPG of 50 (Diesel), by how much would I expect the MPG to be affected on the drive back from Glasgow to London (because of the need to burn more fuel to acquire the additional potential energy)?
This application is based on a complete fantasy scenario: there are no traffic problems, the road lies upon a perfectly flat spheroidal Earth (it could be argued that even with undulations in the carriageway, one would still need to acquire more potential energy on the drive to London), and I travel from sea level in London to sea level in Glasgow. It is really more of a mathematical exercise that attempts to calculate whether there would be any noticeable difference. Apologies in advance for any flaws in the calculations!
An old copy of Maple 7 was used for the calculations, and the lines below reproduce the Maple execution-group inputs and formulas along with their results.
Latitudes in degrees North:
The Earth's equatorial radius a and polar radius b in metres:
Mass of the automobile in kg:
Calorific value of diesel in J/kg:
Density of petroleum diesel in kg/l:
Litres in a UK gallon:
Distance London to Glasgow in miles:
Radians as a function of Degrees:
Earth's gravity (in ms-2) as a function of Radians:
Radius (in metres) at a given geodetic Latitude as a function of Radians (or distance from the Earth's center to a point on the spheroid surface):
f:=phi->sqrt( ( (a^2*cos(phi))^2 + (b^2*sin(phi))^2 ) / ( (a*cos(phi))^2 + (b*sin(phi))^2 ) );
PotentialEnergy in Joules with mass (in kg) gravity (in ms-2) and height (in m):
GlasgowLatitudeRadians = 0.9751154533
LondonLatitudeRadians = 0.8991430162
GlasgowGravity = 9.818471842 ms-2
LondonGravity = 9.816854446 ms-2
*Notice that the gravity in Glasgow worked out as very slightly stronger!
GlasgowRadius = 6363522.841 m
LondonRadius = 6365075.641 m
And Potential Energy for the 1000kg automobile:
GlasgowPotentialEnergy = 62480069830 J
LondonPotentialEnergy = 62485021110 J
And the potential energy difference for LondonPE minus GlasgowPE:
PEDifference = 62485021110 - 62480069830 = 4951280 J
Kilos of diesel required:
KilosOfDiesel = 4951280 / 45300000 = 0.1092997793
Litres of diesel required:
LitresOfDiesel = KilosOfDiesel / 0.832 = 0.1313699271
Gallons of diesel required:
GallonsOfDiesel = LitresOfDiesel / 4.54609188 = 0.02889733216
A journey from London to Glasgow of 405.1 miles at 50 MPG uses:
GallonsToGlasgow = 405.1/50 = 8.102
To get back to London requires an additional 0.02889733216 gallons of diesel making the MPG:
MPGtoLondon = 405.1/(8.102+0.02889733216) = 49.8223
The difference would be barely noticeable! | <urn:uuid:9dd74a3d-dce7-4144-9e6f-db555d178da5> | CC-MAIN-2017-09 | http://www.cosonok.com/2012/05/theory-into-why-it-should-be-cheaper.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00545-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.856075 | 947 | 2.875 | 3 |
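For anyone without Maple to hand, the whole chain of calculations can be reproduced in a few lines of Python. This is a sketch, not the original worksheet: the equatorial and polar radii are assumed to be the WGS-84 values (the post does not print the a and b it used), and the gravity figures are taken from the post's own output, since the gravity function itself does not appear above.

```python
import math

# Constants from the post where printed; a and b are assumed WGS-84 values.
a, b = 6378137.0, 6356752.3142    # equatorial / polar radius, metres (assumed)
mass = 1000.0                     # automobile mass, kg
calorific_value = 45.3e6          # diesel energy content, J/kg
density = 0.832                   # diesel density, kg/l
litres_per_uk_gallon = 4.54609188
distance_miles = 405.1            # London to Glasgow
mpg_outbound = 50.0

lat_glasgow, lat_london = 0.9751154533, 0.8991430162  # latitudes, radians
g_glasgow, g_london = 9.818471842, 9.816854446        # gravity, m/s^2 (from the post)

def radius(phi):
    """Distance from the Earth's centre to the spheroid surface at latitude phi."""
    return math.sqrt(((a**2 * math.cos(phi))**2 + (b**2 * math.sin(phi))**2)
                     / ((a * math.cos(phi))**2 + (b * math.sin(phi))**2))

r_glasgow = radius(lat_glasgow)   # ~6,363,523 m
r_london = radius(lat_london)     # ~6,365,076 m

# Potential energy difference (London minus Glasgow) and the diesel it costs.
pe_difference = mass * (g_london * r_london - g_glasgow * r_glasgow)  # ~4.95e6 J
extra_gallons = pe_difference / calorific_value / density / litres_per_uk_gallon
mpg_return = distance_miles / (distance_miles / mpg_outbound + extra_gallons)

print(f"extra fuel: {extra_gallons:.4f} gallons, return trip: {mpg_return:.2f} MPG")
```

The return leg costs roughly 0.029 extra gallons, dropping the fuel economy from 50 to about 49.82 MPG, which confirms the "barely noticeable" conclusion.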
The traditional lecture model is the standard learning method in most American classrooms, but there is growing interest in new learning models that are encouraging students and teachers to “learn now, lecture later.”
CDW-G’s new report, Learn Now, Lecture Later, looks at the different learning methods teachers and students are using and how technology is supporting the move to these new learning models. The report also examines the challenges that high schools and colleges must overcome to make a successful transition.
- Get to the heart of what students and faculty want: Understand the technology students and teachers already have, how they want to use it in class and how they best learn and teach
- Consider how to incorporate different learning models: Work closely with faculty to meet their subject-area and curriculum needs and personal teaching styles
- Explore how technology can support and enhance learn now, lecture later: Enable the community to consult with each other and share best practices
- Support faculty with professional development and IT with infrastructure: Unless faculty are comfortable, the change will be slow; without IT, the change will not happen at all | <urn:uuid:ffc8d037-1538-4ebd-8c3a-759fbb259691> | CC-MAIN-2017-09 | http://www.cdwnewsroom.com/2012-learn-now-lecture-later-report/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00245-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938634 | 251 | 2.78125 | 3 |
The dawn of the personal computer in the 1970s promised the greatest change in American instructional methods since the 19th century—which is when schools began to use standardized textbooks. While the fulfillment of the personal computer’s educational promise is debatable, the machines’ commercial impact on education is not: During the 1980s, public school systems and universities across the United States threw themselves headlong into the PC revolution, investing hundreds of millions of dollars in computer systems, accessories, and software. Tech companies eager for new customers were happy to oblige, and a new educational market was born.
Soon it became common for most schools (some of which were perpetually under-funded) to assemble their expensive new computers in one place for group instruction. And thus was born the computer lab. In the slides ahead, we’ll take a trip back in time to visit some of these formational learning grounds of the 1980s. | <urn:uuid:79fe4947-03c4-4f1c-90c8-ec84d59acc09> | CC-MAIN-2017-09 | http://www.itnews.com/article/2972895/computers/9-awesome-photos-of-school-computer-labs-from-the-1980s.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00066-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.974454 | 187 | 3.203125 | 3 |
PROBLEM/SITUATION: Providing accurate, up-to-date tourism information to travelers.
SOLUTION: A centralized database of travel information accessible via web or kiosk.
JURISDICTIONS: Illinois Bureau of Tourism, Illinois Department of Natural Resources, Historic Preservation Agency, Illinois Department of Transportation.
VENDORS: Destination Marketing Group, Valassis Communications, Sybase.
Tourists rarely employ a travel agent for trips less than a week long, or vacations not involving major travel. Instead, they seek pamphlets and other information from the state. But while most states possess state or national parks, sponsor special events and festivals and maintain countless tourist attractions, not all are successful at providing accurate, timely information to the traveler. Most simply mail brochures or answer questions by phone.
Some states have automated their tourism marketing by creating static Web sites. There, tourists can download basic information replicated from the states' brochures. Unfortunately, glossy brochures and Web sites can be difficult and costly to produce and maintain.
ILLINOIS TOURISM CHALLENGES
Despite a lack of major national parks, the Illinois Bureau of Tourism (IBOT) has an amazing ability to promote its attractions. In the past, Convention and Visitors' Bureaus (CVBs) maintained relatively accurate and current tourism data, but the state lacked a centrally located clearinghouse where customers could access this information.
Like those of many other states, IBOT's "old" tourism system was generic and unresponsive. It was plagued by a number of problems, including a reliance on expensive third-class bulk mailing and outdated information. Tourism information can change quickly and the state was unable to provide the most current information to customers. This problem was compounded by the method used to update information. The CVBs regularly completed information survey forms and forwarded them to a central publication/information center, but this proved woefully inadequate at maintaining current tourism information.
In the past, IBOT implemented travel kiosks, a Web site, and a central call center, but each operated from a separate back-end database. The state lacked a centrally located database that could provide tourism information in a timely manner.
According to Desi Harris, the assistant deputy director of the Illinois Bureau of Tourism, IBOT's vision was to create a marketing approach "to link buyers and sellers in a creative and distinctive way that gave Illinois the competitive edge it needed. Illinois has a lot to offer, but it's been a well-kept secret compared to the well-known activities available in nearby states such as Michigan and Wisconsin."
TRAVEL COUNSELING SYSTEM
In response, IBOT contracted for the development of a system based on a single product database. The new system combines a call center, a dynamic Web site, and an intranet communication network, developed and maintained by the Destination Marketing Group -- a subsidiary of Valassis Communications.
The foundation for the system is the call center software, a client/server application running on NT with a Sybase database engine. The front-end client application was written in Powerbuilder, and an intranet contains CGI scripting, providing dynamic access to the database. Doug Parks, vice president at Destination Marketing Group, manages the account with IBOT.
Using an intranet, the state's 36 CVBs and four regional tourism development offices are connected to the main database. When tourism information changes in different regions of the state, the CVBs immediately update the information contained in the main database. This system has already reduced the state's workload and proved extremely cost-effective.
Most importantly, with 9,000 records in the system, vacationers now get excellent, up-to-date trip information.
The system provides a "personal counselor" to assist with travel planning. The personal counselor locates relevant travel information and sends it to the customer by first-class U.S. mail, fax or e-mail.
Information can also be obtained self-service by using the state's tourism Web site. The IBOT intranet, integrated with its site on the Internet and its advanced call center software, makes the Illinois tourism system one of the best in the country.
An impressive array of static information can be accessed from the award-winning IBOT Web site, including seasonal information, maps, weather updates, links to various theater pages, Illinois sports and transportation schedules.
Travelers will find the Web site trip planner especially valuable. Working off a detailed search engine that accesses the database, the trip planner allows even the most meticulous vacationer to create, view, and print valuable, destination-specific information.
THE STATE'S INVESTMENT
Currently, IBOT maintains the system through regular yearly appropriations. Overall, Harris estimated that "the new system is far superior to the old in terms of cost-effectiveness. For example, the reductions in mailing and printing alone have made the system worthwhile. In addition, by instituting a single product database, IBOT has eliminated manpower costs associated with maintaining numerous smaller databases around the state."
To attract more domestic and international tourists, business travelers, and convention and trade show attendees, Illinois' 1998 fiscal year budget includes a $2 million increase -- to $18,716,500 -- earmarked for marketing. This money will be directed toward local tourism and CVBs located throughout the state. The Tourism Promotion Fund is also being increased by $741,800. That will enable IBOT to respond to the increasing number of calls currently handled by the call center.
While certainly cost-effective, the new system also allows IBOT to target its marketing and advertising through response indicators in the software. IBOT can monitor the number and kind of queries made on the Web site and at the call center. For example, if query statistics show customers are no longer accessing information about the Lincoln Home, IBOT can shift advertising away from areas of current popularity, such as Chicago or Lake Michigan, and devote more resources to promoting the Lincoln homestead.
According to Harris, "The new tourism system has completely changed how the Tourism Bureau targets promotion of Illinois attractions."
So far, IBOT and its customers seem satisfied with the new system. "Eighty-five percent of users surveyed indicated that they were either very satisfied or extremely satisfied with the tourism system," Harris said.
"The state has been visionary in its willingness to invest in new solutions that actually meet customer needs," said DMG's Parks. "But they're being practical by using the database in multiple ways to make certain they're getting a return on that investment."
Although the current system is impressive, Harris said it's still a challenge keeping up with the explosion in information technology systems. Each year, it becomes more difficult to stay ahead of other states, especially in the cutthroat tourism business. Illinois has come up with some interesting solutions for keeping its edge in the future.
One is the launch of a geographic information system (GIS) component. This will provide accurate maps via the Web site and allow the customer to plan hotels, meals, and side trips along the designated travel route, mapped out in detail for the weary traveler.
For value-conscious travelers, another future attraction will be a coupon program on the Web site, where customers will be informed of special values offered at events, restaurants, and other Illinois attractions. This will give prospective customers, especially those with cost-cutting in mind, the "best deals" in the state.
IBOT is also currently developing direct e-mailing and faxing of database information to customers who show a special interest in particular areas. For example, if an outdoor enthusiast regularly canoes on Illinois rivers and periodically accesses the database for river information, the bureau may establish a direct link to that customer. After the customer specifies canoeing as a category, the system will periodically inform the boater of the latest in river conditions, special deals at river base camps, and other special events.
Overall, Harris and Parks promoted the technological flexibility of Illinois' new tourism system. By maintaining a single product database, IBOT is able to deal with new delivery and distribution systems as technology arises. Separate databases make this difficult, but with a single database, delivery systems like kiosks, Web site changes, and upgrades at the call center are manageable. IBOT can make simple changes to the database to incorporate new delivery systems as they hit the market.
The system's simplicity also allows other Illinois agencies to take part in promoting Illinois as a travel destination. IBOT is currently hooked up with a number of sister agencies, including the Department of Natural Resources, the Historic Preservation Agency, and the Department of Transportation. State agencies involved with advertising and public relations will eventually be networked as well. This state connectivity gives future Illinois tourists the greatest amount of quality information long before they cross the state border.
John Kost runs the State and Local Services Group at Federal Sources Inc. Previously, he was Michigan's CIO. | <urn:uuid:8d25c6bd-9517-4dcb-b4ce-afcf49dfe29c> | CC-MAIN-2017-09 | http://www.govtech.com/magazines/gt/Destination-Illinois.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00118-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.930155 | 1,837 | 2.53125 | 3 |
Apple recently unveiled Swift, a new language to replace Objective-C for OS X and iOS application development. Apple won't accept submissions built using Swift to the iOS or Mac App Store until the fall, when iOS 8 and the next version of OS X (Yosemite) ship, so there's still some time to learn the ins and outs of this new programming language.
Without further ado, here are 10 things you need to know about Swift.
Swift should appeal to younger programmers. Swift is more similar to languages such as Ruby and Python than is Objective-C. For example, it's not necessary to end statements with a semicolon in Swift, just like in Python. In Objective-C, on the other hand, it's necessary to do so; forgetting a semicolon at the end of just a single statement can cause errors. If you cut your programming teeth on Ruby and Python, Swift should appeal to you.
That said, Swift is compatible with existing Objective-C libraries. There's no problem with writing new modules in Swift that interoperate with existing Objective-C code bases. That may make Swift attractive if you've already built a considerable skill base in Objective-C, too.
Swift should be a safe(r) language. Apple has made an effort to make Swift safe in a variety of subtle ways. For starters, programmers must include brace brackets to open and close "If" statements, which prevents bugs such as the SSL "goto fail" error. In addition, switch statements must include a default statement. This guarantees that something will run at the end of the statement even if none of the possibilities in the statement are satisfied.
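A minimal sketch of both safeguards, using an invented helper function and values: braces are mandatory even for a one-statement body, and a switch over an Int must supply a default case.

```swift
// Hypothetical example: classify a temperature reading.
func describe(_ temperature: Int) -> String {
    // Braces are required even for a single-statement "if" body,
    // which prevents goto-fail-style bugs.
    if temperature > 35 {
        return "extreme"
    }
    // A switch over an Int cannot enumerate every value, so the
    // compiler requires a default case; something always runs.
    switch temperature {
    case 0..<20:
        return "mild"
    case 20..<30:
        return "warm"
    default:
        return "other"
    }
}

print(describe(12))   // prints "mild"
```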
Swift isn't that fast. Despite the name, Swift is unlikely to result in applications that run much faster than applications written in Objective-C. Although the two languages are different, they're not that different: both target the same Cocoa and Cocoa Touch APIs (for OS X and iOS, respectively), both are statically typed languages and both use the same LLVM compiler. There will inevitably be performance differences, as the two languages aren't identical after all, but don't expect significant differences.
Swift is incomplete. The language that's available today isn't the finished product. Apple is still working on it, and it's highly likely that new features will be added over the coming months. While it may well be worth coding in Swift to familiarize yourself with the language, to do so you'll need to use Xcode 6 beta and the iOS 8 SDK (also in beta). And don't forget: Apple's app stores won't accept apps built with Swift until it first releases Yosemite and iOS 8.
You can experiment with Swift code in "Playgrounds." One of Swift's most interesting features is an interactive environment called a Playground. This tool lets you see the effects of changes or additions to code as you type, without going through the time-consuming rigmarole of running the code through the compiler and executing it.
Other Playground features include the capability to "watch" the value of a variable, typing its name on a separate line in the code and seeing its current value displayed in a side bar, as well as a set of "Quick Look" buttons that display images, strings and other content intended for graphical display.
Swift offers type inference. Like Scala, Opa and other programming languages on the rise, Swift carries out type inference. Coders don't need to spend time annotating variables with type information and risk making mistakes; in most cases, the compiler can infer the type from the value that a variable is being set with.
As a result, you can expect to find fewer type-related bugs hiding in your code. Plus, thanks to smart optimizations, your code should run faster.
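A short illustration (the variable names are invented for this example) of the compiler picking each type from the initializer, with an explicit annotation used only when you want something other than the default:

```swift
// The compiler infers each constant's type from its initial value;
// no annotations are needed.
let city = "Chicago"     // inferred as String
let year = 2014          // inferred as Int
let ratio = 3.5          // inferred as Double

// An explicit annotation is still allowed when you want a type
// other than the one inference would pick.
let narrowRatio: Float = 3.5

print(city, year, ratio, narrowRatio)
```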
Swift introduces Generics. In static typing, when you write a function, you have to declare the types of the function's parameters. That's fine until you have a function that you want to work in different circumstances with different types.
Enter Generics. Much like Templates in C++, Generics are functions that can be reused with different variable types without being rewritten for each type. For example, a function that adds up the contents of an array. In some cases, the contents might be integers; in other cases, floating point numbers.
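A sketch of the array-summing example written once for any numeric element type. Note that this uses the standard library's Numeric protocol, which arrived in a later Swift release than the one this article describes, so treat it as illustrative rather than era-accurate:

```swift
// One generic function, reusable for Int, Double, and any other
// Numeric type -- no per-type rewrite needed.
func sum<T: Numeric>(_ values: [T]) -> T {
    var total: T = 0
    for value in values {
        total += value
    }
    return total
}

let intTotal = sum([1, 2, 3])        // T inferred as Int -> 6
let doubleTotal = sum([1.5, 2.5])    // T inferred as Double -> 4.0
print(intTotal, doubleTotal)
```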
Swift handles strings more easily. If string handling drives you mad in Objective-C, then you'll love Swift, as the way you deal with strings in the new language is much simpler. Most notably, you can concatenate strings easily using "+=" and compare strings using "==" instead of the more cumbersome "isEqualToString:". Strings can also be used in switch statements.
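A brief, hypothetical illustration of all three points:

```swift
// Concatenation and comparison use ordinary operators.
var greeting = "Hello"
greeting += ", Illinois"                         // instead of appending via NSString methods
let matches = (greeting == "Hello, Illinois")    // instead of isEqualToString:

// Strings also work directly as switch cases.
func kind(of word: String) -> String {
    switch word {
    case "if", "switch", "func":
        return "keyword"
    default:
        return "identifier"
    }
}

print(matches, kind(of: "func"))
```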
Swift tuples offer compound variables. A tuple lets you group multiple elements into a single compound variable. The values in a Swift tuple can be of any type and don't have to be the same type as each other. You can make a tuple from any permutation of types that you like: (Int, Int, Int), (Int, String), (String, Bool) or whatever else you need.
There are a number of ways to get the values in a tuple. You can access them by index number (starting with 0), for example, or you can decompose a tuple into separate constants or variables.
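A small sketch, with invented values, showing both access styles plus named elements:

```swift
// Group an Int and a String into one compound value.
let http404 = (404, "Not Found")

// Access by index, starting at 0...
let code = http404.0             // 404

// ...or decompose into separate constants.
let (status, message) = http404
print(status, message)           // prints "404 Not Found"

// Elements can also be named up front and accessed by name.
let server = (host: "example.com", port: 8080, secure: false)
print(server.port)               // prints "8080"
```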
Apple is the master of Swift. Having been around for 30 years, Objective-C is getting rather long in the tooth. Nonetheless, Apple didn't articulate a precise reason for introducing the new language. The most likely reason? As the creator of Swift, Apple is free to add or change any functions that it wants, whenever it wants.
There's also the advantage that, once Swift becomes mainstream, it will make porting iOS apps to Android that much harder. You won't be able to use existing and relatively mature tools that port Objective-C to Java.
This story, "10 Things You Should Know About Apple's Swift" was originally published by CIO. | <urn:uuid:9633a6ad-28f7-40e1-8628-ea57d1dab285> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2490215/data-center/10-things-you-should-know-about-apple-s-swift.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00414-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.928584 | 1,240 | 2.640625 | 3 |
A new supercomputer being deployed this month in the U.S. is using solid-state drive storage as an alternative to DRAM and hard drives, which could help speed up internal data transfers.
The supercomputer, called Catalyst, will be deployed at Lawrence Livermore National Laboratory in Livermore, California. Built by the U.S. Department of Energy, Cray and Intel, the supercomputer delivers a peak performance of 150 teraflops and will be available for use starting later this month.
Catalyst has 281TB of total SSD storage and is a giant computing cluster broken into 324 computing units, called "nodes" by LLNL. Each computing unit has two 12-core Xeon E5-2695v2 processors, totaling 7,776 CPU cores for the supercomputer. Each node has 128GB of DRAM, while 304 nodes have 800GB of solid-state drive storage. Additionally, 12 nodes have 3.2TB of solid-state drive storage for use across computing units.
The supercomputer is built around the Lustre file system, which helps break bottlenecks and improves internal throughput in distributed computing systems.
The overall performance of the supercomputer is nowhere near that of the world's fastest supercomputer, Tianhe-2, which delivers a peak performance of 54.9 petaflops, but the implementation of solid-state drives as an alternative to both volatile DRAM and hard drives sets Catalyst apart.
"As processors get faster with every generation, the bottleneck gets more acute," said Mark Seager, chief technology officer for Technical Computing Group at Intel.
The throughput in the supercomputer is 512GB per second, which is equal to that of Sequoia, the third-fastest supercomputer in the world, which is also at LLNL, Seager said. Sequoia delivers peak performance of 20 petaflops.
Intel's 910 series SSDs with 800GB of storage are being used in Catalyst. The SSDs are plugged into PCI-Express 2.0 slots, the same used for graphics cards and other high-bandwidth peripherals.
Faster solid-state drives are increasingly replacing hard drives in servers to improve data access rates. SSDs are also being used in some servers as cache, or short-term storage, where data is temporarily stored for quicker processing. For instance, Facebook replaced DRAM with flash memory in a prototype server called McDipper, and is also using SSDs for long-term cold storage.
Though SSDs are more expensive than hard drives, observers say SSDs are poised for widespread enterprise adoption as they consume less energy and are becoming more reliable. SSDs are also smaller and can provide more storage in fewer servers. Samsung in August announced faster V-NAND flash storage chips that could be 10 times more durable than current flash storage.
With faster SSD storage, Catalyst is adept at solving "big data" problems, such as in bioinformatics, analytics and natural language processing, LLNL said in a statement. LLNL has developed a system so memory arrays are mapped directly to DRAM and SSDs, which helps in faster processing of serial applications like gene sequencing.
Seager said Catalyst is partly an experiment in new supercomputer designs as the nature of applications and hardware changes. | <urn:uuid:4814b2e7-715b-41ed-8d15-df42ac40a789> | CC-MAIN-2017-09 | http://www.itworld.com/article/2702674/hardware/new-supercomputer-uses-ssds-as-alternative-to-dram--hard-drives.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00590-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943615 | 674 | 2.828125 | 3 |
Black Box Explains...UARTs and PCI buses
Universal Asynchronous Receiver/Transmitters (UARTs) are designed to convert synchronous data from a PC bus to the asynchronous format that external I/O devices such as printers or modems use. UARTs insert or remove start bits, stop bits, and parity bits in the data stream as needed by the attached PC or peripheral. They can provide maximum throughput to your high-performance peripherals without slowing down your CPU.
In the early years of PCs and single-application operating systems, UARTs interfaced directly between the CPU bus and external RS-232 I/O devices. Early UARTs did not contain any type of buffer because PCs only performed one task at a time and both PCs and peripherals were slow.
With the advent of faster PCs, higher-speed modems, and multitasking operating systems, buffering (RAM or memory) was added so that UARTs could handle more data. The first buffered UART was the 16550 UART, which incorporates a 16-byte FIFO (First In First Out) buffer and can support sustained data-transfer rates up to 115.2 kbps.
The 16650 UART features a 32-byte FIFO and can handle sustained baud rates of 460.8 kbps. Burst data rates of up to 921.6 kbps have even been achieved in laboratory tests.
The 16750 UART has a 64-byte FIFO. It also features sustained baud rates of 460.8 kbps but delivers better performance because of its larger buffer.
Used in newer PCI cards, the 16850 UART has a 128-byte FIFO buffer for each port. It features sustained baud rates of 460.8 kbps.
The Peripheral Component Interconnect (PCI®) Bus enhances both speed and throughput. PCI Local Bus is a high-performance bus that provides a processor-independent data path between the CPU and high-speed peripherals. PCI is a robust interconnect interface designed specifically to accommodate multiple high-performance peripherals for graphics, full-motion video, SCSI, and LANs.
A Universal PCI (uPCI) card has connectors that work both with newer 3.3-V power supplies and motherboards and with older 5-V versions.
When deception is a good thing
February 16, 2017 Leave a comment
By Nick Mirabile, director of cybersecurity
In 2013, a pro-Assad group known as the Syrian Electronic Army hacked into the Associated Press’ Twitter account and broadcast a fake report about explosions at the White House. It caused the Dow Jones industrial average to drop nearly 150 points, erasing $136 billion in market value.
This is cyber deception in action. Cyber attackers have long embraced deception with tactics such as social engineering help-desk employees to install Trojans or obtain users’ credentials. If deception can be used to attack, can it also be used in cyber defense?
The commercial world has been investigating and successfully employing these techniques. In the simplest terms, the technology creates a decoy network that tricks the adversary into thinking they’re gaining access to valuable information.
The goal of deploying deception to detect hackers is to change the underlying economics of hacking, making it more difficult, time-consuming and cost prohibitive for infiltrators to attack. Realistically, there will always be attackers seeking to gain an advantage, and the reality is that the hacking problem cannot be solved, but it can be proactively managed.
This approach is different because it cuts down on the false positives often generated with traditional breach-detection solutions and it allows network administrators to study the movements and strategies of an adversary in what they think is a real network.
Many organizations have a strong security perimeter composed of firewalls, IDS/IPS and end-point security solutions. But when an adversary has already bypassed these precautions and is inside an organization’s network, they’re typically only discovered after data has been compromised or as they’re causing harm.
By trapping the malware and studying the movements of an adversary in this decoy environment, the cybersecurity community is able to learn their strategies, provide contextual awareness of the threat and thus develop stronger, more accurate responses.
This deception technology is growing commercially among financial and health care institutions, as well as technology, energy and entertainment companies. The deception cybersecurity market is already valued at $12 billion and is expected to grow steadily at about 19 percent annually, according to MarketResearch.com.
So why do deception solutions make sense for government? The Trump administration hasn’t yet signed a cybersecurity executive order. But there are components of the draft order that speak to deception solutions. If signed, agencies will be required to provide recommendations on ways national security systems and public and private critical infrastructure can be better protected.
The draft order also calls for a report on the identities, capabilities and vulnerabilities of the most common cyber adversaries to U.S. interests. The ability of deception cybersecurity tools to study the movements and behaviors of adversaries within decoy networks could be particularly useful here.
One of the biggest challenges facing agencies is the shortage of cyber analysts. Because of their in-demand skills, they command higher salaries in the private sector, making it harder for agencies to recruit them. That shortage doesn’t help the high number of breach alerts created by legacy security products. It’s too much for an understaffed workforce to keep up with. Deception technology allows them to prioritize the critical alerts and not waste time with false positives.
The second major challenge is the number of intruders hitting government agencies, especially those with financial and espionage motives. Those two motives represented a staggering 89 percent of all breaches last year, with most of those breaches hitting government agencies. Agencies reported 31 cyber-espionage infiltrations last year, according to Verizon's 2016 Data Breach Investigations Report. Another disturbing trend was that government accounted for the highest number of security incidents by far in 2015, with more than 47,000.
If the federal government wants to take its cybersecurity strategy up a notch, it should look at this type of solution. A handful of companies are talking to government about deception solutions and we expect more to enter the market as the threat becomes even harder to manage. Resellers and systems integrators should start adding deception to their cyber offerings. | <urn:uuid:d67a2ab4-6830-4588-b74c-cd772e7ec249> | CC-MAIN-2017-09 | https://blog.immixgroup.com/2017/02/16/when-deception-is-a-good-thing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00642-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943484 | 830 | 2.703125 | 3 |
Black Box Explains...Speaker sound quality
A human with keen hearing can hear sounds within a range of about 20 Hz to 20 KHz. But most human speech is centered in the 1000 Hz range, so most old-fashioned analog telephone networks provided audio bandwidth only in this range. This range transmits most voice information but can fail to register voice subtleties and inflections.
Because these older analog phone systems had such a narrow bandwidth, headset manufacturers built their products to operate only in those particular frequencies.
When digital networks and fiber optic connections came into use, however, they provided a much wider bandwidth for voice transmission. This led to a corresponding increase in headset sound quality.
Today, quality headsets take advantage of increased network bandwidth and typically can reproduce sounds in the 300 Hz to 3500 Hz range. This makes voices far easier to understand and enables you to pick up all the nuances and inflections of your caller’s voice. | <urn:uuid:e1ec547b-f412-464f-8edc-2ee93a02da8c> | CC-MAIN-2017-09 | https://www.blackbox.com/en-au/products/black-box-explains/black-box-explains-speaker-sound-quality | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00518-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.913862 | 192 | 3.71875 | 4 |
FTP was designed as an easy mechanism for exchanging files between computers at a time when networks were new and information security was an immature science. In the 1970s, if you wanted to secure a server from unwanted access, you simply locked the computer room door. User access to data was controlled by the basic User ID and password scenario. (Right is a reminder of how much technology has advanced since the 1970s. The photograph, taken December 11, 1975, is the Apollo Project CSM Simulator Computers and Consoles. Photo Courtesy of NASA.)
The Internet did not yet exist and the personal computer revolution was still a decade away.
Today, the security of business file transfers is of paramount importance. The exchange of business records between computing systems, between enterprises, and even across international borders has become critical to the global economy.
Yet, the original native FTP facility of TCP/IP wasn't designed for the requirements of the modern, globally connected enterprise. FTP's basic security mechanisms - the User ID and password -- have long ago been outdated by advances in network sleuthing technologies, hackers, malware, and the proliferation of millions of network-attached users.
Risks associated with using native (standard) FTP include the transmission of user IDs, passwords, and file contents in cleartext, where they are open to interception anywhere along the network path.
For more information download our White Paper - Beyond FTP: Securing and Managing File Transfers. | <urn:uuid:0ea6f479-27c1-4f7b-b39c-20f9422b3cbe> | CC-MAIN-2017-09 | http://www.linomasoftware.com/blog/2011/01/24/ftp-lack-of-security-exposed | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00110-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958055 | 269 | 3.5625 | 4 |
Software defined networking applies the abstraction concepts of hardware virtualization to networking infrastructure. This works well for cloud implementations, which need significant configuration and planning. But SDN and network virtualization may still be too immature for prime time.
Software defined networking is one of the most misunderstood concepts in infrastructure computing. It's a phenomenon that's growing in relevance, but it's still mysterious to many CIOs, particularly those who did not come up through deeply technical roles. Many myths still surround SDN. What exactly is the notion behind the technology? How can you apply SDN at your business? And how can your organization benefit from it?
Software-Defined Networking Basics
Essentially, SDN takes the virtualization phenomenon that's been sweeping datacenters around the globe for the past several years and extends it from computing hardware and storage devices to network infrastructure itself. By inserting a layer of intelligent software between network devices (such as switches, routers and network cards) and the operating system that talks to the wire, software defined networking lets an IT professional or administrator configure networks using only software. No longer must he travel to every physical device and configure-or, in many cases, reconfigure-settings.
SDN achieves the same abstraction that hardware virtualization does. With hardware virtualization, the hypervisor inserts itself between the physical components of a computer (the motherboard, main bus, processor, memory and so on) and the operating system. The operating system sees virtualized components and operates with those, and the hypervisor itself translates the instructions coming to these virtualized components into instructions the underlying physical hardware can handle.
As a result, you can move virtual machines to different computers made up of different underlying hardware as long as the hypervisor is the same or is compatible. That's because the operating system in the virtual machine has to know only how to talk to the virtualized components; it can't see or interact directly with the underlying hardware. This abstraction provides a freedom and more capability to configure and reconfigure computers and servers as ongoing operational needs dictate.
This abstraction idea is the same in SDN; it just involves different pieces of hardware. Networks are virtualized so that software can control how they are built, routed and configured. While the underlying physical network components still route the actual traffic, the place where that traffic flow is controlled (called the control plane in SDN parlance) moves from the hardware to the software running on top of it.
This is useful because the network then transforms from a bunch of wires physically connected to a lot of different devices (as you might guess, this is the data plane in SDN vernacular) into a quasi-intelligent fabric that can be controlled, rerouted, redesigned and troubleshot from a software console.
In particular, it allows for self-service network reconfiguration. When users request resources for themselves, the network can automatically accommodate connectivity for those requests, even if the resources are located in different physical areas. The network appears as one "unit" to the end user, a main benefit of this type of virtualization.
Why Has Software Defined Networking Emerged?
SDN evolved from virtualization primarily because of its usefulness in public and private cloud scenarios. Running clouds involves an enormous amount of network configuration and planning, and the ability to reconfigure networks on the fly from software is particularly valuable in disaster recovery scenarios.
In addition, most SDN implementations are open source, or at least based on widely accepted international standards, and thus are supported by a variety of different vendors. This sort of vendor neutrality is implemented by a set of APIs called OpenFlow. Think of OpenFlow as the engine and mechanics behind implementing SDN. Most tools that let you administer and configure virtualized networks use OpenFlow to communicate with the various physical devices on the network.
In the past, a network might have had several different profiles of capabilities among the different vendors represented in the infrastructure. Having an SDN implementation lets an administrator holistically administer the entire network using a known set of universal capabilities without having to worry about some vendor gear only supporting some specific capabilities and not others.
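The control-plane idea can be made concrete with a small sketch. The following is a purely conceptual match-action flow table (not the actual OpenFlow wire protocol, which also carries priorities, counters and timeouts): a controller installs rules in software, and the "switch" consults them to decide where traffic goes. All field names and port names here are illustrative.

```python
# A toy match-action flow table, illustrating the SDN control-plane idea.
# Conceptual sketch only -- not the real OpenFlow protocol or API.

class FlowTable:
    def __init__(self):
        self.rules = []  # (match_dict, action) pairs, in priority order

    def install(self, match, action):
        """Controller-side call: push a rule down to the 'switch'."""
        self.rules.append((match, action))

    def forward(self, packet):
        """Data-plane lookup: first rule whose fields all match wins."""
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # table-miss default

table = FlowTable()
table.install({"dst_ip": "10.0.0.5"}, "output:port2")
table.install({"vlan": 100}, "output:port7")

print(table.forward({"dst_ip": "10.0.0.5", "vlan": 200}))  # output:port2
print(table.forward({"dst_ip": "8.8.8.8"}))                # drop
```

The point of the sketch is the separation: the `install` calls represent the control plane, while `forward` represents the data plane that merely executes the installed policy.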
SDN is taking on a particular prominence lately because it's essentially the last frontier of physical devices that have yet to be virtualized for easier management and usability. Hardware virtualization has been around for a while, software virtualization is ages old, but networks are the last stone that has been left unturned in this new "virtual" way of thinking. Additionally, mainstream operating systems are beginning to add direct support for managing and configuring software defined networking. Windows Server 2012 and the upcoming Windows Server 2012 R2 in particular both offer increased support for managing SDN implementations.
Drawbacks, Disadvantages of Software Defined Networking
The main issue with SDN is that it's new, and because of this infancy, many believe SDN implementations are not ready for prime time. Networks and backbones play a core, critical role in corporate IT operations, and given the somewhat patchwork state of both the OpenFlow APIs themselves and vendor support for them, it stands to reason that you shouldn't rely fully on network virtualization and SDN implementations at this time. (That said, as the OpenFlow stack of interfaces matures and more network component vendors fully implement standards-based SDN compatibility, SDN will mature as well.)
Where might SDN deployments be appropriate? If you're thinking of setting up a private cloud for a department or a given set of development projects, that would represent an excellent opportunity to pilot some of these technologies with good gear, good software and good practices. Additionally, if you're planning a major, entire network restructuring, it may make sense to plan for SDN and deploy it in certain spots with an eye toward expanding your implementation as your network grows. It wouldn't be wise to invest the kind of money that a major network overhaul would require without planning for SDN in some way.
Software defined networking is a concept that still has a ways to go before it should be considered mature. Standards are evolving, vendor support is patchy but improving, and many administrators simply don't have enough impetus to really get the ball rolling on proper deployments.
For companies that have, or plan to have, extensive private cloud implementations, SDN provides a way to squeeze more usability and flexibility from existing network component infrastructure. Companies running public clouds, either for themselves or for paying customers to use for their own hosted infrastructure, already know this material and are well positioned to take a leadership role in pushing for standards to mature and manufacturers to support OpenFlow and other industry SDN efforts.
The SDN concept is still forming, but it will play an increasingly important role in networks around the globe in the next two to three years. Be ready.
Jonathan Hassell runs 82 Ventures, a consulting firm based in Charlotte. He's also an editor with Apress Media LLC.
This story, "What CIOs Need to Know About Software Defined Networking" was originally published by CIO. | <urn:uuid:4c894b8f-2214-457a-bb42-3a4e02fae9ca> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2168394/smb/what-cios-need-to-know-about-software-defined-networking.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00110-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.933539 | 1,507 | 2.84375 | 3 |
The next time you're harassed by a bee, look closer. It just might be Mobee, a robotic insect, that's spying on you.
Developed at the Harvard Microrobotics Laboratory, the Mobee (from "Monolithic Bee") can buzz around performing tasks such as surveillance -- or even pollinating plants. Harvard's mechanical bee weighs about as much as a real one (roughly one-tenth of a gram), is about the size of a quarter and, like a real insect, has a pair of fluttering wings, a thorax and stabilizing halteres (small structures that operate like gyroscopes).
Unlike an insect, Mobee also has a battery, microprocessor, sensors, transmitter and antennas, and is made from 18 layers of different materials that assemble like a child's pop-up book. | <urn:uuid:4a9c72d8-9900-4005-b391-d138ec37e7ed> | CC-MAIN-2017-09 | http://www.cio.com/article/2369298/hardware/113033-10-small-devices.-No-we-mean-really-small.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00286-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967123 | 173 | 2.921875 | 3 |
As citizens of the United States prepare to cast their votes in the upcoming presidential election, the time is right to consider what implications, if any, Internet-borne threats may have on this process. With political candidates increasingly relying on the web to communicate their positions, assemble supporters and respond to critics – Internet-based risks are a serious concern as they can be used to disseminate misinformation, defraud candidates and the public and invade privacy.
Protecting against these risks requires a careful examination of the attack vectors most likely to have an immediate and material effect on an election, which in turn affect voters, candidates or campaign officials. Once individuals and organizations have a better understanding of these risks, they can put in place many of the same tools and processes that have proven effective in providing Internet protection for both consumers and enterprises.
Barbarians at the Gateway
As malware has evolved into crimeware, Internet threats are no longer noisy devices designed to get attention. Rather, today’s malicious code has moved out of basements and dorm rooms and into the hands of organized crime, aggressive governments and organizations intent on using this ubiquitous high-tech tool for their own criminal purposes.
Businesses and consumers are responding by adopting a more proactive approach to Internet security. Both at home and at work, many Internet users are implementing technologies and practices to mitigate their risk as they work and play online. After all, with their identities, financial well-being and reputations on the line, consumers and businesses have little choice but to tighten their defenses.
However, an equally insidious yet less publicized threat remains: the potential impact of this malicious activity on the election process. Many of the same risks that users have become accustomed to as they leverage the Internet in their daily lives can also manifest themselves when the Internet is expanded to the election process.
Beyond the concerns about voter fraud and the challenges of electronic voting, many of today’s threats from Internet-borne crimeware also have the potential to influence the election process leading up to voting day. From domain name abuse to campaign-targeted phishing, traditional malicious code and security risks, denial-of-service attacks, election hacking and voter information manipulation, the potential impact of these risks deserves consideration.
What’s in a Domain?
In today’s online environment, a number of risks are posed by individuals attempting to abuse the domain name system of the Internet. These include typo squatters, domain speculators and bulk domain name parkers.
Typo squatting aims to benefit from mistakes users might make as they enter a URL directly into the address bar of their web browser. It used to be that a typo resulted in an error message indicating that the specified site could not be found. Now, however, a user is likely to be directed to a different website unrelated to the intended one.
Unfortunately, organizations rarely have registered all potential variations of their domain name in an effort to protect themselves. Typo squatters anticipate which misplaced keystrokes will be most common for a given entity—in the case of election-focused activities, these would be websites related to the leading candidates—and register the resulting domain names so that traffic intended for the correct site goes instead to the typo squatter’s own web properties. The relative scarcity of simple, recognizable “core” domain names has resulted in the development of an after-market for those domain names and has led to the creation of a community of speculators who profit from the resale of domain names.
In fact, typo squatters and domain name speculators no longer even need to host the physical web infrastructure for their own web content or advertisements. Domain parking companies now handle this, for a cut of the advertising profits.
What’s more, some typo squatters’ sites may not simply host advertisements whose profits go back to them rather than to the intended site’s owner, but they may actually forward the user to an alternative site with differing political views. Worse yet, the real potential for future abuse of typo domains may revolve around the distribution and installation of security risks and malicious code, the potential impact of which is evident in online banking, ecommerce and other business-related online activities today.
Phishers, Hackers, and More
The use of malicious code and security risks for profit is certainly not new. It seems the authors of such creations are quick to reach into their bag of tricks in the wake of everything from natural disasters to economic downturns and even elections to try to manipulate users into becoming unwitting participants in their latest cyber scheme.
For example, phishers targeted the Kerry-Edwards campaign during the 2004 federal election—in one case, setting up a fictitious website to solicit online campaign contributions and in another, setting up a fictitious “toll-free” number for supporters to call (and then charging each caller nearly $2 per minute). Whether leveraging a fundraising site to which users have been redirected, a candidate’s legitimate site, spoofed emails or typo-squatted domains, phishers have a wide range of vehicles from which to deliver their malicious activity.
Malicious code infection represents one of the most concerning potential online threats to voters, candidates and campaign officials. With malicious tools that monitor user behavior, steal user data, redirect browsers and deliver misinformation, malicious code targeted at voters has the potential to cause damage, confusion and loss of confidence in the election process itself. By placing keyloggers or Trojans on a user’s system, a cyber criminal could hold the user’s data hostage until a fee is paid to release it; such threats have already surfaced and been leveraged in the larger Internet user community. In addition, a carefully placed targeted keylogger might potentially result in the monitoring of all communications from an individual, including the candidate, campaign manager and other key personnel.
Denial-of-service attacks, which make a computer network or website unavailable and therefore unusable, have become increasingly common on the Internet today. In May 2007, one such attack was launched against the country of Estonia by Russian patriots who disabled numerous key government systems over the course of several weeks. Regardless of the motivation of such attacks or their geographic setting, in an election process they could potentially prevent voters from reaching campaign websites and impede campaign officials from communicating with voters.
In fact, the security of a campaign’s website plays a role in how much faith voters have in the election process. Yet, these websites can also be hacked so that attackers can post misinformation or deploy malicious code to unsuspecting visitors. Attempts to deceive voters through the spread of misinformation using traditional forms of communication are not new. Past campaigns have aimed at intimidating minorities and individuals with criminal records, announced erroneous voting dates and introduced other tactics to create voter confusion. Such activities lend themselves to the Internet because of the ease with which they can be conducted by a single attacker rather than an organized group.
As campaigns increasingly look to the Internet as a tool for gathering support, the inherent risks that follow must also be considered. From domain name abuses to phishing, hacking and other security threats, the risks of online advocacy must be understood by election campaigns so that the necessary precautions can be put in place to protect against them. By keeping a vigilant watch on cyber activities, candidates, their campaigns and voters can help maintain a dynamic yet reliable election process. | <urn:uuid:04dcde88-b9f9-4e3d-acbf-cb1c295ea94d> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2008/08/04/cybercrime-and-politics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00162-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949947 | 1,498 | 2.71875 | 3 |
Informix Dynamic Server 11.50 Fundamentals Exam 555 certification preparation, Part 4, Examining database objects
Tables, constraints, views, indexes, triggers, sequences, and synonyms
From the developerWorks archives
Date archived: January 12, 2017 | First published: September 03, 2009
This tutorial continues your journey into IBM® Informix® Dynamic Server by discussing many of the objects that can be created and used inside of a database. Some of these objects include tables, indexes, triggers, and views. This tutorial discusses what they are, how they are used, and how to create them.
This content is no longer being updated or maintained. The full article is provided "as is" in a PDF file. Given the rapid evolution of technology, some steps and illustrations may have changed. | <urn:uuid:851dd76d-ca1d-4649-99bf-606dd0a9e7f8> | CC-MAIN-2017-09 | http://www.ibm.com/developerworks/data/tutorials/dm-ids-cert5554/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00158-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.874338 | 165 | 2.765625 | 3 |
A quick guide to computer viruses - what they are, how they work and the potential consequences of a virus entering and infecting your computer.
What is a virus?
A computer virus is a malicious program designed to insert itself into the code of another program or data file (called a host) and then make copies of the inserted code.
Arriving and infecting
Viruses can be distributed and installed on a computer in many ways, though the most common methods involve social engineering (tricking the user into receiving and installing the virus themselves), exploiting a vulnerability to silently install the malware, or some combination of the two.
Once the virus arrives on a machine and is run, it begins its attack on the files in the system. Viruses can infect various file types – critical files used by the operating system; document files such as Word or Excel; even special programs tied to the computer's hardware, like the Master Boot Record (MBR). For this reason, viruses are often named by the type of file they infect, such as ‘file infectors' (for system files), ‘Word viruses', and so on.
Damaging the infected
Usually, a virus will replicate each time the host is run, inserting more code either into the same file or into another, similar host. As this process repeats, it will usually damage the host and, in the most extreme cases, make it completely nonfunctional as the virus takes it over entirely.
In addition to infecting and damaging its host, a virus may perform other malicious actions on the affected computer. These actions can range from simple nuisances, such as changing the desktop background, to severely harmful, such as deleting files and programs, modifying or stealing sensitive data files, and so on. The total effect a virus may have on a computer system can be devastating.
The decline of viruses
Viruses were the main type of computer threat faced by users in the 1990s, but today most users are more likely to encounter trojans, worms or other types of malware. Though the total number of actual virus infections has dropped over the years, viruses still remain a threat, especially to users running older, unprotected operating systems or programs.
Technically, viruses can range from fairly simple programs to very sophisticated constructions. Some viruses include features similar to the functionality of trojans or worms; others are capable of constantly changing their own code to avoid detection by antivirus programs. These capabilities make viruses more difficult for users to identify and counter.
IPv4 Address Exhaustion
IPv4 address exhaustion is the depletion of the pool of unallocated Internet Protocol version 4 (IPv4) addresses. The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA) and by five regional Internet registries (RIRs), each responsible in its designated territory for assignment to end users and local Internet registries, such as Internet service providers. On 31 January 2011, IANA's free pool was officially exhausted when the last IPv4 address ranges were assigned to the RIRs. IPv6 is the ultimate solution to IPv4 address exhaustion, and Carrier Grade NAT (CGN) is an integral part of the IPv6 migration.
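The arithmetic behind exhaustion is simple: IPv4 addresses are 32 bits wide, so the entire space holds about 4.3 billion addresses, while 128-bit IPv6 is astronomically larger. Python's standard `ipaddress` module makes this easy to verify:

```python
import ipaddress

# Total size of each address space.
ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2**32
ipv6_total = ipaddress.ip_network("::/0").num_addresses        # 2**128

print(f"IPv4 addresses: {ipv4_total:,}")   # 4,294,967,296
print(f"IPv6 addresses: {ipv6_total:.3e}")

# A single /8 block -- the unit IANA handed out to the RIRs:
print(ipaddress.ip_network("10.0.0.0/8").num_addresses)  # 16,777,216
```

With fewer than one IPv4 address per person on the planet, and each person increasingly owning multiple connected devices, depletion of the free pool was a matter of arithmetic rather than speculation.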
The fight for privacy advanced in the U.S. with 16 states and the District of Columbia introducing legislation that addresses such issues as requiring permission before student data is shared for non-educational purposes and warrants before using cell-tower simulators to intercept phone calls.
“A bipartisan consensus on privacy rights is emerging, and now the states are taking collective action where Congress has been largely asleep at the switch,” said Anthony D. Romero, executive director of the American Civil Liberties Union, which coordinated the initiative, in a statement.
It is also not known whether the next U.S. president will prioritize defending privacy to the extent it "deserves as a core American value," wrote Chad Marlow, ACLU's advocacy and policy counsel.
Legislators in the states of Minnesota, New York, New Mexico and Virginia have introduced bills that model California’s Electronic Communications Privacy Act (CalECPA), a digital privacy law that prevents government access without warrant to private electronic communications, except in emergencies.
The federal Electronic Communications Privacy Act of 1986 that was meant to protect electronic communications from government surveillance has not been updated to reflect new technologies, such as the holding of email or other documents by third parties in “the cloud,” wrote Jadzia Butler, privacy, surveillance and security fellow at the Center for Democracy & Technology.
Legislation aimed to prohibit companies from demanding access except in exceptional circumstances to the social media accounts of current or prospective employees, or educational institutions from demanding access to the social media accounts of students were also introduced in some states, as were bills that would require authorities to quickly delete data collected by automatic license-plate readers of people who were not suspected of any wrongdoing.
The ACLU is also asking people to sign and join its campaign and take control of their data. It has set up a hashtag #TakeCTRL.
The states participating in the initiative are Alabama, Alaska, Connecticut, Hawaii, Illinois, Massachusetts, Michigan, Minnesota, Missouri, Nebraska, New Hampshire, New Mexico, New York, North Carolina, Virginia, and West Virginia, and the District of Columbia. Some 100 million people live in these states. | <urn:uuid:770f78d1-efcd-4bbb-b903-47f8192cc268> | CC-MAIN-2017-09 | http://www.computerworld.com/article/3025155/security/fight-for-privacy-of-students-cellphone-users-moves-to-states.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00210-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95264 | 441 | 2.625 | 3 |
USGS's new database helps users map biodiversity
- By Frank Konkel
- Apr 19, 2013
A new Web-based federal resource makes more than 100 million mapped records of nearly every living species nationwide searchable by any user.
The Biodiversity Information Serving Our Nation (BISON) system allows users -- often land managers, researchers, refuge managers, citizen scientists, agriculture professionals, fisheries managers, water resource managers, educators and others -- to search for hundreds of thousands of species records in search fields ranging from the entire country and U.S. territories down to specific towns or parks.
BISON displays search results in list or interactive map formats, and each species occurrence point can be clicked on to find more information about who provided or collected the data. Ultimately, more than 50 layers of environmental information can be visualized in each search result, making the data easier to understand.
The U.S. Geological Survey “is proud to announce this monumental resource,” said Kevin Gallagher, associate director of USGS’ Core Science Systems, in a press statement.
“This is a testament to the power of combining the efforts of hundreds of thousands of professional and citizen scientists into a resource that uses big-data and open-data principles to deliver biodiversity information for sustaining the nation’s environmental capital,” Gallagher added.
BISON was built by the USGS and will be maintained on the Energy Department’s computing infrastructure at Oak Ridge National Laboratory in Tennessee.
Users can query species by scientific or common name, year range, state, county, basis of record or provider institution. And searches are not limited to living species. A search for Tyrannosaurus rex -- the dinosaur that died out 65 million years ago -- returns results showing where T. rex fossil remains have been found across the country.
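The kind of query BISON answers can be sketched locally. The records and field names below are invented for illustration (the real system exposes a far richer schema and serves results over the web), but the filtering logic mirrors a search by scientific name and state:

```python
# Toy occurrence records; field names are illustrative, not BISON's schema.
records = [
    {"scientific_name": "Bison bison", "state": "Montana", "year": 2011},
    {"scientific_name": "Bison bison", "state": "Wyoming", "year": 2009},
    {"scientific_name": "Canis lupus", "state": "Montana", "year": 2012},
]

def search(records, **criteria):
    """Return records whose fields match every supplied criterion."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

hits = search(records, scientific_name="Bison bison", state="Montana")
print(hits)  # the single Montana bison record
```

At BISON's scale, the same conceptual filter runs against more than 100 million records, with the results rendered as lists or interactive maps rather than printed dictionaries.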
One example of a practical use of BISON would be a land manager looking for land to purchase for conservation. BISON makes it easy to search for all documented species on a parcel and helps conservationists make more informed decisions.
BISON already includes millions of data points from the federal investment in biodiversity research, and it stands to increase the size of its database and its delivery of federally funded biodiversity data through formal cooperation with other agencies.
The undertaking already includes participation from hundreds of thousands of citizen and professional scientists. Nongovernmental organizations, state and local governments, universities, and many others are also participating in the enormous undertaking.
"With BISON, the USGS takes a big step toward making biodiversity data held within federal agencies easier to find and use,” said Mary Klein, president of NatureServe, a nonprofit organization whose mission is to provide the scientific basis for effective conservation action.
“I am enthusiastic about future opportunities to work with USGS to increase collaboration among federal, state and private data holders,” Klein added.
Frank Konkel is a former staff writer for FCW. | <urn:uuid:c6744c6d-ed91-43d3-8248-e0c99dfc160a> | CC-MAIN-2017-09 | https://fcw.com/articles/2013/04/19/usgs-species-finder.aspx?admgarea=TC_Agencies | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00154-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.885581 | 601 | 3.203125 | 3 |
Several lectures from the VSCSE Summer School on Science Clouds (July 30, 2012) are now available for viewing on YouTube. The presentations provide a clear and concise overview on the state of cloud and virtualization technologies with a particular focus on MapReduce.
These free, online lectures are part of the MOOC (massive open online course) movement, the product of an open-education ethic characterized by open access and scalability.
There are currently four "Cloud Computing MOOC" lectures available for viewing. In the first one, Professor Geoffrey Fox introduces the Indiana University Cloud MOOC. In addition to laying out the agenda, Fox provides examples of the applications that are best-suited for clouds, most notably those that are "pleasingly parallel." He highlights several science projects, for example FutureGrid, that are using cloud-based technologies, but also alludes to a lot of untapped potential.
Fox points to some interesting future possibilities. For example, it is projected that 24 billion devices will be connected to the Internet by 2020. This Internet of Things will rely on cloud for control and management functions. More and more, computing will look like a grid or mesh that touches nearly every aspect of our lives. The ability to offload computational tasks to the cloud will also enable advances in mobile computer devices and robotics.
Life science is another major vertical when it comes to cloud technology. Assistant Prof. Michael Schatz of the Simons Center for Quantitative Biology lectures on the use of cloud computing in genetic sequencing. Schatz is known for having produced some highly-sophisticated uses of MapReduce for biology applications. MapReduce was developed at Google for big data computations. It is a proprietary framework, but thanks to a 2004 paper, there are now open source implementations, most notably Hadoop.
Schatz notes that “Google every single day does the equivalent of a year’s worth of sequence analysis.” Traditional servers are no longer sufficient to handle such enormous data loads, but that’s where parallel computing technologies like MapReduce come in. Schatz gives an overview of the benefits and challenges of Hadoop and MapReduce before delving into specific implementations.
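The MapReduce model Schatz builds on can be sketched in a few lines: a map step emits (key, value) pairs, a shuffle groups them by key, and a reduce step folds each group. The canonical example is word counting. Real Hadoop jobs distribute these phases across a cluster; this single-process sketch shows only the programming model.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input split.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key (the framework does this in Hadoop).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: fold each key's values -- here, summing the counts.
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cloud runs MapReduce", "the sequence data meets the cloud"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["the"])    # 3
print(counts["cloud"])  # 2
```

Because map calls are independent and each reduce sees only one key's values, both phases parallelize naturally, which is exactly what makes the model attractive for "a year's worth of sequence analysis" per day.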
In the next video series, Professor J. Hacker argues that there is a growing need for virtualization in HPC. He explains that the motivation for this conclusion is threefold: the clock-speed increases following Moore's law have ceased; hardware is going multicore (for example, Intel MIC); and the memory capacity of systems is increasing (512 GB on systems today). He notes that the traditional approach is to tie a single application to a single server. With 50-plus cores, this approach is no longer effective. Virtualization technology is being used to partition large-scale servers to run many operating systems and VMs independently of each other.
The entire lecture is less than one hour long and provides an overview of virtualization and cloud technology in relation to HPC and then offers some practical advice for leveraging virtual HPC clusters. Hacker refers to cloud computing as the “distributed computing of this decade.” He views cloud as a computing utility that provides services over a network that “pushes functionality from devices at the edge (e.g. laptops and mobile phones) to centralized servers.”
In the last video series, Jonathan Klinginsmith, a PhD candidate at the School of Informatics and Computing at Indiana University, speaks about virtual clusters, MapReduce and the cloud. He covers such important questions as “Why is cloud interesting?” (hint: scalability, elasticity, utility computing).
While Klinginsmith’s main research interest is machine learning and artificial intelligence, he has turned to computer science and information systems to address the problem of growing data sets. He is not alone. Researchers from nearly scientific endeavor are finding it necessary to attain some degree of computational proficiency.
Klinginsmith aims his talk primarily at these non-computer scientists. Thus his presentation focuses mainly on running applications on top of clusters rather than getting too deep into the nuts and bolts of building and operating clusters. For anyone who is just getting started with Hadoop or MapReduce, this will be a valuable resource. In under an hour, the viewer should acquire a basic understanding of MapReduce, virtual machines, clusters, cloud and virtualization. | <urn:uuid:77716bde-2772-4bbf-a346-4bfb85918108> | CC-MAIN-2017-09 | https://www.hpcwire.com/2013/01/07/free_lectures_cloud_virtualization_mapreduce_and_more/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00030-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937578 | 911 | 2.828125 | 3 |
Domain hijacking is a less frequently discussed but no less harmful attack on a company's or individual's Web presence. To establish a website, a domain name ("Companyabc.com") and a Web server (hosting service) must be procured. When a domain is hijacked, the attacker takes control of the domain registrar, a company that has been accredited by the Internet Corporation for Assigned Names and Numbers (ICANN) or a national country code top-level domain (TLD) account, to manipulate communication between the domain name and Web server. In effect, the attacker is interrupting the communication and redirecting traffic from one domain name server to another, using the new domain server for his/her own purposes. Once under new control, the criminal(s) can use the replicated name server (associated with the public-facing website) to send traffic to a new IP address and defraud visitors, interrupt private communications between the server and user, access visitors' account information (steal passwords/credentials), hold a domain hostage from the rightful owner, deface the website, interrupt service, serve up malware, or perpetrate pharming or phishing attacks. It is often extremely difficult to distinguish between the legitimate website and the coopted website. | <urn:uuid:80f9a742-901a-4e7c-9232-8beee77ca1f9> | CC-MAIN-2017-09 | http://misti.com/infosec-news-trends?limit=20&start=180 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00206-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.888175 | 256 | 2.734375 | 3 |
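One simple defensive measure follows directly from this description: record a baseline of the nameservers your registrar should be delegating to, and alert when an observed answer disagrees. The sketch below compares an observed nameserver list (stubbed as a plain list; a real monitor would obtain it with a DNS query against the parent zone) to that baseline. All hostnames are invented.

```python
def check_delegation(expected_ns, observed_ns):
    """Flag nameserver changes -- additions or removals -- relative to a
    trusted baseline. An unexpected entry is a possible hijack signal."""
    expected, observed = set(expected_ns), set(observed_ns)
    return {
        "unexpected": sorted(observed - expected),
        "missing": sorted(expected - observed),
    }

baseline = ["ns1.companyabc.com", "ns2.companyabc.com"]
alert = check_delegation(baseline, ["ns1.attacker.example", "ns2.companyabc.com"])
print(alert)
# {'unexpected': ['ns1.attacker.example'], 'missing': ['ns1.companyabc.com']}
```

A check like this catches the delegation change itself; it does not replace registrar-side protections such as registry locks and strong account credentials, which prevent the change from happening in the first place.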
Liquid Cooling Gaining in Popularity Again
The idea of cooling your data center equipment using any type of liquid might sound like an absolute non-starter. Lunacy in fact. But thanks to spiraling energy costs, corporate green initiatives and new high-tech coolants, the concept of liquid cooling in the data center is enjoying a renaissance.
Liquid cooling actually goes back a long way in the computer industry: The first computer designed to be cooled by liquid was almost certainly the Cray-2 supercomputer back in 1985, which used an extremely environmentally unfriendly coolant called Fluorinert to dissipate heat. More recently, water-based cooling systems have been used by hardcore gamers, enabling them to overclock their gaming rigs without causing them to melt.
Now a number of new companies have sprung up with solutions that use new coolants. These liquids do not conduct electricity so they can be in direct contact with electronics without causing any damage, and since they are many times better than air at capturing and transporting heat they offer the prospect of dramatically more energy-efficient cooling than is possible using the conventional chilled air approach.
The heat captured by the coolant is generally transferred to a water pipe loop using a heat exchanger, and the heated water is then pumped out of the data center, where the heat can be dissipated into the air using a radiator. But it doesn't have to be wasted. The waste heat can also be used to warm office spaces or to provide hot water in a building, reducing corporate energy costs further.
The potential benefits of liquid cooling are significant. For starters, a suitably designed system can capture almost all of the heat generated by a server's components, so there is no need to power internal fans to assist airflow. That in itself can reduce server power consumption by about 30 percent. But the main savings come from reduced air cooling costs. Since heat from the servers is captured by the coolant and removed without warming the air around the server racks, there is little or no need for computer room air conditioning (CRAC) equipment. And since the electricity needed to power CRAC equipment, chilling plants and other cooling equipment may account for as much as 30 percent of data center running costs, the potential savings are enormous.
But there are other potential benefits of liquid cooling, as well. One major benefit is reduced space. Server racks can be packed much more densely without the need for hot and cold aisles, because good airflow is no longer necessary. Some vendors also claim there is the potential to overclock servers, in the way that gamers do, without introducing reliability issues, because liquid cooling is so effective. If you are unwilling to risk overclocking then, at the very least, it is reasonable to expect that components that are cooled more effectively should also last longer and prove more reliable. And since liquid cooling is almost silent, unlike CRAC equipment, data center noise levels can be significantly reduced. This makes it easier to comply with Occupational Safety and Health Administration noise regulations, and almost certainly obviates the need for staff to use ear defenders when working close to the server hardware.
Added benefits (and drawbacks)
The design of liquid cooling systems varies, but in systems where the coolant is in direct contact with components, the coolant itself acts as a fire suppression system. That's because the coolant is inert and the components are no longer exposed to air. This also removes any potential corrosion problems due to air quality when data centers are located close to salty sea air or where there is excess humidity.
The one significant drawback to liquid cooling systems is that most can only be used with special hardware (usually based on standard components) supplied by the cooling system vendor or its partners. The rest can only work with existing server hardware after it has been modified by the vendor.
Hardcore Computers, a Minnesota-based systems manufacturer, offers a liquid cooling system called LSS (Liquid Submerged Server) 200, which works by pumping a coolant it calls Core Coolant through sealed server cases so that all the internal components are submerged in the liquid. The servers, which are based on Intel Xeon processors and use solid state drives (SSDs) for internal storage, are priced at a slight premium to conventional servers with the same specification, according to Chad Attlesey, the company's founder.
But the system enables significant energy cost savings, he said. "We can cut your data center power consumption in half because of the reduction in air conditioning and air moving equipment."
A full rack of servers can require 10kW to power the server fans alone, while three racks can be cooled using liquid cooling with a single 200W pump, without the need for CRAC equipment, he said. A liquid cooling system could pay for itself in as little as three years, said Attlesey.
A variation on this system is about to be launched by Iceotope, a UK-based vendor. The company's Iceotope Platform server cabinet holds up to 48 hot-swappable sealed servers, each filled with a 3M engineered coolant called Novec. The difference from Hardcore's system is that the coolant remains sealed inside the servers and is then cooled by water flowing in a loop on the outside of the case.
Peter Hopton, Iceotope's CTO, said that Iceotope servers consuming 300kW of power would normally require 150kW of power for cooling. Instead, the water pump uses about 1kW. He also said that servers cooled in this way will prove more reliable.
"The cooling is so uniform inside that components have almost no variation in temperature, so there is no thermal fatigue."
The system will be priced in line with comparable air-cooled systems, so that payback will be achieved from energy cost savings.
Asetek, a California-based company, has recently unveiled its Sealed Server Liquid Cooling system, which uses both air and liquid cooling driven by its Rack CDU (Coolant Distribution Unit). Each sealed server in a rack contains a pipe loop carrying coolant. The loop cools two or more cold plates placed over the CPUs and (optionally) memory, and also cools the air in the sealed server, which in turn cools the rest of the server's components.
The company estimates that a full rack would draw 21kW, requiring 7kW to power computer room air conditioning. Its liquid cooling would require around 3kW per rack, resulting in a saving of about $3,500 per rack per year at $0.10 per kWh. The company's Rack CDU is priced to achieve a one-year payback period.
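The arithmetic behind that estimate is straightforward to reproduce. The sketch below simply multiplies out Asetek's published figures as quoted above; the function name and its assumption of year-round 24/7 operation are ours:

```python
# Reproducing Asetek's savings estimate from the figures quoted above:
# air cooling ~7 kW per rack vs. liquid cooling ~3 kW per rack.

def annual_cooling_savings(air_kw, liquid_kw, price_per_kwh, hours=24 * 365):
    """Yearly savings from the reduction in cooling power, in dollars.

    Assumes the rack runs around the clock all year (8,760 hours).
    """
    return (air_kw - liquid_kw) * hours * price_per_kwh

savings = annual_cooling_savings(air_kw=7, liquid_kw=3, price_per_kwh=0.10)
# 4 kW saved * 8,760 h/year * $0.10/kWh comes to roughly $3,504,
# in line with the "about $3,500 per rack per year" figure in the article.
```

The same function applied to Hardcore's or Iceotope's numbers gives a quick first-order check on any vendor's payback claim.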
Green Revolution Cooling, a Texas-based company, offers probably the most radical departure from traditional data center cooling. Its CarnotJet system is based on the concept of dunking -- literally placing an entire server rack into a tank of its GreenDEF coolant. The system can be used with almost any server as long as various modifications are carried out first. These include sealing hard drives (as these can't function properly when immersed in liquid), removing internal cooling fans, and replacing any thermal compounds with ones that won't dissolve in the coolant. The company said CarnotJet can cut cooling energy use by 90 percent and offer a payback period of between one and three years.
Since liquid cooling solutions need new or modified server hardware, their appeal is limited. But for enterprises planning a hardware refresh or a new data center build-out, investing in liquid cooling could be a sound financial move that reduces energy bills and enhances corporate green credentials.
Paul Rubens has been covering IT security for over 20 years. In that time he has written for leading UK and international publications including The Economist, The Times, Financial Times, the BBC, Computing and ServerWatch. | <urn:uuid:8c452164-bc76-46c0-96fc-f07585126a1a> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/datacenter/liquid-cooling-making-a-comeback.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00382-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946755 | 1,581 | 2.796875 | 3 |
According to the team, the technique works (at least in part) even on devices that are fully encrypted and have locked bootloaders. Their toolkit for the exploit is dubbed FROST, for Forensic Recovery Of Scrambled Telephones.
Credit: Friedrich-Alexander University
"Scrambled telephones are a nightmare for IT forensics and law enforcement, because once the power of a scrambled device is cut any chance other than brute force is lost to recover data," the FAU team said.
Disk encryption was introduced to Android, ironically, in Version 4.0, or Ice Cream Sandwich. The researchers used a Samsung Galaxy Nexus to demonstrate FROST in a step-by-step tutorial posted to their website.
The idea behind the trick is that information stored in RAM remains present for much longer if the temperature is particularly cold -- which means that it can be possible to access decryption keys stored in the phone's memory if it's done quickly enough.
By chilling a well-charged phone to about minus 10 degrees Celsius, then turning it off and on again as fast as possible (the team recommended simply popping the battery in and out quickly) and booting it into recovery mode, data like photos, Web history and phone contact lists can be plucked from the device using custom software developed by the German researchers.
If the phone has an unlocked bootloader, the software can even snare encryption keys from the vulnerable RAM, allowing for full access to everything stored on the device. | <urn:uuid:115f9dc0-4146-4027-80ab-56882868d6b9> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2164213/security/freezedroid--researchers-discover-cold-temps-can-unlock-secured-android-phones.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00026-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925498 | 324 | 2.578125 | 3 |
While the healthcare sector is finally becoming aware of the cyberthreats and risks facing medical devices, new Internet of Things health devices are quickly creating new vectors for cyberattacks, warns cybersecurity expert Tyler Cohen Wood.
"The problem is that we've moved to this constantly connected healthcare system where we have devices that are sending data to a doctor, or healthcare systems that are using other digitally controlled devices," says Wood, cybersecurity adviser at Inspired eLearning. "The more connected you become, and the more software you're utilizing, typically the more open you are to attack."
In an interview with Information Security Media Group, Wood, a former Defense Department intelligence officer, says healthcare providers as well as device manufacturers are starting to implement security measures that go beyond what's recommended in the Food and Drug Administration's recently issued draft guidance for post-market cybersecurity of medical devices.
But consumer wearable health devices and other Internet of Things health gadgets and applications are creating new potential vulnerabilities, she says.
"If your heart monitor or diabetes monitor is connected to a [smart] phone, then you also have the added issue of the security of the phone. So, that's really where the problem lies," she says. "We are moving so quickly to the Internet of Things types of devices. When [manufacturers] were developing these devices, it's not intentional that security was not added; it's that they don't know all the risks and threats that are out there."
The healthcare sector "has just moved at tremendous speed in just the past couple of years" in becoming dependent on Internet-connected devices, she notes.
In the interview, Wood also discusses:
- The need for education on the latest risks and threats;
- What the healthcare sector can learn from other industries about cybersecurity, and what lessons other industries can learn from the challenges faced by the healthcare sector;
- Why ransomware is becoming an increasingly significant problem;
- The impact of the Cybersecurity Act of 2015 on potential cyber threat information sharing opportunities.
Wood is cybersecurity adviser at Inspired eLearning, a provider of Web-based training services. Previously, she spent more than 13 years working for the U.S. Department of Defense's Defense Intelligence Agency. There she served as a senior intelligence officer, deputy cyber division chief of the special communications division and the science and technologies directorate's cyber subject matter expert. In those roles, she made recommendations significantly changing, interpreting and developing important cyber policies and programs affecting DoD and intelligence community programs. | <urn:uuid:3aeb2286-ce5a-4e5e-b9dc-c09d77de8795> | CC-MAIN-2017-09 | http://www.inforisktoday.com/interviews/internet-things-new-cyber-worries-for-healthcare-sector-i-3075 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00202-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.965041 | 511 | 2.515625 | 3 |
While the Mars Curiosity rover is the most complex machine NASA has ever sent to another planet, the computer that runs it is no more powerful than the one in your smartphone.
The robotic rover, which landed in the Gale Crater on Mars early Monday morning, now is being put through a series of tests to make sure all of its systems are functioning properly after its more than 350-million-mile journey from Earth. It may be several weeks before the SUV-sized robotic rover is ready to begin its trek across the Martian surface.
Curiosity is on what scientists hope will be a two-year mission to find out if the planet has, or ever had, what it takes to support life, even in a microbial form. The rover also will look for signs that humans could one day live on Mars.
While Curiosity is on a major scientific mission, the rover itself is something of an engineering marvel, too, according to the men and women who built it.
The Mars rover Curiosity has two computers, four chips and software designed to last throughout its two-year mission. (Artist concept: NASA)
The rover not only has to be sturdy enough to survive a more than eight-month journey through outer space to reach Mars, once there, it has to be able to work in brutal temperature extremes for years.
"This is the most complex system that we've ever sent to another planet," Devin Kipp, an operations lead on NASA's Curiosity team, told Computerworld. "The entry and landing system was the most complex system we've ever attempted. And now that it's on the ground, it's incredibly capable. The amount of state-of-the-art scientific instruments is really impressive."
Curiosity, which weighs nearly 2,000 pounds and carries 17 cameras and 10 scientific instruments, only has two computers and four processors.
Jonathan Grinblat, an avionics systems engineer with the Jet Propulsion Laboratory, said the rover used one Sun Microsystems Sparc processor to run the craft's thrusters and descent-stage motors as it moved through the Martian atmosphere. With the rover on the ground, that processor's job is done.
After the landing, a PowerPC processor, which was originally developed by the Apple-IBM-Motorola alliance, known as AIM, has taken over. This main processor, which has a redundant backup processor, runs all of the main software on the rover, handling "pretty much everything the rover will do," according to Grinblat.
The fourth chip, which also is a Sparc processor, is in Curiosity's motor controller box. Grinblat explained that the main processor sends it commands and this one handles the logistics of getting the motors to move.
All of the processors are single-core. Grinblat noted that Curiosity will be one of the last generations of NASA spacecraft that will have single-core chips.
"This project has been in development for more than 10 years," he said. "Multi-core, space-certified chips weren't available back then. Now, they are available, so they'll probably be used in projects eight to 10 years from now."
Grinblat added that any spacecraft being launched in the next several years still won't carry multi-core chips because of the long development time. "Because of that, we lag behind in the latest and greatest, especially since there's so much work to do to make them space ready," he said.
Curiosity only has two computers onboard: one main computer and one backup, according to Kipp. Both have been hardened to handle the rigors of temperature extremes and solar radiation.
Neither is a supercomputer. In fact, they're not even as powerful as today's laptops. Grinblat explained that the computer doesn't have to be highly powerful. It needs to be able to handle basic functions, but it's more important that it be able to survive and work in the harsh conditions in space and on Mars.
"It's not more powerful than my cell phone," said Grinblat. "They have to be tolerant to radiation so they run much slower and their feature sizes are much larger than modern-day processors. The smaller they are, the more susceptible they are to radiation. We're constantly getting bombarded with particles. A processor with a much smaller feature size would mean more errors than it could handle."
NASA released photos taken by the Mars Reconnaissance Orbiter showing the rover surrounded by the landing sites of several of its components, such as its heat shield, parachute and sky crane.
Because of this, he added that an Intel Core i7 processor, for example, wouldn't survive a day on Mars because of the extremes in temperature.
Despite the fact that Curiosity doesn't have a huge amount of compute power, it's still a smart machine, able to scan its environment and make decisions.
Kipp noted that the rover has an Auto Nav, or automatic navigation, mode, which enables it to monitor its own wheels for slippage and use cameras to scan the ground for rocks or holes that could impede its travel.
"It can drive and figure out if the hazard poses a risk to the safety of the rover," said Kipp. "If it believes it might, it will stop and phone home and verify if we want it to keep driving. It's pretty smart. We have a heightened confidence in it."
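The decision loop Kipp describes can be caricatured in a few lines. This is purely an illustrative sketch, not flight software: the function name, inputs, and threshold are all invented, and the real Auto Nav system evaluates terrain from stereo camera data rather than a single risk score:

```python
# Toy illustration of the Auto Nav decision logic described above.
# Inputs and threshold are invented for the sketch; real flight
# software is vastly more involved.

def auto_nav_step(wheel_slip_detected, hazard_risk, risk_threshold=0.5):
    """Decide whether the rover keeps driving or stops to 'phone home'.

    wheel_slip_detected: True if wheel monitoring saw slippage
    hazard_risk: 0.0-1.0 score from scanning the ground ahead
    """
    if wheel_slip_detected or hazard_risk >= risk_threshold:
        # The hazard may pose a risk to rover safety: stop and ask
        # mission control to verify before continuing.
        return "stop_and_phone_home"
    return "keep_driving"
```

The point of the pattern is conservatism: the rover acts autonomously only while conditions look safe, and defers to Earth the moment they don't.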
The rover, which can receive software updates from Earth, also has a robotic arm, which is designed to use various tools to dig for rock and soil samples, as well as to scoop up the samples and deposit them in onboard scientific instruments.
"It's about the science. That's why we do this," said Kipp. "But the expertise that we've developed on how to build, develop, test and run a mission like this is really a national treasure.... Hopefully we'll have following missions where we can use what we're learning here."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed.
This story, "NASA: Your smartphone is as smart as the Curiosity rover" was originally published by Computerworld. | <urn:uuid:01a8a87c-1d4f-45a6-8327-bfc5cab4b1a4> | CC-MAIN-2017-09 | http://www.itworld.com/article/2725157/hardware/nasa--your-smartphone-is-as-smart-as-the-curiosity-rover.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00202-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.969602 | 1,321 | 3.5 | 4 |
The use of electronic health records directly improves treatment and outcomes for patients with diabetes, according to a recent study involving tens of thousands of patients. The greatest improvement was among patients with the most severe diseases.
Researchers from Kaiser Permanente Northern California studied nearly 170,000 diabetics who were being treated as outpatients at the health system’s 17 medical centers between 2004 and 2009. The use of certified EHRs resulted in “statistically significant improvements in treatment, monitoring and disease control,” according to an abstract of the study, “Outpatient Electronic Health Records and the Clinical Care and Outcomes of Patients With Diabetes Mellitus.”
The researchers measured how EHRs helped clinicians intensify treatment to improve patients’ blood-glucose and LDL cholesterol levels.
"Increases in information availability, decision support, and order-entry functionality help clinicians to identify the most appropriate patients for drug-treatment intensification and retesting, which leads to better care of patients with diabetes," Dr. Marc Jaffe, clinical leader for the Kaiser Permanente Northern California Cardiovascular Risk Reduction Program, said in a statement.
The improvements in blood-glucose and lipid levels were seen across the board, lead researcher Mary Reed said in a statement. The Kaiser Permanente researchers next intend to examine how EHR use affects emergency room visits for diabetics, she said.
The National Institute of Diabetes and Digestive and Kidney Diseases provided funds for the study, which was published Oct. 2 by the Annals of Internal Medicine. | <urn:uuid:a9a68b54-8b87-4492-9af0-e4e1e0248ee4> | CC-MAIN-2017-09 | http://www.nextgov.com/health/health-it/2012/10/study-shows-health-it-benefits-diabetic-patients/58645/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00202-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941701 | 319 | 2.71875 | 3 |
Significant advances in technology and shifts in economies and culture are bringing about a new age of intelligent tools that are aware, can make sense of their surroundings, and are socially cognizant of the people who are using them.
Sentient tools are the next step in the development of computational systems, Smart Cities and environments, autonomous systems, artificial intelligence (AI), Big Data and data mining, and an interconnected system in the Internet of Things (IoT). These tools are “what comes next” and emerge from a base of computational, sensing, and communications technologies that have been advancing over the last 50 years.
The "awareness" of these sentient tools is not comparable to a human level of consciousness. They are not meant to mimic, mirror, or replace human interaction. These tools are designed for specific physical and virtual tasks that could be vastly complex but are not meant to replace humans. Conversely, they are meant to work alongside the human labor force.
The rise of sentient tools will have a significant impact on the global work force and education, leaving practically no industry unaffected. | <urn:uuid:b8117ff5-4ec1-4754-a9cf-ece747d7e105> | CC-MAIN-2017-09 | http://www.frost.com/sublib/display-market-insight.do?id=296998960 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00378-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.936181 | 220 | 3.0625 | 3 |
A recent Evans Data survey indicates that developers are concerned that advancements in artificial intelligence could mean fewer jobs for programmers.
Despite being among the leaders of the digital revolution, software developers apparently are just as concerned as workers in other fields that automation and technological advances could, at some point, endanger their jobs, according to a recent Evans Data survey.
Indeed, the Evans Data study indicates that developers fear that their own obsolescence will be spurred by artificial intelligence (AI). The company surveyed more than 550 developers across a variety of industries. When asked to identify the most worrisome thing in their careers, nearly one-third (29.1 percent) selected the "I and my development efforts are replaced by artificial intelligence" category.
Developers' next biggest worries were related to platform concerns. Twenty-three percent of the respondents said they were worried that the platforms they work on might become obsolete, and 14 percent said they were worried that the platform they are targeting might not gain significant adoption.
"Another dimension to this finding is that over three-quarters of the developers thought that robots and artificial intelligence would be a great benefit to mankind, but a little over 60 percent thought it could be a disaster," said Janel Garvin, CEO of Evans Data, in a statement. "Overlap between two groups was clear which shows the ambivalence that developers feel about the dawn of intelligent machines. There will be wonderful benefits, but there will also be some cataclysmic changes culturally and economically."
Some observers note that developers often see firsthand the power of AI and are more keenly aware of its potential.
"It does not surprise that developers, who have the skills to understand AI at a deeper level than most folks, would be concerned about it," said Al Hilwa, an analyst with IDC. "However, I would say there are many other jobs and roles that are less creativity-centric—e.g. news reporting—that are more vulnerable in the first order."
Yet, from a broader perspective, this has been a major anxiety in recent history over how technology, which in the early days was largely mechanical and electrical, would replace humans, Hilwa said.
"Over time, it did, but the net result is a transformation in the nature of work towards knowledge work, and the shift in the nature of economies and the products produced," he said. "Overall, there has been dislocation of course, but also incredible growth and net improvement in lifestyles at almost every level of the income scale."
The Evans Data study comes at the emergence of what IBM CEO Ginni Rometty calls the "cognitive era." With its Watson cognitive computing system, IBM is pushing into the cognitive era in a major way. Big Blue's Watson features a natural language interface, which enables users to directly query the system in natural language. Watson understands and responds in natural language. The system can ingest vast amounts of data and analyze it in milliseconds. It also learns from itself and builds its base of knowledge every time it is used.
During a keynote at the Consumer Electronics Show (CES) in January, Rometty announced several new advances and partnerships built around the IBM Watson cognitive computing platform. Each of those advances has the potential to impact jobs at some level, including IBM's plans with Softbank Robotics to take their partnership on a Watson-powered robot global. Through their joint work, Softbank has infused Watson into its "empathetic" robot Pepper, enabling it to understand and answer questions in real time, opening up new possibilities for the use of robotics in business scenarios such as banking, retail and hospitality.
You may not have heard the name Ceph before, but that's about to change. This new storage technology is about to enter the data center and change the way we look at data storage and cloud computing.
First, we need to know what Ceph is. Basically, it's an open-source file system for Linux. The opening line on the official Ceph site is a powerful boast:
Ceph uniquely delivers object, block and file storage in one unified system.
The structure of Ceph is built on top of a distributed object store, with a Representational State Transfer (REST) gateway (REST is an architectural style widely used for transferring data over the Web), a block device interface, and a Portable Operating System Interface (POSIX, the industry-standard Unix file-system API) file system. All of the access methods run on the same object store, which is made up of commercial, off-the-shelf (COTS) hardware.
However, Ceph's real claim to fame is that the store can be scaled to exabytes of data by adding storage nodes, which provide the type of scale needed for large cloud-computing infrastructures, as well as big-data at some point.
There are three types of nodes in the clusters. A metadata node serves up data about the objects. This is a memory-heavy, powerful multicore x86 system. Object storage devices (OSDs) carry the data, and are much less compute intensive. Finally, monitor nodes hold the cluster map. The metadata servers load-balance dynamically, and data is striped across OSDs, so the implementation is relatively free of hot-spots.
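The reason the design is "relatively free of hot-spots" is that placement is deterministic: any client can compute where an object lives from its name alone, with no central lookup table to overload. Ceph's real placement algorithm, CRUSH, is considerably more sophisticated -- it accounts for cluster topology and failure domains -- but the sketch below illustrates the core idea with a plain hash; the function, object names, and replica policy are invented for illustration:

```python
import hashlib

# Deliberately simplified stand-in for Ceph's CRUSH placement.
# Real CRUSH walks a cluster hierarchy and respects failure domains;
# here we just hash the object name to pick OSDs deterministically.

def place_object(object_name, num_osds, replicas=2):
    """Map an object name to a list of OSD indices, deterministically."""
    digest = hashlib.sha256(object_name.encode()).digest()
    primary = int.from_bytes(digest[:8], "big") % num_osds
    # Replicas go to the next OSDs in sequence in this toy version.
    return [(primary + i) % num_osds for i in range(replicas)]

osds = place_object("vm-image-042", num_osds=12)
```

Because every client computes the same answer from the same inputs, reads and writes spread across the cluster without consulting a central broker, and the metadata servers stay out of the bulk data path.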
OSDs use a standard file system internally to provide a stable environment for storing the objects. In that sense, Ceph is a piggy-back layer, but the philosophy is to use the best of what is available without attempting to build a better mousetrap, and that seems like a sensible approach.
The current production file system recommended is XFS, which is mature, with Btrfs (B-tree file system) being the longer-term choice. Btrfs is still early in its life and a bit buggy, but it is designed to be very extensible. It handles heterogeneous storage, already has snapshots and compression, and will have deduplication and encryption built in at a future release.
Stepping back from the techy level, Amazon S3 and OpenStack Object Storage (Swift) compatibility is built in to the Ceph product. Right now, that means that Ceph can simulate a Swift or S3 object store, but it is likely that Ceph will expand to build interfaces into these stores at a later date, potentially allowing inter-cloud bridging for geographic dispersion, extension, and data migration.
The impact of a product like Ceph shouldn't be underestimated.
This is a unique solution right now, giving a truly unified access method and view of a single common storage system. Being an open-source solution, the price is right. It has been accepted into the mainline Linux kernel, which highlights the support in the industry, and it is being taken up by other providers in the cloud, including OpenStack.org, Red Hat, and SUSE.
DreamHost is currently running a 3 Petabyte Object Store using Ceph, so it is certainly production-ready.
Ceph is being enthusiastically picked up. It's a fair bet that it will become the backbone for host-based storage software services. There's also potential to use the technology with virtual storage appliance (VSA)-type systems. Ceph's Reliable Autonomic Distributed Object Store (RADOS) can be, and has been, decoupled from the upper layers, as SUSE Cloud is doing. By migrating features away from the storage nodes, it could change the way systems are built, and upset the big-iron storage applecart by moving the storage sweet spot to COTS hardware.
The evolution of Ceph, and the growth of the surrounding ecosystem, will move rapidly, and it will change storage. | <urn:uuid:be35aa0f-a26b-462b-b9d4-30c2523e3683> | CC-MAIN-2017-09 | http://www.networkcomputing.com/storage/ceph-poised-change-data-center-and-cloud-storage/701336891 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00198-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938654 | 838 | 2.84375 | 3 |
Data visualization Army style
- By Reid Davenport
- Sep 09, 2013
The famous 19th century graph by Charles Joseph Minard showing Napoleon’s troops during the invasion of Russia is an intersection of data, illustration and storytelling. The graph depicts three distinct and major variables that factored into the eventual French defeat – troop levels, temperature and movement.
Even though the process of cartography is starkly different in the 21st century, the use of pictures and diagrams as visual data is still prevalent, at least for the U.S. Army.
Data visualization is helping the Army cut through red tape by displaying information such as equipment counts in places all around the world, said Chuck Driessnack, vice president of missile defense at SAIC.
For example, Driessnack said that having a comprehensive view of the estimated $36 billion worth of equipment in Afghanistan is crucial as the U.S. continues to draw down its presence and bring equipment back home.
Charles Joseph Minard, a French civil engineer, drew this map depicting Napoleon's 1812 advance into and retreat from Russia. According to scimaps.org, it 'may be the best statistical graphic ever drawn.'
"We have all this equipment that has accumulated over all those operations and they're sitting over in Afghanistan and we're coming out," said Driessnack, who was speaking at the Tableau Customer Conference in National Harbor, Md., on Monday. Tableau Software specializes in making data digestible through visualization systems, and works with both the private and public sectors.
An example of SAIC’s visualization program is a map that shows how many Army ambulances are at locations around the globe.
"So what's common in these organizations is they have the data but they can't get arms around it," he said.
The other major benefit in implementing visualization systems is ensuring that personnel at every level receive consistent information through dashboards. This allows for better information sharing from data analysts all the way up to the upper echelons of Army leadership.
"I'm talking about from the four-star general all the way down to the analyst and they're seeing it all at the same time," Driessnack said.
Reid Davenport is an FCW editorial fellow. Connect with him on Twitter: @ReidDavenport. | <urn:uuid:e79b69d4-9af1-4ddf-932a-add769f164df> | CC-MAIN-2017-09 | https://fcw.com/articles/2013/09/09/army-data-visualization.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00074-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.955038 | 482 | 2.515625 | 3 |
01/09/15. Version 0.1.3 (Open Source)
Evil Foca is a tool for security pentesters and auditors whose purpose is to test the security of IPv4 and IPv6 data networks.
The tool is capable of carrying out various attacks such as:
- MITM over IPv4 networks with ARP Spoofing and DHCP ACK Injection.
- MITM on IPv6 networks with Neighbor Advertisement Spoofing, SLAAC attack, fake DHCPv6.
- DoS (Denial of Service) on IPv4 networks with ARP Spoofing.
- DoS (Denial of Service) on IPv6 networks with SLAAC DoS.
- DNS Hijacking.
The software automatically scans the networks and identifies all devices and their respective network interfaces, specifying their IPv4 and IPv6 addresses as well as the physical addresses through a convenient and intuitive interface.
Man In The Middle (MITM) attack
The well-known “Man In The Middle” is an attack in which the wrongdoer gains the ability to read, add, or modify information traveling over a channel between two terminals without either of them noticing. For MITM attacks on IPv4 and IPv6 networks, Evil Foca implements the following techniques:
- ARP Spoofing
Consists of sending ARP messages onto the Ethernet network. Normally the objective is to associate the MAC address of the attacker with the IP of another device. Any traffic directed to the IP address of the default gateway will be erroneously sent to the attacker instead of its real destination.
- DHCP ACK Injection
Consists of an attacker monitoring the DHCP exchanges and, at some point during the communication, sending a packet to modify their behavior. Evil Foca turns the machine into a fake DHCP server on the network.
- Neighbor Advertisement Spoofing
The principle of this attack is identical to that of ARP Spoofing, the difference being that IPv6 doesn't use the ARP protocol; instead, all of this information is exchanged through ICMPv6 packets. There are five types of ICMPv6 packets used in the Neighbor Discovery Protocol, and Evil Foca generates these packets, placing itself between the gateway and the victim.
- SLAAC attack
The objective of this type of attack is to execute an MITM attack when a user connects to the Internet and to a server that does not support IPv6, to which it is therefore necessary to connect using IPv4. This attack is possible because Evil Foca performs domain name resolution once it is in the communication path, and is capable of translating IPv4 addresses into IPv6 ones.
- Fake DHCPv6 server
This attack involves the attacker posing as the DHCPv6 server, responding to all network requests and distributing IPv6 addresses and a false DNS server in order to manipulate the user's destination or deny the service.
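Several of the techniques above hinge on the same trick: a forged reply that rebinds another device's IP address to the attacker's MAC. As a purely defensive illustration, the wire format of such a forged ARP reply can be sketched with Python's standard library (all MAC and IP values below are invented; this is not Evil Foca's code):

```python
import struct

def forged_arp_reply(attacker_mac: bytes, victim_mac: bytes,
                     spoofed_ip: bytes, victim_ip: bytes) -> bytes:
    """Build a raw Ethernet frame carrying an ARP reply that claims
    `spoofed_ip` (e.g. the gateway's address) lives at `attacker_mac`."""
    eth = victim_mac + attacker_mac + struct.pack("!H", 0x0806)  # dst, src, EtherType = ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)              # Ethernet, IPv4, opcode 2 = reply
    arp += attacker_mac + spoofed_ip + victim_mac + victim_ip    # sender pair, then target pair
    return eth + arp

frame = forged_arp_reply(b"\xaa" * 6, b"\xbb" * 6,
                         bytes([192, 168, 1, 1]), bytes([192, 168, 1, 50]))
print(len(frame))  # 42: a 14-byte Ethernet header plus a 28-byte ARP payload
```

The victim's ARP cache accepts the mapping because ARP has no authentication, which is exactly the weakness the spoofing attacks described above exploit.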
- Denial of Service (DoS) attack
A DoS attack is an attack on a machine or network that results in a service or resource becoming inaccessible to its users. Normally it causes the loss of network connectivity by consuming the bandwidth of the victim's network, or by overloading the computing resources of the victim's system.
- DoS attack in IPv4 with ARP Spoofing
This type of DoS attack consists of associating a nonexistent MAC address with an IP address in a victim's ARP table. This renders the machine whose ARP table has been modified incapable of reaching the IP address associated with the nonexistent MAC.
- DoS attack in IPv6 with SLAAC attack
In this type of attack a large quantity of “router advertisement” packets is generated, destined for one or several machines, announcing false routers and assigning a different IPv6 address and gateway for each router, collapsing the system and making machines unresponsive.
- DNS Hijacking
The DNS Hijacking attack or DNS kidnapping consists in altering the resolution of the domain names system (DNS). This can be achieved using malware that invalidates the configuration of a TCP/IP machine so that it points to a pirate DNS server under the attacker’s control, or by way of an MITM attack, with the attacker being the party who receives the DNS requests, and responding himself or herself to a specific DNS request to direct the victim toward a specific destination selected by the attacker. | <urn:uuid:c0528697-2577-44a7-af38-7ee936af6635> | CC-MAIN-2017-09 | https://www.elevenpaths.com/labstools/evil-foca/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00074-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.905731 | 925 | 2.625 | 3 |
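At its core, the MITM variant of DNS hijacking is just selective answering. A toy model (not Evil Foca's actual implementation; every name and address below is hypothetical) shows why the attack is hard for the victim to notice:

```python
# Legitimate answers the real DNS server would give, plus the attacker's
# override table for the one name being targeted.
LEGITIMATE = {"bank.example": "203.0.113.10", "mail.example": "203.0.113.20"}
HIJACKED = {"bank.example": "198.51.100.66"}  # attacker-controlled host

def resolve(name: str) -> str:
    if name in HIJACKED:              # selective redirection
        return HIJACKED[name]
    return LEGITIMATE.get(name, "NXDOMAIN")

print(resolve("bank.example"))  # 198.51.100.66: victim lands on the pirate server
print(resolve("mail.example"))  # 203.0.113.20: untouched, so the attack stays unnoticed
```

Because every non-targeted lookup returns the correct answer, the victim's connection appears to work normally while the one name that matters is redirected.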
Chicago this week began deploying sensors on light poles to monitor, photograph and listen to the city. The effort is costing as much as $7 million, and may be the largest urban data collection of its kind once all 500 nodes are in place.
The beehive-shaped nodes have an array of sensors with enough onboard computing capability to conduct data processing on the device and minimize the amount of bandwidth needed to transmit data.
Cameras will track the movement of pedestrians and vehicles, and whether water is pooling on the street. Another camera will be pointed at the sky. A microphone will monitor noise levels. There will also be temperature, pressure, light and vibration sensors. Particle sensors will detect pollen. Gas sensors will check air quality, recording carbon monoxide, nitrogen dioxide, sulfur dioxide and ozone. Even the magnetic field will be monitored.
Chicago is taking an Internet of Things (IoT) deployment to what may be a new level as it seeks insights into the city's environment. Aside from sensors, each node will offer computing capability, including the use of Odroid Linux systems and a separate controller that will enable rebuilding and updating the operating system. And it includes a cellular modem.
Once a unit is placed 20 feet up on a pole, the city wants to do as much as possible remotely.
The data will be publicly available through the OpenGrid.io portal once enough sensors are deployed later this fall. But Chicago CIO Brenna Berman says the city doesn't yet know how businesses, community groups and others will creatively use the data. "We can't even begin to imagine what they are going to do with it," she said.
But Berman has some clear ideas about how the city could use this data. It has an analytical team of 17 people comprising data scientists, business intelligence experts and database administrators, all of whom can use it for predictive analytics.
For instance, Chicago has relied on spot surveys to measure traffic and pedestrian flows. But the camera data, which may snap up to two photos per second, will enable the city to continuously track movement at intersections and analyze how to improve them.
The city has ample data on traffic accidents, said Berman, "but what we don't have is information about accidents that almost happen or near misses." That information will help improve safety, she said.
The project has been dubbed the "Array of Things," an homage to the array of instruments that are combined in telescopes, said Charlie Catlett, a senior computer scientist at Argonne National Laboratory and the project's principal investigator.
The National Science Foundation provided $3.1 million, and about $2 million was spent on research and development to create the base platform. Cost sharing involving the city and industry partners brings the entire project investment into the $6 million to $7 million range, said Catlett. The entire installation will be completed in 2018.
The processing on the device means that data from the photos can be gathered and transmitted and the photo itself deleted. The monthly transmission, per device, is expected to be about one gigabyte over a cellular network, said Catlett.
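To see why on-device processing matters, consider a hypothetical node that samples a sensor twice per second but transmits only per-minute summaries; the second calculation shows the sustained rate implied by the one-gigabyte monthly budget (the sample values and summary scheme are invented for illustration):

```python
import statistics

samples = [52.1, 54.3, 53.8, 51.9] * 30  # one minute of sound readings at 2 Hz
summary = {                              # what actually crosses the cellular link
    "mean_db": round(statistics.mean(samples), 1),
    "max_db": max(samples),
    "n": len(samples),
}
print(summary)  # 120 raw readings collapsed into a three-field record

# Sustained rate implied by roughly 1 GB per 30-day month:
per_second = 1_000_000_000 / (30 * 24 * 3600)
print(f"{per_second:.0f} bytes/s")  # 386 bytes/s
```

At a few hundred bytes per second, raw imagery clearly cannot be shipped off the node, which is why photos are analyzed on the device and then deleted.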
The on-board processing also protects privacy, said Catlett, which was one of the concerns the project sought to address. Although all of the data will go to a cloud-based server, a sampling of photos used for baseline analysis will be sent to a University of Chicago-based server. The university was awarded the NSF grant.
The device will shut down in extreme weather, although the heat generated by a four-core Arm processor and a Samsung processor used in cell phones will provide some protection in extreme cold, said Catlett.
Although cities are deploying sensors in urban environments, Catlett said he is unaware of anything as extensive as what's going on in Chicago.
"I've yet to talk to anyone from any city who feels that they have adequate information about even the simple things," said Catlett.
This story, "Chicago deploys computers with eyes, ears and noses" was originally published by Computerworld. | <urn:uuid:79c71d1e-85cf-41a6-bf7d-4e01668b4cd9> | CC-MAIN-2017-09 | http://www.itnews.com/article/3115224/internet-of-things/chicago-deploys-computers-with-eyes-ears-and-noses.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00246-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.950141 | 823 | 2.5625 | 3 |
As part of its efforts to speed up the delivery of web content, Google has proposed changes to Transmission Control Protocol (TCP), "the workhorse of the Internet." Yuchung Cheng who works on the transport layer at Google writes:
"To deliver content effectively, Web browsers typically open several dozen parallel TCP connections ahead of making actual requests. This strategy overcomes inherent TCP limitations but results in high latency in many situations and is not scalable. Our research shows that the key to reducing latency is saving round trips. We’re experimenting with several improvements to TCP."
Cheng believes the current transport layer badly needs an overhaul to catch up with other networking technologies.
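A crude latency model makes the "saving round trips" point concrete. One of Google's proposals, TCP Fast Open, lets request data ride along on the connection handshake, cutting a serial round trip (the RTT value below is illustrative):

```python
def fetch_latency_ms(rtt_ms: float, round_trips: int) -> float:
    """Toy model: total fetch time is dominated by serial round trips."""
    return rtt_ms * round_trips

rtt = 100.0  # e.g. a mobile or intercontinental link
print(fetch_latency_ms(rtt, 2))  # 200.0, handshake RTT plus request/response RTT
print(fetch_latency_ms(rtt, 1))  # 100.0, data on the handshake saves a full RTT
```

On a 100 ms link, eliminating a single round trip halves the time to first byte, which is why the research focuses there rather than on raw bandwidth.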
The IEEE and the Wi-Fi Alliance are working on 802.11v, a standard that aims to calm wireless network chaos by creating an interface that enables a network to be managed and optimized all the way down to client devices, and that leverages existing infrastructure and WLAN standards to do it.
The standard, expected to be finalized in mid-2010, should be near the top of mind for network administrators and CIOs alike because it can help them get a grip on wireless usage, while potentially saving power and minimizing network disruptions. The benefits of 802.11v are particularly significant as enterprises move toward ubiquitous corporate wireless networks. The standard includes provisions to smooth client transitions between access points, which will not only minimize congestion during busy times, but also boost the performance of applications such as wireless voice over IP.
802.11v's Real Time Location Services (RTLS) technology accommodates high-level wireless client tracking. This enables a WLAN to redirect a client to another nearby access point if the one it's on is overworked. RTLS also provides for new location-based services and applications by letting network administrators compile network performance data from the clients themselves. Admins can see how well a WLAN is operating, and plan capacity and upgrades accordingly.
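A sketch of the steering decision this client-side data makes possible (the RSSI threshold, AP names, and load metric are all hypothetical; real 802.11v load balancing is considerably more involved):

```python
def steer(client_rssi_dbm: dict, ap_load: dict, min_rssi: float = -70.0) -> str:
    """Pick the least-loaded access point the client can actually reach."""
    reachable = [ap for ap, rssi in client_rssi_dbm.items() if rssi >= min_rssi]
    return min(reachable, key=lambda ap: ap_load[ap])

rssi = {"ap-lobby": -45.0, "ap-cafe": -62.0, "ap-garage": -80.0}  # signal per AP
load = {"ap-lobby": 38, "ap-cafe": 5, "ap-garage": 1}             # clients per AP
print(steer(rssi, load))  # ap-cafe: ap-garage is quieter but out of range
```

Without the client's own measurements, the network could only guess which APs the client can hear; that is the gap the standard closes.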
A Greener Net
802.11v's Wake-On-WLAN and Wireless Network Management Sleep Mode might "green up" wireless networks as well. 802.11v stands to drastically improve the battery life of mobile devices and may also lower the energy draw from access points. For example, an 802.11v-enabled smartphone could lower power to its wireless radio when it's inactive, then power back up to take a VoIP call or new e-mail. Likewise, inactive access points could run on minimal power and switch to full power when wireless clients are in range.
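A back-of-envelope model shows why duty-cycling the radio matters. Every figure below is hypothetical, but the shape of the result holds:

```python
def battery_hours(capacity_mah: float, active_ma: float,
                  sleep_ma: float, duty_cycle: float) -> float:
    """Battery life from the time-weighted average radio current draw."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma

always_on = battery_hours(1500, active_ma=200, sleep_ma=10, duty_cycle=1.0)
mostly_asleep = battery_hours(1500, active_ma=200, sleep_ma=10, duty_cycle=0.05)
print(f"{always_on:.1f} h vs {mostly_asleep:.1f} h")  # 7.5 h vs 76.9 h
```

A radio that is active only 5% of the time stretches the same battery roughly tenfold in this model, which is the intuition behind Wireless Network Management Sleep Mode.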
Given that both the WLAN infrastructure and client devices must support 802.11v to achieve its power-saving and management benefits, it's unlikely that products supporting draft versions of the standard will appear. The first offerings likely will come from vendors that provide both wireless infrastructure and mobile devices, such as Cisco or Motorola.
Furthermore, although 802.11v is designed to complement standards such as 802.11b or 802.11g, it's unclear whether vendors will offer software or firmware upgrades for existing products. Infrastructure vendors likely will add 802.11v to existing wireless controllers and access points as part of ongoing maintenance, but laptop and mobile device manufacturers might not backfill support for the standard into older products. So while 802.11v can enhance the battery life and management of legacy devices, it may only arrive in the next generation of mobile devices. | <urn:uuid:6e9e3a83-76c5-40fc-a730-f8a9530179d6> | CC-MAIN-2017-09 | http://www.networkcomputing.com/networking/80211v-answers-call-order-wireless-lans/831225492?cid=sbx_iwk_related_mostpopular_wireless_reviews_mobility&itc=sbx_iwk_related_mostpopular_wireless_reviews_mobility | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00471-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.93413 | 546 | 2.578125 | 3 |
December is Identity Theft Prevention and Awareness month. Amidst all the bells-a-ringing and people singing that also come around this time of year, it can be easy to ignore the more sober reality of this important topic.
Are You Aware?
A great place to start on the “awareness” side of all this is with Norton Security’s 2011 cybercrime report: a sleek, user-friendly chart with surveys, maps, and even animations breaking down everything you need to know about the current state of cybercrime in the U.S. Some of the statistics are pretty grim, but eye-opening. Here are some numbers you really need to see:
- Last year, in 24 countries, 14 people suffered from cybercrime every second
- Altogether cybercrime cost victims (in those same countries) $113,882,054,117
- The odds that an online adult will become a victim of cybercrime this year are almost 1 in 2
- 10% of all online adults have experienced cybercrime on their mobile phones
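A quick bit of arithmetic on the figures as listed above (the report covers 24 countries, so the per-victim number is only a crude average):

```python
victims_per_second = 14                 # figure quoted above
annual_victims = victims_per_second * 60 * 60 * 24 * 365
total_cost = 113_882_054_117            # figure quoted above

print(f"{annual_victims:,} victims per year")            # 441,504,000
print(f"${total_cost / annual_victims:.0f} per victim")  # $258
```

Hundreds of millions of victims at a few hundred dollars each is what makes cybercrime such a high-volume, low-margin business for attackers.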
Are You Protected?
So what can you do to protect yourself from cybercrime? That’s where the “prevention” part comes in, and there are steps you can take. Your online identity is the sum total of your vulnerable personal information, including credit card numbers, social security number, usernames, email addresses and passwords. This data can and must be protected.
A secure password manager like Keeper is essential to make sure your passwords are strong and encrypted, and to ensure that private data stays exactly that: private. Keeper for your computer, your smartphone, or your tablet is the solution to the scary threat of cyber attacks. | <urn:uuid:23bd21d9-dc3a-45f5-89aa-4b43d57f4014> | CC-MAIN-2017-09 | https://blog.keepersecurity.com/2012/12/20/the-truth-about-cybercrime-today/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00471-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.935172 | 350 | 2.578125 | 3 |
Spam is a slang term for unsolicited commercial email or junk email. Spam and spim are cousins: Spim stands for “Spam over instant messaging,” and refers to unsolicited instant messages. Spim not only disrupts your chatting, but can also contain viruses or spyware. You can prevent spim by blocking any messages from sources not on your contact list. Your Internet and email provider can help you to prevent spam, and today most anti-virus programs include spam protection features. | <urn:uuid:e7e2051b-4adf-4a03-9561-980d6d63bbb5> | CC-MAIN-2017-09 | https://www.justaskgemalto.com/us/what-spam-spim/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00115-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.910199 | 105 | 2.859375 | 3 |
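The block-unknown-senders advice above reduces to an allow-list check. A minimal sketch (the addresses are hypothetical):

```python
CONTACTS = {"alice@example.com", "bob@example.com"}  # the user's contact list

def accept_message(sender: str) -> bool:
    """Drop instant messages from anyone not on the contact list."""
    return sender in CONTACTS

print(accept_message("alice@example.com"))    # True
print(accept_message("spimbot@example.net"))  # False: unsolicited IM blocked
```

Messaging clients that offer a "contacts only" setting apply essentially this rule before a message ever reaches the user.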
In part one of our "Future of Driving" series, we looked at the current state of self-driving car technology and tried to predict how that technology would progress in the coming decades. Now we're going to assume that the technical problems we discussed can be solved and explore how self-driving cars could change society.
Some benefits of self-driving cars are obvious—less time spent behind the wheel and fewer accidents—but the consequences are likely to be much broader than that. Among the most intriguing are much greater use of taxis, more widespread use of smaller, more energy-efficient cars, the virtual elimination of parking lots, and a dramatic transformation of the retail sector.
Throughout this article we'll be linking to essays by Brad Templeton, a Silicon Valley entrepreneur who is currently the chairman of the Electronic Frontier Foundation. In recent months, Templeton has become an evangelist for self-driving vehicle technology, speaking and writing extensively about the topic. Many of the predictions we make in this story are based on ideas sketched out in Templeton's writings. If you're interested in more discussion of the topic, Templeton's web site is the place to go.
An important caveat before we get started: predicting the future is hard, and self-driving technology is still young enough that we're guaranteed to get some of the details wrong. One only has to glance through past predictions about the future to see how difficult it is to predict the social effects of new technologies. Nevertheless, we think it's worthwhile to spend some time thinking about the promise of this technology. We don't have all the answers, but we hope that talking about these benefits will inspire the next generation of engineers and entrepreneurs to turn the dream into a reality.
The deadly human driver
Highway safety has improved steadily over the last half century. In the United States, five people died for every 100 million vehicle miles traveled in 1960. By 1980, cars were killing 3.3 people per 100 million vehicle miles. In 2000, the rate was down to 1.5. But progress has slowed since the turn of the century, and this may be because most of the low-hanging fruit—seatbelts, anti-lock brakes, stronger drunk-driving enforcement—have already been plucked. The introduction of advanced collision-prevention software, which we discussed in our first installment, will help to push accident rates lower. And we can expect the introduction of fully self-driving cars to push accident rates lower still.
That's important because for all our progress, we still lose far too many people on our highways. Here in the United States, there were six million car crashes in 2006, injuring 2.5 million people and killing 42,000. Worldwide, according to World Health Organization figures, cars kill about 1.2 million people each year and injure 50 million. Many of these crashes are alcohol-related: the National Highway Traffic Safety Administration estimates that in 2004, 14,400 Americans died in crashes involving at least one driver with a blood-alcohol level of .08 or higher. Other crashes are caused by drivers who are fatigued, distracted, or reckless.
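The per-mile rates quoted earlier come from simple division. Using the article's 2006 U.S. death toll and an outside estimate of roughly 3 trillion vehicle miles traveled that year:

```python
def deaths_per_100m_miles(deaths: int, vehicle_miles: float) -> float:
    """Fatality rate per 100 million vehicle miles traveled."""
    return deaths / (vehicle_miles / 100_000_000)

# 42,000 deaths against roughly 3 trillion vehicle miles traveled:
print(round(deaths_per_100m_miles(42_000, 3e12), 2))  # 1.4
```

The result is consistent with the article's figure of about 1.5 per 100 million miles for the early 2000s.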
Self-driving cars will never be drunk, tired, or inexperienced. They should make designated drivers as anachronistic as linotype operators, freeing suburbanites from worrying about how they'll get home after an evening of drinking. Similarly, people on long road trips won't need to worry about falling asleep at the wheel. They'll be able to take naps while their cars drive for them. Hundreds of truckers die every year, and the automation of the trucking industry could eliminate the need for human truck drivers, saving hundreds of lives in the process. And far fewer teenagers will have their lives cut tragically short due to crashes caused by their lack of experience behind the wheel.
In short, a car that drives as well as the best human drivers would save tens of thousands of lives in the United States and hundreds of thousands of lives worldwide. And most likely, we'll be able to do even better than that. Computers have much faster reaction times than humans do, and they will be "looking" in all directions simultaneously. Self-driving cars may be able to avoid many of the mistakes that even experienced human drivers make. They won't have blind spots, they'll have better sensors, and they will be able to react almost instantaneously to unexpected problems, giving them the ability to recover from dangerous situations that no human driver could have handled.
Dramatically fewer accidents is the most obvious—and probably the most important—benefit of self-driving technology. But self-driving technologies will also bring significant changes to peoples' daily lives. Next we'll consider how self-driving technology could transform the transportation system, reducing congestion and sprawl and dramatically improving energy efficiency. | <urn:uuid:5550674c-3377-45c5-9f42-2caf43952ff5> | CC-MAIN-2017-09 | https://arstechnica.com/features/2008/10/future-of-driving-part-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00467-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964988 | 1,000 | 3 | 3 |
Analyzing Big Data in DNA to Find Diseases
September 20, 2012
Mass amounts of raw data cause problems for more fields than just computer science. Life scientists struggle to wade through the volumes of data generated by sequencing human genes and genetic characteristics. However, according to “Computational Method for Pinpointing Genetic Factors That Cause Disease” on Science Daily, researchers at Roswell Park Cancer Institute and the Center for Human Genome Variation at Duke University Medical Center have developed an approach for analyzing this data to quickly cull out relevant genetic patterns and find variants that lead to particular disorders.
The study is outlined in the September issue of The American Journal of Human Genetics. We learn:
“[Zhu, the paper’s first author, notes,] ‘We’re confident that our method can be applied to genome-wide association studies related to diseases for which there are no known causal variants, and by extension may advance the development of targeted approaches to treating those diseases.’
‘This approach helps to integrate the large body of data available in GWASs with the rapidly accumulating sequence data,’ adds David B. Goldstein, […] Director of the Center for Human Genome Variation at DUMC and senior author of the paper.”
The technological advancement allowing scientists to pinpoint such causal variants is fascinating. However, as this technology advances, we are left to wonder how insurers will begin to use these predictive methods. Could faulty genes be analyzed in the future to justify declining policies?
Andrea Hayden, September 20, 2012 | <urn:uuid:58e91556-aedb-42c0-8804-ec9a2a491733> | CC-MAIN-2017-09 | http://arnoldit.com/wordpress/2012/09/20/analyzing-big-data-in-dna-to-find-diseases/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00167-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.913309 | 323 | 2.71875 | 3 |
Interestingly, ransomware is not a new thing. It first appeared in 1989 with a Trojan program called “AIDS Trojan,” which was spread by floppy disk. The AIDS Trojan used several tricks to hide files and encrypt their names using symmetric cryptography. The author extorted a $189 fee from users to provide a restoration tool. The author was identified and forced to stop the distribution…
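The reason the AIDS Trojan was recoverable is a general property of symmetric ciphers: one key both locks and unlocks, and that key had to ship inside the program. A toy illustration (XOR stands in for the Trojan's actual scheme, and the key is invented):

```python
def xor_name(name: bytes, key: bytes) -> bytes:
    """Symmetric scrambling: applying the same key twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(name))

key = b"\x2a\x51"                        # stand-in for a hard-coded key
scrambled = xor_name(b"REPORT.TXT", key)
print(scrambled != b"REPORT.TXT")        # True: the name is now unreadable
print(xor_name(scrambled, key))          # b'REPORT.TXT': fully reversible
```

Modern ransomware avoids this weakness by using public-key cryptography, so the decryption key never has to be present on the victim's machine.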
Taking an item of tremendous value – data belonging to an organization or an individual – and demanding compensation for its return is a highly effective way for criminals to get what they want. This criminal act is achieved through ransomware and, because it is effective and generally not all that complicated for a cybercriminal to carry out…
In the last few years extortion has hit computer users, big time.
Consumers and businesses alike are finding themselves locked out of their computers, or prevented from accessing their valuable data, by ransomware attacks that demand a payment be made to online criminals.
But normally when these malicious attacks are described…
Over the last several weeks I’ve written about ransomware primarily as it relates to individual machines or mobile devices. There is another very sneaky variant of ransomware which you should be aware of. It’s specifically crafted to hold websites hostage. It’s called RansomWeb. Its methodology is slow and diabolical, and I believe it’s out there silently working on websites today.
In my previous two posts How Does Ransomware Work? Part 1 and Part 2 I described the process ransomware goes through to get on your systems, encrypt your files, and collect your money. Like any malware, all of the steps in the process need to be successful in order for ransomware to work.
In part 1 I outlined how ransomware gets on your system in the first place. We saw that it operates in much the same manner as other malware: It needs a delivery system, a vulnerability to exploit, a payload to install, and a way to establish communications with a command & control (C&C) server.
Let’s take a look at how ransomware works. In some stages of the operational cycle ransomware runs much like any other malware which may find its way onto your systems. In other stages ransomware has introduced completely new areas of operating for cybercriminals.
The first few stages of the ransomware cycle use the tried-and-true methods cybercriminals are accustomed to using.
Let me paint a scene for you. You’re sitting at your desk between meetings. You’re working on a PowerPoint for a customer meeting tomorrow, and you’re waiting for an email back from a co-worker. You have another meeting in an hour, which gives you just enough time to hone this presentation. It’s been 15 well-crafted slides since you last saved.
In the pre-internet days, ransoms typically involved only prominent, wealthy people and their families. Kidnapping people for ransom is mostly a thing of the past nowadays. It’s an old-fashioned crime. You can’t really get away with it anymore.
Kidnapping files, however, is rapidly becoming more popular. Intel/McAfee reports a 155% rise in ransomware in Q4 of 2014…
This is the first in a series of posts about ransomware. In this post and over the next several weeks I’ll discuss what ransomware is, who the victims are, give some details on a couple of specific types, how to protect your organization, and what to do when your systems have been taken captive.
Lumension recently released the sixth annual State of the Endpoint Risk report [PDF], based on research by the Ponemon Institute. I’ve blogged about this report several times this year: you can find those posts here and here.
This past week I was honored to present the results of this research alongside Dr.
If you want to watch a video, you go to YouTube. It’s as simple as that.
Although other sites exist which host videos, Google-owned YouTube is the Goliath in the market – and gets the overwhelming bulk of the net’s video-watching traffic.
And, of course, that enormous success and high traffic bring with them unwanted attention…
I do not believe when Apple launched the iPhone it had some grand plan to change the very nature of how we work. If it had, the phrase would be Bring Your Own iDevice – and it would surely have been copyrighted. iDevices are consumer products, and as Jean Brodie said, “Safety does not come first…”
Researchers at Marshall University in the United States are set to receive a new GPU-powered cluster that will allow them to make further advances in bioinformatics, climate research, physics computational chemistry and engineering.
Nicknamed “BigGreen,” the cluster will boast “276 central processing unit cores, 552 gigabytes of memory and more than 10 terabytes of storage.” This, coupled with the eight NVIDIA Tesla GPUs with 448 cores each, will push BigGreen into the six-teraflop range—and will allow the university’s researchers to explore new areas aided by simulation and parallel computation capabilities.
This new cluster comes about following a round of NSF funding under the “Cyberinfrastructure for Transformational Scientific Discovery in West Virginia and Arkansas” (CI-TRAIN) program. This is a project that seeks to advance the IT capabilities of the two states’ institutions to build more robust nanoscience and geosciences research programs in particular.
“For example, a 3-D scan of Michelangelo’s statue ‘David’ contains billions of raw data points. Rendering all that data into a 3-D model would be nearly impossible on a desktop computer,” said Dr. Jan I. Fox, Marshall’s senior vice president for information technology, in a statement this week. “Using our high-performance computing capabilities, a student or professor could run that same data and produce the model in just a fraction of the time. It will literally change the way we work and do research at Marshall University.”
Fox went on to note that the new cluster is critical to assisting researchers with their diverse objectives. This addition “makes possible scholarly innovation and discoveries that were, until recently, possible only at the most prestigious research institutions,” she said. “Along with our connection to Internet2, our students and faculty now have access to computing power, data and information we could only imagine just a few years ago.”
Building solar voltaic cells from nanowires instead of standard metal conductors can increase the amount of energy that can be captured by a factor of 15, according to a new study by scientists from the Nano-Science Center in Denmark.
The study, published this week in the peer-reviewed journal Nature Photonics, found that nanowires have unique light absorption properties, meaning the limit of how much energy can be harnessed from the sun's rays is vastly higher than previous believed.
This graphic shows that the sun's rays are drawn into a nanowire, which stands on a substrate. At a given wavelength, the sunlight is concentrated up to 15 times. (Image: Niels Bohr Institute
The research focused on improving the quality of the nanowire crystals, which is a cylindrical structure with a diameter 1/10,000th that of a human hair.
The typical efficiency limit for photovoltaic cells, known as the "Shockley-Queisser Limit" has been the benchmark for solar cell efficiency.
The Denmark researchers, however, found that nanowires naturally concentrate the sun's rays into a very small area in nanowire crystal and because the diameter of a crystal is smaller than the wavelength of the light coming from the sun, it can cause resonances in the intensity of light in and around nanowires. Those resonances then offer a higher conversion efficiency for the sun's energy, according to Niels Bohr Institut researcher Peter Krogstrup.
The nanowires are predicted to have great potential in the development not only of solar cells, but also of future quantum computers and other electronic products.
"It's exciting as a researcher to move the theoretical limits, as we know. It will have a major impact on the development of solar cells, exploitation of nanowire solar rays and perhaps the extraction of energy at [the] international level," Peter Krogstrup a Niels Bohr Institute researcher, said in a statement.
"However, it will take some years years before production of solar cells consisting of nanowires becomes a reality," he said.
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is firstname.lastname@example.org.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "Nanowires could boost solar power 15X" was originally published by Computerworld. | <urn:uuid:c2d8d1a6-f3ac-4045-9980-06e8eb7faae8> | CC-MAIN-2017-09 | http://www.itworld.com/article/2713952/hardware/nanowires-could-boost-solar-power-15x.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00511-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940185 | 535 | 3.765625 | 4 |
AJAX is not a technology; rather, it is a collection of technologies each providing robust foundations when designing and developing web applications:
- XHTML or HTML and Cascading Style Sheets (CSS) providing the standards for representing content to the user.
- Document Object Model (DOM) that provides the structure to allow for the dynamic representation of content and related interaction. The DOM exposes powerful ways for users to access and manipulate elements within any document.
- XML and XSLT that provide the formats for data to be manipulated, transferred and exchanged between server and client.
- XML HTTP Request: The main disadvantages of building web applications is that once a particular webpage is loaded within the user’s browser, the related server connection is cut off. Further browsing (even) within the page itself requires establishing another connection with the server and sending the whole page back even though the user might have simply wanted to expand a simple link. XML HTTP Request allows asynchronous data retrieval or ensuring that the page does not reload in its entirety each time the user requests the smallest of changes.
As such, AJAX is meant to increase interactivity, speed, and usability. The technologies have prompted a richer and friendly experience for the user as web applications are designed to imitate ‘traditional’ desktop applications including Google Docs and Spreadsheets, Google Maps and Yahoo! Mail.
At the start of a web session, instead of loading the requested webpage, an AJAX engine written in JS is loaded. Acting as a “middleman”, this engine resides between the user and the web server acting both as a rendering interface and as a means of communication between the client browser and server.
The difference which this functionality brings about is instantly noticeable. When sending a request to a web server, one notices that individual components of the page are updated independently (asynchronous) doing away with the previous need to wait for a whole page to become active until it is loaded (synchronous).
Imagine webmail – previously, reading email involved a variety of clicks and the sending and retrieving of the various frames that made up the interface just to allow the presentation of the various emails of the user. This drastically slowed down the user’s experience. With asynchronous transfer, the AJAX application completely eliminates the “start-stop-start-stop” nature of interaction on the web – requests to the server are completely transparent to the user.
Another noticeable benefit is the relatively faster loading of the various components of the site which was requested. This also leads to a significant reduction in bandwidth required per request since the web page does not need to reload its complete content.
Other important benefits brought about by AJAX coded applications include: insertion and/or deletion of records, submission of web forms, fetching search queries, and editing category trees – performed more effectively and efficiently without the need to request the full HTML of the page each time.
Although a most powerful set of technologies, developers must be aware of the potential security holes and breeches to which AJAX applications have (and will) become vulnerable.
According to Pete Lindstrom, Director of Security Strategies with the Hurwitz Group, Web applications are the most vulnerable elements of an organization’s IT infrastructure today. An increasing number of organizations (both for-profit and not-for-profit) depend on Internet-based applications that leverage the power of AJAX. As this group of technologies becomes more complex to allow the depth and functionality discussed, and, if organizations do not secure their web applications, then security risks will only increase.
Increased interactivity within a web application means an increase of XML, text, and general HTML network traffic. This leads to exposing back-end applications which might have not been previously vulnerable, or, if there is insufficient server-side protection, to giving unauthenticated users the possibility of manipulating their privilege configurations.
There is the general misconception that in AJAX applications are more secure because it is thought that a user cannot access the server-side script without the rendered user interface (the AJAX based webpage). XML HTTP Request based web applications obscure server-side scripts, and this obscurity gives website developers and owners a false sense of security – obscurity is not security. Since XML HTTP requests function by using the same protocol as all else on the web (HTTP), technically speaking, AJAX-based web applications are vulnerable to the same hacking methodologies as ‘normal’ applications.
Subsequently, there is an increase in session management vulnerabilities and a greater risk of hackers gaining access to the many hidden URLs which are necessary for AJAX requests to be processed.
Another weakness of AJAX is the process that formulates server requests. The Ajax engine uses JS to capture the user commands and to transform them into function calls. Such function calls are sent in plain visible text to the server and may easily reveal database table fields such as valid product and user IDs, or even important variable names, valid data types or ranges, and any other parameters which may be manipulated by a hacker.
With this information, a hacker can easily use AJAX functions without the intended interface by crafting specific HTTP requests directly to the server. In case of cross-site scripting, maliciously injected scripts can actually leverage the AJAX provided functionalities to act on behalf of the user thereby tricking the user with the ultimate aim of redirecting his browsing session (e.g., phishing) or monitoring his traffic.
Although many websites attribute their interactive features to JS, the widespread use of such technology brings about several grave security concerns.
In the past, most of these security issues arose from worms either targeting mailing systems or exploiting Cross Site Scripting (XSS) weaknesses of vulnerable websites. Such self-propagating worms enabled code to be injected into websites with the aim of being parsed and/or executed by Web browsers or e-mail clients to manipulate or simply retrieve user data.
As web-browsers and their technological capabilities continue to evolve, so does malicious use reinforcing the old and creating new security concerns related to JS and AJAX. This technological advancement is also occurring at a time when there is a significant shift in the ultimate goal of the hacker whose primary goal has changed from acts of vandalism (e.g., website defacement) to theft of corporate data (e.g., customer credit card details) that yield lucrative returns on the black market.
XSS worms will become increasingly intelligent and highly capable of carrying out dilapidating attacks such as widespread network denial of service attacks, spamming and mail attacks, and rampant browser exploits. It has also been recently discovered that it is possible to use JS to map domestic and corporate networks, which instantly makes any devices on the network (print servers, routers, storage devices) vulnerable to attacks.
Ultimately such sophisticated attacks could lead to pinpointing specific network assets to embed malicious JS within a webpage on the corporate intranet, or any AJAX application available for public use and returning data.
The problem to date is that most web scanning tools available encounter serious problems auditing web pages with embedded JS. For example, client-side JS require a great degree of manual intervention (rather than automation).
Summary and Conclusions
The evolution of web technologies is heading in a direction which allows web applications to be increasingly efficient, responsive and interactive. Such progress, however, also increases the threats which businesses and web developers face on a daily basis.
With public ports 80 (HTTP) and 443 (HTTPS) always open to allow dynamic content delivery and exchange, websites are at a constant risk to data theft and defacement, unless they are audited regularly with a reliable web application scanner. As the complexity of technology increases, website weaknesses become more evident and vulnerabilities more grave.
The advent of AJAX applications has raised considerable security issues due to a broadened threat window brought about by the very same technologies and complexities developed. With an increase in script execution and information exchanged in server/client requests and responses, hackers have greater opportunity to steal data thereby costing organizations thousands of dollars in lost revenue, severe fines, diminished customer trust and substantial damage to your organization’s reputation and credibility. | <urn:uuid:4279fd9f-3afa-40e8-982e-43642137e1a7> | CC-MAIN-2017-09 | https://www.acunetix.com/websitesecurity/ajax/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00035-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.917048 | 1,664 | 3.359375 | 3 |
The runaway costs of game development
In 1982, Atari released a version of Pac-Man for its 2600 game console, also known as the Video Computer System (VCS). The game was written by a single programmer over a couple of months and had a total development cost of US$100,000. It was not a very good port, as the game flickered annoyingly and struggled to overcome the limitations of the VCS hardware. Nevertheless, it sold over 10 million copies at US$30 a shot, with a cost of goods sold of just about US$5.
In 2004, Microsoft released Halo 2. Over 190 people are listed in its credits, and the game took three years to complete with a total development cost of over US$40 million. The game did record sales in its first month, and has currently sold over 8 million copies at US$50 a crack.
Even after adjusting for inflation, the figures below paint a striking story. The cost of developing high-profile games has increased exponentially over the last few years, with costs for the next generation of consoles expected to continue this trend. Estimates have ranged from a 20% to a 100% increase in development costs for next-generation titles.
Source: Business Week; www.eurogamer.net; www.buzzcut.com; www.erasmatazz.com
While the combined income for video games has also increased dramatically since 1982, this has largely been a function of more games being available at any one time, rather than an exponential increase in the earning potential of a single hit game. Indeed, the examples of Pac-Man versus Halo 2 show that the earning potential, adjusted for inflation, of a hit game today is not significantly higher than it was in the Atari 2600 era. Of course, not every game in 1982 did as well as Pac-Man, but then again not every game released today does as well as Halo 2. Yet in the early days of video games it was trivial to find the funds to invest in the development of a new game. It is obviously significantly harder to find US$20 million to $40 million to launch a new hit title.
In the early days of gaming, developers would typically fund the creation of a new game themselves, then shop around the finished product to publishers, who would handle the mass duplication, packaging, and shipping to retailers around the world. In the early 1990s, this equation started to change. Origin Systems sold themselves to publishing giant Electronic Arts in 1992, in part to obtain enough funds to complete the development of the latest installment of their epic series, Ultima VII. The fact that Origin recovered the full cost of developing the game in the first two days of preorders wasn't the problem. The issue was keeping positive cash flow while game budgets continued to spiral. Origin wasn't able to raise enough money to continue funding the next generation of game projects, but EA was, and so EA swallowed the development company whole.
The motivations of game publishing companies are typically very different from game development firms. Game developers are largely motivated by a love of gaming and a desire to create a new project that outshines and out-innovates anything that has been done before. An interesting new game idea that happens to make a little bit of money is considered a success. Publishers are looking for large returns above all other considerations. The business model is to push for huge blockbuster releases and reap the windfalls of the surefire hits, then collect tax write offs on the projects that wind up losing money. The worst case scenario for a game publisher is a game that is modestly successful.
The publisher mentality tends to dismiss quirky new game ideas in favor of sequels and licensed properties from movies, comics and TV shows. This trend has been especially visible over the last few years. Out of the top 100 games in terms of sales, only 13 were neither sequels nor movie/TV licenses (source: USA; TRSTS). Electronic Arts published only one new game this year in comparison to 25 sequels.
Sequels and licenses are only one aspect of the ever-increasing demands made by publishers on the game development companies that they either finance or, increasingly, acquire. The other major trend over the last few years has been towards more cross-platform releases. | <urn:uuid:aec5fe8b-a34c-4c27-ae10-e7ba420fa178> | CC-MAIN-2017-09 | https://arstechnica.com/features/2005/11/crossplatform/2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00503-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.969927 | 870 | 2.734375 | 3 |
7/18/2014 - U.S. AIR FORCE ACADEMY, Colo. -- New research here reveals Academy trash might be a treasure.
Last August, the Department of Defense Environmental Security Technology Certification Program funded CDM Smith, a national engineering and construction firm, to test how the Academy can reduce energy use and cost at its wastewater treatment plant, and convert food waste from its dining hall into energy.
Academy professors and engineers toured the Mitchell Hall kitchen and the wastewater treatment plant here Tuesday to learn more about the processes and results of the year-long project.
"About 2-3 percent of the nation's energy goes to treating wastewater and water," said Pat Evans, CDM Smith vice president. "Most of the energy that's used is for pumping the water and aerating it. We're trying to get wastewater treatment plants to become energy neutral or energy producers instead of energy consumers. One step toward that goal is capturing energy from food waste through anaerobic digestion."
According to Glen Loyche, Mitchell Hall facility manager, two- to- three semi-trucks haul food to the Academy every day to feed 4,000 cadets.
"Each trailer carries 20-40 pallets of food," he said.
Leftover food at the dining hall is run through large grinders, turned into pulp and transferred into dump trucks.
"Waste management here picks up four and a half tons of pulp product here every week," Loyche said.
CDM Smith collects food waste from Mitchell Hall three days a week and converts it into methane and carbon dioxide.
"We're testing on a very small, pilot scale," Evans said. "We transfer the food waste into anaerobic digesters, about 350 gallons in size that hold about 250 gallons of sludge and food waste. We convert the waste into methane for beneficial uses such as heating boilers, generating electricity and vehicle fuel once it's purified."
Greenhouse gases emitted from food waste takes a toll on the environment, Evans said.
"Some landfills capture the methane released but a lot don't," Evans said. "Methane is a really potent greenhouse gas, much more potent than carbon dioxide. The environmental impact is that it takes up space, emits greenhouse gases and water can go through the waste and generate leaching, which can contaminate ground water."
CDM Smith removes hydrogen sulfide, carbon dioxide and water when converting the waste into methane.
"We purify it," Evans said. "Hydrogen sulfide, or rotten egg gas, is very toxic and can result in corrosion of a lot of equipment. At the end of the process we have pure methane, or natural gas, that can be compressed into vehicle fuel."
Overall, the project has been successful, Evans said.
"We found you get a lot more gas and energy out of fat and protein than you do out of carbohydrates," he said. "We can't control the amount of carbs, fat and protein cadets eat or waste, but now we have a better understanding of how much gas we can get for a given food waste."
One- to- two percent of the solid waste generated in the U.S. is food waste, Evans said.
"The Academy's food waste is an energy-rich resource that in going to landfills ends up having an environmental impact," he said. "By converting food waste to methane through anaerobic digestion, we can decrease the impact to the environment, recover energy and help the Defense Department's reach its net zero goals."
Russell Hume, a mechanical engineer with the Academy's Directorate of Installations, said converting waste to make energy is a phenomenal step in the right direction for the Academy and world.
"I think it has been a great demonstration of the art of the possible," he said. "I would like to see this technology further developed and perfected to the point that it becomes widely available to all."
The project ends Aug. 1. | <urn:uuid:d9759f95-6f29-421a-80ff-320bd8f4ccb5> | CC-MAIN-2017-09 | http://www.govtech.com/fs/news/Air-Force-Academy-Pilots-Food-Waste-to-Energy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00203-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954628 | 821 | 2.859375 | 3 |
First responders from around the nation got to take the controls of state-of-the-art robotics at Texas A&M's Disaster City on Wednesday.
The Texas A&M Engineering Experiment Station Center for Emergency Informatics hosted about 50 experts, A&M faculty, students and private vendors, demonstrating technology used by the U.S. armed forces. The two-day workshop focused on how to reduce the loss of life and property during floods.
Experts from the National Weather Service, Texas Task Force 1, the National Guard, American Red Cross, Texas A&M Forest Service and others descended on the College Station facility from as far away as South Carolina, Utah, California and Washington, D.C.
It was the 15th biannual event hosted by TEES and the seventh held at Disaster City, a high-tech sprawling collection of derailed trains, polluted ponds, faux meth labs and other obstacles where first responders get near real-world training.
The different groups used the event as a brainstorming workshop, with private companies getting feedback on what sort of technology first responders need and the service men and women getting to test out and mold the gadgets of the future.
"As these technologies like the robots that fly -- the UAVs -- are beginning to be transitioned from military use only into use for the private and public sector, first responders are looking at them as a tool as well," said David Martin, Texas Task Force 1 member. "It enables us to get a view of the situation we can't get from any other perspective without putting our firefighters or responders in harm's way."
For example, drones and robots can provide video from areas easier and safer than traditional means. A robotic boat can detect underwater debris in a flooded area and a aerial drone can quickly take photos or video from a disaster zone.
"It makes the search go faster and it gives you a better overview of the entire scene," Martin said. "Part of what we're doing here this week is exploring what those possibilities are. What is the technology out there and what are the ways we can use it that would benefit the search and rescue community."
Steven Rutherford with the South Carolina Emergency Response Task Force is interested how the technology can combat forest fires and hurricanes on the East Coast.
"Usually when hurricanes come in, they shut the beach down," Rutherford said. "We can take some of this knowledge and go out there and recon the beaches before we go in there to start searching buildings. So we can have a good layout of exactly how devastating it is."
©2014 The Eagle (Bryan, Texas) | <urn:uuid:00bb95ef-8fc5-466f-880e-8a9b8d338181> | CC-MAIN-2017-09 | http://www.govtech.com/public-safety/First-responders-get-look-at-newest-gadgets-in-the-field.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00555-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943916 | 530 | 2.828125 | 3 |
Exploring the DWARF debug format information
DWARF (debuggingwith attributedrecord formats) is a debugging file format used by many compilers and debuggers to support source-level debugging. It is the format of debugging information within an object file. The DWARF description of a program is a tree structure where each node can have children or siblings. The nodes might represent types, variables, or functions.
DWARF uses a series of debugging information entries (DIEs) to define a low-level representation of a source program. Each debugging information entry consists of an identifying tag and a series of attributes. An entry or group of entries together, provides a description of a corresponding entity in the source program. The tag specifies the class to which an entry belongs and the attributes define the specific characteristics of the entry.
The different DWARF sections that make up the DWARF data are:
| Abbreviations used in the |
|Lookup table for mapping addresses to compilation units|
|Call frame information|
|Core DWARF information section|
|Line number information|
| Location lists used in the |
|Lookup table for global objects and functions|
|Lookup table for global types|
| Address ranges used in the |
| String table used in |
.debug_abbrev section contains the abbreviation tables for all the compilation units that are DWARF compiled. The abbreviations table for a single compilation unit consists of a series of abbreviation declarations. Each declaration specifies the tag and attributes for a particular debugging information entry. The appropriate entry in the abbreviations table guides the interpretation of the information contained directly in the
.debug_info section. The
debug_info section contains the raw information regarding the symbols. Each compilation unit is associated with a particular abbreviation table, but multiple compilation units can share the same table.
There are licensed tools, such as readelf, dwarfdump, and libdwarf available to read DWARF information. A script or program can read the output of these tools to find and interpret the required information. It is important to know tags and attribute definitions to write such scripts.
Common tags and attributes
The following list shows the tags that are mostly of interest when debugging a C++ application.
|Represents the class name and type information|
|Represents the structure name and type information|
|Represents the union name and type information|
|Represents the enum name and type information|
|Represents the typedef name and type information|
|Represents the array name and type information|
|Represents the array size information|
|Represents the inherited class name and type information|
|Represents the members of class|
|Represents the function name information|
|Represents the function arguments' information|
|Represents the name string|
|Represents the type information|
|Is set when it is created by compiler|
|Represents the sibling location information|
|Represents the location information|
|Is set when it is virtual|
The following command is used to compile a program in the DWARF format using the XLC compiler.
/usr/vacpp/bin/xlC -g -qdbgfmt=dwarf -o test test.C
Figure 1. Sample test program
dwarfdump output of the above example can be interpreted in the following way.
.debug_abbrev section for
DW_TAG_compile_unit looks as shown in Figure 2.
Figure 2. .debug_abbrev section
DW_TAG_* is generally followed by
DW_CHILDREN_* and a series of attributes (
DW_AT_*) along with the (
DW_CHILDREN_* is a 1-byte value that determines whether a debugging information entry using this abbreviation has child entries. If the value is
DW_CHILDREN_yes, the next physically succeeding entry of any debugging information entry using this abbreviation is the first child of that entry. If the 1-byte value following the abbreviation's tag encoding is
DW_CHILDREN_no, the next physically succeeding entry of any debugging information entry using this abbreviation is a sibling of that entry. Each chain of sibling entries is terminated by a null entry.
Figure 3. DW_TAG_compile_unit in the .debug_info section
DW_FORM_* attribute specifies the way to read
DW_AT* in the
.debug_info section. In this case,
DW_AT_name is of the form string. So the first attribute of
DW_TAG_compile_unit has to be handled as a string in the
.debug_info section, which is
- The file type is
C_plus_plusand it is present at
- The file is compiled using
IBM XL C/C++v12.
Extract class information
DW_TAG_class_type– Represents the class name and type information
DW_TAG_member– Represents the members of class
Figure 4. Class name and member information
Figure 4 explains that:
- There is a data type, named
intand its size is
- There is a class, named
baseand its size is
4bytes and its sibling entry is at location
- There is a class member, named
basemember. The type of this member is at location
<82>, which is
int. The scope is
publicand it is present at the 0th location from the starting of the class.
Extract array size
The immediate child of
DW_TAG_subrange_type, which has the array size. Array size is calculated as (
DW_AT_lower_bound) +1. If it is a two-dimensional array, there will be an immediate sibling of type
DW_TAG_subrange_type again. In this case, the array size is 8 (7+1).
Figure 5. Array size
Extract function names and arguments
DW_TAG_subprogram -Represents function name information
-Represents function arguments' information
Figure 6. Function name
Figure 7. Function arguments
Figure 7 describes that:
- There is a function, named
display, and its scope is
publicand its sibling is located at
- The mangled name is
- The first argument to the function is
this. It is created by the compiler as
DW_AT_artificialand is set to
yesand the type is at location
<421>, which is
- The second argument name is x and the type is at location
<82>, which is
DW_TAG_typedef represents the typedef name and type information.
Figure 8. typedef
From Figure 8, we can understand that there is a
typedef entry named
int_type and its type is at location
<82>, which is
Extract enum information
DW_TAG_enumeration_typehas the enum name and
DW_TAG_enumerator represents its elements' information.
DW_AT_const_valuespecifies the values assigned to the elements.
Figure 9. enum values
Figure 9 explains that:
myenumis the name of the enum and its size is
Janis the first element and its value is
Febis the second element and its value is
Extract inheritance information
DW_TAG_inheritance represents the inherited class name and type information.
Figure 10. inheritance
Figure 10 explains that:
- There is a derived class named,
myclass, and its size is
- The base class is at location
<94>, and is named
The DW_VIRTUALITY_noneattribute specifies it as a non-virtual class. | <urn:uuid:c1a30455-4945-4452-b477-6194fa9642d2> | CC-MAIN-2017-09 | http://www.ibm.com/developerworks/aix/library/au-dwarf-debug-format/index.html?ca=dbg-twodw20130821 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00431-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.764443 | 1,655 | 2.84375 | 3 |
The Wikimedia Foundation last week warned that readers who are seeing ads on Wikipedia articles are likely using a Web browser that has been infected with malware. The warning points to an apparent resurgence in adware and spyware that is being delivered via cleverly disguised browser extensions designed to run across multiple Web browsers and operating systems.
In a posting on its blog, Wikimedia noted that although the nonprofit organization is funded by more than a million donors and does not run ads, some users were complaining of seeing ads on Wikipedia entries. “If you’re seeing advertisements for a for-profit industry (see screenshot below for an example) or anything but our fundraiser, then your web browser has likely been infected with malware,” reads a blog post co-written by Philippe Beaudette, director of community advocacy at the Wikimedia Foundation.
Examples of the information we may collect and analyze when you use our website include the IP address used to connect your computer to the Internet; login; e-mail address; password; computer and connection information such as browser type, version, and time zone setting, browser plug-in types and versions, operating system, and platform; the full Uniform Resource Locator (URL) clickstream to, through, and from the Site, including date and time; cookie; web pages you viewed or searched for; and the phone number you used to call us. Continue reading → | <urn:uuid:94cf06f1-b2d5-43fc-ae8a-ac63425f19fc> | CC-MAIN-2017-09 | https://krebsonsecurity.com/tag/wikipedia/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00431-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.932125 | 281 | 2.609375 | 3 |
Our Cyber Security Awareness training programme has been developed to give employers and employees a general understanding of the cyber/information security threats they face, to help them recognise the threats and vulnerabilities to their company’s information assets, and to respond appropriately, including adopting suitable countermeasures. The programme has been developed on a modular basis using elements from our highly regarded hands-on technical training courses in addition to materials produced solely for the programme. Each module runs between 30 and 90 minutes, but each can be tailored or combined to provide organisations with a programme that best suits their requirements.
- Reduce the risk of a cyber security breach
- Instil proper behaviour into the people who come into contact with your valuable assets
- Protect your brand and reputation and avoid the resulting media attention
Modules available include:
The Threat - This module provides an overview of the cyber threat landscape faced by organisations, outlining the impacts of cyber security incidents as well as tactics and strategies to aid cyber defence.
Passwords and password management - Passwords are the keys to your sensitive data when using websites, email accounts and your computer itself (via User Accounts). This module is designed to provide users with an understanding of the importance of strong passwords along with some simple techniques to assist users in choosing and managing their passwords.
An introduction to hacking - This module introduces the basic technical concepts behind the various stages of a hacking attack, as well as some common tools and techniques used by hackers and security professionals alike.
Phishing attacks - This module takes a detailed look at what phishing is, why it poses a threat and how users can minimise their exposure to phishing attacks.
People risk / insider threat - This module looks at the weakest security link in any organisation – its people. Most organisations have good technology, but people often bypass controls or forget procedures. Guidance will be provided on how to help people do the right thing and how to deter or detect malicious intent.
Social Engineering - This module looks at what social engineering is, who or what social engineers are, what they want, how they get it, and how to stop them.
Bring Your Own Device (BYOD) - This module introduces users to the growing trend of BYOD, analysing the pros and cons as well as providing guidance on BYOD policy considerations.
Safe internet use - While the internet offers us many benefits, this module is designed to highlight that there are a number of risks associated with going online – some general and some specific to the respective activities that you’re undertaking - including threats to the integrity of our identity, privacy and the security of our financial transactions.
Online and mobile banking - Online banking is becoming ever more popular and most importantly it’s convenient and reasonably safe – as long as you take reasonable precautions as detailed in this module.
Online shopping – This module identifies the steps that should be taken to make sure that you are shopping safely.
Social networking - Social media has revolutionised the way we communicate with others. We can now talk one-to-one or to large groups of people at once from the convenience of our computer or mobile device. This module identifies the ways in which ID fraudsters harvest sensitive information from these services and provides best practices to mitigate them.
Using wireless networks - This module provides an overview of the unsecure nature of wireless networks and how that risk can be mitigated.
Antivirus software and installing updates – This module explains why it is necessary to install antivirus software and patches.
PCI DSS – This module provides an introduction to PCI DSS, what it is and why it’s important.
Home and Mobile Working - This module explores the potential threats of working remotely and provides guidance and best practices.
Malware - This module explores what malware is, the different types of malware, and what you should do if you’re infected.
Physical security - This module highlights the importance of physical security as part of an overall information security strategy and the risks of neglecting it – measures can be as simple as locking your doors and desk/file cabinet drawers and maintaining a clear desk policy.
Removable Media – This module explores the benefits of limiting the use of removable media and producing policy to support this.
In the state of New York, one of the world’s most pristine natural ecosystems is being threatened. Road salt, storm water runoff and invasive species are harming Lake George -- a long, narrow lake at the southeast base of the Adirondack Mountains.
So to both understand and manage these threats, Rensselaer Polytechnic Institute, IBM and the FUND for Lake George have launched a three-year, multi-million dollar collaboration, called "The Jefferson Project at Lake George."
This project, according to a press release, includes an environmental lab with a monitoring and prediction system that will give scientists and the community a real-time picture of the health of the lake. The facility, according to the release, is expected to "create a new model for predictive preservation and remediation of critical natural systems on Lake George, in New York, and ultimately around the world."
To gain a scientific understanding of the lake, a combination of advanced data analytics, computing and data visualization techniques, new scientific and experimental methods, 3-D computer modeling and simulation, and historical data will be used -- as will weather modeling and sensor technology.
The monitoring system is expected to give scientists a view of circulation models in Lake George -- something they've not seen before. These 3-D models could then be used to understand how currents distribute nutrients and contaminants across the 32-mile lake and their correlation to specific stressors, according to the release. The models also can be overlaid with historical and real-time weather data to see the impact of weather and tributary flooding on the lake's circulation patterns.
In addition, a new Smarter Water laboratory and visualization studio will help local leaders see a real-time picture of the current and future computer modeled conditions, water chemistry and health of the lake's natural systems -- data that local groups could use to make informed decisions about protecting the lake and its ecosystem.
“Lake George has a lot to teach us, if we look closely,” said Rensselaer President Shirley Ann Jackson. “By expanding Rensselaer’s Darrin Fresh Water Institute with this remarkable new cyberphysical platform of data from sensors and other sources, and with advanced analytics, high performance computing and web science, we are taking an important step to protect the timeless beauty of Lake George, and we are creating a global model for environmental research and protection of water resources.”
In an effort to promote public-sector support of alternative energy technology, researchers in Australia will study noise caused by wind turbines -- of which there are many unknowns, the University of Adelaide researchers said.
Of particular uncertainty are the low-frequency sounds produced by large wind turbines found around the world, Chief Investigator and Associate Professor Con Doolan said in a press release. "This project is aimed at getting to the bottom of what is creating the noise that can cause disturbance," he said in the release. "When we know what is contributing most to that noise -- exactly what's causing it -- then we can stop it."

The researchers will build a small-scale wind turbine in the university's wind tunnel and will construct an anechoic chamber (a specialist acoustic test room) around the turbine.
By using “laser diagnostics” and arrays of microphones, researchers say they will test wind turbines in a lab to recreate real-world scenarios and identify the source of the sound generated. By finding a correlation between aerodynamics and sound production, researchers hope to identify engineering solutions and influence public policy.
"If we can understand what's creating these sounds, then we can advise governments about wind farm regulation and policy, and make recommendations about the design of wind farms or the turbine blades to industry," Doolan said.
In this article, learn about these concepts with Linux on the desktop:
- Working with user and group accounts
- Managing files and folders
- Working with services
- Monitoring the system
- Reviewing log files
To get the most from the articles in this series, you should have knowledge of managing user and group accounts, files, folders, and reviewing log files in a Windows server environment. A basic understanding of logging in and navigating a Linux desktop environment such as LXDE, GNOME, or KDE is expected. Additionally, it's beneficial to have a working Linux computer to explore the concepts and examples in the article.
Working with user and group accounts
When you consider server management from a desktop environment, a user and group management tool is a must have. For the latest release of GNOME 3.2, that tool is located in the GNOME Control Center. The GNOME Control Center is a central location to manage various aspects of your Linux computer—not unlike the Windows Control Panel. Still widely used today is the GNOME System Tools (GST). GST provides useful tools for Linux administrators, including a tool to manage user and group accounts. Some popular Linux distributions provide the system-config-users tool as the default user manager. So, no matter your Linux distribution, you should have access to manage local user accounts out of the box, usually under the users and groups label.
Linux has no registry. So the GUI tools you use are just front-end tools to write data to a file on the file system. For the task of managing users and groups, you manage the underlying /etc/passwd and /etc/group files with those tools. The /etc/passwd file maintains user account information, while the /etc/group file stores group account information. User passwords are encrypted in the /etc/shadow file, while group passwords are stored in the /etc/gshadow file.
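Because these are plain colon-delimited text files, they are easy to inspect programmatically. The sketch below is a minimal illustration, not part of any GNOME tool; the helper name and the sample record are invented, but the seven-field layout is the standard /etc/passwd format:

```python
def parse_passwd_line(line):
    """Split one /etc/passwd record into its seven standard fields."""
    name, pw, uid, gid, gecos, home, shell = line.rstrip("\n").split(":")
    return {
        "name": name,      # login name
        "password": pw,    # usually "x": the real hash lives in /etc/shadow
        "uid": int(uid),   # numeric user ID
        "gid": int(gid),   # primary group ID
        "gecos": gecos,    # comment / full name field
        "home": home,      # home directory
        "shell": shell,    # default login shell
    }

# Example record in the standard /etc/passwd format
record = parse_passwd_line("alice:x:1000:1000:Alice:/home/alice:/bin/bash")
print(record["uid"], record["shell"])
```

Iterating over a real system's accounts is then just a matter of reading /etc/passwd line by line and applying the same split.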
The GNOME user and group management tool provides a straightforward interface for account management, shown in Figure 1.
Figure 1. Creating a user
Provided you have the privileges, you can create and edit user and group accounts. In addition, you can manage detailed account settings such as location of the home directory, user ID, default login shell, password resets, and group membership assignment. Standard users typically have access to the tool for password management.
To create a new group, click on the Groups tab and enter the group name. As shown in Figure 2, after you create a group you can then manage the membership simply by selecting or clearing the check boxes next to users' names.
Figure 2. Creating a group
Managing files and folders
In Windows, Windows Explorer is the tool of choice for many to navigate the file system. The GNOME project has Nautilus. Nautilus is a file manager with a strong development team and wide user base. It has been in development since 2001.
If you use the GNOME desktop, chances are Nautilus is already installed. If you are not using GNOME, you can still download and install Nautilus, so check your distribution's documentation.
One of the primary tasks you perform with a file manager is navigating the file system. With Nautilus, you can even switch it to browser mode, which gives you more of a Windows Explorer feel. Remember, in Linux all folders are mounted as subfolders to the main root (/) directory. So if you have remote drives mounted to your Linux server, such as from a Windows or another Linux computer, you can navigate the file system from the root (/) directory just as you would a directory located on the local file system, as shown in Figure 3.
Figure 3. Navigating the file system
Navigating in Nautilus is similar to navigating in Windows Explorer. Click on a folder to drill down into the subfolders. Right-click on any folder or file to perform the usual tasks you've come to expect with a file manager, such as copy, rename, delete, open, compress, and manage the permissions.
For viewing preferences, you can set options such as detailed view, listed view, by name, and so on, as shown in Figure 4.
Figure 4. Managing file preferences
Read, write, and execute
When you right-click to manage the permissions of a folder or file, you can view or change the permissions (if your account has access to do so). In Nautilus, if a file or folder is not within the security permissions for your user account, a lock icon displays next to it. With Linux, each folder has three sets of permissions: user owner (u), group owner (g), and other (o). Within each set, you can assign the basic permissions of read, write, or execute.
Remember to assign execute permission to shell scripts or any other files users need to execute. Unlike Windows, in Linux you need to explicitly grant the file execute permission to the set of users that need to perform that action. See Figure 5 for an example.
Figure 5. Setting the permissions on a file
Table 1 summarizes the basic permission options for a typical Linux file. I include octal representation because many Linux-related installation and software documents reference the permissions in octal notation.
Table 1. Linux permissions
|Read and write||6|
|Read and execute||5|
|Full permission (read, write, and execute)||7|
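The octal digits in Table 1 are simply the sum of read = 4, write = 2, and execute = 1, applied once each for the user, group, and other sets. A small illustrative sketch (the function name is my own, not from any Linux tool):

```python
def mode_to_string(mode):
    """Render a 3-digit octal mode (e.g. 0o644) as rwxrwxrwx notation."""
    out = []
    for shift in (6, 3, 0):                # user, group, other
        bits = (mode >> shift) & 0o7
        out.append("r" if bits & 4 else "-")
        out.append("w" if bits & 2 else "-")
        out.append("x" if bits & 1 else "-")
    return "".join(out)

print(mode_to_string(0o644))  # rw-r--r--
print(mode_to_string(0o755))  # rwxr-xr-x
```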
When you create a new file using Nautilus, it uses the underlying Linux umask, just like when you create a file from the command line. The umask determines the default permissions. Most Linux distributions default to a umask of 022. Files work from the 666 octal. You just need to do some subtraction to understand how umask works. If you create a new file with a umask of 022, the file's default permissions are 666 - 022, or 644. That's to say the user owner has read and write permission, while the group owner and other have read permission.

The same concept applies when you create folders using Nautilus. Folders work from the 777 octal. With the same umask of 022, if you create a new folder its default permissions are 777 - 022, or 755. This means the user owner has full permission, and the group owner and other have read and execute permission.
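The subtraction works for a umask of 022, but strictly speaking the umask bits are cleared from the base mode with a bitwise operation. A small sketch of the arithmetic (the function name is invented for illustration):

```python
def default_mode(base, umask):
    """Permissions a new file or folder gets: base mode with umask bits cleared."""
    return base & ~umask

# New files start from the 666 octal, new directories from 777
print(oct(default_mode(0o666, 0o022)))  # 0o644
print(oct(default_mode(0o777, 0o022)))  # 0o755
```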
As in Windows Explorer, you can easily share a folder using Nautilus. To do so, right-click on the folder you want to share, and select the Sharing Options menu item. Next, select the Share this folder check box, as shown in Figure 6.
Figure 6. Sharing a folder
Additionally, you can select the Allow others to create and delete files in this folder check box if you want other people to be able to save documents in the folder. If you do, Nautilus will ask for confirmation from you to change the folder permissions.
If you have users from a mixed environment, you can select the Guest access check box to allow those users access to the folder without having a local Samba account for authentication. Use this option with care because it could introduce unnecessary security vulnerabilities to your Linux server.
If you want to share the folders with Windows users and authenticate them, you must first set up and configure Samba on your Linux server.
Working with services
With the GST installed, you can manage your server services and various other daemons (a Linux word for services) within the desktop environment, similar to using Windows services. This tool is usually labeled as Startup Preferences, and you can launch it from the System menu by clicking Preferences.
To date, the GST doesn't completely manage services (reload, restart, and so on). It can give you a basis for understanding what services are installed on your Linux computer and whether you want to run particular services upon system boot.
The green and red icons shown for the services in Figure 7 indicate whether the services are enabled or disabled to run upon system boot. These are similar to the Windows services automatic and manual options. A separate icon represents whether the services are running.
Figure 7. Viewing a running sendmail service in GST
Monitoring the system
Although not identical to the Windows Task Manager, GNOME System Monitor provides similar functionality. If you want a high-level view of your Linux computer's resource usage, GNOME System Monitor can provide a quick snapshot of the system. The four main tabs for monitoring are:
- System
- Processes
- Resources
- File Systems
The System tab provides general operating system and hardware information (memory and processor), much like the System dialog in Windows that displays when you click Properties from the Window Manager desktop icon.
The Processes tab (Figure 8), displays all running processes (and there are many!). You can sort the processes by name, central processing unit (CPU) usage, and so on.
Figure 8. Killing a process
The Resources tab (Figure 9) is similar to the Windows Task Manager Performance tab. It provides historical graphing of CPU, memory/swap, and network bandwidth usage.
Figure 9. Monitoring system resources
The File Systems tab shows all mounted file systems and general information such as mount points for various partitions, free space, total space, and so on.
Reviewing log files
The GNOME System Log Viewer is comparable to the Event Viewer in Windows.
Under the hood, Linux typically uses the syslog (or rsyslog) mechanism to generate log files for various applications, server services, and system messages. These files usually reside in the /var/log directory on the Linux file system. So when you first open the GNOME System Log Viewer, your distribution might provide a way for the tool to automatically read the various logs in that directory. If not, or if you choose to add additional log files in the viewer, simply click File > Add, and navigate to the desired log file.
Table 2 lists and describes some of the common Linux logs you might want to monitor with the log viewer.
Table 2. Linux log files
|Log file|Description|
|boot.log|Hardware detection, mounting, and other system messages upon startup of the machine|
|secure|Security-related messages|
|messages|Kernel and other general system message information|
|httpd|Web server logs directory containing access and error logs in separate files|
|cups|Directory containing log messages related to printing|
|cron|Log messages related to scheduled jobs|
|Xorg.0.log|Log messages related to the X-Window server|
|auth.log|Authentication successes and failures|
|samba|Directory of access and error logs relating to the Samba server|
Table 2 is not an exhaustive list of the log files you can read with the log viewer. Even if you have commercial software installed on your server, you can use the log viewer to view those logs files as long they are in the proper log format. Explore your /var/log directory and add the log files that are appropriate for your needs.
One thing about the GNOME System Log Viewer that is considerably different from the Windows Event Viewer is that many of the log views are dependent upon the settings for the underlying system configuration. For example, the logs for the Apache web server can be configured to rotate daily. In that case, the httpd access.log only displays the current day's log messages, while the older log files are moved to archives. You can still configure the log viewer to view the archive logs by adding those as well.
When you view your logs, you can easily scroll through and read the various messages. Sometimes the volume of messages can make finding the interesting logs, such as errors or fatal messages, much like the proverbial search for the needle in the haystack. The GNOME System Log Viewer provides a filter feature that allows you to define various filters using regular expressions to highlight or show only specified log messages. For example, while troubleshooting an email issue for a particular user, you might want to filter and display only messages that contain that user's email address, as shown in Figure 10.
Figure 10. Displaying only messages based upon a filter
A more typical example for daily use is to create filters that highlight error messages in red while using another color, such as orange, for warning messages.
Figure 11 demonstrates a filter configuration that displays only root login attempts.
Figure 11. Highlighting log messages based upon a filter
With regular expressions, you can really use your imagination as the need arises to highlight or display only the messages you need.
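The viewer's filter feature boils down to applying a regular expression to each line, which is easy to reproduce outside the GUI. In this sketch the sample log lines are invented for illustration:

```python
import re

def filter_log(lines, pattern):
    """Return only the log lines matching the given regular expression."""
    regex = re.compile(pattern)
    return [line for line in lines if regex.search(line)]

sample = [
    "Jan 10 09:12:01 sshd[411]: Accepted password for root",
    "Jan 10 09:12:07 sshd[415]: Failed password for alice",
    "Jan 10 09:13:22 CRON[502]: (root) CMD (run-parts /etc/cron.hourly)",
]

# Show only root login attempts, as in the Figure 11 filter
for line in filter_log(sample, r"password for root"):
    print(line)
```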
Although Linux is known for its abundant command-line tools, you don't have to use it that way. Over the last several years, successful projects such as GNOME have made tremendous strides in providing good quality desktop tools for Linux system administration. If you are moving to Linux from a Windows environment, these tools can provide a more comfortable transition while allowing you to effectively manage your Linux servers.
- Read Windows-to-Linux roadmap: Part 3. Introduction to Webmin (Chris Walden, developerWorks, November 2003) to learn more about browser-based administrative tools for Linux.
- Learn more about the Nautilus file manager from the GNOME Desktop User Guide and discover advanced tasks for managing files and folders.
- To learn more about Linux logging, check out Windows-to-Linux roadmap: Part 5. Linux logging (Chris Walden, developerWorks, November 2003).
Get products and technologies
- Learn more about the general features of the GST and perform a variety of administrative chores.
- Explore the GNOME System Log Viewer.
In the section on spatial locality I mentioned that storing whole blocks is one way that caches take advantage of spatial locality of reference. Now that we know a little more about how caches are organized internally, we can look a bit closer at the issue of block size. You might think that as cache sizes increase you could take even better advantage of spatial locality by making block sizes even bigger. Surely fetching more bytes per block into the cache would decrease the odds that some part of the working set will be evicted because it resides in a different block. This is true, to some extent, but we have to be careful. If we increase the block size while keeping the cache size the same, then we decrease the number of blocks that the cache can hold. Fewer blocks in the cache means fewer sets, and fewer sets means that collisions and therefore misses are more likely. And of course, with fewer blocks in the cache the likelihood that any particular block that the CPU needs will be available in the cache decreases.
The upshot of all this is that smaller block sizes allow us to exercise more fine-grained control of the cache. We can trace out the boundaries of a working set with a higher resolution by using smaller cache blocks. If our cache blocks are too large, we wind up with a lot of wasted cache space because many of the blocks will contain only a few bytes from the working set while the rest is irrelevant junk. If we think of this issue in terms of cache pollution, we can say that large cache blocks are more prone to pollute the cache with non-reusable data than small cache blocks.
The following image shows the memory map we've been using, with large block sizes.
This next image shows the same map, but with the block sizes decreased. Notice how much more control the smaller blocks allow over cache pollution.
The other problems with large block sizes are bandwidth-related. Since the larger the block size the more data is fetched with each LOAD, large block sizes can really eat up memory bus bandwidth, especially if the miss rate is high. So a system has to have plenty of bandwidth if it's going to make good use of large cache blocks. Otherwise, the increase in bus traffic can increase the amount of time it takes to fetch a cache block from memory, thereby adding latency to the cache.
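The block-size tradeoff is easy to see with a toy direct-mapped cache model. In the sketch below (cache and trace parameters are invented for illustration), a sequential walk rewards large blocks, while two hot spots that happen to collide in a small cache punish them:

```python
def miss_count(trace, cache_size, block_size):
    """Misses for a direct-mapped cache of cache_size bytes with given block size."""
    num_blocks = cache_size // block_size
    tags = {}                                  # set index -> stored tag
    misses = 0
    for addr in trace:
        block = addr // block_size
        index = block % num_blocks
        tag = block // num_blocks
        if tags.get(index) != tag:             # miss: fetch the whole block
            tags[index] = tag
            misses += 1
    return misses

sequential = list(range(0, 64))                # walk 64 consecutive byte addresses
print(miss_count(sequential, 32, 16))          # 4 misses: big blocks exploit locality
print(miss_count(sequential, 32, 4))           # 16 misses: one per small block

# Two hot spots far apart that map to the same set when blocks are large
pingpong = [0, 1028] * 8
print(miss_count(pingpong, 32, 16))            # 16: every access evicts the other block
print(miss_count(pingpong, 32, 4))             # 2: small blocks let both coexist
```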
Write Policies: Write through vs. Write back
So far, this entire article has dealt with only one type of memory traffic: loads, or requests for data from memory. I've only talked about loads because they make up the vast majority of memory traffic. The remainder of memory traffic is made up of stores, which in simple uniprocessor systems are much easier to deal with. In this section, we'll cover how to handle stores in single-processor systems with just an L1 cache. When you throw in more caches and multiple processors, things get more complicated than I want to go into here.
Once a retrieved piece of data is modified by the CPU, it must be stored or written back out to main memory so that the rest of the system has access to the most up-to-date version of it. There are two ways to deal with such writes to memory. The first way is to immediately update all the copies of the modified data in each level of the hierarchy to reflect the latest changes. So a piece of modified data would be written to the L1 and main memory so that all of its copies are current. Such a policy for handling writes is called write through, since it writes the modified data through to all levels of the hierarchy.
A write through policy can be nice for multiprocessor and I/O-intensive system designs, since multiple clients are reading from memory at once and all need the most current data available. However, the multiple updates per write required by this policy can greatly increase memory traffic. For each STORE, the system must update multiple copies of the modified data. If there's a large amount of data that has been modified, then that could eat up quite a bit of memory bandwidth that could be used for the more important LOAD traffic.
The alternative to write through is write back, and it can potentially result in less memory traffic. With a write back policy, changes propagate down to the lower levels of the hierarchy as cache blocks are evicted from the higher levels. So an updated piece of data in an L1 cache block will not be updated in main memory until it's evicted from the L1.
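The difference between the two policies can be sketched with a deliberately tiny model: a one-block cache that counts main-memory writes under each policy. The block size and the store trace below are invented for illustration:

```python
def memory_writes(stores, block_size, policy):
    """Count main-memory writes for a one-block cache under each write policy."""
    writes = 0
    cached_block = None
    dirty = False
    for addr in stores:
        block = addr // block_size
        if block != cached_block:              # eviction / fill of the single block
            if policy == "write-back" and dirty:
                writes += 1                    # flush the dirty block on eviction
            cached_block, dirty = block, False
        if policy == "write-through":
            writes += 1                        # every store goes straight to memory
        else:
            dirty = True                       # write-back: just mark the block
    if policy == "write-back" and dirty:
        writes += 1                            # final flush of the last dirty block
    return writes

stores = [0, 4, 8, 12, 64, 68]                 # six stores touching two blocks
print(memory_writes(stores, 64, "write-through"))  # 6
print(memory_writes(stores, 64, "write-back"))     # 2
```

Clustered stores to the same block collapse into a single write-back flush, which is exactly where the traffic savings come from.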
There is much, much more that can be said about caching, and this article has covered only the basic concepts. In the next article, we'll look in detail at the caching and memory systems of both the P4 and the G4e. This will provide an opportunity not only to fill in the preceding, general discussion with some real-world specifics, but also to introduce some more advanced caching concepts like data prefetching and cache coherency.
|Date|Version|Change|
|7/9/2002|1.1|Page 3 was missing, so I've added it in.|
Scanning the Internet used to be a task that took months, but a new tool created by a team of researchers from the University of Michigan can scan all (or most) of the allocated IPv4 addresses in less than 45 minutes by using a typical desktop computer with a gigabit Ethernet connection.
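As a rough sanity check on that claim (treating the whole 2^32 address space as an upper bound, even though ZMap can skip unallocated ranges), the required probe rate works out to roughly 1.6 million packets per second:

```python
ADDRESS_SPACE = 2 ** 32          # every possible IPv4 address
SCAN_SECONDS = 45 * 60           # the 45-minute scan window

packets_per_second = ADDRESS_SPACE / SCAN_SECONDS
print(f"{packets_per_second:,.0f} probes/second")  # 1,590,729 probes/second
```

That rate is roughly what a gigabit link can sustain with minimum-size probe packets, which is consistent with the article's "typical desktop computer with a gigabit Ethernet connection."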
The name of the tool is Zmap, and its uses can be many.
“ZMap can be used to study protocol adoption over time, monitor service availability, and help us better understand large systems distributed across the Internet,” the researchers say, and they have used it to see how fast organizations / websites are implementing HTTPS, how Hurricane Sandy disrupted Internet use in the affected areas, how widespread are certain security bugs, and when is the best time to perform scans like these.
Among the things that they discovered are that in the last year the use of HTTPS increased by nearly 20 percent (nearly 23 percent when it comes to the top 1 million websites), and that the Universal Plug and Play vulnerability discovered earlier this year was still found on 16.7 percent of all detected UPnP devices after a few weeks passed from the revelation.
The scanner can also be used to enumerate vulnerable hosts (and hopefully notify its administrators of the fact so that they can remedy the situation), to uncover hidden services, detect service disruptions and even study criminal behavior, the researchers pointed out.
On the other hand, it can also be used for “evil” – attackers can also wield it to detect vulnerable hosts in order to compromise them.
“While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space and some users may not appreciate your scanning. We encourage ZMap users to respect requests to stop scanning and to exclude these networks from ongoing scanning,” the researchers noted and added that coordinating with local network administrators before initiating such a scan is also a good idea.
“It should go without saying that researchers should refrain from exploiting vulnerabilities or accessing protected resources, and should comply with any special legal requirements in their jurisdictions,” they stressed. | <urn:uuid:0864b14e-b638-4c55-be1d-8eca57dab9d0> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2013/08/19/scanning-the-internet-in-less-than-an-hour/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00599-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944954 | 437 | 2.78125 | 3 |
NAND Flash, the nonvolatile memory used in smartphones, SSDs (solid-state drives), and many other devices, keeps dropping in price. But the physics of silicon lithography indicates we can’t keep shrinking its size. “These warnings have been out a long time,” says Jim Handy, analyst at market research firm Objective Analysis. “Intel warned they couldn’t make flash below 60 nm [nanometers], but now they’re shipping chips made at 16 nm.” Companies are searching for new ways to make nonvolatile memory, and Santa Clara, Calif.-based start-up Crossbar Inc. believes it has the best option with its RRAM (resistive RAM).
“RRAM is higher density, has more endurance and longevity, and is 20 times faster than NAND flash while using a fraction of the electricity,” says Hagop Nazarian, vice president of engineering at Crossbar. “RRAM works in 3D, so you can stack chips to increase memory from 16GB to 32GB with another layer, and 64GB with another two.” Crossbar has silicon wafers in testing now, and expects to ship product in two to three years.
Crossbar’s RRAM uses a pair of electrodes separated by a proprietary amorphous-silicon switching medium; applying a current drives silver ions into a filament that dramatically lowers resistance. Reversing the current moves the silver ions in the other direction, breaking the filament and raising resistance.
Handy counts numerous Crossbar competitors also trying to create the next flash memory breakthrough. “HP said they’d have their memristor [technology] shipping by now, but it’s not. RRAM, MRAM [magnetoresistive RAM], ReRAM, and PCRAM [phase-change memory] are all significantly more expensive than NAND flash, and cost is just about everything in memory.”
Nazarian believes Crossbar’s RRAM has the inside track. “We have filed over 100 patents and 30 have been issued. Our technology is CMOS compatible with multiple layers, using techniques chip foundries already use, so we will be able to add embedded memory onto microcontrollers. We’ll serve all markets for memory, from the smallest device in the Internet of Things to the largest servers.”
“NAND flash will be around for another decade,” says Handy. “There will be a couple more generations of current technology, then three generations or more of 3D NAND flash. But someday NAND flash prices will level out because of the technology difficulties, and these other memory options will drop enough to match their price.”
When the switch does come, resellers will need few operational changes to integrate the new memory, says Handy. Controller makers may be able to scale back the complexity of their controller chips. “You may update your 2013 iPhone with 1TB of NAND flash to a 4TB model with a new memory type.”
Cloud hosting represents a general shift of computer processing, storage, and software delivery away from the desktop and local servers, across the network, and into next-generation data centers operated by cloud computing companies with large infrastructures. Just as the electric grid revolutionized business, cloud computing is revolutionizing information technology (IT). This shift frees corporations from large IT capital investments while letting them plug into the powerful computing resources that cloud hosting delivers over the network. Cloud hosting helps businesses focus on their core business rather than worry about the associated IT tasks.
by Geoff Huston, APNIC
Back at the end of June 2012 there was a brief IT hiccup as the world adjusted the Coordinated Universal Time (UTC) standard by adding an extra second to the last minute of the 30th of June. Normally such an adjustment would pass unnoticed by all but a small, dedicated collection of timekeepers, but this time the story spread into the popular media as numerous Linux systems hiccupped over this additional second, and some of those systems supported high-profile services, including a major air carrier's reservation and ticketing backend. The entire topic of time, time standards, and the difficulty of keeping a highly stable and regular clock standard in sync with a slightly wobbly rotating Earth has been a longstanding debate in the International Telecommunication Union Radiocommunication Sector (ITU-R), the standards body that oversees this coordinated time standard. However, I am not sure that anyone would argue that the challenge of synchronizing a strict time signal with a less than perfectly rotating planet is sufficient reason to discard the concept of a coordinated time standard and just let each computer system drift away on its own concept of time. These days we have become used to a world that operates on a consistent time standard, and we have become used to our computers operating at sub-second accuracy. But how do they do so? In this article I will look at how a consistent time standard is spread across the Internet, and examine the operation of the Network Time Protocol (NTP).
Some communications protocols in the IP protocol suite are quite recent, whereas others have a long and rich history that extends back to the start of the Internet. The ARPANET switched over to use the TCP/IP protocol suite in January 1983, and by 1985 NTP was in operation on the network. Indeed it has been asserted that NTP is the longest running, continuously operating, distributed application on the Internet.
The objective of NTP is simple: to allow a client to synchronize its clock with UTC time, and to do so with a high degree of accuracy and a high degree of stability. Within the scope of a WAN, NTP will provide an accuracy of small numbers of milliseconds. As the network scope gets finer, the accuracy of NTP can increase, allowing for sub-millisecond accuracy on LANs and sub-microsecond accuracy when using a precision time source such as a Global Positioning System (GPS) receiver or a caesium oscillator.
If a collection of clients all use NTP, then this set of clients can operate with a synchronized clock signal. A shared data model, where the modification time of the data is of critical importance, is one example of the use of NTP in a networked context.
(I have relied on NTP timer accuracy at the microsecond level when trying to combine numerous discrete data sources, such as a web log on a server combined with a Domain Name System (DNS) query log from DNS resolvers and a packet trace.)
NTP, Time, and Timekeeping
To consider NTP, it is necessary to consider the topic of timekeeping itself, and in particular three terms: offset, delay, and dispersion.
NTP is designed to allow a computer to be aware of three critical metrics for timekeeping: the offset of the local clock to a selected reference clock, the round-trip delay of the network path between the local computer and the selected reference clock server, and the dispersion of the local clock, which is a measure of the maximum error of the local clock relative to the reference clock. Each of these components is maintained separately in NTP. They provide not only precision measurements of offset and delay, to allow the local clock to be adjusted to synchronize with a reference clock signal, but also definitive maximum error bounds of the synchronization process, so that the user interface can determine not only the time, but the quality of the time as well.
Universal Time Standards
It would be reasonable to expect that the time is just the time, but that is not the case. The Universal Time reference standard has several versions; two of them, UT1 and UTC, are of interest to network timekeeping.
UT1 is the principal form of Universal Time. Although conceptually it is Mean Solar Time at 0° longitude, precise measurements of the Sun are difficult. Hence, it is computed from observations of distant quasars using long baseline interferometry, laser ranging of the Moon and artificial satellites, as well as the determination of GPS satellite orbits. UT1 is the same everywhere on Earth, and is proportional to the rotation angle of the Earth with respect to distant quasars, specifically the International Celestial Reference Frame (ICRF), neglecting some small adjustments.
The observations allow the determination of a measure of the Earth's angle with respect to the ICRF, called the Earth Rotation Angle (ERA), which serves as a modern replacement for Greenwich Mean Sidereal Time. UT1 is required to follow the relationship:
ERA = 2π(0.7790572732640 + 1.00273781191135448 Tu) radians, where Tu is the Julian UT1 date minus 2451545.0 (the number of UT1 days elapsed since 12h UT1 on 1 January 2000).
Coordinated Universal Time (UTC) is an atomic timescale that approximates UT1. It is the international standard on which civil time is based. It ticks SI seconds, in step with International Atomic Time (TAI). It usually has 86,400 SI seconds per day, but is kept within 0.9 seconds of UT1 by the introduction of occasional intercalary leap seconds. As of 2012 these leaps have always been positive, with a day of 86,401 seconds.
NTP uses UTC, as distinct from the Greenwich Mean Time (GMT), as the reference clock standard. UTC uses the TAI time standard, based on the measurement of 1 second as 9,192,631,770 periods of the radiation emitted by a caesium-133 atom in the transition between the two hyperfine levels of its ground state, implying that, like UTC itself, NTP has to incorporate leap second adjustments from time to time.
NTP is an "absolute" time protocol, so local time zones, and the conversion of absolute time to a calendar date and time at a particular location on the Earth's surface, are not an intrinsic part of the NTP protocol. This conversion from UTC to wall-clock time, namely the local date and time, is left to the local host.
Servers and Clients
NTP uses the concepts of server and client. A server is a source of time information, and a client is a system that is attempting to synchronize its clock to a server.
Servers can be either a primary server or a secondary server. A primary server (sometimes also referred to as a stratum 1 server, using terminology borrowed from the time reference architecture of the telephone network) is a server that receives a UTC time signal directly from an authoritative clock source, such as a configured atomic clock or, very commonly these days, a GPS signal source. A secondary server receives its time signal from one or more upstream servers, and distributes its time signal to one or more downstream servers and clients. Secondary servers can be thought of as clock signal repeaters, and their role is to relieve the client query load from the primary servers while still providing their clients with a clock signal of comparable quality. The secondary servers need to be arranged in a strict hierarchy in terms of upstream and downstream, and the stratum terminology is often used to assist in this process.
As noted previously, a stratum 1 server receives its time signal from a UTC reference source. A stratum 2 server receives its time signal from a stratum 1 server, a stratum 3 server from stratum 2 servers, and so on. A stratum n server can peer with many stratum n – 1 servers in order to maintain a reference clock signal. This stratum framework is used to avoid synchronization loops within a set of time servers.
Clients peer with servers in order to synchronize their internal clocks to the NTP time signal.
The NTP Protocol
At its most basic, the NTP protocol is a clock request transaction, where a client requests the current time from a server, passing its own time with the request. The server adds its time to the data packet and passes the packet back to the client. When the client receives the packet, the client can derive two essential pieces of information: the reference time at the server and the elapsed time, as measured by the local clock, for a signal to pass from the client to the server and back again. Repeated iterations of this procedure allow the local client to remove the effects of network jitter and thereby gain a stable value for the delay between the local clock and the reference clock standard at the server. This value can then be used to adjust the local clock so that it is synchronized with the server. Further iterations of this protocol exchange can allow the local client to continuously correct the local clock to address local clock skew.
NTP operates over the User Datagram Protocol (UDP). An NTP server listens for client NTP packets on port 123. The NTP server is stateless and responds to each received client NTP packet in a simple transactional manner by adding fields to the received packet and passing the packet back to the original sender, without reference to preceding NTP transactions.
Upon receipt of a client NTP packet, the receiver time-stamps receipt of the packet as soon as possible within the packet assembly logic of the server. The packet is then passed to the NTP server process. This process interchanges the IP Header Address and Port fields in the packet, overwrites numerous fields in the NTP packet with local clock values, time-stamps the egress of the packet, recalculates the checksum, and sends the packet back to the client.
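As a rough illustration of this transaction, the sketch below builds and parses the 48-byte NTP packet in Python. It follows the RFC 5905 layout, in which the client writes its transmit time into the Transmit Timestamp field and the server echoes it back as the Origin Timestamp. No network I/O is shown, and the helper names are my own.

```python
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def build_client_packet(transmit_time_unix):
    """Build a 48-byte mode-3 (client) NTPv4 request.

    The first byte packs the leap indicator (0), version (4), and
    mode (3).  The client's transmit time goes into the Transmit
    Timestamp field (the last 8 bytes).
    """
    li_vn_mode = (0 << 6) | (4 << 3) | 3
    ntp_secs = int(transmit_time_unix) + NTP_EPOCH_OFFSET
    ntp_frac = int((transmit_time_unix % 1) * (1 << 32))
    # 4 header bytes, then 11 32-bit words: root delay, root dispersion,
    # reference ID, and the Reference/Origin/Receive/Transmit timestamps.
    return struct.pack("!BBBb11I", li_vn_mode, 0, 0, 0,
                       0, 0, 0, 0, 0, 0, 0, 0, 0, ntp_secs, ntp_frac)

def parse_transmit_timestamp(packet):
    """Extract the Transmit Timestamp (bytes 40-47) as Unix seconds."""
    secs, frac = struct.unpack("!II", packet[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / (1 << 32)

pkt = build_client_packet(1_000_000_000.5)
print(len(pkt), parse_transmit_timestamp(pkt))   # 48 1000000000.5
```

A real client would send this packet over UDP to port 123 and read all four timestamps from the response.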
The NTP packets sent by the client to the server and the responses from the server to the client use a common format, as shown in Figure 1.
The header fields of the NTP message are as follows: the Leap Indicator, Version Number, and Mode flags; the Stratum, Poll, and Precision values; and the Root Delay, Root Dispersion, and Reference Identifier fields.
The next four fields use a 64-bit time-stamp value. This value is an unsigned 32-bit seconds value, and a 32-bit fractional part. In this notation the value 2.5 would be represented by the 64-bit string 0x00000002.80000000: an integer part of 2 and a fraction field of 0x80000000, which encodes one half.
The unit of time is in seconds, and the epoch is 1 January 1900, meaning that the NTP time will cycle in the year 2036 (two years before the 32-bit Unix time cycle event in 2038).
The smallest time fraction that can be represented in this format is 232 picoseconds.
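A small sketch of this fixed-point encoding (the function names are illustrative):

```python
def to_ntp64(seconds):
    """Encode a seconds value as a 64-bit NTP timestamp:
    a 32-bit integer part and a 32-bit binary fraction."""
    whole = int(seconds)
    frac = int(round((seconds - whole) * (1 << 32)))
    return (whole << 32) | frac

def from_ntp64(ts):
    """Decode a 64-bit NTP timestamp back to seconds."""
    return (ts >> 32) + (ts & 0xFFFFFFFF) / (1 << 32)

# 2.5 seconds: upper word 0x00000002, lower word 0x80000000.
print(hex(to_ntp64(2.5)))      # 0x280000000

# Resolution of the fraction field: 2**-32 s, about 232.8 picoseconds.
print(from_ntp64(1) * 1e12)
```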
The basic operation of the protocol is that a client sends a packet to a server and records the time the packet left the client in the Origin Timestamp field (T1). The server records the time the packet was received (T2). A response packet is then assembled with the original Origin Timestamp and the Receive Timestamp equal to the packet receive time, and then the Transmit Timestamp is set to the time that the message is passed back toward the client (T3). The client then records the time the packet arrived (T4), giving the client four time measurements, as shown in Figure 3.
These four parameters are passed into the client timekeeping function to drive the clock synchronization function, which we will look at in the next section.
The optional Key and Message Digest fields allow a client and a server to share a secret 128-bit key, and use this shared secret to generate a 128-bit MD5 hash of the key and the NTP message fields. This construct allows a client to detect attempts to inject false responses from a man-in-the-middle attack.
The final part of this overview of the protocol operation is the polling frequency algorithm. An NTP client will send a message at regular intervals to an NTP server. This interval is commonly set to 16 seconds. If the server is unreachable, NTP will back off from this polling rate, doubling the back-off interval at each unsuccessful poll attempt down to a minimum rate of one poll every 36 hours. When NTP is attempting to resynchronize with a server, it will increase its polling frequency and send a burst of eight packets spaced at 2-second intervals.
When the client clock is operating within a sufficiently small offset from the server clock, NTP lengthens the polling interval and sends the eight-packet burst every 4 to 8 minutes (256 to 512 seconds).
Timekeeping on the Client
The next part of the operation of NTP is how an NTP process on a client uses the information generated by the periodic polls to a server to moderate the local clock.
From an NTP poll transaction, the client can estimate the delay between the client and the server. Using the time fields described in Figure 3, the transmission delay can be calculated as the total time from transmission of the poll to reception of the response minus the recorded time for the server to process the poll and generate a response:
δ = (T4 – T1) – (T3 – T2)
The offset of the client clock from the server clock can also be estimated by the following:
θ = ½ [(T2 – T1) + (T3 – T4)]
It should be noted that this calculation assumes that the network path delay from the client to the server is the same as the path delay from the server to the client.
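The two formulas can be checked with a toy exchange in which the client clock runs 50 ms ahead of the server and each one-way path takes 10 ms:

```python
def delay_and_offset(t1, t2, t3, t4):
    """Round-trip delay and clock offset from one NTP exchange.

    t1: client transmit, t2: server receive,
    t3: server transmit, t4: client receive.
    Assumes the forward and return path delays are equal.
    """
    delta = (t4 - t1) - (t3 - t2)        # network round-trip time
    theta = ((t2 - t1) + (t3 - t4)) / 2  # client clock offset from server
    return delta, theta

# Client clock is 50 ms ahead; one-way path delay is 10 ms each way,
# and the server spends 1 ms processing the request.
d, o = delay_and_offset(t1=100.000, t2=99.960, t3=99.961, t4=100.021)
print(d, o)   # delay ~0.020 s, offset ~-0.050 s
```

The 20 ms delay is the round trip minus the server's 1 ms processing time, and the -50 ms offset is exactly the correction the client should apply.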
NTP uses the minimum of the last eight delay measurements as δ0. The selected offset, θ0, is the one measured at the lowest delay. The values (θ0, δ0) become the NTP update value.
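A minimal sketch of that filtering step, assuming samples arrive as (offset, delay) pairs:

```python
def clock_filter(samples):
    """Pick the NTP update value from recent (offset, delay) samples.

    Keeps the last eight samples and selects the offset measured at
    the lowest round-trip delay, on the theory that low-delay
    exchanges suffered the least queuing noise.
    """
    recent = samples[-8:]
    theta0, delta0 = min(recent, key=lambda s: s[1])
    return theta0, delta0

samples = [(0.012, 0.040), (0.010, 0.021), (0.015, 0.055), (0.011, 0.030)]
print(clock_filter(samples))   # (0.01, 0.021)
```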
When a client is configured with a single server, the client clock is adjusted by a slew operation to bring the offset with the server clock to zero, as long as the server offset value is within an acceptable range.
When a client is configured with numerous servers, the client will use a selection algorithm to select the preferred server to synchronize against from among the candidate servers. Clustering of the time signals is performed to reject outlier servers, and then the algorithm selects the server with the lowest stratum with minimal offset and jitter values. The algorithm used by NTP to perform this operation is Marzullo's Algorithm.
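The core idea of the selection step can be illustrated as follows. Each server contributes a correctness interval [θ − δ, θ + δ], the range its offset estimate could plausibly occupy, and the algorithm finds the region covered by the most intervals, discarding outliers. The quadratic sweep below is an illustrative simplification: Marzullo's algorithm proper runs in a single pass over sorted endpoints, and real NTP selection also weighs stratum and jitter.

```python
def intersect(intervals):
    """Find the sub-interval covered by the most candidate intervals.

    intervals: list of (lo, hi) pairs, one per server.  Returns the
    count of agreeing servers and the segment where they overlap.
    """
    points = sorted({p for lo, hi in intervals for p in (lo, hi)})
    best_count, best_segment = 0, None
    for lo, hi in zip(points, points[1:]):
        mid = (lo + hi) / 2
        count = sum(1 for a, b in intervals if a <= mid <= b)
        if count > best_count:
            best_count, best_segment = count, (lo, hi)
    return best_count, best_segment

# Three servers: the third is an outlier (a "falseticker").
servers = [(8, 12), (11, 13), (14, 15)]
print(intersect(servers))   # (2, (11, 12))
```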
When NTP is configured on a client, it attempts to keep the client clock synchronized against the reference time standard. To do this task NTP conventionally adjusts the local time by small offsets (larger offsets may cause side effects on running applications, as has been found when processing leap seconds). This small adjustment is undertaken by an adjtime() system call, which slews the clock by altering the frequency of the software clock until the time correction is achieved. Slewing the clock is a slow process for large time offsets; a typical slew rate is 0.5 ms per second.
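At that slew rate, the time needed to absorb a given offset is easy to estimate:

```python
SLEW_RATE = 0.0005   # 0.5 ms of correction per second of real time

def slew_seconds(offset):
    """How long adjtime()-style slewing takes to absorb an offset."""
    return abs(offset) / SLEW_RATE

# A 100 ms offset takes over three minutes to slew away, which is
# why implementations usually step the clock for large offsets.
print(slew_seconds(0.100))   # ~200 seconds
```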
Obviously this informal description has taken a rather complex algorithm and some rather detailed math formulas without addressing the details. If you are interested in how NTP operates at a more detailed level, consult the references that follow, which will take you far deeper into the algorithms and the underlying models of clock selection and synchronization than I have done here.
NTP is in essence an extremely simple stateless transaction protocol that provides a quite surprising outcome. From a regular exchange of simple clock readings between a client and a server, it is possible for the client to train its clock to maintain a high degree of precision despite the possibility of potential problems in the stability and accuracy of the local clock and despite the fact that this time synchronization is occurring over network paths that impose a noise element in the form of jitter in the packet exchange between client and server. Much of today's distributed Internet service infrastructure relies on a common time base, and this base is provided by the common use of the Network Time Protocol. | <urn:uuid:eb8b63b6-5018-4e1f-a52a-df6b1fe28db1> | CC-MAIN-2017-09 | http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-58/154-ntp.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00296-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925195 | 3,278 | 3.265625 | 3 |
A 21st-century approach to democratizing data
- By Mark Forman (Moderator), Christopher J. Lyons
- Oct 24, 2012
This graph, from MIT's Billion Prices Project, represents a cutting-edge way to gather data and turn it into useful information, according to Christopher J. Lyons and Mark Forman.
“Unbelievable jobs numbers... These Chicago guys will do anything,” Jack Welch tweeted.
Not surprisingly, the recent steep drop in the unemployment rate has given rise to conspiracy comments and discussions about how the rate is derived. Maybe the employment rate is inflated. Maybe it is understated for months. Maybe seasonal adjustments play a part. Maybe.
Recent “democratizing data” concepts hold great promise for improving accountability and even increasing value from the billions of dollars spent on thousands of government data-collection programs. Yet when doubts dominate market-moving, election-shifting data, it is clear that America needs government to change more than how it distributes data. Should government collect the same data and in the same way that it did in the last century? More important, should government’s central role in collecting and disseminating data be changed?
Consider this example: Every day an organization near Boston sends its agents out to collect the prices of thousands of items sold by hundreds of retailers and manufacturers around the world. The agents are dozens of servers using software to scrape prices from websites. In near-real time, the price data is collected, stored, analyzed and sent to some of the largest investment and financial organizations on the planet, including central banks.
This is the Billion Prices Project, run by two economics professors at the Massachusetts Institute of Technology. With a 21st-century approach, two people can collect and analyze the costs of goods and services purchased in economies all over the world, using price data readily available online from thousands of retailers. They mimic what consumers do to find prices via Amazon, eBay, and Priceline. The Billion Prices Project does not sample; it uses raw computing power to generate a daily census of the prices of all goods and services. It routinely predicts price movements three months before the government's Consumer Price Index (CPI) announces the same.
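The scraping step itself is conceptually simple. The toy sketch below extracts prices from a hypothetical HTML snippet with a regular expression; the markup here is invented for illustration, and a real system would maintain page-specific extraction rules for each retailer.

```python
import re

# A hypothetical product-listing snippet; real retailers each need
# their own page-specific extraction rules.
html = """
<div class="item"><span class="name">Coffee 1kg</span>
  <span class="price">$12.40</span></div>
<div class="item"><span class="name">Olive oil 1L</span>
  <span class="price">$8.95</span></div>
"""

def scrape_prices(page):
    """Pull (name, price) pairs out of a listing page with a regex."""
    pattern = re.compile(
        r'class="name">([^<]+)</span>.*?class="price">\$([0-9.]+)',
        re.DOTALL)
    return [(name, float(price)) for name, price in pattern.findall(page)]

print(scrape_prices(html))
```

Run daily against thousands of retail sites, pipelines like this yield a near-real-time price census without any survey takers.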
Beginning in the early 20th century, the Bureau of Labor Statistics responded to the need to determine reasonable cost-of-living adjustments to workers' wages by publishing a price index tied to goods and services in multiple regions. Over time, government data collections grew through the best methods available in the 20th century, surveys and sampling, and built huge computer databases on a scale only the government could accomplish and afford. Even today, the CPI is based on physically collecting, by taking notes in stores, the prices for a representative basket of goods and services. The manual approach means the data is not available until weeks after consumers are already feeling the impact.
The federal government’s role as chief data provider has resulted in approximately 75 agencies that collect data using more than 6,000 surveys and regulatory filings. Those data-collection activities annually generate more than 400,000 sets of statistics that are often duplicative, sometimes conflicting and generally published months after collection. The federal government is still investing in being the trusted monopoly provider of statistical data by developing a single portal — Data.gov — to disseminate data it collects using 20th-century approaches.
However, because the value of price data diminishes rapidly with age, it is worth asking why government would invest any taxpayer dollars in finding new ways to publish data that is weeks out of date. More importantly, in an age in which most transactions are accomplished electronically, does it make sense to spread economic data assembled as if we were still in the 20th century?
Old approaches to collecting data no longer invoke a sense of trust. Consider the London Interbank Offered Rate benchmark interest rate, an average of the interest rates paid on interbank loans developed using manual data collection. Those funds move by electronic transactions, but the reporting of interest is an old-school, good-faith manual submission from certain major banks each morning to the British Bankers’ Association. So while the actual transactional data is available instantly in electronic format, it is gathered through individual reporting from each bank daily, creating opportunities for error and manipulation.
The lessons from the Billion Prices Project lie in its 21st-century approach, which affects the breadth, quality, cost and timeliness of data collection. It is an excellent example of how the rise of the Internet as the ubiquitous kiosk for posting information and the unstoppable movement to online transactions require changing government’s 20th-century approach to collecting and disseminating data.
The trusted information provider role of government is ending, and new ways to disseminate long-standing datasets will not change that. Non-government entities are increasingly filling the information quality gap, generating the timely, trusted data and statistics that businesses and policy-makers use — and pay for. The Case-Shiller indices, compiled by Standard and Poor’s using transaction data, are the standard for determining trends in housing prices. The ADP National Employment Report, generated from anonymous payroll information, is widely trusted to accurately relay changes in national employment.
It is time for the government to reconsider its role in data collection and dissemination. The 21st century is characterized by digital commerce that makes large amounts of transactional data available as those transactions occur. Government efforts to collect and analyze data — much like the U.S. Postal Service in the face of texting and e-mail — are becoming more disenfranchised the longer they ignore the paradigm shift.
Statistics developed by independent organizations and companies are already essential to markets, businesses and policy-makers, and the government is increasingly a marginal player. As long as the methods of collection and analysis are open and auditable, government might be better served by shifting away from being a producer to simply being a consumer.
Christopher Lyons is an independent consultant who works primarily with government clients on performance improvement and adoption of commercial best practices. Mark Forman was the government’s first administrator for e-government and IT and is co-founder of Government Transaction Services, a cloud-based company that simplifies and reduces the burden of complying with government rules and regulations.
Mark Forman is an accomplished Executive with more than 29 years of government management reform experience, including a Presidential appointment to be the first U.S. Administrator for E-Government and Information Technology, the Federal Government’s Chief Information Officer. Mr. Forman is currently the CEO of Government Transaction Services, Inc. which was established in 2010 to be the leading provider of cloud-based business process and transaction services supporting organizations that do business with the federal government. Government Transaction Services’ products reduce administrative burdens and simplify interactions with government, as well provide on-line practitioner communities.
Christopher Lyons is an independent consultant who works primarily with government clients on performance improvement and adoption of commercial best practices. | <urn:uuid:92e9e6aa-1705-41bf-aa09-7ec7d8de2f22> | CC-MAIN-2017-09 | https://fcw.com/articles/2012/10/24/democratizing-data.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00648-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.935549 | 1,411 | 2.578125 | 3 |
Image: Colorado is one of a number of states where state and local governments are prohibited by law from directly providing broadband service. image © 2005 Matthew. GNU Free Documentation License, Version 1.2. Article reprinted courtesy of Stateline.org
Colorado is one of a number of states where state and local governments are prohibited by law from directly providing broadband service, for example, free municipal wireless connections. So a recommendation in the Federal Communications Commission's National Broadband Plan has state officials scrambling. Released in March, the plan calls for Congress to ensure that state and local governments don't pose any barriers to making broadband available. If approved, the action could override the state laws.
"The worst thing that could happen in the state of Colorado is for a law like that to be rolled back, and we don't have accompanying policies in place," says John Conley, executive director of the state's Statewide Internet Portal Authority. To deal with that and other possible federal actions, Colorado has formed a broadband council to review the plan, as well as state policy, and deliver guidance to state lawmakers in the coming year.
Colorado's situation reflects the dynamically changing broadband environment in the United States today and the efforts of state officials not only to keep up with the changes in the plan, but to get out ahead of them. And since March, the FCC, its plan and other factors shaping the landscape of broadband in the nation have been on a wild ride.
What's at stake, states recognize, is the potential of broadband to improve the delivery of health care, public safety, education, and other services-and to make their workers and businesses competitive in a global economy. According to a new report released June 21 by the Pew Center on the States, "Bringing America Up to Speed: States' Role in Expanding Broadband," states -- with an infusion of $7.2 billion in federal stimulus funds and guidance from the FCC's broadband plan -- have stepped up efforts to ensure universal access to fast, reliable broadband connections and to give their residents the skills and resources they need to understand the benefits of broadband and get the most out of it.
Three weeks after the release of the plan, however, a federal court ruling, Comcast v. FCC, effectively undermined the FCC's authority to regulate many aspects of broadband, including oversight of management of Internet service providers and enforcement of a range of objectives, such as network neutrality, which prohibits providers from restricting all forms of content. "The effect of the Comcast decision," says Austin Schlick, general counsel for the FCC, "made their services unregulated and unregulatable under the current legal framework." Perhaps worried that Congress would step in to set its own standards, a group of providers, including Verizon, AT&T and Comcast, banded together to voluntarily impose net neutrality on themselves, developing guidelines to manage their networks. Net neutrality advocates have said that such an agreement, while nice, is no substitute for clear rules.
The court decision also casts some of the National Broadband Plan's recommendations into doubt. For one thing, it could upset the FCC's plan to reform the Universal Service Fund -- which now guarantees funding for universal telecommunications services for everyone -- so that it would also provide broadband to all Americans, according to FCC Chairman Julius Genachowski.
The ruling also threatens, he says, the plan's recommendations that would protect consumers and promote competition by ensuring transparency in broadband access services, safeguard the privacy of consumer information, facilitate access to broadband services by persons with disabilities, protect against cyber-attacks, ensure next-generation 911 services for broadband communications and preserve a free and open Internet.
In all of these areas, states have direct and indirect interests as well. For example, the funding for a broadband plan proposal to expand E-rate, a grant program that enables many schools and libraries to be connected to the Internet, could be in question. "Over the years, we have done very well with that program to get Internet to our schools," says Craig Orgeron, strategic services director at Mississippi's Department of Information Technology Services, who waits along with officials from other states to see how broadband's regulatory limbo will affect the E-Rate program and other areas.
After the ruling, the FCC voted June 17 to begin the process of reclassifying broadband as a telecommunications service like traditional phone lines, over which the FCC has more clearly delineated regulatory authority. Genachowski, though, has called for an approach that would scale back some of this authority that he has said would be inappropriate for broadband -- such as regulating Internet content -- an approach similar to the FCC's regulation of wireless telephone service.
But U.S. Congressman Lee Terry, a Republican from Nebraska and a proponent of broadband for revitalizing economies in rural areas, argues that Congress needs to step in and decide the next steps for broadband and broadband regulation. He says that the FCC is "usurping the Congressional role in broadband planning." He is not alone; more than half of Congress, including members of both parties, has expressed concern about the new reclassification plan. In one of several Congressional letters sent to the FCC before its vote, more than 70 House Democrats urged, "the significant regulatory impact of reclassifying broadband service ... should not be done without additional direction from Congress."
That could slow the process, however. Harold Feld, legal director for Public Knowledge, a public interest group focused on digital rights, notes, "Democrats and Republicans are fairly far apart on what sort of action they'd like to see." It took 20 years, he says, for Congress to act when similar FCC authority over cable television was in question.
"I don't know what the shakeout will be," says Barbara Esbin, a senior counsel with law firm Cinnamon Mueller who spent more than a decade with the FCC in the Media Bureau and the Cable Services Bureau. "But if I were a state regulator or broadband director, I would be watching this very closely."
Right now, state officials like Colorado's Conley are hoping just to get some clarity. Conley, the go-to person when others in the state have questions related to state or national broadband issues, says he is disconcerted by the murkiness that currently shrouds some important national broadband matters. "If someone asks me what you have to do to meet the net neutrality requirement, I don't know," he says. "And I don't know where to look."
Although the federal ruling casts uncertainty on aspects of FCC authority over broadband, it does not affect many of the recommendations in the FCC's broadband plan. Indeed, the FCC and other agencies already have begun implementing some of the suggestions, including changing regulations regarding utility pole attachments and taking steps to auction broadband spectrum.
For states, perhaps the most significant recent development has been the announcement of a new round of National Telecommunications and Information Administration (NTIA) grants for broadband mapping and planning activities, funded out of the $350 million the Recovery Act had designated for states to map the availability, speed, and location of broadband services. The new grants, in addition to the $100 million already granted for state mapping, cover three additional years beyond the initial two that the first round of grants had covered and expand funding to include state task force planning work and programs to increase computer ownership and Internet use.
"The NTIA broadband mapping program has allowed us to take a more centralized approach and to take more resources in the state to focus on broadband," notes Stuart Freiman, broadband program manager for the Rhode Island Economic Development Corp. For Freiman, the gamut of state actions suggested by the plan and supported by this planning grant -- everything from improving Internet adoption and digital literacy to using broadband to bolster education and integrate broadband applications across state public safety agencies -- "have created a fantastic opportunity for states to deal with issues they maybe haven't addressed in the past or have ignored because they thought it was being taken care of."
In all, the FCC has more than 60 action items from the plan slated for 2010 implementation. But one of the most prominent measures, auctioning off some new airwaves to commercial providers for broadband applications, has erupted in a dispute over whether a dedicated public emergency broadband network should be owned by government or private carriers. Public safety officials, as well as a number of state and local government groups, including the Council of State Governments and the National Governors Association, argue that these airwaves should be dedicated to a public emergency broadband network. Paying for a public safety network might be difficult, however, and the FCC has suggested that such a network could be constructed less expensively on existing public safety airwaves and supplemented by empowering public safety agencies to take over commercial bandwidth in emergency situations. Congress weighed these arguments in a public hearing June 17 as it considers legislation to build a national public safety broadband network.
One of the FCC's first actions relating to the plan, to reduce the cost and time it takes broadband providers to access the country's 49 million utility poles that the FCC regulates, was influenced by existing programs in some states. FCC General Counsel Phoebe Yang says the move was modeled after attachment guidelines in Connecticut and New York, which regulate their own poles, that can halve the number of days the process might take in other states. When the FCC implements these new rules, those poles still regulated by those other states will lag behind.
By informing the FCC of similar best practices, as well as challenges without current solutions, states will continue to play a crucial role in developing many of the federal regulations that will tumble out in the coming months and years. "We'd love to have input on the infrastructure issues, particularly around the impact of the plan's recommendations on traditional wireline carriers. We rely on the states to communicate that to us. Nothing is self-effectuating. Nothing is pre-decided," Yang says, noting that there are numerous issues, such as broadband adoption by those with disabilities, where the front lines are at the state level.
States also are moving ahead to use their authority to modernize policies and bolster broadband availability. On June 15, Governor Pat Quinn made Illinois the latest state to revamp its telecommunications law, overhauling obsolete standards from a 1985 law written in the days before widespread cell phone and broadband adoption. State officials say the new law will stimulate greater private investment in broadband and wireless technologies. | <urn:uuid:e58bc760-dbfb-42e9-80e8-a64c668a0241> | CC-MAIN-2017-09 | http://www.govtech.com/budget-finance/States-Ride-Broadband-Wave.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00116-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959656 | 2,093 | 2.734375 | 3 |
Device Puts Steering at the Tip of the Tongue
By Reuters | Posted 2008-06-30
The magnet lets people direct the movement of a cursor across a computer screen or a powered wheelchair around a room and can be implanted under the tongue.
WASHINGTON (Reuters) - A new device that uses a tiny magnet can help disabled people steer a wheelchair or operate a computer using only the tip of the tongue, U.S. researchers reported on Monday.
The magnet, the size of a grain of rice, lets people direct the movement of a cursor across a computer screen or a powered wheelchair around a room.
It is easily implanted under the tongue, the team at the Georgia Institute of Technology said.
"We chose the tongue to operate the system because unlike hands and feet, which are controlled by the brain through the spinal cord, the tongue is directly connected to the brain by a cranial nerve that generally escapes damage in severe spinal cord injuries or neuromuscular diseases," said Maysam Ghovanloo, an assistant professor who helped direct the work.
"Tongue movements are also fast, accurate and do not require much thinking, concentration or effort."
A headset with magnetic field sensors detects the magnetic tracer on the tongue and transmits wireless signals to a portable computer, which can be carried on the user's clothing or wheelchair.
"This device could revolutionize the field of assistive technologies by helping individuals with severe disabilities, such as those with high-level spinal cord injuries, return to rich, active, independent and productive lives," Ghovanloo said in a statement.
The team reported on their device to a meeting of the Rehabilitation Engineering and Assistive Technology Society of North America in Washington.
The researchers said the computer could be programmed to recognize a unique set of specific tongue movements for each user. "An individual could potentially train our system to recognize touching each tooth as a different command," Ghovanloo said.
The researchers tested the Tongue Drive system on 12 able-bodied volunteers and now plan to test it on people with severe disabilities, Ghovanloo said.
(Reporting by Maggie Fox)
© Thomson Reuters 2008 All rights reserved | <urn:uuid:39bf1032-ca95-466d-b785-3e21b27e21ee> | CC-MAIN-2017-09 | http://www.baselinemag.com/c/a/Infrastructure/Device-Puts-Steering-at-the-Tip-of-the-Tongue | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00292-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.935015 | 447 | 2.640625 | 3 |
Without much relation to anything, I wrote this short essay about the role prime numbers play in Internet security. In a nutshell, security relies on the ability to form leverage for the defender over the adversary. Such leverage can be of one of two types:

1. Leverage through the ability to code the behavior of the system.
2. Leverage through math, i.e., knowing something the adversary does not.

Prime numbers are used as part of at least one mathematical mechanism that serves the second type of leverage.
Security is about enforcing rules on the behavior of a system, in light of an adversary wishing to break those rules. For example, the security mechanism of access control is designed to prevent a database from showing data to unauthorized people even if an unauthorized person tries to gain such access.
For the security engineer to be more powerful than the adversary in getting his own rules enforced, two types for leverage are possible. The first type is the most typical and it is leverage through the ability to code the system behavior. The programmer of the ATM gets to decide what code (logic) the ATM runs, and if he tells the ATM not to dispense cash until the current PIN was entered, and as long as the adversary has no capability to change the programmed ATM code, it works.
This type of leverage is versatile and sufficient, but is not always applicable. For example, when restricting access to data in communication it is impossible to moderate how communicated bits are processed by whoever gets to read them.
The other type of leverage is through math. Here the good guy may not have control over all aspects of the system operation, but he knows something that the adversary does not, and mathematical tricks are used to convert this unique knowledge into allowing the good guy to carry out an operation that the adversary cannot. The best example for this type of leverage is encryption. If a message is encrypted, only whoever knows the right key can decrypt and read that message, even if the system is operable by anyone.
Whereas simple types of encryption make use of symmetric methods, that is, methods where the key used for encryption is the same one used for decryption, asymmetric methods are also used, where encryption and decryption are carried out using mathematically related, yet different, keys. For such a mechanism to be secure, it must be infeasible to deduce one key from the other, unless you are the entity that made them both.
Generating a pair of keys that are mathematically related (so one decrypts what the other encrypted), while making it impractical to deduce one from the other, is a difficult mathematical challenge.
Since the keys are related after all, the only way to make it impractical to deduce one from the other is to ensure that such deduction requires solving a mathematical puzzle which is believed to be intractable. One such puzzle is factoring a large number into its prime components. When you have a number that is the product of two large primes, it is believed to be impossible to deduce the prime factors without an amount of work that explodes as the number grows. Make this number really large, and the problem becomes practically unsolvable.
This principle is at the heart of the RSA encryption algorithm that is used in security protocols that protect Web traffic in e-commerce and e-banking, digital signatures, and other security mechanisms to which we owe the confidentiality and integrity of our online interactions.
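The trapdoor can be seen end to end with deliberately tiny primes. This is a toy sketch only -- real RSA keys use primes hundreds of digits long, padding schemes, and vetted libraries:

```python
# Toy RSA with tiny primes (insecure, illustration only)
p, q = 61, 53
n = p * q                     # public modulus; factoring n recovers p and q
phi = (p - 1) * (q - 1)       # computable only by whoever knows p and q
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # only the key maker can decrypt with (n, d)
assert recovered == message
```

Deducing d from the public pair (n, e) requires factoring n -- exactly the puzzle described above.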
A port scanner is a program that is used in network security testing and troubleshooting. An online port scanner is one that can test your network firewall and open ports externally, because the scan is sourced from an external IP address. It is a regular port scanner hosted on another system, usually with an easy-to-use web interface.
To understand what a port scanner does we need to first understand the basics of how the network "works". In referencing the network this could be a local area network in your home or office or it could be the Internet.
A network is composed of systems with addresses, and on those systems you have services.
The address is called an "IP address", and the service could be many things, but is basically software running on the system and accessible over the network on a port number. It could be a web server, email server or gaming server.
A service will run on 192.168.1.3 and listen on a port:
- web server : port 80
- mail server (smtp) : port 25
- mail server (pop3) : port 110
- game server : port 49001
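The notion of "listening on a port" can be demonstrated with a short Python sketch: a TCP connection attempt to an address/port pair succeeds only if a service is listening there. The address in the comment is the hypothetical LAN host from the list above.

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP service is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on a successful handshake instead of raising
        return s.connect_ex((host, port)) == 0

# e.g. is_port_open("192.168.1.3", 80) is True if a web server is running there
```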
There are many resources that cover the more technical details of port scanning and the different types of port scanning. We are going to stick to the basics.
The missing part of this introduction to network basics is the host name, dns record or domain name. It is a reference to the IP address using an easier to remember name. For example, what is easier to remember: 18.104.22.168 or www.google.com?
When you type www.google.com into your browser, you are directed via the domain name system to 22.214.171.124 on port 80. The port 80 part is handled by the browser automatically. If you type https:// into the browser, you go to a different port, 443, as this is the well-known port for SSL traffic.
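The "handled by the browser automatically" step is just a lookup from the URL scheme to a well-known port. A minimal sketch (only the two schemes discussed here are covered):

```python
from urllib.parse import urlsplit

# Well-known default ports for the URL schemes discussed above
DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url):
    """Return the port a browser would connect to for this URL."""
    parts = urlsplit(url)
    if parts.port is not None:         # explicit port, e.g. http://host:8080/
        return parts.port
    return DEFAULT_PORTS[parts.scheme]

print(effective_port("http://www.google.com/"))    # → 80
print(effective_port("https://www.google.com/"))   # → 443
```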
Here are some common ports that you will find when using a port scanner:
- 25 Email (SMTP)
- 53 Domain Name Server
- 80 Web Server (HTTP)
- 110 Email Server (POP3)
- 143 Email Server (IMAP)
- 443 Web Server (HTTPS)
- 445 Microsoft SMB (Windows file sharing, etc.)
- 8080 Proxy Server
A more complete list of ports can be found at Wikipedia.
In the diagram we have a server behind a firewall, the server is a Web Server and Mail Server. So it is listening on Port 80 and Port 25.
The Nmap port scanner is the world's leading port scanner. It is very accurate, stable, and has more options than we are going to get into here; for more information and installation instructions, head over to the nmap page.
Using the Nmap Port scanner to test this IP address we find that the ports 25 and 80 are Open and allowed through the firewall. Nmap also reports that port 443 is Closed. All other ports are filtered.
Starting Nmap 5.00 ( http://nmap.org ) at 2009-07-16 23:12 UTC
Interesting ports on 126.96.36.199:
Not shown: 997 filtered ports
PORT STATE SERVICE VERSION
25/tcp open smtp
80/tcp open http Apache httpd
443/tcp closed https
Service Info: OS: Linux
Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 64.27 seconds
We have scanned the IP address: 188.8.131.52
What do these nmap port scanner results mean?
- Open: ports 25 and 80 are listening on the server and are allowed through the firewall.
- Closed: port 443 is not listening on the server but is allowed through the firewall.
- Filtered: the firewall is blocking all the other ports.
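These three states map directly onto what a plain TCP connect sees, which the following sketch makes explicit (a simplification: Nmap uses its own packet engine rather than blocking sockets):

```python
import socket

def port_state(host, port, timeout=2.0):
    """Rough TCP-connect approximation of Nmap's open/closed/filtered states."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"      # handshake completed: a service is listening
    except ConnectionRefusedError:
        return "closed"    # host replied with RST: reachable, but nothing listening
    except socket.timeout:
        return "filtered"  # no reply at all: a firewall likely dropped the probe
    finally:
        s.close()
```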
Now that you have an understanding of what a port scanner is you can jump over to our Online Nmap Scan testing page and run a port scan. The advantage of using our server is that it is external facing to your network and will see what any other external attacker on the Internet will see. You can also install Nmap yourself and run it against your network, you will likely see a different result to that of the external facing scan. | <urn:uuid:027e6fe9-2bfb-4e11-9f4d-0a72d371257e> | CC-MAIN-2017-09 | https://hackertarget.com/port-scanner/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00112-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.909041 | 912 | 3.25 | 3 |
China has passed a new cybersecurity law that gives it greater control over the internet, including by requiring local storage of certain data.
Human rights groups and trade associations in the U.S. and other countries have warned of the implications of the law both for internet businesses and human rights in the country.
The National People's Congress Standing Committee passed the new cybersecurity law Monday, according to reports.
“Despite widespread international concern from corporations and rights advocates for more than a year, Chinese authorities pressed ahead with this restrictive law without making meaningful changes,” said Sophie Richardson, China director of Human Rights Watch in a statement over the weekend.
HRW has described the new cybersecurity law as a “regressive measure that strengthens censorship, surveillance, and other controls over the Internet.” The final draft of the new law would, for example, require a large range of companies to collect real names and personal information from online users, including from users of messaging services, as well as censor content, HRW said.
The law will also place the burden of storing certain data locally on foreign internet companies. It requires "critical information infrastructure operators" to store users' "personal information and other important business data" in China -- terms that are vague.
“The final draft narrows the scope to only data that is related to a firm’s China operations, but the term ‘important business data’ is undefined, and companies must still submit to a security assessment if they want to transfer data outside the country,” HRW said.
Under the new rules, companies will also be required to monitor and report to authorities network security incidents, which are not defined in the law. The requirement that the companies provide “technical support,” a term that is again undefined, to investigating security agencies raises fears of surveillance, according to HRW. The new regulations also provide the legal basis for large-scale network shutdowns in response to security incidents, it added.
In August, industry associations from the U.S., Europe and other countries wrote to the Chinese government to protest the draft cybersecurity law and provisions for insurance systems that were also proposed. The letter said the data retention and sharing and law enforcement assistance requirements "would weaken technical security measures and expose systems and citizens' personal information to malicious actors."
Online activities prohibited under the new provisions include those that are seen as attempts to overthrow the socialist system, split the nation, undermine national unity, advocate terrorism and extremism, according to a news report.
Chinese officials could not be immediately reached for comment.
The country already blocks access to a number of foreign internet services including Facebook and Twitter. | <urn:uuid:69c08d5d-d9aa-4da7-88f9-256db3d904a5> | CC-MAIN-2017-09 | http://www.itnews.com/article/3138948/security/china-passes-controversial-cybersecurity-law.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00464-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945755 | 545 | 2.546875 | 3 |
Last Friday, Archivist of the United States Allen Weinstein and Google Co-Founder and President of Technology Sergey Brin announced the launch of a pilot program to make holdings of the National Archives available for free online. This non-exclusive agreement will enable researchers and the general public to access a diverse collection of historic movies, documentaries and other films from the National Archives via Google Video (video.google.com/nara.html) as well as the National Archives website (www.archives.gov).
"This is an important step for the National Archives to achieve its goal of becoming an archives without walls," said Professor Weinstein. "Our new strategic plan emphasizes the importance of providing access to records anytime, anywhere. This is one of many initiatives that we are launching to make our goal a reality. For the first time, the public will be able to view this collection of rare and unusual films on the Internet."
"Today, we've begun to make the extraordinary historic films of the National Archives available to the world for the first time online," said Sergey Brin, co-founder and president of technology at Google. "Students and researchers, whether in San Francisco or Bangladesh, can watch remarkable video such as World War II newsreels and the story of Apollo 11 - the historic first landing on the Moon."
The pilot program undertaken by the National Archives and Google features 101 films from the audiovisual collections preserved at the Archives. Highlights of the pilot project include:
- The earliest film preserved in the National Archives holdings by Thomas Armat, "Carmencita - Spanish Dance," featuring the famous Spanish Gypsy dancer, 1894;
- A representative selection of U.S. government newsreels, documenting World War II, 1941-45;
- A sampling of documentaries produced by NASA on the history of the spaceflight program;
- Motion picture films, primarily from the 1930s, that document the history and establishment of a nationwide system of national and state parks. Included is early footage of modern Native American activities, Boulder Dam, documentation of water and wind erosion, Civilian Conservation Corps workers, and the establishment of the Tennessee Valley Authority. A 1970 film documents the expansion of recreational programs for inner city youth across the nation. | <urn:uuid:28880510-c0f1-4f70-930f-1f29350c07b8> | CC-MAIN-2017-09 | http://www.govtech.com/e-government/National-Archives-and-Google-Launch-Pilot.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00340-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.91841 | 454 | 2.734375 | 3 |
Two German computer scientists have proved that it’s possible to access and recover data from an encrypted Android smartphone by performing a set of simple and easily replicable steps that start with putting the phone in a freezer.
They tested the attack on Samsung Galaxy Nexus devices, which they kept "on ice" for an hour beforehand.
Since version 4.0 of the Android platform, the device's storage can be encrypted, making it inaccessible without the required PIN. When the device is switched off, the data contained in its RAM chips does not instantly disappear, but fades over time (the so-called "remanence effect").
The researchers’ theory was that when the switching off and rebooting of the device is performed at sub-zero temperatures, the fading of the data will be slowed down enough to allow them to access it from the phone’s memory.
After pulling the device out of the freezer, they rebooted it, unlocked its bootloader, and booted up their FROST (Forensic Recovery of Scrambled Telephones) data recovery tool, which allowed them to recover sensitive information such as emails, photos, contacts, calendar entries, WiFi credentials, and even the disk encryption key.
“If a bootloader is already unlocked before we gain access to a device, we can break disk encryption. The keys that we recover from RAM then allow us to decrypt the user partition. However, if a bootloader is locked, we need to unlock it first in order to boot FROST and the unlocking procedure wipes the user partition (but preserves RAM contents),” they shared.
“Since bootloaders of Galaxy Nexus devices are locked by default, and since we conjecture that most people do not unlock them, disk encryption can mostly not be broken in real cases. Nevertheless, in addition we integrated a brute force option that breaks disk encryption for short PINs.”
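The brute-force option works because a four-digit PIN spans only 10,000 possibilities. The sketch below is generic, not the actual FROST code: PBKDF2 stands in for Android's real key-derivation scheme, and the salt, PIN, and iteration count are invented (the iteration count unrealistically low, so the loop runs fast):

```python
import hashlib

def derive_key(pin, salt, iterations=100):
    """Hypothetical key derivation standing in for Android's real scheme."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)

salt = b"example-salt"
target_key = derive_key("4831", salt)   # stand-in for a key recovered from RAM

# Exhaustively try all 10,000 four-digit PINs
recovered = next(
    pin for pin in (f"{i:04d}" for i in range(10_000))
    if derive_key(pin, salt) == target_key
)
print(recovered)  # → 4831
```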
“We believe that our study about Android’s encryption is important for two reasons: First, it reveals a significant security gap that users should be aware of. Since smartphones are switched off only seldom, the severity of this gap is more concerning than on PCs. Second, we provide the recovery utility FROST which allows law enforcement to recover data from encrypted smartphones comfortably,” they concluded. | <urn:uuid:4e62ba54-660b-40b7-a330-74c814b28fdf> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2013/02/18/freezing-android-devices-to-break-disk-encryption/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00040-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929471 | 475 | 3.078125 | 3 |
A text file describing a new virus called PROTO-T was distributed via electronic bulletin boards and the Internet late in 1992. The text described a threatening new kind of virus that was supposedly spreading all over the world. Among other things, the virus was claimed to be impossible to spot and able to hide itself in the RAM memory of a modem or a hard disk. The text and the things described in it are pure invention; it would be technically impossible to build a virus matching the description.
A virus cannot hide its code in the buffers of modems or hard disks, because these memory areas are very small and unprotected - in reality the virus code would be overwritten almost immediately. In any case, part of the viral code would have to be stored in normal DOS memory in order for a virus to function. PC computers execute code that is located in their core memory, and that code only.
It is possible to hide part of the viral code in the memory of a VGA card. Some viruses (like Starship and GoldBug) do so, but even in this case the virus can be found by normal means.
The text was apparently a practical joke that spread uncommonly far. On the other hand, this joke inspired the development of several new viruses. As rumors of PROTO-T spread, some individuals decided to take advantage of its reputation and wrote viruses that contained the text "PROTO-T". Naturally enough, these viruses contained none of the characteristics mentioned in the original description.
The 'real' Proto-T viruses are not known to be in the wild. Their characteristics differ a lot from each other. | <urn:uuid:4ca292b0-acec-420d-b15d-d5b41506a516> | CC-MAIN-2017-09 | https://www.f-secure.com/v-descs/proto-t.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00336-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948291 | 421 | 3.109375 | 3 |
Computers, as we have all experienced, can seem highly intelligent and infuriatingly ignorant at the same time. One reason is that they lack even the most straightforward understanding of the way the world works.
In artificial intelligence, this understanding is referred to as "common sense". It has been described as "common knowledge about the world that is possessed by every schoolchild and the methods for making obvious inferences from this knowledge".
There are a number of AI systems that seek to advance the common sense of computers. Perhaps the most famous is IBM's Watson, a question answering system that learns facts and relationships about the world from encyclopaedias and the Internet.
Another example is called ConceptNet. Built by researchers at MIT, ConceptNet is at heart a semantic network of concepts linked by relationships. The word "saxophone" is linked to the word "jazz" via the relationship "used for", for example.
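At its core, such a semantic network is just a set of (concept, relation, concept) triples plus lookups over them. A minimal sketch -- the triples are invented examples in ConceptNet's style, not actual ConceptNet data or its API:

```python
# Each edge: (start concept, relation, end concept)
edges = {
    ("saxophone", "UsedFor", "jazz"),
    ("saxophone", "IsA", "musical instrument"),
    ("jazz", "IsA", "music"),
    ("clothes", "UsedFor", "keeping warm"),
}

def related(concept, relation):
    """All concepts reachable from `concept` via `relation`."""
    return {end for start, rel, end in edges
            if start == concept and rel == relation}

print(related("saxophone", "UsedFor"))  # → {'jazz'}
```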
A team of scientists at the University of Illinois recently decided to test how ConceptNet really would compare to "every schoolchild". They put the system through the Wechsler Preschool and Primary Scale of Intelligence Test, a standard tool for assessing children's attainment. It is made up of a number of subtests, including a vocabulary test and more open ended reasoning tasks.
ConceptNet, they found, has the common sense of a four-year old.
In fact, explains Professor Robert Sloan, head of computer science at UIC, the system had a wide spread of subscores. It performed well on vocabulary recall but on "why?" or "what if?" style questions, such as "Why do people wear clothes?", it fared relatively poorly.
"The experiment highlights the tremendous progress we've made since the 1990s."
Professor Robert Sloan
University of Illinois, Chicago
Sloan explains that it is not necessarily that the knowledge required to answer the question has not been represented in ConceptNet, but that the system itself may not have the natural language processing (NLP) capabilities required to understand the question (unlike Watson, it was not designed to be a question answering system).
He says that if the NLP capabilities could be improved, it might be possible to get ConceptNet up to the common sense level of a five- or six-year old.
To get the system up to the level of an eight-year-old, there would need to be a more fundamental breakthrough. Again, that is no slight on MIT's work on ConceptNet, he adds. The experiment highlights "the tremendous progress we've made since the 1990s".
Improvements to the common sense capabilities of computers will lead to more useful speech recognition systems, Sloan says. “Siri, or its equivalent, will be able to get you the answers you actually want.”
They could also pave the way for a true semantic web, in which search engines understand the meaning of search queries and the information they retrieve.
Hardware Management Consoles
Hardware Management Console is a technology created by IBM to provide a standard interface for configuring and operating partitioned (also known as an LPAR or virtualized system) and SMP systems such as IBM System i, IBM System p, IBM System z, and IBM Power Systems.
The HMC runs a Linux kernel, with BusyBox providing the base utilities and the X Window System with the Fluxbox window manager providing graphical logins. The HMC also uses Java applications to provide additional functionality.
Using an HMC, the system administrator is able to manage the software configuration and operation of partitions in a server system, as well as to monitor and identify hardware problems. HMCs offer an inexpensive method to administer complex and expensive servers, as a console need only consist of a 32-bit Intel-based desktop PC with a DVD-RAM drive. An HMC is used to:
- Configure and manage logical partitions and partition profiles
- Perform dynamic logical partitioning (DLPAR) functions.
- Activate and manage Capacity on Demand resources.
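Administrators typically script these tasks over ssh using the HMC's command-line interface; `lssyscfg`, for example, lists the partitions on a managed system. The sketch below parses the kind of comma-separated output that command can produce — the host name, managed-system name, and sample output are invented for illustration, and the actual invocation appears only in a comment.

```python
# Hypothetical sketch: parsing the output of the HMC's `lssyscfg` command.
# On a real HMC one might run something like:
#   ssh hscroot@hmc1 'lssyscfg -r lpar -m Server-9117 -F name,state'
# (the host and managed-system names here are invented).
sample_output = """\
prod_lpar01,Running
test_lpar02,Not Activated
vio_server,Running
"""

def parse_lpars(text):
    """Turn `name,state` lines into a list of (name, state) tuples."""
    lpars = []
    for line in text.strip().splitlines():
        name, state = line.split(",", 1)
        lpars.append((name, state))
    return lpars

running = [name for name, state in parse_lpars(sample_output)
           if state == "Running"]
print(running)  # ['prod_lpar01', 'vio_server']
```

Wrapping the CLI this way lets routine checks — which partitions are up, which profiles are active — be automated from an ordinary workstation rather than performed by hand in the graphical console.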
Community policing was born in the 1990s when a surge in the national crime rate prompted the Clinton administration to flood the states with Community Oriented Policing Services (COPS) grants that put cops on the streets and in many cases, laptops in their cars.
The original COPS grants resulted from the COPS bill in 1994, which aimed to put 100,000 new police in the communities where they could forge relationships and develop trust among the populace. Later COPS MORE (Making Officer Redeployment Effective) grants allowed police to use the money for crime-fighting technology.
Perhaps coincidentally, the COPS grants paralleled a dramatic drop in crime throughout the '90s. But after 9/11, much of that money dried up or was shifted to homeland security purposes. Again perhaps coincidentally, the national crime rate then turned upward, in what one police chief called an epidemic of violence.
With resources now going to the wars in Iraq and Afghanistan at a clip of $10 billion a month, $1 billion of "get Osama bin Laden money" going to Pakistan every year, as well as $34 billion (fiscal 2006) going to the states for homeland security -- though that pie is also shrinking -- police say they're getting squeezed, and it's affecting how they cope with the spike in violent crime.
"The COPS Office over the years was a great source of leveraging technology, but over the last six or seven years, it's pretty well been gutted, and most of the funding that was going to police has been redirected to homeland security or the war effort," said Colorado Springs, Colo., Police Chief Richard Myers. "That left us high and dry, and that's why we have fewer cops on the streets than we did pre-9/11."
White House officials didn't return phone calls requesting a response.
In many midsize cities, police are down in numbers, and cops say they've turned from community-oriented police to "in your apartment police" as they struggle just to get from call to call. Many departments say they've lost their ability to use intelligence or focus on preventive policing because they're mired in answering calls.
"At any given time, darn near every cruiser in an urban jurisdiction may be tied up with social-related, crime-related problems," said Springfield, Mass., Police Commissioner Edward Flynn. "We work hard to create a preventive policing capacity. But we really end up spending an awful lot of overtime because if we just staff up to meet the needs of our calls for service, we don't have sufficient organizational slack to provide a stable presence in public spaces, and people need to see cops in public."
In the early 1990s, with crime rates on the rise, police began getting out of their cars for face-to-face communication with residents. By the time COPS was rolled out in 1994, crime rates had begun to dip in some areas, and community policing garnered much of the credit. The COPS grants helped further the cause and put anywhere from 60,000 to 90,000 new cops on the streets (depending on whose numbers you use) to forge a bond with communities and gang up on the bad guys.
The COPS grants required that all new officers take to the streets to spend time with the local citizenry. Crime rates continued to dip, and at remarkable levels -- from 1994 to 2000 violent crime declined by 46 percent nationally.
But after 9/11, the Bush administration focused on homeland security, and direct funding to law enforcement took a detour to homeland security causes. Some funding still winds up with law enforcement agencies, but it's earmarked specifically for homeland security, according to Flynn.
"What those of us in law enforcement noticed in the years after the 9/11 attacks -- particularly when the congressional funding started making its way to local government
via the states in 2003 -- was that at the same time funding was expanding dramatically in some cases for homeland security equipment and programs, at the same rate or even more rapid rate, funding for criminal justice generally, and law enforcement specifically, was dramatically being scaled back," he said.
Police are in a position to provide homeland security intelligence, Flynn continued, but they're so tied up with their core responsibilities that they can't develop relationships with the community.
"By removing our ability to consistently interact with them to buy bunkers or explosion detection vehicles -- or whatever the hell -- you're removing from us our ability to develop street-level intelligence about ongoing suspicious conditions," he said. "The same people who want to tell us about drug dealers will tell us about terrorists if they trusted us and knew us.
"Our position is the core missions of police and fire are the same regardless of the cause," he continued. "The police respond to threats, try to prevent threats through the development of intelligence, and they have to have both a tactical and strategic capability. Fire departments deal with HAZMAT incidents and fires and explosions. Who the hell cares who did it?"
The Bush administration submits a yearly budget that hacks away at COPS funding. For the most part, Congress restored some of that funding before those budgets became law, but the COPS hiring grants and MORE grants disappeared altogether. Overall, COPS grants and State and Local Law Enforcement Assistance grants, the two main pots of federal justice money, fell from $4.4 billion in 2001 to $2.5 billion in 2006. As of May 2007, the fiscal 2007 justice funding was still up in the air.
Congress has fought to keep justice funding levels near what they were for 2006, fending off the administration's attempt to cut back again.
"The question is will the money be there," said Gary Cooper, vice president of Research and Consulting for CJIS GROUP. "We see authorizing bills, but as far as money is concerned, it's just smoke and mirrors. Until you appropriate it, it doesn't mean anything."
Along the way, violent crime began to rise again.
Between 2004 and mid-2006, the murder rate reached a 20-year high in Cincinnati and a 16-year high in Fairfax County, Va., according to the FBI's Uniform Crime Report (UCR). In Boston; Richmond, Calif.; Virginia Beach, Va.; and Springfield, Mass., the murder rate was at a 10-year high.
In 2005, robbery and aggravated assault increased to a 14-year high. In a 2005 National Crime Victimization survey, attempted robbery with injury was up by nearly 36 percent. Even in Seattle, where violent crime is usually low, there was a 25 percent increase in gun crimes. Robbery is also up in many parts of the country, according to a report by the Police Executive Research Forum (PERF).
UCR statistics for 2005 showed arrests of juveniles for robbery increased by more than 11 percent, and the robberies themselves were deadlier. Youths look for iPods and use a technique called "rat packing," where the robbers use their cell phones to call their mates and coordinate when to swarm on a victim. Particularly alarming to police is the fact that many of the victims were shot without provocation after the robberies, according to the PERF report.
It's the inner cities where gangs are resurging, and the mixture of youth and guns is creating a volatile mix. With a decreasing police presence, the seeds for more violence get planted, police say.
"The problem is when we are not available in public spaces, citizen fear increases, which undermines community confidence in cities and sometimes their economic viability, and that's happening in a lot of midsize cities," Flynn said. "It's less of a factor in a New York or a Chicago than it is in a
Springfield or a Rochester, [N.Y.]."
Those larger cities, Flynn said, aren't as dependent on federal grants because they can tap more indigenous resources. "The burden really falls heaviest on what I'd call the 'cruiserweight' cities. The cities between 100,000 and 300,000 population are the ones that had the biggest overall spikes in violence over the last five years."
In many of those cities, the attrition rate of officers -- through retirement, layoffs and deployment to war -- creates an increased burden. Cities such as Minneapolis, Boston and Detroit employ fewer officers than at the beginning of the decade. The Richmond, Calif., Police Department experienced a 25 percent drop in police officers.
Minneapolis, which has been forced to cut 140 officers since boosting the number to 938 in the late '90s, conversely has seen a rise in robberies by about 20 percent. Detroit is down 1,000 officers, and Richmond and Boston -- two of the cities with the biggest jump in violent crime -- have fewer officers than they did in the '90s.
"We're not being alarmists, but we do believe it's prudent to intervene before one enters a crisis, not after the crisis has occurred," Flynn said, adding the rising trend in violent crime is undeniable. "I'm not going to attribute the crime to law enforcement capacity directly, but I will say that again, nationwide there has been a significant decrease in the number of officers working since the functional ending of the COPS program and with the resultant or coincidental fiscal difficulties of states and cities."
Nobody is willing to say police can prevent rising crime rates, but they do say spending dwindling resources rushing from call to call instead of community policing detracts from the ability to use intelligence from the community and focus on high-priority areas. It also undermines the community's feeling of safety and trust.
In Colorado Springs, Myers said, the situation is critical. "We are increasingly reaching what we call the saturation point, which is when you make a call for service and we don't have one free officer citywide to respond to that call."
Myers' staff recently met to discuss and identify what police activities bring the greatest value to the community and which ones can be eliminated. They may decide to go completely to "cold reporting status" for traffic accidents as they do at busy times of day, meaning that unless there's an injury or drunk driver involved, the police won't come.
Less Money, Less Flexibility
Everyone acknowledges a combination of factors led to the lowered crime rate of the '90s, including a strong economy, a decrease in crack cocaine use and a smaller population of young people. Currently there are several factors boosting the surge in crime, such as the resurgence in methamphetamine use and baby boomers' children at an age when they're most likely to commit crimes.
"It's complicated," said Oklahoma City Police Chief Bill Citty. "Talk to any chief and if they say they have control of crime, they're not being honest. You don't, because there are so many social factors involved."
Oklahoma City didn't accept COPS hiring grants because of the stipulation that officers hired had to be kept for at least a year after the grants ran out, and the city didn't think it could match the funds. "I think a lot of communities had that problem," Citty said. "They took advantage of the COPS grants and all of a sudden, they had to fund it themselves."
Yet, as critics of the COPS hiring program quickly point out, Oklahoma City, too, experienced a drop in crime in the '90s, proving that with or without the extra cops, most cities saw a hiatus in criminal activity anyway. Another criticism of the COPS hiring program was that some of the money was misspent, and that many open positions were never filled. Federal audits have actually proven as much.
Oklahoma City, however, tapped other justice block grants, such as Justice Assistance Grants, which were used to buy technology and for overtime to staff high-priority areas. "The block grant money was huge," Citty said. "We went from having $1 million to spend before 9/11 to about $300,000 now. That's a big hit for us."
Now there's less money for everyone, and a lot of it is earmarked for homeland security. "We had much more discretion with that [Justice Assistance Grants] funding," Citty said. "We bought computers with it, used it for information systems, our fingerprint systems, overtime programs in high-crime areas and entertainment districts where we really needed additional manpower."
"The grant money that came in from homeland security was extraordinarily restrictive, even for training," said Flynn, who served for more than three years as Gov. Mitt Romney's secretary of public safety and administered homeland security and criminal justice grant money during that time. "You might want to do sniper training but no -- you had to prove you were doing terrorist-related sniper training. Fire departments might have wanted to do HAZMAT training, but if they didn't link it to a nexus with terrorism, they wouldn't fund HAZMAT training."
That lack of flexibility may have been a backlash to claims that some of the early homeland security money was misspent. "It's been limited, and that's been compounded by the fact that in some parts of the country, I don't think the grants have been used wisely," Myers said. "The pie has been carved up in so many pieces trying to satisfy dozens and dozens of constituencies that it's turned into more of a Christmas list for local government managers who couldn't get what they wanted out of their normal budgeting procedures. They've leveraged some of those homeland security grants to get -- I don't want to call them toys -- but operational items that probably should have been a part of the regular operating budget."
As much of the rest of the nation, Oklahoma City is now dealing with a gang problem and an alarming rise in violent crimes involving fatalities. "Last year 35 percent or 40 percent of our homicides had gang members involved," Citty said. "That's high because a couple of years ago we had three, and then last year we had 23. That's a big change."
Citty said there aren't necessarily more gang members, but they are getting more violent, a sentiment other chiefs recently echoed. "We're seeing a lot more gun violence. There are a lot more guns out there, and it's a big issue for us and for most cities."
To address the violence, Citty has to pull officers from other areas. "That's one of those areas where if I had additional funding from our Justice Assistance Grants, I could use that money to get additional people and pay overtime where I need the manpower. It gave us some flexibility."
New Grant Money
There could be some fresh funds from the federal government for fiscal 2008. In May 2007, the House passed the COPS Improvements Act of 2007 (H.R. 1700), which authorizes $1.5 billion annually from fiscal 2008 through fiscal 2013, $350 million of which can be used for technology.
Myers said his department is eligible for grants but would be required, under the act, to match 25 percent of the funding. "We are really struggling with this latest announcement. If we were to seek the full $6 million that we would be eligible for under the grant, I'd have to come up with $1.5 million to match that. I just got marching orders to cut 3 percent of my operating budget for the rest of the year."
Myers said he would love to spend the $6 million on technology. In Colorado Springs, he is operating a department that relies on portable radios. "I don't even have hard-mount radios in the cars. We're working in a portable-only environment, and we're having coverage issues with that and don't have the funds to put car radios in every squad car. It's hard to talk about these technologies and efficiencies we can gain when we're not even meeting our basic technology needs."
Myers sees policing at a crossroads with technology that could dramatically increase the effectiveness of police. "It's been forecast that we're somewhere between 20 and 40 years away from a human interface with chip technology. Somewhere down the road, officers will be able to download mug shots of every wanted person and by looking at the faces recall immediately whether it's a wanted person or not. That's radically going to change how we do business."
There is technology available now that makes life easier for some police forces but to others, it's just a dream. "The issue of when you do a traffic stop and somebody doesn't have their license with an in-car video system and perhaps some fingerprint technology, have that immediately run through a database and search nationally to find out who this person is; that exists, but it's not widely in use by police," Myers said.
Funds for that kind of system are not always available at the local level and technology is getting even less affordable, Myers said.
"The rate of change of technology is occurring exponentially, and the days of being able to spend a whole lot of money on some new technologies and riding that wave for 10 years or until you have to update it, those days are gone," he said. "Technology now requires constant maintenance and updating, and the rate of obsolescence is skyrocketing."
Back in the Community
Part of the funds from H.R. 1700 will allot monies to be spent on school resource officers to help combat gang violence. That's a good first step. Police say communicating with youngsters before they're involved in a shooting doesn't happen enough either in schools or on the streets.
"You can become more efficient with the technology, and we're doing that as far as identifying the issues and trying to be more proactive in addressing those. But you can't get away from perception. People want to be able to see an officer in their area," Citty said.
Part of that is community trust, and that has eroded in the last several years, police say. People in the affected areas are too scared to even call the police, and inner-city youngsters are conditioned not to snitch.
Joe Ryan, chairman of the Department of Criminal Justice at Pace University in New York City and a former New York City police officer, believes police have taken on a "militaristic" approach to policing in the last several years and need to revert to community policing.
"The idea behind the federal government giving money to local police agencies is to promote innovation," Ryan said. "Collecting information about what's in your community is really important. We need to get the officers back in the community, and at the same time, use the information to make them more efficient." | <urn:uuid:c8d13fad-e010-4250-8ee8-111806633ba7> | CC-MAIN-2017-09 | http://www.govtech.com/public-safety/Vanishing-Act.html?page=5 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00560-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.977912 | 3,798 | 2.546875 | 3 |
Researchers in Japan have come up with a novel way to keep your face out of other people's snapshots taken on digital cameras, smartphones, and possibly Google Glass.
The researchers equipped a pair of glasses with 11 small, near-infrared LED lights strategically placed around the bridge of the nose.
The lights, visible to most digital cameras but invisible to humans, obscure the areas of the face that cameras and software typically rely on to identify a human face.
Whether a camera is trying to focus on a face, or a software program is trying to identify who you are, these glasses will make that job much harder, the researchers claim.
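The principle can be illustrated with a deliberately crude detector. Real systems such as the Viola-Jones detector match many characteristic bright/dark intensity patterns around the eyes and nose bridge; the toy below reduces that to a single contrast check, then shows how saturating the nose-bridge region — as the near-infrared LEDs would on a camera sensor — breaks the match. All the thresholds and intensity values here are invented for illustration.

```python
# Toy illustration of how saturating the nose-bridge region defeats a
# pattern-based face detector. Real detectors combine many such
# bright/dark contrast features; this sketch uses just one, with
# invented thresholds.

def looks_like_face(eye_intensity, bridge_intensity):
    """A face-like pattern: dark eye regions flanking a moderately bright
    nose bridge -- but not one blown out to sensor saturation."""
    return eye_intensity < 80 and 100 < bridge_intensity < 200

# Ordinary portrait: dark eyes, moderately bright nose bridge.
print(looks_like_face(eye_intensity=50, bridge_intensity=150))  # True

# Same face with near-IR LEDs at the nose bridge: the camera sensor
# saturates (255), and the expected contrast pattern disappears.
print(looks_like_face(eye_intensity=50, bridge_intensity=255))  # False
```

To a human observer nothing has changed, because the eye cannot see near-infrared light; only the camera's version of the face is disrupted.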
The glasses were developed by Isao Echizen, associate professor at Japan's National Institute of Informatics and Professor Seiichi Gohshi of Kogakuin University.
The concept was originally announced in late 2012, and we briefly covered the story in its early days. More recently, Tokyo-based news site DigInfo.tv caught up with the privacy glasses during a public demonstration of the privacy device. The researchers also displayed a second pair of glasses that use a reflective material to counteract imaging devices unaffected by infrared light.
Physical privacy measures like this also take on new meaning as privacy fears grow spurred on by facial-recognition software and ubiquitous surveillance by wearable technology.
Privacy specs in action
Since human beings can't see light close to the infrared spectrum, only machines will be stymied by the new specs. Mere mortals can still recognize you and may not even realize you're wearing privacy goggles.
Nefarious cyborgs hoping to snap a discreet pic of your face using Google Glass, however, may be out of luck. Here's the full video shot by DigInfo.tv.
The privacy visors are only a prototype at this point, so this isn't something you'd want to wear on a day-to-day basis--Google Glass looks downright fashionable by comparison.
Nevertheless, anti-Glass may be just what society needs as concerns grow that personal technology is turning the entire population into a crowd-sourced collection of security cameras.
Fearing the Glassholes
On Tuesday, nearly 40 data-protection authorities from countries such as Australia, Canada, Israel, New Zealand, and Switzerland voiced their concerns over Google Glass in an open letter to CEO Larry Page.
The privacy hawks want to know about any privacy safeguards Google has put in place for Glass, what kind of personal data Google obtains via Glass, and the ethical questions about the "surreptitious collection of information about other individuals."
The authorities are also concerned about facial recognition. "While we understand that Google has decided not to include facial recognition in Glass," the authorities wrote, "how does Google intend to address the specific issues around facial recognition in the future?"
Google in late May said it wouldn't add facial-recognition technology to Google Glass until strong privacy protections were in place.
This isn't the first time a major tech firm has raised fears about personally identifying people through photos.
Facebook came under scrutiny in 2011 and 2012 after introducing a facial-recognition feature that let the service automatically tag friends in your photos. To avoid a backlash from the European Union, the world's largest social network deleted all facial-recognition data for its users on the continent.
Note: desktop software such as iPhoto and Google's Picasa have offered facial-recognition features for several years.
But is it really worth it to buy a pair of special glasses just to prevent computers from recognizing you? If you drive a car or have a passport, the government already has your face. And almost every foreign visitor to the United States is photographed and fingerprinted before entering the country.
The Center for Democracy & Technology spelled out its concerns for facial recognition and privacy in a report last fall. The CDT focused not only on concerns about government use of facial technology but also private industry's use.
Technology such as digital billboards known as "smart signs" are already available and can calculate a person's age and gender to target advertising to passersby, the CDT said. These signs can also figure out how long someone watches an ad and react to the watcher's emotional states.
Even though these signs aren't equipped to identify individuals, it's not a hard leap to see how large corporations would be tempted to use facial recognition to better target individuals with smart-sign advertising.
Facial-recognition technology could enable "any marketer, agency, or random stranger to collect--openly or in secret--and share [those] identities," the CDT said. "Databases built from commercial use of facial recognition can be accessed or re-purposed for law enforcement surveillance."
It's still not clear if facial-recognition technology built into Google Glass or your favorite smartphone app will eradicate personal privacy. But if the concerns become strong enough, these Japanese privacy visors might be the beginning of a new market for personal anti-surveillance gear.
This story, "Google Glass panic triggers rise in facial-recognition blockers" was originally published by TechHive.
3D printing offers a method to produce prototypes of objects that can then be turned into production products, or to create objects for production use directly; one of its greatest advantages is removing the need for sophisticated engineering of parts.
3D printing employs numerous methods and more than 100 materials, but the basic premise is that layers of plastics, paper or metals are laid down on a hard platform or powder base. Computer-aided design (CAD) programs are used to manipulate images in preparation for printing.
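The layer-by-layer premise can be sketched numerically: a slicer steps through the model's height and computes each cross-section for the printer to deposit. The toy below slices a sphere — chosen arbitrarily because its cross-sections have a closed form — into circular layers; the radius and layer height are invented illustrative values.

```python
import math

# Toy "slicer": compute the circular cross-sections a printer would lay
# down, layer by layer, for a sphere of radius R. A cross-section at
# height z is a circle of radius sqrt(R^2 - z^2).
R = 10.0            # sphere radius, mm (illustrative)
layer_height = 2.5  # mm per printed layer (illustrative)

layers = []
z = -R
while z <= R:
    layers.append(round(math.sqrt(R * R - z * z), 2))
    z += layer_height

print(len(layers), layers)  # 9 layers, widest (radius 10.0) at the equator
```

Production slicers do the same walk through the model's height, but compute each cross-section from an arbitrary CAD mesh rather than a formula, and then convert each slice into toolpaths for the print head.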
Last week's Inside 3D Printing Conference and Expo in San Jose showcased the impressive advances happening with 3D printing, which is used for everything from constructing buildings to constructing lunch. Here are some of the interesting projects that were on display.
By modifying a Microsoft Kinect sensor, a research project at the Computer Human Interaction (CHI) conference demonstrated how gamers in a wheelchair could interact with motion games.
To see the prototype in use, watch a video on YouTube.
"If we were using the Kinect SDK in the traditional way then people would be sitting in one fixed location and using their hands and arms as input," said Kathrin Greling a Ph.D. student at the University of Saskatchewan. She said the modification that she made to the Kinect meant that the system could take into account the position and movement of the wheelchair.
She said the research isn't aimed at just children, but older adults too, who would be able to use motion gaming as exercise. She said some wheelchair-bound patients at nursing homes and other long-term care facilities could benefit from the exercise and entertainment provided by gaming.
At CHI, one of the researchers sat in a wheelchair and swiveled it to the left and right to control a race car on screen.
"What we've done so far is mapped the wheelchair movements onto commercially available games," she said. "We think there is also a lot of design opportunity related to specific wheelchair gestures."
Greling said that she and her team want to explore how a wheelchair can be used in games that have been designed for people in wheelchairs rather than just games that have been modified.
Microsoft hopes to empower the next generation of developers by lending its support to Computer Science Education Week.
It's Computer Science Education Week, and Microsoft is betting that exposing students to an "Hour of Code" will not only give the IT industry a lift, but will also radically change lives.
Coding can have a dramatic effect on a person's career trajectory, according to Peter Lee, corporate vice president and head of Microsoft Research. In a blog post, he said that a "computer science education is a ticket to upward mobility," before asserting that "every student deserves to have access to it."
Microsoft employees are participating in many of the nearly 29,000 events planned for Computer Science Education Week (Dec. 9-15). Taking place in 160 countries, the events are expected to reach 4 million students.
Computer Science Education Week is held annually by Code.org and the Computing in the Core coalition. The program kicks off each year to commemorate the birthday of computing pioneer Admiral Grace Murray Hopper, who was born Dec. 9, 1906. (Rival Google made Hopper the subject of the company's "Google Doodle" for Dec. 9, on what would have been her 107th birthday.)
"Hour of Code allows us to reach students, engage them and show how fun programming can be. I am proud that Microsoft's tools will play an important role in doing this," said Lee.
Those tools include Microsoft Research's Kodu Game Lab, based on the company's visual programming language. With Kodu, budding developers can create games and interactive environments.
Another offering, called TouchDevelop, supports the creation of "mobile apps and games on any smartphone, tablet or PC," stated Microsoft News staffer Suzanne Choney. It's a learning tool and development platform that focuses on coding for touch-enabled mobile devices without a PC.
TouchDevelop also enables developers to leverage mobile sensors (GPS, accelerometers) as they build their apps. "You can write scripts simply by tapping on the screen, and you can share your scripts on the TouchDevelop website or submit them to the Windows Store or the Windows Phone Store," said Choney.
Microsoft, like Code.org, seeks to "demystify" computer science. Rane Johnson-Stempson, education and scholarly communication principal research director for Microsoft Research, feels that coding is often portrayed to young people, girls particularly, as an arduous discipline.
Johnson-Stempson said they "only hear about the difficult tasks of programming and algorithms; they don't hear about the art, creativity and problem solving required to ensure an application meets the end user's needs." The company's efforts to attract students to computer programming, and science, technology, engineering and math (STEM) in general, include a number of YouthSpark "digital literacy" programs.
Apart from the inherent educational benefits, an "Hour of Code" can help set students on the path toward in-demand jobs. Microsoft's Lori Forte Harnick, general manager for citizenship and public affairs, noted that programming jobs "are growing at two times the national average in the U.S., yet less than 2.4 percent of college students are graduating with a degree in computer science."
"In light of this continued mismatch between skills and jobs, we are increasing our efforts to bring technology education to youth," she was quoted in the blog post.
In an Oct. 14 statement announcing that Microsoft was joining the Hour of Code campaign, Brad Smith, general counsel and executive vice president of Legal & Corporate Affairs, indicated that the classroom is just one of many fronts in Microsoft's battle to popularize computer science education. He said his company "wants every American student to have the opportunity to learn computer science, a goal we are supporting through our partnering work in communities, extensive outreach to a broad array of stakeholders and policy advocacy at all levels of government."
It is a great irony of the Semantic Web, which is predicated on the notion of explicit and unambiguous meaning, that no one can quite agree on what we mean by “semantic.” The fallback position is to simply point at the technology stack defined by W3C and say that anything taking advantage of those tools is “the semantic web” or at least a part of it. While this may be valid, as far as it goes, it also misses the point.
I prefer to draw a distinction between "semantic technologies" and the "semantic web". Narrowly defined, semantic technologies are a family of W3C sanctioned standards and tools that play nicely together to create meaningful relationships between disparate online resources (data, people, anything of use) rather than just documents. They do so in a manner that both machines and people can ingest and interpret without too much confusion. More broadly defined, a semantic technology is anything that makes meaning and relationships explicit. This could be a taxonomy or thesaurus, advanced metadata, automatic classification, entity extraction; the list goes on. Any of these technologies can be used behind the firewall in isolation from the broader web and still bring value to the enterprise. This is not, however, the semantic web.
The semantic web augments and extends the world wide web and so must be a part of that greater web of information and resources. The secret sauce here is the underlying information consumed by semantic technologies. Without access to properly structured and documented (read: lots and lots of metadata) public information, the smartest applications we can build will be little more than idiot savants, very good in their own domain but unable to function in the world at large. It is these smart applications, well-fed with a diverse diet of palatable information, that constitute the true semantic web. The particular technologies employed are more of an implementation issue rather than a fundamental property. They are a means to an end rather than the end in itself.
So how do we get there? Fortunately, we are well on our way by means of three concurrent and complementary movements: open data, linked data and the semantic web proper.
The Open Data Movement posits that certain (if not most) data and information should be freely available. Much of this is an outgrowth of requirements for publicly funded research. If the people paid for it, they should have access to it. As a result, many researchers must publish their data sets in public repositories as a condition of receiving federal dollars. This practice is starting to move beyond the academy as private enterprises realize that by sharing data, they can benefit from the creativity and insight of people not on their payroll. In essence, they are saying “Here’s a bunch of data. Let’s see you do something cool with it.” The problem is that there is little agreement on how the data should be shared. Standards may be followed within a particular community of practice, but true innovation happens when someone from outside the domain brings their expertise to bear. The lack of standardization often presents too high a barrier for this to happen. This is where linked data comes in.
Linked data takes open data a step (actually four steps) further by articulating four fundamental principles for publishing data. In short, (1&2) name things with HTTP URIs. This provides a well understood mechanism for uniquely identifying resources in a manner that can be easily located. (3) When someone does look up that resource, provide useful information in a standardized way. In other words, use RDF to provide a common data model and representation. Finally, (4) link your resources to other resources so your users, be they human or otherwise, can find related things. As of November of last year, the State of the LOD Cloud report documented nearly 27 billion triples and nearly 400 million RDF links that meet these criteria. When compared to the size of the general web, this may seem tiny, but considering this has only emerged over the past couple of years, its rate of growth is impressive. If this growth continues, and there is every expectation that it will, indeed that it is likely to accelerate, the substrate of the semantic web is well on its way.
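The four principles can be made concrete with a toy sketch. The tuples below stand in for RDF triples, and every URI is a hypothetical example rather than a real resource; a real deployment would use an RDF library and a triple store:

```python
# Toy illustration of the four linked-data principles using plain
# Python tuples as (subject, predicate, object) triples.

TRIPLES = [
    # Principles 1 & 2: name things with HTTP URIs that can be looked up.
    ("http://example.org/id/zurich",
     "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
     "http://example.org/ontology/City"),
    # Principle 3: looking up a URI yields useful, standardized statements.
    ("http://example.org/id/zurich",
     "http://example.org/ontology/population",
     "421000"),
    # Principle 4: link resources to other resources.
    ("http://example.org/id/zurich",
     "http://example.org/ontology/locatedIn",
     "http://example.org/id/switzerland"),
]

def describe(resource):
    """Everything asserted about a resource (what a lookup should return)."""
    return [(p, o) for s, p, o in TRIPLES if s == resource]

def outgoing_links(resource):
    """Related resources a user, human or machine, can follow next."""
    return [o for p, o in describe(resource) if o.startswith("http://")]
```

The point of the sketch is that once everyone agrees on this shape of data and on resolvable names, a client that has never seen your data set can still describe a resource and crawl outward along its links.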
Which brings us full circle to the semantic web proper. The most extensive, highly linked, well structured data set is useless if there is nothing to consume it. It is the community of smart applications that utilize the web of linked data that truly comprises the semantic web. By adopting a common data model (RDF) and adhering to the standards, it becomes possible to create applications that can utilize resources (not just documents) across the entire web of data and to interact with each other in a consistent and intelligent manner. Further, because of the inference capabilities fostered by these standards and the adoption of well crafted ontologies, it becomes possible for these applications to act on information that does not explicitly exist anywhere in the web. Just as it's the people, not the plumbing, that make a community, it's the applications, not the data, that constitute the semantic web.
We are in the early days of each of these three initiatives, but as I said, they are growing and the pace of growth is accelerating. We may not get to Sir Tim Berners-Lee’s original vision of semi-sentient agents roaming the web freeing us from the mundane chores of daily life in the information age for some time, but we are seeing the practical benefits of linked open data today.
The San Francisco Board of Supervisors this week unanimously passed legislation to require all new commercial or residential buildings with 10 or fewer stories to have solar panels.
The Better Roofs ordinance was penned by Supervisor Scott Wiener, who said the measure was needed to fight climate change and reduce reliance on fossil fuels.
"Activating underutilized roof space is a smart and efficient way to promote the use of solar energy and improve our environment," Wiener said in a statement about the vote. "We need to continue to pursue aggressive renewable energy policies to ensure a sustainable future for our city and our region."
A federal study released earlier this month revealed that installing solar panels on every roof in the U.S. could supply 39% of the nation's total electricity consumption.
While the U.S. Energy Information Administration doesn't normally track legislation, a spokesperson for the agency said he was unaware of any other states with laws requiring solar on rooftops. Additionally, the Database of State Incentives for Renewable Energy, a compilation of mostly state-level and significant local-level rules and regulations, had no record of laws requiring all commercial and residential buildings to have solar. That makes San Francisco the first city in the nation to implement such an ordinance.
San Francisco's new law, however, does follow a state law: California's Title 24 Energy Standards, which requires 15% of roof area on new small and mid-sized buildings to be "solar ready." That requires the roof to be unshaded by the proposed building itself, and free of obtrusions. The state law also applies to all new residential and commercial buildings of 10 floors or less.
"This legislation will expand our efforts to cover San Francisco rooftops with solar panels and tackle climate change, while also creating good jobs for our community," said Josh Arce, former President of the San Francisco Commission on the Environment and community liaison for Laborers Local 261, a labor organization that trains solar job seekers.
This story, "San Francisco to require solar panels on new buildings" was originally published by Computerworld. | <urn:uuid:04ff4e0d-64ba-49dd-805e-ab64ca857686> | CC-MAIN-2017-09 | http://www.itnews.com/article/3058731/sustainable-it/san-francisco-to-require-solar-panels-on-new-buildings.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00428-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951252 | 423 | 2.5625 | 3 |
For half a decade, Congress debated, but never enacted, cyberthreat information sharing legislation. Then, this past December, Congress approved and President Obama signed the Cybersecurity Act of 2015.
The Cybersecurity Act provides liability protections to businesses to incentivize them to share cyberthreat information with government and with each other.
"Is the legislation perfect?" GovInfoSecurity Executive Editor Eric Chabrow asks in an audio blog (click player beneath image to listen). "Of course, not. What law is? For a bill to become a law, legislators make compromises. And that's what the Cybersecurity Act of 2015 is."
As James Lewis, senior fellow at the Center for Strategic and International Studies says, "It's a really good first step."
In this audio blog, you'll hear:
- An explanation of how the Department of Homeland Security, which will serve as a hub for cyberthreat information sharing, is implementing the new law;
- Elissa Shevinsky, CEO of the privacy company JeKuDo, express reservations about the effectiveness of the law in mitigating cyberthreats and concerns that the Act reduces citizens' privacy;
- Lewis, a top cybersecurity expert, explain why organizations might not be spurred to share threat data. The law, Lewis says, "doesn't really affect what people are going to do to defend themselves, and it keeps us in a reactive posture." | <urn:uuid:1632c4c4-fb95-4cf5-8fa9-2b4addfc8a04> | CC-MAIN-2017-09 | http://www.cuinfosecurity.com/interviews/2016-year-cyberthreat-info-sharing-i-3040 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00548-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.934769 | 283 | 2.53125 | 3 |
February 6, 2017
While the World Wide Web is clearly a web, it has not traditionally been presented visually as such. Digital Trends published an article centered around a new visualization of Wikipedia, Race through the Wikiverse for your next internet search. This web-based interactive 3D visualization of the open source encyclopedia is at Wikiverse.io. It was created by Owen Cornec, a Harvard data visualization engineer. It pulls about 250,000 articles from Wikipedia and makes connections between articles based on overlapping content. The write-up tells us,
Of course it would be unreasonable to expect all of Wikipedia’s articles to be on Wikiverse, but Cornec made sure to include top categories, super-domains, and the top 25 articles of the week.
Upon a visit to the site, users are greeted with three options, each of course having different CPU and load-time implications for your computer: “Light,” with 50,000 articles, 1 percent of Wikipedia, “Medium,” 100,000 articles, 2 percent of Wikipedia, and “Complete,” 250,000 articles, 5 percent of Wikipedia.
Will this pave the way for web-visualized search? Or, as the article suggests, become an even more exciting playing field for The Wikipedia Game? Regardless, this advance makes clear the importance of semantic search (a topic that itself made the 1 percent "Light" cut).
Megan Feil, February 6, 2017
February 3, 2017
The article on AP titled Browse Free or Die? New Hampshire Library Is at Privacy Fore relates the ongoing battle between The Kilton Public Library of Lebanon, New Hampshire and Homeland Security. This fierce little library was the first in the nation to use Tor, the location and identity scrambling software with a seriously bad rap. It is true, Tor can be used by criminals, and has been used by terrorists. As this battle unfolds in the USA, France is also scrutinizing Tor. But for librarians, the case is simple,
Tor can protect shoppers, victims of domestic violence, whistleblowers, dissidents, undercover agents — and criminals — alike. A recent routine internet search using Tor on one of Kilton’s computers was routed through Ukraine, Germany and the Netherlands. “Libraries are bastions of freedom,” said Shari Steele, executive director of the Tor Project, a nonprofit started in 2004 to promote the use of Tor worldwide. “They are a great natural ally.”… “Kilton’s really committed as a library to the values of intellectual privacy.
To illustrate a history of action by libraries on behalf of patron privacy, the article briefly lists events surrounding the Cold War, the Patriot Act, and the Edward Snowden leak. It is difficult to argue with librarians. For many of us, they were amongst the first authority figures, they are extremely well read, and they are clearly arguing passionately about an issue that few people fully understand. One of the library patrons spoke about how he is comforted by the ability to use Tor for innocent research that might get him flagged by the NSA all the same. Libraries might become the haven of democracy in what has increasingly become a state of constant surveillance. One argument might go along these lines: if we let Homeland Security take over the Internet and give up intellectual freedom, don’t the terrorists win anyway?
Chelsea Kerwin, February 3, 2017 | <urn:uuid:ee6988a9-2b9b-49b3-b6fd-4b8b8b0a1aac> | CC-MAIN-2017-09 | http://arnoldit.com/wordpress/category/internet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00421-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944607 | 718 | 3.09375 | 3 |
One of the drawbacks of electric vehicles (EV) is that it can take up to 8 hours to fully charge their lithium-ion batteries.
Swiss researchers, however, say that by increasing the charging power, EVs can potentially be fully charged in about 15 minutes.
In a paper published today, researchers from the Ecole Polytechnique Federale de Lausanne (EPFL) (Swiss Federal Institute of Technology in Lausanne) said an EV charging station with 4.5 megawatts (MW) of power could charge a vehicle in 15 minutes.
Unfortunately, 4.5MW is the power equivalent of 4,500 washing machines. "This would bring down the power grid," the researchers stated.
To overcome drawing such a significant charge from the power grid at one time, the researchers created a buffer storage system that disconnects from the grid before releasing the 4.5MW charge to an EV.
"We came up with a system of intermediate storage," said Alfred Rufer, a researcher in EPFL's Industrial Electronics Lab. "And this can be done using the low-voltage grid (used for residential electricity needs) or the medium-voltage grid (used for regional power distribution), which significantly reduces the required investment."
The EPFL researchers, along with other partner universities, built an intermediate storage battery. In the space of 15 minutes, it provided the 20 to 30 kilowatt-hours (kWh) needed to charge a standard electric car battery.
The "Intermediate" storage is achieved using a lithium iron battery the size of a shipping container, which is constantly charging at a low level of power from the grid. When a car needs a quick charge, the buffer battery promptly transfers the stored electricity to the vehicle.
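The arithmetic behind the buffer idea is just power equals energy over time. A quick sketch, using the article's 30 kWh battery figure (the 10-hour trickle window is an illustrative assumption, not a number from the researchers):

```python
# Power (kW) needed to deliver a given energy (kWh) in a given time (minutes).
def required_power_kw(energy_kwh, minutes):
    return energy_kwh / (minutes / 60.0)

# Delivering 30 kWh to a car in 15 minutes requires 120 kW at the vehicle.
per_car_kw = required_power_kw(30, 15)

# The buffer battery, by contrast, can refill from the grid slowly:
# replacing that same 30 kWh over, say, 10 hours draws only 3 kW.
trickle_kw = required_power_kw(30, 10 * 60)
```

This is why the buffer matters: the grid sees a small, steady draw while the car briefly sees a very large one.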
"Our aim was to get under the psychological threshold of a half hour," Massimiliano Capezzali, deputy director of the EPFL Energy Center and leader of the research project, said in a statement. "But there is room for improvement."
Supercharger stations are able to partially charge a Tesla Model S sedan in 30 minutes, giving it a 170-mile range. A full charge takes 75 minutes.
Superchargers consist of multiple Model S chargers working in parallel to deliver up to 120 kW of direct current (DC) power directly to the battery, according to Tesla.
Tesla currently has 591 Supercharger stations with 3,425 Superchargers around the world. Last year, the company released an over-the-air software upgrade for its cars that tracks charging station locations and alerts drivers when they're out of range of those stations.
As part of the EPFL Industrial Electronics Lab's quick charging project, researchers built gas station prototypes to determine how they'd need to be modified as gas-powered cars slowly die out and are replaced by EVs.
The research showed that a quick charging station able to handle 200 cars per day would need intermediate storage capacity of 2.2 MWh, which would require an intermediate battery system the size of four shipping containers.
"Electric cars will change our habits. It's clear that, in the future, several types of charging systems -- such as slow charging at home and ultra-fast charging for long-distance travel -- will co-exist," Capezzali said.
This story, "Researchers move closer to charging an EV as fast as filling a tank of gas" was originally published by Computerworld. | <urn:uuid:1897a386-f4b2-4bed-bf11-1366b1236a55> | CC-MAIN-2017-09 | http://www.itnews.com/article/3025341/car-tech/researchers-move-closer-to-charging-an-ev-as-fast-as-filling-a-tank-of-gas.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00297-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946923 | 701 | 3.15625 | 3 |
The ubiquitous antenna was all the buzz last week as Apple tried to squelch the latest glitch in its popular iPhone. But those antenna issues have nothing on the renovations NASA is taking on to reinvigorate its 70-meter-wide (230-foot-wide) "Mars antenna."
The antenna, a key cog in NASA's Deep Space Network, needs about $1.25M worth of what NASA calls major, delicate surgery. The revamp calls for lifting the antenna -- about 4 million kilograms (9 million pounds) of finely tuned scientific instruments - to a height of about 5 millimeters (0.2 inches) so workers can replace the steel runner, walls and supporting grout. This is the first time the runner has been replaced on the Mars antenna, NASA said.
The operation on the historic 70-meter-wide (230-foot) antenna, which beamed data and watched missions to deep space for over 40 years, will replace a portion of what's known as the hydrostatic bearing assembly. This assembly enables the antenna to rotate horizontally, NASA stated.
According to NASA, the bearing assembly puts the weight of the antenna on three pads, which glide on a film of oil around a large steel ring. The ring measures about 24 meters (79 feet) in diameter and must be flat to work efficiently. After 44 years of near-constant use, the Mars antenna needed a kind of joint replacement, since the bearing assembly had become uneven, NASA stated.
A flat, stable surface is critical for the Mars antenna to rotate slowly as it tracks spacecraft, NASA said. Three steel pads support the weight of the antenna rotating structure, dish and other communications equipment above the circular steel runner. A film of oil about the thickness of a sheet of paper -- about 0.25 millimeters (0.010 inches) -- is produced by a hydraulic system to float the three pads, NASA stated.
The repair will be done slowly but is expected to be finished by early November. During that time, workers will also be replacing the elevation bearings, which let the antenna track up and down from the horizon.
Meanwhile the network will still be able to provide full coverage for deep space missions by using the two other 70-meter antennas at Deep Space complexes near Madrid, Spain, and Canberra, Australia, and arraying several smaller 34-meter (110-foot) antennas together, NASA stated.
While officially known as Deep Space Station 14, the antenna got its Mars moniker from its first mission: tracking NASA's Mariner 4 spacecraft, which had been lost by smaller antennas after its historic flyby of Mars, the space agency stated.
Follow Michael Cooney on Twitter: nwwlayer8
In 1905, Albert Einstein derived that light was composed of particles by fitting his theory to just a handful of data points. This discovery changed our understanding of basic physics and helped usher in a new era of quantum mechanics. Today, scientists often need to interpret much larger data sets to drive discoveries.
A little more than a decade ago, the first sequencing of a human genome cost $100 million. Now, the same results cost no more than a used car. At about 0.8 to 1 terabyte, the full genome creates more than 4 million times the amount of data that Einstein was investigating. Some scientists and researchers are using tools that were developed by online commerce and search engines to tackle these new questions.
In 2003 and 2004, Google published two papers that explained how the company repeatedly digests almost the entire internet to collect data for our searches every couple days and, eventually, hours. (Google recently moved away from this system of indexing onto something new that could log the Web in real-time and scale up to millions of machines.) The findings shook the industry. Often, to process tons of information, companies bought very expensive, very reliable, very fast computers that churned data as quickly as the newest technology could. Budgets being budgets, only a few of these premium boxes were in place at any one time. Instead, Google segmented the work into small pieces that were distributed onto thousands of cheaper computers that could produce the type of intelligence that we are now accustomed to in searches. If the old way was a single farm to grow flowers and collect pollen, then this new system was thousands of pollen-hoarding bees that distributed themselves to fields far and wide. The less expensive hardware now being employed to crunch data meant more computers were afforded in a budget while maintaining reliability. If a few computers went down, there were thousands left to pick up their duties. | <urn:uuid:dff98eb5-8446-47a5-80af-e7fa8c0320a2> | CC-MAIN-2017-09 | http://www.nextgov.com/big-data/2013/02/big-data-leading-scientists-ask-bigger-questions/61467/?oref=ng-dropdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00473-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.978058 | 376 | 3.3125 | 3 |
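The pattern those Google papers described, MapReduce, can be sketched in miniature. The toy version below runs the map, shuffle and reduce phases in a single process to count words; in the real system each phase is spread across thousands of the cheap machines the article describes:

```python
from collections import defaultdict

def map_phase(document):
    # Each worker emits (word, 1) pairs for its own chunk of input.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(mapped):
    # Group intermediate pairs by key so each reducer sees one word.
    groups = defaultdict(list)
    for pairs in mapped:
        for word, count in pairs:
            groups[word].append(count)
    return groups

def reduce_phase(groups):
    # Each reducer sums the counts for its key.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the web the web", "indexing the web"]
counts = reduce_phase(shuffle([map_phase(d) for d in docs]))
# counts == {"the": 3, "web": 3, "indexing": 1}
```

Because each map call touches only its own document and each reduce call only its own key, the work divides naturally across many machines, and a failed worker's chunk can simply be rerun elsewhere.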
The Swiss National Supercomputing Center (CSCS) is going to upgrade its supercomputer with Nvidia GPUs to more accurately predict the weather in the steep mountains of the Swiss Alps.
By upgrading the Cray XC30 system, the CSCS wants to enable national weather service MeteoSwiss to accurately predict the weather in small valleys that can't be covered by the current models, said Thomas Schoenemeyer, associate director of the Technology Integration team of the CSCS, on Wednesday.
"Switzerland has one of the most complex topographies in the world," he said. Steep mountains can cause a difference in weather patterns from valley to valley, making it very hard to make accurate predictions, he said. More computing power is needed to tackle the problem.
The supercomputer is called "Piz Daint", named after one of Switzerland's mountain peaks.
Over the course of the year, CSCS will extend the computer's current 750 teraflops of computing power to reach speeds that Schoenemeyer expects will be at least one petaflop.
Using a combination of CPUs (central processing units) and GPUs (graphics processing units) leads to better application performance, Schoenemeyer said. "Meteo codes run better on a combination," said Schoenemeyer, adding that using a combination of processing units is also more energy efficient.
The new supercomputer will use NVIDIA Tesla K20X GPU accelerators to "dramatically expand the breadth and depth of the center's research and discovery in climate and weather modeling," as well as a host of other fields, such as astrophysics, materials science and life science, according to an Nvidia blog post announcing the plan.
The CSCS's upgraded Piz Daint will also be used to run 30 slightly different weather forecasting models simultaneously to get a more accurate average result, Schoenemeyer said. In addition, the Center for Climate Systems Modeling (C2SM) in Zurich plans to use the computer to predict climate change in the next 100 years, he added.
Supercomputing company Cray was awarded US$32 million to upgrade the CSCS's system, it announced.
When the upgrade and expansion are completed, Piz Daint will be the first petascale supercomputer in Switzerland and the fastest hybrid GPU-accelerated supercomputer in Europe, said Schoenemeyer. "But that is easy to say, Europe is lacking power," he said.
The fastest supercomputer in the world is the Titan supercomputer at Oak Ridge National Laboratory in Tennessee. Most of this Cray XK7 system's compute power also comes from Tesla K20X GPU accelerators. The Titan executed 17.59 petaflops during a Linpack benchmark test, according to the supercomputer Top500 list published in November last year.
CSCS's supercomputer will become operational in early 2014, and will use water from nearby Lake Lugano for cooling, Schoenemeyer said. The heated water will be reused to heat the CSCS's building.
Loek is Amsterdam Correspondent and covers online privacy, intellectual property, open-source and online payment issues for the IDG News Service. Follow him on Twitter at @loekessers.
The idea is cool enough - build a reusable aircraft-like system that could easily and relatively cheaply launch satellites into orbit.
The kink is that the system will need to do that for somewhere in the $5 million-per-launch range and, oh yeah, go well over Mach 10.
As you might have guessed, the project to develop such a system is being put forth by Defense Advanced Research Projects Agency (DARPA) which will more fully detail the program, known as the Experimental Spaceplane (XS-1) in October.
From DARPA: "The objective of the XS-1 program is to design, build, and demonstrate a reusable Mach 10 aircraft capable of carrying and deploying an upper stage that inserts 3,000-5,000 lb. payloads into low earth orbit (LEO) at a target cost of less than $5M per launch. The XS-1 program envisions that a reusable first stage would fly to hypersonic speeds at a suborbital altitude. At that point, one or more expendable upper stages would separate and deploy a satellite into Low Earth Orbit. The reusable hypersonic aircraft would then return to earth, land and be prepared for the next flight. Modular components, durable thermal protection systems and automatic launch, flight, and recovery systems should significantly reduce logistical needs, enabling rapid turnaround between flights."
DARPA said that the long-term intent is for XS-1 technologies to be transitioned to support not only next-generation launch for government and commercial customers, but also global reach hypersonic and space access aircraft.
The lofty technical challenges that will be part of the XS-1 program include:
- A reusable first stage vehicle designed for aircraft-like operations
- Robust airframe composition leveraging state-of-the-art materials, manufacturing processes, and analysis capabilities
- Durable, low-maintenance thermal protection systems that provide protection from temperatures and heating rates ranging from orbital vacuum to atmospheric re-entry and hypersonic flight
- Reusable, long-life, high thrust-to-weight, and affordable propulsion systems
- Streamlined "clean pad" operations dramatically reducing infrastructure and manpower requirements while enabling flight from a wide range of locations
For the first round of testing the XS-1, DARPA says it wants to see the spacecraft:
- Fly ten times in ten days
- Fly to Mach 10 at least once
- Launch a representative payload to orbit at least once
"We want to build off of proven technologies to create a reliable, cost-effective space delivery system with one-day turnaround," said Jess Sponable, DARPA program manager heading XS-1. "How it's configured, how it gets up and how it gets back are pretty much all on the table-we're looking for the most creative yet practical solutions possible."
Commercial, civilian and military satellites provide crucial real-time information essential to providing strategic national security advantages to the United States. The current generation of satellite launch vehicles, however, is expensive to operate, often costing hundreds of millions of dollars per flight. Moreover, U.S. launch vehicles fly only a few times each year and normally require scheduling years in advance, making it extremely difficult to deploy satellites without lengthy pre-planning. Quick, affordable and routine access to space is increasingly critical for U.S. Defense Department operations. In the end the idea is to lower satellite launch costs by developing a reusable hypersonic unmanned vehicle with costs, operation and reliability similar to traditional aircraft, Sponable stated.
The agency noted that it already has one quick, cheap satellite launch program working. The Airborne Launch Assist Space Access (ALASA) program looks to develop an aircraft-based satellite launch platform for 100-lb. payloads and to build low-cost, small satellites that could rapidly be launched into any required orbit, a capability not possible today from fixed ground launch sites, DARPA stated. Boeing, Lockheed Martin and Virgin Galactic are working on separate offerings for that project.
DARPA also has the Integrated Hypersonics program, aimed at researching and developing what it calls "next-generation technologies needed for global-range, maneuverable, hypersonic flight at Mach 20 and above for missions ranging from space access to survivable, time-critical transport to conventional prompt global strike. The program seeks technological advances in the areas of: next-generation aero-configurations; thermal protection systems and hot structures; precision guidance, navigation, and control; enhanced range and data collection methods; and advanced propulsion concepts."
DARPA has in the past equated the development of hypersonic equipment to the development of stealth technology in the 1970s and 1980s. The strategic advantage once provided by stealth technology is threatened as other nations' abilities in stealth and counter-stealth improve. "Restoring that battle space advantage requires advanced speed, reach and range. Hypersonic technologies have the potential to provide the dominance once afforded by stealth to support a range of varied future national security missions," DARPA said.
There are a ton of technological issues to be addressed, one of the biggest being the heat generated by extreme speeds.
At Mach 20, vehicles flying inside the atmosphere experience intense heat, exceeding 3,500 degrees Fahrenheit, which is hotter than a blast furnace capable of melting steel, as well as extreme pressure on the shell of the aircraft, DARPA stated. The thermal protection materials and hot structures technology area aims to advance understanding of high-temperature material characteristics to withstand both high thermal and structural loads. Another goal is to build structural designs and manufacturing processes to enable faster production of high-speed aeroshells, DARPA stated.
by Mark Gavin
For the most part, the basic layout of a PDF file is fairly simple. A PDF file consists of four primary sections:
The PDF file “Header” is just one or two lines starting with %PDF. The “Body” is a collection of objects which include the page contents, fonts, annotations, etc. The “xref Table”, or cross reference table, is a collection of pointers to locate the individual objects contained in the “Body”. The “Trailer” contains the pointer to the start of the cross reference table.
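Those four sections are easy to spot programmatically. Below is a minimal sketch in Python, assuming a well-formed, non-linearized file; real PDF parsers are far more tolerant of damage and oddities than this:

```python
def pdf_bookends(path):
    """Return the header line and the startxref byte offset of a PDF.

    Assumes a well-formed, non-linearized file; production parsers
    handle many edge cases this sketch ignores.
    """
    with open(path, "rb") as f:
        data = f.read()
    header = data[:16].split(b"\n")[0]        # e.g. b"%PDF-1.7"
    tail = data[-1024:]                       # trailer lives near the end:
    idx = tail.rfind(b"startxref")            #   startxref\n<offset>\n%%EOF
    offset = int(tail[idx + len(b"startxref"):].split()[0])
    return header, offset                     # offset points at the xref table
```

The returned offset is exactly the pointer the "Trailer" holds: the byte position where the cross reference table begins.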
Starting with the basic layout above, PDF supports the concept of incremental saves: the ability to make modifications to the file without altering the actual content of the original saved document.
There are several advantages to incremental saves.
- Saving the file to disk is quicker, because you are only tacking the new data onto the end of an existing file.
- An incrementally saved document contains an audit trail of changes to the PDF file. This allows the file to be “rolled back” to a previous save.
- The incremental save mechanism is also used to support multiple digital signatures on a single PDF file.
There is also a significant disadvantage to the incremental save mechanism. Selecting "Save" under the Acrobat File menu automatically does an incremental save. When PDF documents are edited, for example when the user adds form fields or comments, the document is typically "saved" multiple times. This leads to file size increases, because the unused or obsolete data remains in the PDF file.
To remove the unused data in an incrementally saved PDF file an Acrobat user needs to perform a “Save As…”. We have seen cases where a 200 KB PDF file increased in size to over 2.5 MB due to incremental saves. In these cases, a simple “Save As” can result in dramatic file size reductions.
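Because every save, full or incremental, appends a section that ends in a %%EOF marker, counting those markers gives a rough estimate of how many times a file has been revised. This is only a heuristic (linearized files carry an extra marker, for instance), but it is a quick way to spot a file that would benefit from a "Save As":

```python
def revision_count(path):
    """Rough revision estimate: each save appends a section ending in %%EOF.

    A heuristic only; linearized ("Fast Web View") files contain an
    extra marker, so treat the result as an approximation.
    """
    with open(path, "rb") as f:
        return f.read().count(b"%%EOF")
```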
The basic file layout becomes more complex with files that have been saved for "Fast Web View". Such files are called linearized. For more information see the following blog entry: Linearization
In 1988 Garth Gibson at the University of California, Berkeley, co-authored a paper titled “A Case for Redundant Arrays of Inexpensive Disks (RAID) [PDF],” which outlined the basic principles of using big, cheap disks to increase data reliability and I/O performance. RAID went on to become a widely adopted storage technology throughout the industry, while Gibson co-founded Panasas Inc., a storage cluster vendor for high performance computing applications.
This week, Gibson and company claim that they have implemented the most significant extension to disk array data reliability since the original RAID paradigm was developed. Their new architecture is called “tiered parity.” In this model, Panasas has built “vertical parity” and “network parity” on top of their existing RAID 5 “horizontal parity” implementation.
The RAID 5 approach, as it was outlined in the original paper, consists of striping data and parity across multiple disks. It enables error recovery for single disk failures and increases performance via parallel reads and writes. This technology is widely used in storage systems today. Panasas’ own implementation of RAID 5, called “ObjectRAID,” is based on storage objects rather than blocks. The added intelligence is designed to reduce reconstruction times when a disk failure occurs.
But no RAID 5 technology can handle a media error, also known as an unrecoverable read error (URE), if it occurs during reconstruction of a failed disk. When this occurs, the RAID data cannot be rebuilt from disk; a backup (usually on tape) has to be used to recover the entire array. Ten years ago, this wasn't a serious problem. With 50 GB SATA disk drives, a media error was very unlikely to occur while reading a single disk, since the rate of failure is about one error every 10^14 bits (12.5 terabytes), a rate that has remained constant for over a decade. And when a media error did happen to occur during reconstruction, a 50 GB disk took only a few hours to recover from tape.
Times have changed. Disks have become much bigger and denser. Capacities of 500 to 750 GB are common today, and one terabyte disks will soon be the norm. That means when a disk goes south, the odds of hitting a media error during recovery are much greater, and recovery from tape can take days or weeks.
Imagine a RAID array of seven 1 TB disks. When one disk fails, the chances of hitting a URE while recovering the data from the six remaining disks is now about 50/50. When two terabyte disks hit the market in 2009, the disk failure plus media error scenario becomes almost a sure bet. Recovering the storage array from backup tape could take a month. For high end computing applications that use tens or hundreds of terabytes of data, this would be a disaster.
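The "50/50" figure can be sanity-checked with a back-of-the-envelope calculation. Treating media errors as independent and using the one-error-per-10^14-bits rate quoted above, the chance of at least one URE while reading the six surviving disks follows a Poisson approximation:

```python
import math

URE_RATE = 1e-14                 # unrecoverable read errors per bit (SATA-class)

def p_rebuild_hits_ure(surviving_disks, tb_per_disk):
    """P(at least one URE) while reading all surviving disks during a rebuild.

    Poisson approximation, assuming independent errors at URE_RATE per bit.
    """
    bits = surviving_disks * tb_per_disk * 1e12 * 8   # bits read during rebuild
    return 1 - math.exp(-URE_RATE * bits)

# Six surviving 1 TB disks: ~0.48 expected errors, P(>=1 error) ~ 0.38
print(round(p_rebuild_hits_ure(6, 1.0), 2))   # -> 0.38
```

Roughly a coin flip, as the article says, and the probability climbs steeply as per-disk capacity doubles.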
“I think what people are becoming aware of is that the data integrity provided by RAID 5 is basically no longer sufficient,” says Robin Harris, senior analyst at Data Mobility Group. “RAID 5 will only protect across a single disk failure, so it’s going away as a [standalone] data protection strategy.”
To address this problem, Panasas invented vertical parity. Essentially, they’ve added RAID within each disk, by generating a parity sector from the other sectors. The local parity sector can be used to recompute the missing data in case of a media error. According to Panasas, vertical parity gets the error rate down to between one in 10^18 and one in 10^19, which is 1000 to 10,000 times better than the URE rate. The extra parity information uses 10 percent of the disk capacity, but Panasas claims there is no performance hit. So scalability is built in.
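Panasas has not published the on-disk layout, but the underlying idea is the same XOR parity RAID 5 uses, applied across a group of sectors on a single drive: the parity sector is the XOR of the data sectors, so any one unreadable sector in the group can be recomputed from the rest. A toy illustration (two-byte "sectors" standing in for real 512-byte ones):

```python
def xor_parity(sectors):
    """XOR a list of equal-length byte strings into one parity 'sector'."""
    parity = bytearray(len(sectors[0]))
    for s in sectors:
        for i, b in enumerate(s):
            parity[i] ^= b
    return bytes(parity)

def recover(readable_sectors, parity):
    """Rebuild the single lost sector from the readable ones plus parity."""
    return xor_parity(list(readable_sectors) + [parity])

sectors = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]
p = xor_parity(sectors)
# Lose sectors[1] to a media error; rebuild it from the rest plus parity:
assert recover([sectors[0], sectors[2]], p) == sectors[1]
```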
A word here should be said about RAID 6 technology (also known as double parity), which some vendors use for an additional level of data protection. This scheme was designed to guard against a double disk failure, which it does. Sort of. The problem is that RAID 6 doesn’t protect against subsequent media errors after the second disk goes down, which, as discussed above, is becoming increasingly more likely. Here, it has the same problem as RAID 5. However, RAID 6 can be used to recover from the single disk failure plus media error scenario. But the performance hit for dual parity compared to single parity is significant. So it’s a mixed bag and doesn’t directly address the media error problem.
On top of its horizontal and vertical parity schemes, Panasas has added an additional layer of network parity protection. At this level, parity checking is done on the client side, to make sure the data delivered by the storage system wasn't corrupted on its way to the user. Because of increasing I/O bandwidth and the number of hardware and software components between the external data and the application, there are increasing opportunities for good data to go bad. Firmware, server hardware, server software, network components and transmission media can all potentially mangle valid data unbeknownst to the application. With network parity, the client receives an error notification when bad data is detected.
The tiered parity technology will be included in the next version of Panasas’ ActiveScale operating environment, version 3.2. The beta will be out next month and will be generally available by the end of the year. The additional parity levels can be turned off if the user believes they’re not needed for a particular environment. According to Panasas, the tiered parity technology doesn’t exact a performance hit on top of the existing RAID 5 implementation, but, as stated above, the vertical scheme does eat an additional 10 percent of the storage — that’s in addition to the 10 percent used by the RAID 5 implementation.
Although the overall concepts of the three-tiered architecture are fairly general, Panasas is attempting to protect its new invention. “We actually have a patent pending on this tiered parity concept, particularly the vertical parity,” says Larry Jones, VP of Marketing at Panasas. “Could someone copy it? Who knows? But we are trying to protect this specific idea.” | <urn:uuid:2531334d-09a1-4531-80f6-db2309e3b3b2> | CC-MAIN-2017-09 | https://www.hpcwire.com/2007/10/12/panasas_invents_tiered_parity-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00173-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938095 | 1,302 | 2.796875 | 3 |
There are many definitions for certain common terms associated with healthcare information exchange. These are the definitions that I am using in preparing the "simple interop" series of posts. They will not necessarily be suitable for other purposes and are not necessarily consistent with other sources, including Gartner research notes.
Health domain name: a regular Internet domain name, propagated through the regular domain name service. There are only a few things that make it special: (1) it is listed in a special directory of domain names associated with the US Health Internet; and (2) the organization that owns a healthcare domain name will not use it unless it maintains a current valid digital certificate that can be used to authenticate TLS connections. Healthcare organizations will typically have two domain names: a regular domain name and a health domain name.
Health email address: a standard email address in which the domain name is a health domain name. As with all Internet email addresses, the user names are managed locally by the organization associated with the domain name or by a third party operating on its behalf. The user name has no meaning except in the context of the domain name.
Depending on the mission of organization that has the health domain name the user names may refer to specific staff members, healthcare consumers, or things such as the input queue for an order processing information system or the workflow queue for the people that are processing referral requests.
Health information exchange: a service offering that supports the interconnection of multiple, often competing healthcare organizations using a governance model adequate to enable full interchange of healthcare information among the members of a community.
The community need not be defined geographically, although, to date, virtually all HIEs are scoped by political jurisdictions or healthcare catchment areas. This is, in part, because "most healthcare is local" and, in part, because building the trust necessary for one healthcare organization to look up data about its patients is easier when the total community is smaller and the participants are generally familiar with one another.
Our definition avoids creating a list of services provided by an HIE, but the trust issues included in the definition and the lack of a national ID for a healthcare consumer implies that the services will likely include:
- Community-based master-patient index services to assist the recipients in filing incoming information by patient.
- Assistance to HIE users in finding the sources of information about patients in a way that is consistent with local and national policy
- “Mapping” services so that data sources can send or receive non-standard structured documents and yet communicate with recipients that expect standards to be followed.
There are other services that are commonly associated with HIEs although they are not essential to the definition here.
- Providing a Web portal for looking up information in situations where the user does not have an EHR.
- Creating repositories of information to support information lookup and secondary use of data. HIEs may have a single central repository or use various “virtual repository” architectures.
Health Internet client: an IT system associated with a healthcare domain name. These systems are the clients that use the servers of health Internet nodes. Some organizations may operate health Internet clients but not operate servers. Many small practices would fall into this category.
The organization that uses health Internet clients has a health domain name, but it does not operate a health Internet node; the node is maintained by a third-party organization. A wide variety of organizations may operate health Internet nodes on behalf of other organizations. For example, vendors of EHRs targeted at small practices may operate a health Internet node on behalf of their clients, where each client has its own health domain name.
Health Internet node: a set of one or more servers operated by a single organization under a healthcare domain name. (The servers we refer to here include plain-old Internet servers, such as SMTP servers or HTTP servers.) An organization that operates a health Internet node agrees to configure the servers to certain levels of security, including the following:
- It won’t accept connections from outside its security perimeter that are not mutually authenticated and encrypted using TLS and digital certificates.
- It will check that the digital certificate of the connecting client remains valid.
- It won’t accept such connections that don’t offer a cybersuite that is sufficiently secure by standards set by ONC. (A cybersuite is defined in the TLS Internet RFC as the combination of cryptographic and hashing algorithms used to establish secure communications.)
- It will accept such connections using at least one cybersuite that has been established by ONC.
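As an illustration only (ONC has not specified actual cybersuites here), this is roughly how a node's listener policy could be expressed with Python's ssl module. The cipher string and the commented-out file names are placeholders, not mandated values:

```python
import ssl

def node_tls_context():
    """Listener policy sketch: mutually authenticated TLS only.

    Illustrative; the cipher string and certificate file names are
    placeholders, not ONC-specified values.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid cert
    ctx.set_ciphers("ECDHE+AESGCM")       # placeholder for an approved suite list
    # In deployment the node would also load its own certificate chain and
    # the CA bundle used to validate connecting clients (paths are placeholders):
    #   ctx.load_cert_chain("node-cert.pem", "node-key.pem")
    #   ctx.load_verify_locations("health-ca.pem")
    return ctx
```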
Health Internet registrar: any organization that has been accredited by the US government to accept domain name registrations and to validate that the organization registering the domain name exists and has a valid digital certificate.
Personal health record (PHR): an electronic record of personally identifiable health information on an individual that can be drawn from multiple sources and that is managed, shared, and controlled by or primarily for the individual. This is essentially the definition from the American Recovery and Reinvestment Act of 2009.
Note: this definition will please almost no one, because it is so broad. In discussing the PHR, many people want to define the term to include functions that can be created over such a record. Some would prefer to use the term PHR exclusively to describe the overlying applications and refer to their implementation of the record itself as the "ecosystem for PHRs." Others avoid the term PHR altogether and choose the term "health record bank." In choosing to define the term this way we rely on two points: (1) the "R" in PHR stands for "record"; and (2) this is the definition in the law.
Nationwide Health Information Network (NHIN) is a collection of standards, protocols, legal agreements, specifications, and services that enables the secure exchange of health information over the Internet (source: HealthIT.hhs.gov site, retrieved 28 December 2009).
Some wonderful work has been done under the NHIN rubric, focusing on achieving high-trust interoperability among and between HIEs and large-scale healthcare organizations. Although it may seem trivial outside the Beltway, one of the most important accomplishments has been to convene the right players and develop agreements and a governance process for healthcare information exchange between federal agencies and private healthcare organizations.
We do not desire or expect that this work will fall into disuse or fail to achieve wider adoption for high-trust connection of large-scale organizations that have the resources to follow the protocols, assure their security and participate in the governance. At the same time, we don't believe that all national exchange of healthcare information should flow through the high-end machinery defined by this work. Accordingly, we interpret the definition from HealthIT.HHS.Gov as being broader than the former view of a "network of networks." The NHIN intellectual property and any associated regulations should define a broad framework for many different uses of the Internet for health information exchange.
[This post was revised on 31 December.]
GCN LAB IMPRESSIONS
E.T. phone me: Join the search for alien life
Advances in computing technology have made it possible for anyone with a PC to listen for signals from the stars.
- By John Breeden II
- Apr 22, 2010
For years the SETI (Search for Extraterrestrial Intelligence) program has been looking for life among the stars. Technology has advanced to the point that, very soon, everyone will be able to join in on that quest. Who knows? If you personally discover an alien civilization, you might even get to name it (for our Earth-bound reference, anyway).
In some science fiction stories, the aliens are out there, just beyond our perception, listening to all of our radio and television broadcasts. And if they tune into some of the sitcoms on TV, I’m sure they will probably decide to stay well away from our crazy planet. I bet they enjoy tapping their spindly fingers along with “Glee,” however.
The SETI program works in the reverse direction. Powerful radio telescopes scan the stars, listening for any evidence of broadcasts made by alien life. Theoretically these could be sent accidentally from a planet, much like Earthlings do all the time now in the course of a day. The signals might also be communications between ships traveling in space. There’s also the possibility that transmissions we eventually intercept could be directed into the universe to get our attention. And the SETI program is a private endeavor, so no tax money is going toward the program, for those of you who think it’s a waste of money.
Recently the prestigious TED (Technology, Entertainment and Design) prize was awarded to Jill Tarter of the SETI Institute. And she had some very interesting things to say about the future of the SETI program. The biggest is that, very soon, everyone can become a SETI researcher.
In the beginning, the entire SETI program was run from custom-built hardware. A few years ago, it expanded to include home computers, which have become fast enough to handle a slice of the load, scanning thousands of hours of recordings to try to find repeating patterns or obvious signs of language. The off-the-shelf hardware actually does a better job than the clunky systems of the past. You can lend a hand in that effort right now with the SETI@home program. This involves setting up your computer to automatically download and analyze radio telescope data. Just imagine if your computer were the one to discover alien life; your system would be famous.
And very soon, you can listen to the sounds of the cosmos yourself. All of the data from the SETI program, according to Tarter, will soon be available at setiQuest.org to download or play. The site is not quite up yet, but you can bet it will be one of my favorite destinations once it goes live. Having humans listen for repeating patterns or language might be even more efficient (or more accurate) than having computers do it.
In these ways, everyone can help search for intelligent life outside of our own little blue world. And if we ever find it, without a doubt, it would be the greatest discovery man has ever made.
John Breeden II is a freelance technology writer for GCN. | <urn:uuid:80afcd47-d981-46c7-a209-67a2b023c58f> | CC-MAIN-2017-09 | https://gcn.com/articles/2010/04/22/seti-opens-et-quest-to-anyone.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00117-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958262 | 683 | 2.75 | 3 |
NASA scientists have seen evidence that there's been sledding on Mars.
No, they don't mean there are Martians on toboggans. What NASA scientists have found is that hunks of frozen carbon dioxide, also known as dry ice, may have slid down Martian sand dunes on cushions of gas, like miniature hovercraft. The sliding digs furrows, called linear gullies, into the sand.
NASA's Mars Reconnaissance Orbiter snapped images of linear gullies in Martian sand dunes. Scientists believe they were caused by sliding hunks of dry ice. (Image: NASA/JPL-Caltech/Univ. of Arizona)
NASA's Mars Reconnaissance Orbiter captured images of the linear gullies from space.
Researchers from the space agency tested their theory by performing experiments on sand dunes in Utah and California.
"I have always dreamed of going to Mars," said Serina Diniega, a planetary scientist at NASA's Jet Propulsion Laboratory. "Now I dream of snowboarding down a Martian sand dune on a block of dry ice."
The grooves in the sand dunes have been found to be relatively constant - measured at an average of a few yards across. They also have raised banks along the sides.
Scientists don't believe the gullies were caused by water flows because water generally leaves aprons of debris at the end of the gullies. Many of these Martian gullies instead have pits at the bottom end.
"In debris flows, you have water carrying sediment downhill, and the material eroded from the top is carried to the bottom and deposited as a fan-shaped apron," said Diniega, in a statement. "In the linear gullies, you're not transporting material. You're carving out a groove, pushing material to the sides."
NASA reported that the gullies are found on sand dunes that spend the Martian winter covered by carbon-dioxide frost. By comparing before-and-after images from different seasons, scientists said they found that the grooves are formed in early spring.
A few of the images captured by the orbiter show objects, believed to be chunks of dry ice, in the gullies.
"[The Mars orbiter] is showing that Mars is a very active planet," said Candice Hansen, of the Planetary Science Institute in Tucson, Ariz. "Some of the processes we see on Mars are like processes on Earth, but this one is in the category of uniquely Martian."
The orbiter is one of several NASA machines studying the Red Planet.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.
This story, "NASA spots sledding marks in Martian sand dunes" was originally published by Computerworld. | <urn:uuid:061aa7d8-9ce3-402f-b1ba-711520312590> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2167215/data-center/nasa-spots-sledding-marks-in-martian-sand-dunes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00169-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943307 | 625 | 3.46875 | 3 |
NTP: It's About Time
Perhaps one of the most overlooked elements of network management is that of time synchronization. If its importance in the overall health of the network were better understood, then perhaps it would be paid more attention. In this article, I'll discuss time synchronization and the Network Time Protocol.
Why Synchronization Matters
In a local area network (LAN), time synchronization is important because it affects components such as file systems and applications. If the time being issued to the server by the system hardware clock is incorrect, it is quite possible for corruption to occur within applications, particularly in complex systems such as databases. In wide area networks (WANs), time synchronization is even more essential. The distributed nature of WANs greatly increases the probability of an incorrect timestamp, not to mention the fact that WANs often span time zones, further complicating the issue.
One of the best (and most used) examples of the importance of time synchronization is that of e-mail. Imagine receiving an e-mail, the timestamp of which indicates that it was received before it was sent. Very confusing. Another more frightening example includes the computers used by air traffic controllers, but we won't even go there. So, how does an operating system get the wrong time?
For the most part, operating systems take their time from the local hardware clock of the system on which they are loaded. Although hardware clocks have improved in terms of accuracy and reliability, they are still prone to inaccuracies. In addition, the one-to-one relationship of the operating system and the machine on which it is running means that it is very possible for two different systems on a network to have different times. What is needed is a mechanism that allows systems to synchronize themselves with a reliable time source and subsequently with each other. The mechanism is the Network Time Protocol (NTP).
Guidelines pertaining to the use of Network Time Protocol time sources are available on the Internet. A document describing these Rules of Engagement, along with a list of public time servers, can be found here.
NTP operates over UDP on port 123. If you're using a firewall, you may need to change the firewall configuration so that NTP traffic can flow through.
Network Time Protocol
NTP is not a new protocol; in fact, it's been around since the 1980s. The current version of NTP, version 4, is relatively new, and previous versions are still well supported. Great care is taken to ensure that new versions of NTP are backward compatible. The generic nature of NTP means that it is platform independent, and NTP support is available for almost all popular platforms including Linux, Unix, Windows NT/2000, Novell NetWare, Windows 95/98, Mac, as well as other networking devices such as routers. There is even a version for Palm! In many cases, shareware and freeware versions of NTP server and client software are available. Some of these use the lighter Simple NTP (SNTP) protocol, which is based on standard NTP but has less overhead.
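The SNTP variant mentioned above is light enough to show end to end. The sketch below sends the single 48-byte client datagram to UDP port 123 and reads the server's transmit timestamp, which counts seconds since 1900 (the host name in the comment is just an example of a public pool server):

```python
import socket
import struct

NTP_EPOCH_DELTA = 2208988800      # seconds between the 1900 and 1970 epochs

def sntp_time(server, port=123, timeout=2.0):
    """Minimal SNTP client query: one 48-byte datagram, per RFC 4330."""
    packet = b"\x1b" + 47 * b"\0"  # LI=0, version=3, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, port))
        reply, _ = s.recvfrom(48)
    seconds_1900 = struct.unpack("!I", reply[40:44])[0]  # transmit timestamp
    return seconds_1900 - NTP_EPOCH_DELTA                # Unix time

# Example (requires network access to a public time server):
#   print(sntp_time("pool.ntp.org"))
```

A full NTP implementation does much more (round-trip delay compensation, clock discipline, multiple samples); this shows only the wire exchange.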
Before time can be synchronized by NTP, the correct time must first be ascertained. One of the most popular methods of obtaining this information is from Internet-based public time servers. The servers are structured in a tiered model, with those at the top tier designed to be the most accurate. These top-level Internet time servers are known as Primary, or Stratum-1, time servers. Stratum-1 servers provide accurate time by synchronizing with reliable sources such as the Global Positioning System or purpose-specific radio broadcasts. To ensure that these Primary time servers are not overwhelmed with requests, a number of other servers are also configured as Secondary, or Stratum-2, time servers. Although there may be small differences in time between the Stratum 1 and 2 servers, the possible change is limited and makes no difference to most networks.
The specifics of setting up time synchronization and using NTP on a system will depend on the platform(s) you are using. In most instances, setup is simply a case of installing (and if necessary compiling) the NTP software, loading it, and pointing it at a reliable time source. Depending on how many other devices you want to synchronize, you can then configure NTP on other devices to also point to the reliable time source, or to the original server that is receiving time. In turn, other servers can be configured to receive time from these other servers, creating a stratum model of your own.
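As a concrete example of "pointing it at a reliable time source": on Unix-like systems running the reference ntpd, the configuration is a few lines in /etc/ntp.conf. The server names below are the public volunteer pool and the driftfile path varies by distribution; treat both as illustrative rather than recommendations:

```conf
# /etc/ntp.conf -- minimal client configuration (illustrative)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst

# Optional: fall back to the local clock only if all sources are unreachable
# server 127.127.1.0
# fudge  127.127.1.0 stratum 10

# Path varies by distribution
driftfile /var/lib/ntp/ntp.drift
```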
The question of which time source to point to is an interesting one. As with many things related to the Internet, time servers are maintained, added to, and amended by people and organizations on a voluntary basis. As such, neither the availability nor the accuracy of the servers and/or service is guaranteed. It would be easy to assume that the Stratum-1 time sources are completely accurate, but it's not always the case. A survey conducted by individuals at MIT in 1999 found that a large number of the Stratum-1 servers were issuing the wrong time; in one case, by over six years!
Setting Up an NTP Time Server
Synchronizing your systems with one of these public time servers may be appropriate if they all have access to a public NTP time server, but in practice that may not be possible. A more reliable, secure, and self-sufficient option is to create a reference time server of your own, and then use it to provide time to servers across your enterprise. To create an NTP time server, you will first need a mechanism for ascertaining accurate time, such as a radio receiver or GPS time receiver. These devices commonly come either as plug-in expansion cards or as external units that plug into the RS-232 port of the system in question. Prices start at a few hundred dollars and go up from there. Using one of these devices, the local clock on the system is kept accurate, and NTP software can then be used to communicate this time to the operating system and to other servers.
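On the reference server itself, ntpd addresses a locally attached receiver through a pseudo-IP "refclock" address in the 127.127.t.u range, where t identifies the driver type. The fragment below is a sketch for a serial NMEA GPS receiver using the generic NMEA driver (type 20 in the standard ntpd distribution); the exact driver number, mode bits, and fudge flags vary by receiver model and ntpd version, so consult your hardware's documentation before copying it.

```
# /etc/ntp.conf on the reference time server
# Generic NMEA GPS receiver on the serial port (driver type 20)
server 127.127.20.0 mode 1
fudge  127.127.20.0 refid GPS
```

With this in place, the machine advertises itself as a Stratum-1 source, and your other servers can be pointed at it just as they would at a public time server.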
Although time serving is designed to be a low-overhead service, if you have many clients that will require synchronization, consider creating a dedicated time server or purchasing a purpose-built "time server in a box" system. The only drawback is that these can easily cost in excess of $5,000. This may sound expensive, but when you consider the money often invested in other areas of network management and resilience, it is still quite reasonable.
As systems become more and more distributed, the importance of having accurate time across the enterprise will increase accordingly. Network Time Protocol fulfills this need in a relatively simple and easy-to-implement manner.
Drew Bird (MCT, MCNI) is a freelance instructor and technical writer. He has been working in the IT industry for 12 years and currently lives in Kelowna, B.C., Canada. You can e-mail Drew at firstname.lastname@example.org. | <urn:uuid:6bf768a4-75bd-45d6-b316-6996d5ca6f72> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/625401/NTP-Its-About-Time.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00345-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951888 | 1,437 | 3.15625 | 3 |
The National Aeronautics and Space Administration
The National Aeronautics and Space Administration is responsible for the U.S. space program, including space travel, research and military aerospace programs. The agency was established in 1958.
October 8, 2014 The first American woman to walk in space shares her experience, three decades later.
October 1, 2014 A refrigerator-sized spacecraft will give scientists advance notice of a solar storm affecting Earth.
August 26, 2014 A different kind of space race is coming to an end.
August 18, 2014 The satellite will take infrared photos of the Earth for Peru.
August 15, 2014 "Night Cities" is an ambitious project to create an atlas of urban space photography.
August 13, 2014 Taking science experiments to the skies.
August 11, 2014 Taking the long view on U.S.-Russia planetary defense.
August 6, 2014 NASA satellite images reveal less nitrogen dioxide.
August 5, 2014 New features and images added.
July 25, 2014 SpaceX had charged the government with violating fair contracting procedures in its lawsuit.
July 16, 2014 The agency thinks we'll find life on other planets soon, but we may not be exchanging messages for a while.
July 11, 2014 On Sunday, set your sights east to catch the fiery ascent of a resupply for the International Space Station.
July 8, 2014 As the presumed-dead probe hurtles toward Earth, the deadline for saving it looms.
June 19, 2014 In a new approach to planetary science, a small satellite would rain even smaller satellites on Jupiter's moon.
June 12, 2014 Before humans can land on Mars, scientists have to wrestle with atmospheric conditions back home.
June 4, 2014 But we’ll have to increase NASA’s budget and cooperate with China, report concludes.
June 4, 2014 The image draws on nine years' worth of shots taken by the Hubble Telescope.
June 3, 2014 If the largest ever supersonic parachute works, we'll be a step closer to putting a human on Mars.
May 23, 2014 An unmanned Atlas 5 rocket took off from Cape Canaveral in Florida, equipped with a "classified satellite" for the National Reconnaissance Office.
May 15, 2014 Elon Musk wants a ticket to Mars to cost $500,000. For those left behind, he'll have a cheap electric car for you.
May 2, 2014 University of Connecticut alum Rick Mastracchio will tell the engineering school's graduating seniors to reach for the stars.
April 18, 2014 From space, the U.S. Curiosity rover looks scarab-like.
March 7, 2014 Today, Florida. Tomorrow, Mars.
March 3, 2014 Space agencies across the planet launch the most ambitious plan yet to understand how the world's water works.
August 28, 2013 Nighttime images show the gradual spread of the tremendous wildfire, which is burning brighter than the city lights of Reno.
August 15, 2013 And also seafood chowder ... and curried noodles ... and Spam.
August 6, 2013 From the earliest years of the space program, the exploration of other worlds has been a source of the same techno-anxieties we have today.
July 8, 2013 A way to get 99 percent of the way into space, at 1 percent of the cost of a satellite
June 21, 2013 Nearly 1,000 frames, combined into a single Martian mosaic.
May 17, 2013 Another victory for Opportunity, the spunky little rover driving on Mars
April 4, 2013 It sits on the Red Planet, flapping hauntingly in the wind.
March 20, 2013 Behold: "One of the whitest things" we've seen on the Red Planet
March 12, 2013 NASA's Mars Curiosity rover drilled into a rock and found that it contained a clay-like material.
December 11, 2012 NASA captures, quite literally, gravity's rainbow.
October 31, 2012 "We have the same laws of chemistry, physics. If there are any locations where there are the basic ingredients, there should be the basic ingredients for life."
October 24, 2012 A fleet of vehicles ready to explore lunar and Martian terrains
August 22, 2012 Planned for 2016, NASA's next mission to Mars will examine the planet's geophysics.
August 6, 2012 There are reasons the space agency is so popular on the Internet.
August 6, 2012 SpaceX won a NASA contract worth $440 million to develop the next generation of space transportation vehicles.
July 31, 2012 Rover will inspect Mars' environment for minerals, gases and water. | <urn:uuid:87ed7170-fa21-4ba9-b9bf-01da7bda7295> | CC-MAIN-2017-09 | http://www.nextgov.com/emerging-tech/national-aeronautics-and-space-administration/41122/?oref=ng-trending | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00465-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.89626 | 940 | 3.09375 | 3 |
Some of the same social media analyses that have helped Google and the Centers for Disease Control and Prevention spot warning signs of a flu outbreak could be used to detect the rumblings of violent conflict before it begins, scholars said in a paper released this week.
Kenyan officials used essentially this system to track hate speech on Facebook, blogs and Twitter in advance of that nation’s 2013 presidential election, which brought Uhuru Kenyatta to power.
Similar efforts to track Syrian social media have been able to identify ceasefire violations within 15 minutes of when they occur, according to the paper on New Technology and the Prevention of Violence and Conflict prepared by the United States Agency for International Development, the United Nations Development Programme and the International Peace Institute and presented at the United States Institute of Peace Friday.
These predictions can be improved by adding other data from satellites, surveillance cameras and other sensors, the authors of the big data section of the paper said.
The authors were careful to note, however, that data analysis isn’t a one-size-fits-all solution for conflict and researchers should always take the specific nature of a conflict into consideration.
Crowdsourcing initiatives to encourage citizens to report violent behavior by Latin American drug gangs, for instance, may rely heavily on anonymity to keep the reporters safe from retribution, said author Emmanuel Letouze, a Ph.D. candidate at the University of California, Berkeley. A separate crowdsourcing system seeking evidence of electoral violence in Kenya, on the other hand, may be damaged by anonymity, he said, because it would encourage false reports from both candidates' camps.