Sensors get L.A. up to speed
Traffic monitoring system compiles data from thousands of sources to deliver a detailed picture of the area's highways
NETWORK ADMINISTRATORS are accustomed to monitoring traffic to
reduce bandwidth bottlenecks. Allen Chen's task is a little
different: speeding the flow of people, not electrical impulses. As
senior transportation electrical engineer at the California
Department of Transportation (Caltrans), Chen is responsible for
managing traffic on more than 500 miles of freeways in the Los Angeles area.
'We are not going to build more freeways,' he said.
'But with information, we can manage them better and help
drivers make better decisions.'
In October 2007, the Los Angeles Regional Transportation
Management Center (LARTMC), a facility that houses Caltrans and the
California Highway Patrol (CHP), formally opened. There, data
assembled from more than 10,000 sensors and cameras gives operators
an overview of traffic in the area so they can act quickly to
reroute traffic or remove bottlenecks.
Although LARTMC is new, its Advanced Traffic Management System
(ATMS) represents the implementation of research that started about
40 years ago. In 1969, Caltrans began researching techniques for
improving traffic flow. Its final report, issued in 1976,
recommended four steps:
- Install a detection system to monitor traffic volume and speed.
- Meter freeway ramps to balance capacity and demand and improve flow.
- Use changeable message signs that inform drivers of freeway conditions.
- Provide a fleet of tow trucks to immediately remove disabled vehicles from roadways.
'Ramp metering was very revolutionary,' Chen said.
'Freeways are designed for 1,800 vehicles per lane per hour,
but once we slow to 35 miles per hour, our throughput drops to
1,200 per hour. By maintaining proper flow with the ramp meters, we
can increase the flow up to 2,400.'
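Chen's figures are easy to sanity-check with a little arithmetic. The sketch below uses the per-lane numbers straight from his quote; the function name and the five-lane example are illustrative, not a calibrated traffic model:

```ts
// Back-of-envelope model of the throughput figures cited above.
const CONGESTED_FLOW = 1_200; // vehicles per lane per hour at ~35 mph
const METERED_FLOW = 2_400;   // vehicles per lane per hour with ramp metering

function hourlyGain(lanes: number): number {
  // Extra vehicles per hour a metered segment can carry versus congestion.
  return lanes * (METERED_FLOW - CONGESTED_FLOW);
}

console.log(hourlyGain(5)); // a five-lane segment recovers 6,000 vehicles/hour
```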
Those 1976 recommendations have been put into place, with more
than 10,000 sensors embedded in the roadways, 1,280 traffic
monitoring stations, 450 closed-circuit cameras, 960 ramp metering systems, 109 changeable message signs, and 15 highway advisory radio stations to keep the public informed.
In 1998, the state legislature approved funds to build a new
five-story facility to house Caltrans and CHP. Located about 10
miles from downtown, the site includes microwave and satellite
communications to CHP and Caltrans employees in the field. The
facility also is the base for a Sonet OC-12 connection to the
freeway sensors and control system.
'Using artificial intelligence, we can build filters and
algorithms that detect any anomaly,' Chen said. 'Then
the system will alert our operators, and the operators can activate
the closed-circuit TVs to investigate.'
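The article doesn't describe Caltrans' filters in detail, but the general pattern (flag a station whose readings deviate sharply from a rolling baseline, then cue an operator to check the cameras) can be sketched roughly as follows; the threshold, window and numbers are hypothetical:

```ts
// Hypothetical anomaly filter in the spirit Chen describes: compare a
// sensor reading against its recent average and alert on large deviations.
function isAnomalous(history: number[], reading: number, threshold = 0.4): boolean {
  if (history.length === 0) return false;
  const baseline = history.reduce((a, b) => a + b, 0) / history.length;
  return baseline > 0 && Math.abs(reading - baseline) / baseline > threshold;
}

const recentSpeeds = [62, 60, 64, 61, 63]; // mph from one monitoring station
if (isAnomalous(recentSpeeds, 31)) {
  console.log("Alert operators: activate nearby closed-circuit TV to investigate");
}
```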
Although Caltrans uses commercial software wherever possible (the systems and workstations run on Microsoft Windows XP), there is no commercial product designed to manage such a
traffic system. Therefore, Caltrans had to assemble software into
an overall package that met its needs. It selected Science
Applications International Corp. as the primary systems integrator
and Delcan as the chief consultant for systems architecture and design.
ATMS runs on Hewlett-Packard HP-UX 11i servers. Data is stored in an Oracle 9i database (which is migrating to Oracle 10g), and a Gensym G2 business rules engine is used to run
scenarios and present data and suggested actions to operators. The
wall in the control room contains 12 84-inch digital light projection displays, 12 50-inch DLP displays, and two electronic whiteboards.
With ATMS, Caltrans is working to further reduce bottlenecks,
and it is also expanding its information outreach activities to
keep other agencies and the public informed of traffic conditions
via the Internet, radio and TV news, and cell phone/personal
digital assistant alerts.
'If the commuter knows whether it will take five minutes
or an hour to reach their destination, they can make an informed
decision,' Chen said. 'And if they decide to take a
less congested route, it will also benefit others on the freeway.'
Bloom’s Taxonomy and Test Item Design
The creation of valid and effective tests requires numerous test-design skills. Instructional design is one of these key skills. When understood and properly applied in the test-design process, instructional design ensures that test items are aligned to the cognitive level being tested. Mapping test items to cognitive levels contributes to the face validity of the test and enables test designers to differentiate between knowledge and performance-based test items.
Bloom’s taxonomy is a commonly used instructional design framework for creating test content. The taxonomy has six cognitive levels: knowledge, comprehension, application, analysis, synthesis and evaluation.
For example, when creating a test item that tests for knowledge, as is often the case with multiple-choice items, the test item will generally be a recall of previously learned material, such as:
What is the highest peak on the Asian continent?
- Cho Oyu
- K2
- Mt. Everest
- Kangchenjunga
The correct answer is 3. Notice that I did not perform any task as a result of answering the item. Consequently, the item is purely knowledge-based, a recall of prior learning. Test items at the comprehension level are similar, requiring the ability to paraphrase a meaning, explain something, or define or restate ideas.
Let’s move farther along Bloom’s taxonomy and take a look at the application level. Test items at this level typically require the use of prior learning in new situations. The test-taker will demonstrate the ability to take abstract information and use it in a concrete situation. Test items at the application level are more performance-related than at the lower levels of Bloom’s taxonomy, as they typically require a hands-on or simulated task.
Assume I’m going to travel to the Asian continent to climb Mt. Everest. To ensure I’m adequately prepared, one key skill I’ll need is putting on a climbing harness. Creating a test item for putting on a climbing harness is an example of a hands-on test item that correlates to the application level of Bloom’s taxonomy:
You have 10 minutes to complete this task.
Put on the climbing harness, ensure the webbing on each leg loop is not twisted, all webbing threaded through buckles is doubled back, and the locking carabiner is properly fixed to the front loop and locked.
There are some interesting characteristics to this item. The first is the time component. In order to be truly proficient, the candidate must be quick because in the mountains, time is often critical to ensure safety. Another characteristic is the scoring. In this situation, the candidate must perform all tasks correctly in order to receive a passing score. The candidate would be scored by an observer on three distinct tasks: the webbing on each leg loop is not twisted, all webbing is doubled back through buckles, and the locking carabiner is properly fixed to the front loop and locked. Once again, safety is the big issue, making this an all-or-nothing scenario.
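This all-or-nothing rule is simple to express in code. A minimal sketch follows; the interface, task names and time limit mirror the example above and are purely illustrative, not an actual exam engine:

```ts
// All-or-nothing scoring: pass only if every observer-rated task is
// correct and the candidate finished within the time limit.
interface PerformanceItem {
  timeLimitSeconds: number;
  tasks: string[]; // each rated pass/fail by an observer
}

const harnessItem: PerformanceItem = {
  timeLimitSeconds: 600,
  tasks: ["leg loops not twisted", "webbing doubled back", "carabiner fixed and locked"],
};

function score(ratings: boolean[], elapsedSeconds: number, item: PerformanceItem): "pass" | "fail" {
  const allCorrect = ratings.length === item.tasks.length && ratings.every(Boolean);
  return allCorrect && elapsedSeconds <= item.timeLimitSeconds ? "pass" : "fail";
}

console.log(score([true, true, true], 540, harnessItem));  // "pass"
console.log(score([true, true, false], 540, harnessItem)); // "fail": safety is all-or-nothing
```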
Higher cognitive levels in Bloom’s taxonomy include analysis, synthesis and evaluation. These levels become increasingly more difficult to test. However, there are effective methods for creating appropriate test items. For purposes of this discussion, I’ll focus on the evaluation level.
Evaluation deals with a candidate’s ability to judge the worth of material against stated criteria. It is a culmination of the first five levels in the taxonomy.
Another key mountaineering skill, especially where there is high snowfall, is evaluating conditions for potential avalanche danger. Those serene expanses of sparkling white snow can unleash deadly avalanches in the right conditions. Safe mountain travel dictates competence at selecting a climbing route that minimizes the team’s exposure to avalanches. A corresponding test item might look something like this:
Evaluate the terrain and conditions and select the safest route of travel from point A to point B (both would be defined on the map).
In this scenario, my evaluation may consist of analyzing photos, studying topographical maps, performing a snow analysis and making a judgment call as to the safest route. My final decision will be based on all of the factors and criteria I have at my disposal, as well as experience. This item is comprehensive because it requires that I recall knowledge, use skills and perform analysis. Scoring for this item may consist of having one or more observers rate me against a specified set of criteria.
Bloom’s taxonomy is one tool that test designers can use when designing and developing tests. Other taxonomies include Gagne’s Learning Outcomes and Guilford’s Mental Processes. Regardless of the taxonomy used, it is important to ensure tests are created with sound instructional design principles to ensure alignment between the cognitive level of testing and corresponding item types.
James A. DiIanni is the director of assessment and certification exam development at Microsoft Learning and supports the Microsoft Certified Professional program. His experience with performance testing started in 1986 developing simulators for the U.S. Navy, and he has been involved in the IT certification industry since 1997. He can be reached at email@example.com.
The Industrial Age reinvented productivity. The Information Age reinvented communication. Now man and machine are coming together and it’s even bigger than the Internet. What is it? The Internet of Things!
The Internet of Things connects the physical world to the Internet through sensors embedded in objects everywhere. These sensors monitor the environment and send data to the cloud for analysis. Every physical object becomes a tool we can interact with in real time. And machines can communicate with each other to respond with no human intervention at all.
This IT inflection point will completely overhaul how humans interact with the physical world.
At David Kirkpatrick’s Techonomy Lab at the Stanford Research Institute, technologists and investors speculated on a mind-boggling future of “Man, Machines and the Network.”
General Electric’s Paul Rogers spoke about how GE jet engines respond to 5000 data samples per second to use less fuel but spend more time in the air. GE trains can identify where they are and what cargo they carry.
In GE’s “power of one” vision, a one percent increase in fuel efficiency can:
- save an airline $2 billion per year
- save a utility $4.4 billion per year.
And a mile per hour increase in locomotive velocity could save a railway $1.8 billion per year.
That’s how much interactive sensors can increase machine efficiency to reduce the cost of transportation. It’s enormous.
Ford Motor Company’s Vijay Sankaran spoke of their vision to become an automotive technology company. Ford uses sensors to increase engine efficiency and create a better driving experience. What car consumers want most is fuel economy. Ford has invested $1 billion in electric vehicles to deliver new levels of fuel economy and future connectivity. Ford is also rethinking the “human machine interface” by which drivers interact with the car. That interface could feel like an iPad in future.
Ford is rethinking security too. How can a car sense environmental threats and respond autonomically the way the human body does? That’s biomimicry – designing technology to emulate nature. Biomimicry is often at the core of the best innovations. You can read more in How Nature Inspires Technology.
Executive Chairman William C. Ford, Jr. envisions using the Internet of Things to rethink transportation altogether. With 1 billion cars on the road today, and 4 billion by mid-century, he says “If we do nothing, we will create global gridlock.” Urban mobility is of particular concern. Seventy-five percent of the world’s population will eventually live in cities. And fifty of those cities will be mega-cities with more than 10 million people each.
Therefore, designing transportation ecosystems that link cars with sidewalks, subways and other modes people use to get from place to place is how we will prevent global gridlock.
In Singapore, analyzing traffic data revealed that looking for parking was the greatest cause of gridlock. When they fixed the parking problem, they reduced the gridlock.
This is but a small example of what’s possible with the Internet of Things.
A large benefit of the Internet of Things is jobs. According to Bill Ford, no sector of the economy creates more spinoff jobs than the auto sector. “For each new job created in the auto sector, nine more jobs are created to support it.”
Since the Internet of Things will impact all industries, that means new jobs everywhere.
Google Glass and smart watches are at the forefront of wearable computing. This was the hottest topic at the Techonomy event.
After hearing about startup MC10’s “thin-skin electronics” for biomedical applications, I can see why!
“Thin-skin” electronics are “ultra-thin flexible sensors” that look like a Band-Aid®. But they are chips designed to monitor human body functions, like vital signs or a baby’s temperature. They can also sense when the body needs a targeted medical therapy, like insulin for diabetics. Through wireless communication, they can trigger a real-time response from care givers to intervene if needed.
As MC10 CEO Dave Icke further noted, patient compliance with follow-up home care is a big factor in recovery. Combining wireless connectivity with micro-computing can significantly improve patient compliance. That could mean the difference between life and death.
Andreessen Horowitz investor Frank Chen thinks the most creative opportunity for wearable computing is how the user triggers the sensor. For example, sensors that recognize gestures or process natural language create endless possibilities. Sensors could become intuitive in monitoring the environment and triggering a response, like our human senses do.
Sensors In Space
Finally, NanoSatisfi’s Peter Platzer told us how mini-satellites add a spatial dimension to monitoring. For example, we can increase our food supply ten-fold through precision agriculture that monitors crops, soil and weather conditions on the ground and from above. And we can improve earthquake early warning systems by correlating seismic and storm data with human and animal behavior. Now add a platform to build applications and anyone can launch a mini-satellite to monitor anything.
This is why the Internet of Things is exploding big data to a degree never imagined. We’ve gone way beyond Big Brother.
With man and machine integrating their intelligence via mobile devices and sensors, what’s next?
Sure you want to know?
Follow @JacquelnVanacek for how cloud, mobile, social media and big data are reinventing our world.
There is a lot of talk these days about cloud computing or cloud hosting. Many companies are using these terms loosely to discuss either VPS or cloud servers (public or private). But, what do these terms mean? You will definitely see a difference when you look at the price tag, so understanding what each of these services are will help you in your quest to determine the best option for you or your company.
To help you out, here is a description of each and even some of their pros and cons.
Definition: One physical server, divided into several smaller server slices that each act as their own virtual server environment.
Pros:
- Typically less expensive than cloud servers.
- No file or data access occurs between VPS clients on the shared server. They are kept separate.
- If needed, one VPS can be rebooted without affecting other VPSs on the shared server.
Cons:
- They do not offer high availability. If the physical server fails, all VPSs on that server fail.
- There can be security concerns. If a customer on your shared server does not take security seriously, and gets hacked or gets a virus, then your VPS could be negatively affected.
- Computing resources are shared between all clients; therefore, RAM, bandwidth and CPU performance can be affected if another VPS on the shared server is demanding a higher load.
- Only one operating system can be utilized by each physical server.
- They are not scalable. Storage is based on physical server limitations. Once you meet your max VPS capacity, you have to either buy more space or look into other options. This could take many hours or days of downtime to migrate to a new solution.
Definition: Cloud servers utilize multiple servers connected together in a cluster which is backed by SAN storage. Customers utilizing a cloud platform will benefit from the multiple servers because they will receive unlimited storage, maximum bandwidth, managed load balancing and no ties to a specific piece of hardware. The basic difference between public and private clouds is that in a public cloud the cluster is multi-tenant, while a private cloud serves a single client.
Pros:
- Scalable – add more server power at a moment's notice.
- Custom infrastructure – clients can include custom network architecture, firewalls, load balancing and IP deployment.
- High availability – if a physical server fails, cloud servers are migrated to another physical server without experiencing an outage.
- Burstable computing resources – no concern about lagging RAM or CPU power, even if another cloud customer's load grows.
- Completely secure, since you virtually have your own server. If a client on the shared cloud gets hacked or gets a virus, your cloud server will be completely separated, with no risk to your data.
- Each customer on the cloud can select their specific operating system.
- Unlimited storage, as it is based on SAN storage.
Cons:
- Typically, a little more expensive than VPS.
As you can see, the cloud servers are a little light on the con side. And, if you are utilizing a shared cloud, the cost is not significantly more than a VPS and there is much upside.
Hopefully these points have helped clear up some of the differences between VPS and cloud servers, but if you have any further questions or would like to discuss how a cloud server could benefit your business, simply contact us and we would be happy to help.
Talk to an infrastructure consultant today.
Nope, this week, something completely different: The Lytro camera, which is based on a really geeky, way cool, leading edge piece of research; light field photography.
Actually, it's only leading edge because the technology that makes it possible has only been around for a couple of decades. The original theoretical foundations were laid out way back at the turn of the last century.
What's so interesting about this technology is it not only has a place in the consumer world, but it will also be a boon in the business world simply by making it incredibly easy to take good photographs that will always be in focus.
The underlying principle, the light field, is the concept that a scene is comprised of all the rays of light, no matter what direction they are heading. A conventional camera captures only a restricted subset of these rays that are focused by the lens onto the imaging device in the camera, and while the color and intensity of rays are recorded, all of directional data about where the rays came from is ignored.
Conventional camera systems have a focal "plane", the point in front of the lens at which objects are perfectly in focus. The amount of light and other factors determine the "depth of field," the zone that extends in front of and behind the focal plane where, as far as the human eye is concerned, objects are apparently in focus. Anything outside of this zone will appear blurry.
A light field camera uses a very different imaging system that consists of, as Wikipedia explains: "an array of microlenses [placed] at the focal plane of the camera main lens. The image sensor is positioned slightly behind the microlenses. Using such images the displacement of image parts that are not in focus can be analyzed and depth information can be extracted."
So, by analyzing many tiny images and then using software to cleverly manipulate and combine them into a single image, a light field camera can capture, in effect, an enormous depth of field. This means an image captured by a light field camera can be used to create other versions of the image with, in theory, any focal plane and depth of field. So you could use a light field image to create versions where the foreground, mid-ground, or background objects are in focus. Even more intriguingly, at least in theory, a light field camera could even produce an image where everything in the scene is in focus!
This ability of light field imaging technology to produce "refocusable" images means that the ultimate "point and shoot" camera could be built ... no focusing, no aperture adjustment, just, er, point and shoot.
But, there's a potential problem. As the Wikipedia entry explains "The drawback of such a system is the low resolution that the final images have. As one microlens samples the light directions at one spatial point an increase in the number of image pixels can only be done by increasing the number of microlenses by the same amount."
Thus, the depth of field is dependent on the number of microlenses and the resolution is dependent on the number of pixels per microlens. So, if you increase the number of microlenses for a given sensor, the resolution drops but the depth of field increases. In other words, the resolution and depth of field are inversely proportional for given sensor geometry.
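A toy calculation makes the tradeoff concrete: with a fixed pixel budget, every pixel spent sampling ray directions under a microlens is a pixel unavailable for spatial resolution. The sensor size and sample counts below are hypothetical:

```ts
// Illustrative plenoptic tradeoff: the number of microlenses sets the
// spatial resolution of the final image, so more directional samples per
// microlens means fewer effective output pixels.
function spatialResolution(sensorPixels: number, directionsPerMicrolens: number): number {
  return Math.floor(sensorPixels / directionsPerMicrolens);
}

const sensor = 11_000_000; // hypothetical 11 MP sensor
console.log(spatialResolution(sensor, 1));   // 11,000,000 spatial samples, no refocusing data
console.log(spatialResolution(sensor, 100)); // 110,000 spatial samples, rich directional data
```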
It's worth noting that the Lytro camera is specified as capturing 11 "Megarays" ... a wholly meaningless value as there's no definition anywhere as to what a megaray might actually mean.
Enough theory, let's get to the Lytro camera. First of all, the design is sensational. A small rectangular box (1.6 in x 1.6 in x 4.4 in) with a single large lens (f/2) at one end (with a magnetic lens cap, very slick) and a touch sensitive screen at the other.
The screen end of the box is a textured, rubber-like surface while the lens end of the box is a smooth metallic surface. The overall weight and balance of the device is perfect.
On the textured part of the top surface is a depression that you press to take a photo, and sliding your finger left and right above the screen optically zooms in and out. On the bottom surface at the screen end is the power switch and a pop-off cover for the USB 2.0 connector. Physically, that's it; a very clean design and very elegant.
The touch sensitive screen can be swiped left to review and manage photos (you can zoom in and out and refocus on any point) and swiped up to reveal the menu system. When you are taking photos in the "Creative" mode, you can tap on the screen to choose the focus point.
I won't bother explaining how the menus work -- that's all covered quite well by Lytro's Web site.
So, the design is great but what of the results? In a word, disappointing.
The pictures are generally not very high resolution and display obvious "jaggies" and aliasing of dark objects in front of bright backgrounds as well as "fringing" under anything other than good lighting conditions.
Moreover, photos are not always in focus at all and refocusing on a specific area doesn't always produce significantly better results. In pretty much anything less than almost full daylight the noise level (visible as colored "graininess") is very noticeable. The company claims "By using all of the available light in a scene, the Lytro performs well in lowlight environments without the use of a flash." This is simply not true.
The software (it currently only runs on OS X) allows you to download and manage images from the Lytro camera (these are in a proprietary format) over USB, select what part of the photo is to be in focus, and upload images to the Lytro site where they can be refocused dynamically by people viewing your work. You can also select the focal point in a Lytro image and export it to JPEG format, though this feature is fairly clumsy.
Add to that, the software is slow for almost every task (when it comes to exporting links to Facebook, painfully slow).
In fact, when it comes to pretty much everything a user really cares about in photography, the Lytro camera underwhelms.
One thing that really bothers me is that when you look at the images on the Lytro Web site you'd think that those are typical of what you're going to get -- clear, sharp, and "alive". The reality is far from what those images promise to the extent that I consider them to be false advertising.
So, bottom line; technically, conceptually, and physical design-wise, the Lytro camera is terrific, but the results -- the stuff users will really care about -- are, given the pricing ($399 for the 8GB, 350 picture model and $499 for the 16GB, 750 picture version), simply not worth it when you consider what you can get from a conventional camera for a lot less money. I'm giving the Lytro camera a rating of 2 out of 5 with a "nice try."
Light field technology will become a huge and redefining force in camera technology over the next few years. The Lytro camera is perhaps ahead of its time and delivers too little for too much. I'm sending mine back for a refund.
Gibbs is underwhelmed in Ventura, Calif. Your snaps to email@example.com and follow him on Twitter (@quistuipater) and on Facebook (quistuipater).
This story, "The Lytro camera: Too little for too much" was originally published by Network World.
Privacy risks and threats arise and surface even in seemingly innocuous mechanisms. We have seen it before, and we will see it again.
Recently, I participated in a study assessing the risk of W3C Battery Status API. The mechanism allows a web site to read the battery level of a device (smartphone, laptop, etc.). One of the positive use cases may be, for example, stopping the execution of intensive operations if the battery is running low.
Our privacy analysis of Battery Status API revealed interesting results.
Privacy analysis of Battery API
Battery readouts provide the following information:
- the current level of battery (format: 0.00-1.0, for empty and full, respectively)
- time to a full discharge of battery (in seconds)
- time to a full charge of battery, if connected to a charger (in seconds)
Those values are updated whenever a new value is supplied by the operating system.
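For reference, a page script reads these values through the asynchronous getBattery() call. A minimal TypeScript sketch follows; the cast is needed because the standard DOM typings omit this API, and browser support has since been restricted:

```ts
// Reading the three readouts via the Battery Status API.
async function logBattery(): Promise<void> {
  const battery = await (navigator as any).getBattery();
  console.log(battery.level);           // 0.0 (empty) to 1.0 (full)
  console.log(battery.dischargingTime); // seconds to empty, or Infinity
  console.log(battery.chargingTime);    // seconds to full, or Infinity
  // Fired whenever the operating system supplies a new level:
  battery.addEventListener("levelchange", () => console.log(battery.level));
}
```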
What might be the issues here?
Frequency of changes
The frequency of changes in the readouts reported by the Battery Status API potentially allowed the monitoring of users' computer use habits; for example, it potentially enabled analysis of how frequently a user's device is under heavy use. This could lead to behavioral analysis.
Additionally, identical computer deployments in standard environments (e.g. at schools, work offices, etc.) are often behind a NAT. In simple terms, the NAT mechanism allows a number of users to browse the Internet with an - externally seen - single IP address. The ability to observe differences between otherwise identical computer installations potentially makes it possible to see (and target?) particular users.
Battery readouts as identifiers
The information provided by the Battery Status API does not always change quickly. In other words, the values are static for a period of time, which may give rise to a short-lived identifier. At the same time, users sometimes clear standard web identifiers (such as cookies). But a web script could analyze the identifiers provided by the Battery Status API, which could possibly even lead to the recreation of other identifiers. A simple sketch follows.
An example web script continuously monitors the status of identifiers and the information obtained from the Battery API. At some point, the user clears (e.g.) all the identifying cookies. The monitoring web script suddenly sees a new user - with no cookie - so it sets new ones. But battery level analysis could provide hints that this new user is - in fact - not a new user, but the previously known one. The script's operator could then conclude that this is a single user, and resume tracking. This is an example scenario of identifier recreation, also known as respawning.
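A rough illustration of that linking logic; the tolerances and numbers are arbitrary, and a real tracker would weigh many other signals alongside the battery readouts:

```ts
// The battery readout tuple is stable for minutes at a time, so a session
// seen just after cookies were cleared can be matched to one seen before.
interface BatterySnapshot { level: number; dischargingTime: number; }

function sameLikelyUser(before: BatterySnapshot, after: BatterySnapshot): boolean {
  return Math.abs(before.level - after.level) < 0.01 &&
         Math.abs(before.dischargingTime - after.dischargingTime) < 120;
}

const preClear: BatterySnapshot = { level: 0.93, dischargingTime: 8_840 };
const postClear: BatterySnapshot = { level: 0.93, dischargingTime: 8_700 };
if (sameLikelyUser(preClear, postClear)) {
  // treat the "new" cookieless visitor as the known user and resume tracking
}
```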
Recovering battery capacity
This was surprising! It turned out that in some circumstances it was possible to approximate (recover) the actual battery capacity in raw form, in particular in Firefox under the Linux system. We made a test script which exploited the overly verbose readout (e.g. 0.932874 instead of a sufficient 0.93) of the battery level and combined this information with knowledge of how the battery level is computed by the operating system before it is provided to the Web browser. It turned out that it was possible even to recover the battery capacity and use it as an identifier. For more information, please refer to the paper report.
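The gist of the recovery can be sketched as a search over plausible capacity values. On the affected Firefox/Linux combination the reported level was the unrounded ratio of current charge to full capacity, so a high-precision readout constrains the possible integer pairs. The range, step and tolerance below are illustrative, not the paper's actual procedure:

```ts
// Enumerate capacity candidates consistent with one verbose level readout.
function candidateCapacities(level: number): number[] {
  const matches: number[] = [];
  for (let full = 1_000_000; full <= 12_000_000; full += 1_000) { // e.g. µWh
    const now = Math.round(level * full); // hypothetical current charge
    if (Math.abs(now / full - level) < 1e-7) matches.push(full);
  }
  return matches;
}

// Intersecting the candidate sets across several readouts often leaves a
// single value: a stable, hardware-derived identifier.
console.log(candidateCapacities(0.932874).length);
```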
The study achieved an impact:
- a W3C standard was updated to reflect the privacy analysis
- the Firefox browser shipped a fix
- the work received some recognition.
Trackers use of battery information
Expected or not, battery readout is actually being used by tracking scripts, as reported in a recent study. Some tracking/analysis scripts (example here) are accessing and recovering this information.
Additionally, some companies may be analyzing the possibility of monetizing access to battery levels. When the battery is running low, people might be prone to making decisions they otherwise would not. In such circumstances, users may agree to pay more for a service.
In response, some browser vendors are considering restricting (or removing) access to battery readout mechanisms.
Even the most unlikely mechanisms can bring unexpected consequences from a privacy point of view. That's why it is necessary to analyze new features, standards, designs, architectures and products with a privacy angle. This careful process will yield results, decreasing the number of issues, abuses and unwelcome surprises.
ATSC stands for Advanced Television Systems Committee, and about 20 years ago, this group developed the standard that became the basis for our digital terrestrial TV broadcasts. (Remember all that confusion about the digital transition two years ago?) The standard is due for updating, and the group has been hard at work on ATSC 2.0 already. At their annual meeting this week, the committee talked about some of the changes that could come with a revised standard.
One of the key changes would be a provision that supports the transmission of stereoscopic 3DTV content in high definition. This could seem to be a major challenge, especially given the current battles over the broadcast spectrum that has been assigned to terrestrial television stations. The fact is, however, that 3DTV could be accommodated within the existing bandwidth. The current digital broadcast standard is based on MPEG2 compression technology, which is the form that was used for DVDs. We now routinely use MPEG4 for video compression, including Blu-ray and satellite TV transmissions. This newer approach can be roughly twice as efficient as MPEG2, which means that two frames — left and right — could be sent in the same bandwidth currently required for a single frame.
There are other ways that the data stream could be condensed even further. For example, there is no difference in the appearance of distant objects between the two stereoscopic views. As a result, that data only needs to be sent once. The right hand frame could be sent in its entirety, followed by the data that is different for the left frame. This has the added advantage of maintaining backward compatibility with 2D sets; they could just ignore the second frame if they do not have 3D capabilities.
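A toy sketch of that encoding idea, using raw per-pixel differences for clarity (a real codec would use disparity-compensated prediction rather than naive diffs):

```ts
// Transmit the right frame in full, then only the pixels where the left
// frame differs; distant objects look identical in both views and cost
// nothing. A 2D-only receiver simply ignores the delta.
type Frame = Uint8Array;

function encodeLeftAsDelta(right: Frame, left: Frame): Map<number, number> {
  const delta = new Map<number, number>();
  for (let i = 0; i < right.length; i++) {
    if (left[i] !== right[i]) delta.set(i, left[i]);
  }
  return delta;
}

function decodeLeft(right: Frame, delta: Map<number, number>): Frame {
  const left = Uint8Array.from(right);
  for (const [i, value] of delta) left[i] = value;
  return left;
}
```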
In order to get to this point, we need a standard for the broadcaster and television manufacturers to use. And we’d probably need external adapters for legacy sets, but that would probably be a low-cost addition.
ATSC is reportedly exploring other additions to the standard, including the ability to deliver data files in the background of the signal transmission, interactive features, audience measurement tools, and advanced program guide information.
This is not going to happen any time soon — remember that it took more than 15 years to implement the first ATSC standard — so don't hold your breath. But it is entirely possible that broadcast television will get the chance to keep up with some of the other program distribution choices.
Join Dell’s Plant a Tree Program: Support Reforestation
We are committed to planting 1 million trees by 2020 (beginning in 2008) to help sequester carbon and restore natural habitats. We announced this goal as a part of our commitments for the American Business Act on Climate Pledge in October of 2015.
The benefits of trees are undeniable. Not only do they absorb greenhouse gas emissions, but they improve air quality, recycle water, create shade, and provide food and homes for living things across the planet. The simple act of planting trees helps sustain our communities and the environment in countless ways.
Dell’s longstanding commitment to reforestation has led to a partnership with the Conservation Fund, an American environmental and economic development non-profit group. All of the donations made in the Plant a Tree program go directly to our nonprofit partner the Conservation Fund, to help customers and Dell work together to meet our goal of planting one million trees.
About the Program
The Plant a Tree program makes it easy for customers to help decrease the impact associated with the use of their Dell products while simultaneously helping restore habitats and protect forests. By supporting The Conservation Fund, the Plant a Tree program is a simple way to have a positive, lasting impact on the environment.
To help contribute to the Plant a Tree program, customers can make a small payment through Dell’s online store (see links below). All of the money goes directly to the Conservation Fund who uses the funds to plant native trees and rebuild important forest habitats. Current projects are located in several states in the U.S., including Texas, Louisiana and California.
Why customers participate
We already design our products and services to be as energy efficient as possible, but they will still use some electricity. Typical electricity generation results in carbon dioxide emissions, so there is a footprint associated with the use of electronics. This is why Dell is committed to helping to mitigate the impact of our products by planting 1 million trees by 2020. We are partnering with the Conservation Fund, customers, and businesses to help us successfully meet our goal. The Plant a Tree program is a way for customers to both mitigate the footprint of their products, and to help contribute to real reforestation and habitat restoration projects.
Your contributions in action
For example, two projects in northeastern Louisiana (USA) are restoring areas stripped of trees by heavy logging. In both Tensas River Reforestation Project and the Upper Ouachita River projects, efforts are aimed at restoring native hardwoods to a region that was once covered by dense forest before heavy logging took its toll. In addition to helping absorb carbon emissions, restoration in this area along the river provides important habitat for many species (including the endangered Louisiana Black Bear and Florida Panther) and helps process floodwaters.
The Plant a Tree program also supports the Conservation Fund’s work in the Big River and Salmon Creek Forests in the coastal Redwood forest area of Northern California. They are working to implement sustainable forest management practices including decreasing the intensity and frequency of harvests, and widening riverfront buffers impaired by erosion from a century of timber harvesting. These changes have improved water quality and habitat for coho salmon, steelhead trout and the spotted owl. Redwood forests themselves also store more carbon per acre than any other forest type, and Plant a Tree supports the Conservation Fund’s efforts to make their 16,000 acres of the Big River and Salmon Creek Forests financially self-sustaining and environmentally healthy.
Through their participation in the Plant a Tree program, customers have helped plant more than 820,000 trees. Help us reach one million by 2020.
How to participate
If you live in the U.S., there are two easy ways to get started with the Plant a Tree program:
- Make a payment today — Whether you’re an individual or a business, you can contribute to the reforestation of our planet with a simple click, offsetting the estimated carbon emissions associated with the electricity used across various IT products.
- Participate with purchase — We make it easy and convenient for you to help offset the approximate anticipated emissions of Dell products. To participate in to the program while shopping, simply add the Plant a Tree option to your shopping cart and you’re done. If you are in a business, contact your account representative or select the Plant a Tree option on your Premier page. Also, ask us about recognition certificates available to communicate your important contributions to your employees, customers and other stakeholders.
If you do not live in the U.S., visit Carbonfund.org/dell to participate.
Botnets are proving to be a difficult hurdle for security professionals, and it’s easy to understand why. Distributed Denial of Service attacks that can knock down servers or services, as well as hordes of remote-controlled zombie computers, are two of the most dangerous ways that hackers use botnets to serve their purposes. What can you do to protect your business from botnets?
Botnets are often-malicious groups of computers that have been infected by a malware that allows for command-and-control functionality from a single-host server. Owners of infected computers often can’t tell that their system has been compromised and they don’t find out until it’s too late to do anything about it. The computers can then continue to spread the infection to as many systems as possible, or use the amount of traffic generated to perform a DDoS attack on a specified target. The infected computers relentlessly ping a website or server until it collapses beneath all of the traffic. Some hackers will even use botnets to generate massive revenue via click-throughs on website ads.
One of a botnet’s most dangerous traits is its accessibility. Anyone who wants to take advantage of a botnet can do so with relative ease. For the average user, DDoS-for-hire botnets are popular and available at a reasonable price. The most dangerous part of this is that they require practically no experience whatsoever, making even a would-be hacker a threat. These DDoS botnets have been estimated to be behind up to 40 percent of all attacks on networks.
It’s safe to say that those who partake in these attacks are usually out to cause a bit of chaos, but more powerful, sophisticated botnets are used by government agencies and criminal organizations for various purposes. Attacks of this scale are much more expensive and difficult for the average hacker to use, and the resulting scale of the attack is a testament to this. These botnets can perform DDoS attacks that exceed several GB/second. Corero Network Security found that there has recently been a 25 percent increase in attacks of 10GB/second or higher: unnerving numbers, to say the least.
Rather than one of these immense state-sponsored botnets, you’ll probably be more likely to encounter a typical zombified botnet. Yet, even these are still dangerous, as a botnet will often be sent out into the wild to infect and subvert other computers. One potential use for these botnets is sending spam to spread malware, allowing for the infection of even more systems to bring into the botnet. As the botnet grows, the problem becomes more difficult to deal with.
Botnets and DDoS attacks in general can be challenging to protect against, but your business doesn’t have to face them alone. You can implement enterprise-level security solutions that are designed to keep malware-spreading spam out of your inbox, and with a remote monitoring and maintenance solution, you can have an outsourced pair of eyes on your network traffic at all times. This helps your business focus on operations rather than bracing from an incoming attack.
Nerds That Care can provide your organization with the tools needed to keep these advanced threats at bay. To learn more, reach out to us at 631-648-0026.
Bridge the interpersonal gap.
Interpersonal communications is the means by which we develop meaningful relationships with others. In this course, you'll learn about the interpersonal communication continuum and basic principles of effective interpersonal communication. You'll explore the concept of the communication loop and the interpersonal gap. You'll also learn how physical and personal space affects the way a message is received.
Virtual short courses do not include materials or headsets.
With each new technology that solves an old problem comes a new set of problems. Take electric vehicles, which are still a very small portion of the market in the U.S. with just 96,000 sold in 2013. But that share is growing, up 84 percent from 2012. If electric vehicles eventually become widely popular and almost everyone is plugging their car in after work, some worry it could crash the electric grid around dinner time. A team of scientists from the University of Vermont is looking to solve that potential problem.
The team’s solution would use smart meters and electric grids that can look at the whole picture to ensure that all connected cars are charging in an efficient way that won’t crash the grid. One plugged-in car could, for instance, charge for 10 minutes and then get back into line to allow another car on another part of the grid charge for a while to ensure a safe distribution of electricity.
"The problem of peaks and valleys is becoming more pronounced as we get more intermittent power — wind and solar — in the system," said Paul Hines, assistant professor at the School of Engineering, in a press release. "There is a growing need to smooth out supply and demand."
The team aims for a holistic solution that takes into account all drivers as a group, but for situations when people need power urgently, the scientists made an accommodation.
"We assumed that drivers can decide to choose between urgent and non-urgent charging modes,” the scientists wrote in a report to be published in IEEE Transactions on Smart Grid, a journal of the Institute of Electrical and Electronics. A driver requesting urgent charging would be put to the front of the line, but they would be charged a full-market rate, instead of the reduced rate that non-urgent chargers would receive. | <urn:uuid:f35a1b7b-96d9-4355-b08d-52aa2f272b5e> | CC-MAIN-2017-04 | http://www.govtech.com/transportation/Electric-Vehicles-Could-Crash-the-Grid-Unless.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00313-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970602 | 369 | 3.015625 | 3 |
Do you know what Software Defined Networking (SDN) is? Along with Windows Server 2012 Hyper-V, Microsoft introduced to the world its solution for Software Defined Networks. This provides added advantages to its virtual machine management platform, System Center Virtual Machine Manager (SCVMM) 2012 SP1. Do you want to know the advantages and limitations of SDN in detail? Here is the opportunity to learn more! Savision again comes with pretty useful white papers and webinars which cover everything about Software Defined Networks (SDN) along with SCVMM 2012 SP1. More details about Software Defined Networks here.
One of the major drivers towards the implementation of SDN is “isolation of networks”, along with segregation of networks, cloud administration, and overlapping IP address schemes across clouds. Isolation of networks is very important in cloud platforms. Traditionally, we can achieve isolation via VLANs, but as you know there are loads of limitations to VLAN technology. For more details, click here and download the whitepaper.
5.3.4 What are ISO standards?
The International Organization for Standardization (ISO) is a non-governmental body promoting standardization developments globally. Altogether, ISO is broken down into about 2700 technical committees, subcommittees and working groups. ISO/IEC JTC 1, a joint technical committee of ISO and the International Electrotechnical Commission (IEC), develops the standards for information technology.
One of the more important information technology standards developed by ISO/IEC is ISO/IEC 9798 [ISO92a]. This is an emerging international standard for entity authentication techniques. It consists of five parts. Part 1 is introductory, and Parts 2 and 3 define protocols for entity authentication using secret-key techniques and public-key techniques. Part 4 defines protocols based on cryptographic checksums, and part 5 addresses zero-knowledge techniques.
ISO/IEC 9796 is another ISO standard that defines procedures for digital signature schemes giving message recovery (such as RSA and Rabin-Williams). ISO/IEC International Standard 9594-8 is also published (and is better known) as ITU-T Recommendation X.509, ``Information Technology - Open Systems Interconnection - The Directory: Authentication Framework,'' and is the basic document defining the most widely used form of public-key certificate.
Another example of an ISO/IEC standard is the ISO/IEC 9979 [ISO91] standard defining the procedures for a service that registers cryptographic algorithms. Registering a cryptographic algorithm results in a unique identifier being assigned to it. The registration is achieved via a single organization called the registration authority. The registration authority does not evaluate or make any judgment on the quality of the protection provided.
For more information on ISO, contact their official web site http://www.iso.ch.
Alternative fuels aren't a perfect alternative to gasoline. Here's why. (TechnoRide)
Alternative fuels aren't a perfect alternative to gasoline. They have less energy than gas and cost more; it's improbable that production will be ramped up for more than a fraction of America's vehicles; they have corrosive effects on normal fuel systems; and it's not certain we'll get the technology to work soon.
Popular Mechanics (May) calculated the cost of driving a small car coast to coast on various fuel sources: it ranged from $60, for an all-electric car using coal-fired power plants to generate power, to $804, using hydrogen.
Gasoline was pegged at $231 for the trip, although the run-up in prices since the article was written would bring the cost to around $275.
Ethanol, or grain alcohol, is the special ingredient in gasohol (10 percent ethanol, 90 percent gasoline) and E85 (85 percent ethanol, 15 percent gasoline). It's derived from fermenting corn, apples, or sugar cane (maybe Fidel has held on so long by selling black-market E85?) and it's also how you make moonshine.
Ethanol fuel mixtures burn cleaner than gasoline, and there are about 6 million flexible fuel vehicles (FFVs) in the U.S.
But ethanol as a primary energy source for all cars doesn't add up: an acre of corn produces 300 gallons of ethanol per season, and all the U.S. ethanol refineries last year turned out 4 billion gallons of ethanol, but Americans burned 200 billion gallons of motor fuel. There isn't enough farmland in the U.S. to grow food along with the feedstock for ethanol. According to Popular Mechanics, you'd need to use 675 million of the nation's 938 million acres of farmland to make enough ethanol.
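A quick back-of-envelope check of those figures; the constants come from the article, and the energy adjustment uses the two-thirds energy-content figure cited in the next paragraph:

```ts
// Rough check of the acreage math quoted above.
const GALLONS_PER_ACRE = 300;        // ethanol yield per acre per season
const US_MOTOR_FUEL_GALLONS = 200e9; // gallons burned last year, per the article
const US_FARMLAND_ACRES = 938e6;

const acresGallonForGallon = US_MOTOR_FUEL_GALLONS / GALLONS_PER_ACRE; // ~667 million
const acresAdjustedForEnergy = acresGallonForGallon / (2 / 3);         // ~1 billion

console.log(acresGallonForGallon / US_FARMLAND_ACRES);   // ~0.71 of all U.S. farmland
console.log(acresAdjustedForEnergy / US_FARMLAND_ACRES); // ~1.07: more than exists
```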
In addition, ethanol is corrosive and has only two-thirds the energy of gasoline. Some critics even say ethanol is energy-negative (in other words, more energy is expended farming, harvesting, fermenting, and transporting it than it delivers as fuel), though the Department of Energy says the process is about 35 percent positive.
Methanol is ethanol's poor cousin. Also called methyl alcohol or wood alcohol, it's poisonous, has only half the energy of gasoline, and is much more corrosive than gas on fuel tanks and fittings.
Methanol can be made from a variety of sources. Most typically, natural gas is converted to methane and then into methanol. But sewage, manure, landfill emissions, coal, sawdust, grass clippings, and other plants can also be used. In FFVs, methanol is mixed with gasoline, often to make M85 (85 percent methanol).
Read the full story on TechnoRide: Alternative Fuels: A Primer
The Energy Department’s Advanced Research Projects Agency-Energy (ARPA-E) recently announced up to $32 million in funding for 10 innovative projects as part of its newest program: Next-Generation Energy Technologies for Connected and Autonomous On-Road Vehicles (NEXTCAR).
Connected and automated vehicle (CAV) technology utilizes on-board or cloud-based sensors, data, and computational capabilities to help a vehicle better process and react to its surrounding environment. With a goal of reducing individual vehicle energy usage by 20 percent, NEXTCAR projects will take advantage of the complex and connected systems in vehicles to drastically improve their energy efficiency.
“Today, cars and trucks are increasingly being outfitted with new technology that provides information about the vehicle’s environment, mostly to make them safer and to help drivers with basic tasks,” said ARPA-E Director Ellen D. Williams.
NEXTCAR technologies offer efficiency-boosting solutions. By integrating these systems with data from emerging CAV technologies, vehicles will be able to predict future driving conditions and events.
“As our vehicles become creators and consumers of more and more data, we have a transformative opportunity to put that new information to the additional use of saving energy in our road transportation system,” said Williams.
Many of the current Cisco certifications require a working knowledge of IPv6. As you begin your studies of this next-generation Internet Layer protocol for packet-switched internetworks, I thought it might be interesting to share some of the background on this successor to IPv4 with you.
Internet Protocol version 4 (IPv4) has been at the core of standards-based internetworking on the Internet for decades, and is the protocol with which everyone is most familiar. IPv4 was originally described in IETF publication RFC 791 in September 1981. At about the same time, the Department of Defense standardized it as MIL-STD-1777.
Because IPv4 was designed to use a 32-bit word to define an IP address, it was limited to providing 4,294,967,296 unique layer-three logical addresses. At the time of its acceptance as a standard, it was thought that this was sufficient address space for any future network needs.
However, the originators of this protocol had no possible way of knowing what the Internet and the World Wide Web would evolve into in future years. In addition to reserving many possible addresses for special purposes such as private addresses (approximately 18 million addresses) or multicast addresses (approximately 268 million addresses), it was obvious they never envisioned the number of addresses that would be required to support YouTube, Twitter, and eBay.
Since the 1980s, it has become apparent to everyone that the unique IPv4 addresses available to support new mobile devices, always-on devices such as DSL and cable modems, and the rapidly expanding number of personal and business Internet users were being exhausted at an alarming rate, a rate never anticipated in the original design of the network.
In early 1992, several proposed solutions were circulated to overcome this serious situation that, if not addressed, would bring an end to Internet growth and the introduction of new systems and applications. By the end of 1992, the Internet Engineering Task Force (IETF) announced a call for white papers and the creation of the “IP Next Generation” (IPng) working groups. And, by 1996, a series of RFCs had been released that defined Internet Protocol Version 6 (IPv6), starting with RFC 1883 (later obsoleted by RFC 2460).
One of the most important features of IPv6, which directly solves the problem of Layer-Three address exhaustion in IPv4, is the availability of a much larger address space. Instead of a 32-bit word, IPv6 addresses are identified with a 128-bit word. And, to put this new address space into perspective, IPv6 will provide approximately 2 to the 95th power addresses for each of the roughly 6.5 billion people alive today. It is estimated that IPv6 will provide all of the needed unique addresses for many, many years to come.
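These figures are easy to verify at a shell prompt with the bc arbitrary-precision calculator, for example:

$ echo '2^32' | bc                # the IPv4 address space
4294967296
$ echo '2^128' | bc               # the IPv6 address space
340282366920938463463374607431768211456
$ echo '2^95' | bc                # the per-person share quoted above
39614081257132168796771975168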
It must be recognized that the need to ensure geographical saturation with a ready supply of addresses was an important driver for the creation of this new protocol. However, it was not the sole consideration. The larger address space allows a more systematic and hierarchical allocation of addresses and more efficient route aggregation. Network management and routing have been designed to be much more efficient than with IPv4.
As you pursue your studies for the CCNA exam, you will need to become familiar with IPv6: global unicast addressing, routing, and subnetting, protocols and addressing, configuring routing and routing protocols, and transition options.
Author: David Stahl
The biggest hurdle we face with implantable chips is that they need to run off of a limited power supply that inevitably needs to be replaced. MIT engineers may have cracked this problem with a new implantable fuel cell that runs off the same sugars you got from your morning bowl of Wheaties.
MIT's fuel cell is built upon a silicon wafer laced with a platinum catalyst that strips electrons from glucose. The chip is able to draw energy from the same sugars that your cells' mitochondria digest to generate the adenosine triphosphate (ATP) that powers your cells.
So far, the prototype fuel cell can generate hundreds of microwatts, which happens to be enough to power an ultra-low-power neural implant. These fuel cells could potentially be integrated into long-term brain implants that help the disabled with "brain-control" interfaces or a neural prosthesis.
The team calculated that the chips would be able to draw all the energy they need from the cerebrospinal fluid (CSF) that bathes the human brain. According to the researchers, CSF contains very few cells that would provoke an immune response. The fluid also carries more than enough energy to power the chip and your brain cells at the same time.
The engineers say that their research is still years away from real medical use, but their proof of concept is a good first step toward developing an implant that does not require an external power source. The next step for the research will be to demonstrate that the system can work in a living animal.
This story, "MIT develops a fuel cell implant that runs on sugar, turns carbs into electricity" was originally published by PCWorld.
The European Union (EU) is pledging 1 billion euros on a set of advanced computer technologies, including a supercomputing network, for predicting the future. But in this case, it's not about forecasting climate change, finding marauding asteroids, or determining the ultimate fate of the universe. Rather, it is specifically designed to forecast social and economic events, in particular crisis events.
The supercomputing side of the effort has been given the grand title of the Living Earth Simulator (LES). The LES is part of the FuturICT Knowledge Accelerator Project, which also encompasses a global sensor network (the Planetary Nervous System) and a framework for individuals and organizations to share data (the Global Participatory Platform). To fulfill its mission as a planetary crystal ball, the LES system will gather information from the sensor network, Twitter, web news searches, and a wide variety of other real-time sources, and try to tease out higher-level societal trends. The overarching idea is to apply supercomputing technology and this real-time data feed to social systems in much the same manner as has been done for physical systems.
Here’s how the FuturICT site describes the goals of the project:
The FuturICT project will produce benefits for science, technology and society by integrating previously separated approaches. ICT systems of the future will provide the social sciences with the datasets needed to make major breakthroughs in our understanding of the principles that make socially interactive systems work well. This, in turn, will inspire the design of future systems, made up of billions of interacting, intelligent components capable of partially autonomous decisions. One goal is the creation of a privacy-respecting, reputation-oriented, and self-regulating information ecosystem that promotes the co-evolution of ICT with society. The tremendous growth in social media, mobile applications, Open Data and Big Data will enable complexity science to tackle practical problems by uncovering laws of interaction and help us understand the implications of strong couplings, thereby forging a new science of global systems that are more resilient to disruptions.
If this all sounds ‘Blue Sky,’ it should be noted that this is a long range effort. The 1 billion euros in funding for the project will be spread over 10 years, that is, assuming the euro and the EU are still around in a decade. Speaking of which, a supercomputer to predict the fate of EU finances would be a welcome tool today, given the developing economic crisis in Europe over the solvency of some of the EU member nations.
The system could also be used to identify political unrest, the spread of epidemics, economic bubbles, and other types of systematic instabilities. About 30 computer science centers around the world have pledged support for the project. Most of these centers like ETH Zurich and Oxford University are in Europe, but there is also buy-in from groups in the USA, Japan, China, and Australia. In addition, the project is getting backing from commercial organizations, including Microsoft Research, IBM, Telecom Italia, Yahoo! Research, and Disney Research, among others.
For a deeper dive on the project, check out the FuturICT site.
Poultry Production, Past and Present
By Larry Barrett | Posted 2002-07-10
Tyson has cut 17 days from the lifespans of its chickens in the past 40 years. The modern chicken farm has come a long way from its humble, low-tech roots.
The fact that chicken production and consumption has increased nine-fold in the past 40 years while the average cost per pound to produce chickens (factoring in inflation) has declined eight-fold demonstrates just how far the modern chicken farm has come from humble, low-tech roots.
Over the past 50 years, chicken producers and farmers have done everything they can to reduce the costs and time it takes to grow a commercial chicken on the farm and process it into ready-to-serve pieces. Now the emphasis is on the supply chain and how to use technology such as an electronic exchange network to get a better read on demand for today's more specialized, ready-to-serve products.
While many of the advances made in the chicken industry over the past 100 years were mainly a product of trial and error, information systems have stepped into the breach to deliver the two most important elements needed for any mass production industry: speed to market and the reduction of manufacturing costs through efficiency.
"The technology has changed everything about chickens," said Brian Sheldon, a poultry science professor at North Carolina State University. "At every step along the way, computers and manufacturing equipment are making it possible to yield more chicken, and better chicken, in a shorter period of time without having much effect at all on the price per pound."
Virtually every chicken produced in the U.S. comes from contract farmers who enter into partnerships with major chicken processors and distributors such as Tyson Foods, Gold Kist, Pilgrim's Pride and ConAgra Poultry. In fact, this quartet produced more than 48% of all the chicken sold in the U.S. last year.
This form of vertical integration is perhaps the most important factor in reducing manufacturing costs in the past four decades by trimming transaction costs, increasing capacity utilization and providing better quality control as well as a more uniform product.
"For example, a Tyson farmer will know before the chicks arrive from the hatchery whether they are destined to be used for Chicken McNuggets at McDonald's or breaded breast filets for Costco," said Nicholas Anthony, a professor of poultry science at the University of Arkansas.
This system also virtually eliminates the costly possibility of either an oversupply or shortage of chickens for the manufacturer and, in turn, the eater of food. Combined with the electronic exchange networks, chicken manufacturers eventually will have a crystal-clear view of the level of demand from distributors and retailers and can adjust their farming specifications and schedules accordingly.
In order to protect your assets, you must first know what they are, where they are, and understand how they are tracked and managed. Are they secured? Who has access to them? Who tracks and manages them? Do you have functional procedures in place to respond and recover from a security breach quickly? Do you have a process improvement cycle to prevent re-occurrence?
These are all important issues related to assets. It’s important to remember what an asset is — it’s anything used in a business task. Generally, asset protection involves identification of assets, assessment of an asset’s value, and a determination of the technologies needed to provide sufficient security for that asset. There are many facets to the job of asset security including:
- Cloud Computing
- Virtualization
- Secure Coding
- Identity Management
- Information Assurance
- Public Key Infrastructure
The cloud offers computing services as a commodity. This involves a wide range of capabilities including online storage and backup, virtual/remote desktop, collaboration services, software as a service, platform as a service, and infrastructure as a service. Popular services include online office productivity (such as Google Docs or Office 365), computing services for custom applications (such as Engine Yard or Windows Azure), or complete back-end scalable datacenters (such as GoGrid or Rackspace). While cloud computing can greatly benefit an organization, it also introduces new and unique security concerns.
Cloud services are at odds with some regulations and security standards. Each organization is responsible for their own compliance of issues like prohibition of comingling of certain data types, hardware types, or data locations. Also, traffic flow must be understood. Is your sensitive and critical data encrypted in transit and while stored/processed in the cloud? Who has access to the encryption keys? What procedures are in place to manage ease of access, recovery options, downtime concerns, backup, privacy protections, and speed of interaction and throughput? Cloud computing revolutionizes technology. The benefits and drawbacks need to be considered carefully before shifting aspects of your infrastructure into the cloud.
Virtualization is the creation and/or support of a simulated copy of a real machine or environment. Virtualization can be used to provide virtual hardware platforms, operating systems/platforms, storage capacity, network resources, and applications. Virtualization can also be used to host applications on a different OS than the one they were originally designed for, or to allow a single set of server hardware to host several server operating systems in memory simultaneously. Virtualization offers the benefits of lower hardware costs, reduced operating costs, efficient backup/restoration, high availability, portability of services, faster deployment, scalability, and more. Virtualization adds security to the computing environment by permitting servers to be logically separated from each other. However, virtualization can cause problems with licensing, patch management, and regulatory compliance, and it may bring slower performance of services, a greater potential for single points of failure, and security concerns due to hardware re-use or sharing.
Secure coding practices are essential to reducing the threat caused by the exploitation of processes, bad/poor coding, and flaws in design. Secure coding includes the consideration of appropriate controls at the onset of development, proper consideration given to design, robust code and error routines, minimizing verbose error messages, eliminating programmer back doors, bounds checking, input validation, separation of duties, and comprehensive change management. Failure to use secure coding practices leads to software that is susceptible to buffer overflow attacks, DoS attacks, and malicious code injection attacks. Non-robust code can also provide a path for database and command injection attacks.
Secure coding practices can include many aspects of secure design integration and attack prevention. For example, software can be designed to authenticate all resource requests and processing actions before allowing a task to operate. Additionally, limiting and sanitizing input to prevent scripting, meta-character, and command injection attacks is an essential part of secure coding. Secure coding is more than just a few extra lines of code; it is an entire process and architecture of software development.
Secure coding is an essential security practice not just for vendors that sell/release products to the world-wide market but also for internal software developers that develop code for use exclusively by internal users or which is exposed to the world via an Internet service. One of the biggest mistakes companies make in relationship to the Internet is assuming their Internet servers are secure and cannot be compromised, and if they were ever compromised it would not lead to serious consequences or a breach of their private network. This is usually a poor assumption. With the growing popularity of fuzzing tools to find coding errors, the proliferation and distribution of buffer overflow exploit code, and with several variants of code injection attacks (including SQL, command, XML, LDAP, SIP, etc.), no Internet service can ever be assumed to be immune from breach.
Companies collect a lot of customer and employee data. Identity management involves the protection of all personally identifiable information (PII). This protection includes proper classification of information, delineation of the lines of communication, and strict policies and procedures for access control. Accountability is a key requirement to hold all information requestors (‘subjects’, both internal users and outside attackers) liable for their actions.
Credentials are a popular form of PII subject to attack. All repositories of personal information, access channels to those repositories, and exchange of information with those repositories needs to be protected with strong authentication and encryption. Today’s sharing of information, transient locations of data repositories, and society’s acceptance of weak authentication set the stage for transitive attacks. Transitive attacks occur when a trust is allowed without realizing that it included other trusts that you were unaware of, and that can defeat your security.
Information assurance satisfies management’s desire for a given security profile, indicating that all data is properly protected and able to be accepted as accurate and readily available. The set of processes needed to support this assurance requires the establishment of a reliable means to lock down assets and track their usage. Specifically, information assurance is focused on the security of data or information typically stored in files. It is important to properly manage the risk of using, processing, transmitting, and storing these data files. Secure data management addresses not just electronic or digital issues, but physical storage media (especially portable media) as well.
Public Key Infrastructure
Public Key Infrastructure (PKI) is a security framework and is generally comprised of four main components: symmetric encryption, asymmetric encryption (often public key cryptography), hashing, and a reliable method of authentication. Symmetric encryption is used for bulk encryption for storage or transmission of information. Asymmetric encryption is used for digital signatures and digital envelopes (i.e., secure exchange of symmetric keys). Hashing is used to check and verify integrity.
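As a rough illustration, the first three of these components can each be exercised from the command line with the openssl tool (a sketch only; the file names are placeholders):

$ # Symmetric encryption of a file (bulk encryption)
$ openssl enc -aes-256-cbc -salt -in data.txt -out data.enc

$ # Hashing to check and verify integrity
$ openssl dgst -sha256 data.txt

$ # Asymmetric key pair, then a digital signature and its verification
$ openssl genrsa -out private.pem 2048
$ openssl rsa -in private.pem -pubout -out public.pem
$ openssl dgst -sha256 -sign private.pem -out data.sig data.txt
$ openssl dgst -sha256 -verify public.pem -signature data.sig data.txt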
How will you assure that reliable authentication is used, so that only valid entities participate in the PKI environment and that secure key delivery, secure key use, and key revocation are enforced? Customers’ belief in the credibility of certificates, and therefore the security of transactions with your website, depends on the reputation and reliability of the CA. Due to recent attacks by hackers on certificate authorities, blind use of digital certificates has been called into question. As with any protection measure, companies need to understand what PKI technology affords us in terms of protection, as well as be cognizant of the technology’s limitations and vulnerabilities.
Those juicy tidbits gleaned from one-sided cellphone conversations you happen to overhear are even more distracting than those juicy tidbits gleaned from accidentally overhearing conversations between two nearby people, according to University of San Diego researchers.
While on the surface this might seem like a frivolous piece of research, consider that such distractions could impact anyone from a bus driver to an office worker handling sensitive information.
The research, detailed in a paper titled "The Effects of Cell Phone Conversations on the Attention and Memory of Bystanders" that's been published in the journal PLOS ONE, is based on a study in which participants were asked to complete a task not realizing conversations taking place in the background were part of the study. It turns out that those participating in the study found cell phone conversations not only more distracting, but also remembered more of the conversations.
Lead researcher Veronica Galvan says, "This is the first study to use a realistic situation to show that overhearing a cell phone conversation is a uniquely intrusive and memorable event. We were interested in studying this topic since cell phone conversations are so pervasive and could impact bystanders to those conversations at work and in other settings of everyday life."
Unintentional eavesdropping on cellphone conversations requires the listener to work a bit harder, guessing at what the person on the other end of the call is saying...
Many countries are drafting domestic policies to combat cyber attacks and cyber crime, but the larger question is what can be done on the multilateral level since the digital world routinely ignores national boundaries.
One measure of the problem is provided by the 2011 Symantec survey on the scale of cyber crime, showing that the annual cost of cyber crime to individuals in 24 major countries is $114 billion.
But, so far, international initiatives are plagued by the lack of agreed upon frameworks, institutions and procedures. Below, a few examples—far from a complete list—of the organizations and initiatives dealing with cybersecurity on the multilateral level:
- Perhaps the largest player in the international cybersecurity arena is the International Telecommunication Union (ITU). A United Nations organization comprised of 193 UN member states and over 700 private companies and organizations, the ITU seeks to create guidelines and frameworks for international initiatives. ITU facilitates the World Summit on the Information Society (WSIS) and the Global Cybersecurity Agenda (GCA). It also drafts UN General Assembly resolutions concerning information security and criminal utilization of information technology. ITU initiatives are voluntary and merely provide guidelines, serving as a foundation for customary international law, which means they lack a concrete legal framework. Still, they do serve to raise awareness on cybersecurity issues, which is an essential prerequisite for international action.
- The Asia-Pacific Economic Cooperation (APEC) is a working group of 21 nations, which includes Australia, Canada, China, Japan, Mexico, Russia, Taiwan and the United States. In 2002 APEC created the Shanghai Declaration Program of Action, which illustrates the potential for intelligence sharing and cybersecurity defense through regional partnerships. However, there’s still a lack of clear policy statements to promote cooperation, and the organization has failed to meet the Bogor goals set forth in 1994.
- The European Network and Information Security Agency (ENISA) is a working group tasked with protecting the critical information systems of European Union member states through prevention and reaction to attacks on these critical systems. The prevention measures are focused on raising awareness and information sharing.
- The CERT-EU (the Computer Emergency Response Team of the European Union institutions) is tasked with responding to cyber attacks on information systems of EU member states. But CERTs often get overloaded with calls and, as a result, responses are frequently delayed. Such delays and call-center overload illustrate the larger challenges of providing adequate funding and member state commitment within this regional organization.
- Cybersecurity is also an issue under discussion within the NATO-Russia Council, as both sides have expressed interest in possible cooperation. However, there are frequent disagreements over definitions, language and terminology. Russia considers “cyber attacks” to be a military issue while the U.S. sees them as criminal activity. The U.S. uses the term “cybersecurity” for what Russia calls “information security.” The two countries also have very different notions of what constitutes Internet censorship.
EWI’s experiences hosting international cybersecurity summits and leading bilateral Russia-U.S. and China-U.S. efforts have demonstrated that progress on the multilateral level is possible—but also can be hindered by mistrust.
To ensure further progress, all sides need to place a greater emphasis on building up trust as they pursue the common goal of a safer, more secure digital world.
Using the Anturis Console, you can set up monitoring of CPU usage for any hardware component (a server computer) in your infrastructure by adding the CPU Usage monitor to the component.
The central processing unit (CPU) is the hardware that performs arithmetical, logical, and input/output (I/O) operations. The first two types of operations are performed by the arithmetic logic unit (ALU), while the latter are carried out by the control unit (CU), which reads instructions from the main memory and writes the result back. The speed at which a CPU executes instructions is called the clock rate. Most computers today have either multiple CPUs or CPUs with multiple cores (multicore processors). These can execute operations in parallel.
The CPU is always either executing operations (busy), or waiting for input (idle). CPU usage is defined as the percentage of time that the CPU is busy. For a computer that performs some sort of calculations, you want CPU usage to be close to 100%, because this ensures maximum efficiency. However, in case of a web server or an email server, the number of operations can vary depending on the number of client requests that need to be processed. If the number of requests exceeds the maximum that the CPU is capable of, then requests will be placed in a queue, thus increasing response time for clients. In this case, you want to have room for occasional peaks of CPU activity.
CPU usage is calculated over a sample period. For example, if CPU usage of 70% is measured over two seconds, this can really mean that the CPU was 100% busy for one second, and then 40% busy for another second. Such short bursts of activity with 100% usage are not critical, because the queue is freed up quickly, but longer periods of high CPU usage can indicate a possible problem. To understand if peaks of CPU activity are causing issues, you can use the CPU Load monitor that shows how many processes are waiting in the queue for execution.
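On a Linux server, you can watch this sample-period calculation in action with a short script that reads /proc/stat twice (a simplified sketch: it ignores iowait and the other fields, so the percentages are approximate):

#!/bin/bash
# First sample: on the "cpu" line, the fields after the label are
# user, nice, system and idle time, measured in clock ticks
read -r cpu user nice system idle rest < /proc/stat
total1=$((user + nice + system + idle))
idle1=$idle
sleep 2
# Second sample, two seconds later
read -r cpu user nice system idle rest < /proc/stat
total2=$((user + nice + system + idle))
idle2=$idle
# Busy percentage = share of elapsed ticks that were not idle
busy=$(( 100 * ((total2 - total1) - (idle2 - idle1)) / (total2 - total1) ))
echo "CPU usage over the 2-second sample: ${busy}%"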
Monitoring CPU usage can help you analyse the average and peak values to decide if you need to increase the number or speed of the CPUs on the server. Depending on your conditions, configuration, and requirements, the critical value of CPU usage, and the period over which CPU usage is high, can vary. You have to exercise good judgement of these parameters based on the monitoring data, in order to ensure a high level of performance.
Using optical break locators and OTDRs
An optical time-domain reflectometer, or OTDR, is an instrument used to analyze optical fiber. It sends a series of light pulses into the fiber under test and analyzes the light that is scattered and reflected back. These reflections are caused by faults such as breaks, splices, connectors, and adapters along the length of the fiber. The OTDR is able to estimate the overall length, attenuation or loss, and distance to faults. It’s also able to “see” past many of these “events” and display the results. The user is then able to see all the events along the length of the fiber run.
However, OTDRs do have a weakness: a blind spot that prevents them from seeing faults in the beginning of the fiber cable under test. To compensate for this, fiber launch boxes are used. Launch boxes come in predetermined lengths and connector types. These lengths of fiber enable you to compensate for this blind spot and analyze the length of fiber without missing any faults that may be in the first 10–30 meters of the cable.
An optical break locator, or OBL, is a simplified version of an OTDR. It’s able to detect high-loss events in the fiber such as breaks and determine the distance to the break. OBLs are much simpler to use than an OTDR and require no special training. However, there are limitations. They can only see to the first fault or event and do not display information on the portion of fiber after this event.
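For either instrument, the distance estimate is simple time-of-flight arithmetic: halve the round-trip time of the reflection and multiply by the speed of light in the fiber, which is the speed of light in vacuum divided by the fiber's group index of refraction (about 1.47 for typical single-mode fiber; an assumed value here). A quick sketch with the bc calculator:

$ # distance (m) = (c / n) * round-trip time / 2, for a 1-microsecond round trip
$ echo 'scale=1; (299792458 / 1.47) * 0.000001 / 2' | bc

That one-microsecond round trip works out to roughly 102 meters of fiber.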
Even Satellites Lacked Sufficient Reach
By David F. Carr | Posted 2005-05-04
It's nearly half a year since a tsunami reduced the coasts of 11 countries abutting the Indian Ocean to rubble. Roads, bridges and houses still need to be rebuilt. Some relief workers think military and humanitarian organizations could use cheap, portable wireless networks.
Strong Angel II had run on a free wireless network, connected by satellite to the Internet. Something like that would have been very useful in Banda Aceh. In fact, the French organization Télécoms Sans Frontières (Telecommunications Without Borders) was trying to set it up, but the satellite connection it shared was limited to the bandwidth of a dial-up modem.
The U.N. was having its own problems establishing a high-bandwidth satellite connection.
One problem, Snoad says, was that communications kits for rapid deployment were optimized to work with the satellites over Africa, where the U.N. has seen the most frequent humanitarian emergencies in recent years. The kits were not adjusted for satellites hovering above the Indian Ocean.
Meanwhile, the U.S. had brought plenty of bandwidth ashore, but it was not the kind of bandwidth that could be shared with outsiders.
Limited to a single military transport plane for his gear, the Marine Corps officer setting up the onshore network brought communications equipment that supported the military's classified Internet protocol network, SIPRNet—which is only meant to create a secure, private network.
No one could fault him for following standard operating procedures, Engle says, since maximum security is the military's default mode. But one recommendation he made upon his return was that the military develop a different "fly-away kit" of network equipment for use in humanitarian operations, where information sharing is more important.
"Other than food and water, communications infrastructure should be one of the first things considered," he wrote in his preliminary report to the Pentagon on Feb. 16.
Back on Jan. 8, when the observer team stopped at the joint command center in Utapao, Thailand, en route to Indonesia, Engle was struck by the lack of what the military calls "situational awareness."
Generally, that refers to a military commander's understanding of what is happening on a battlefield, based on the information available at a particular moment. For example, the Navy uses its WebCOP software to give its crews a "common operational picture" from wherever they are, through a Web browser. The system constantly updates maps of a given region, superimposing locations of friendly and enemy forces, to help commanders make better decisions.
A humanitarian mission ideally would have a common operational picture of where food, water, transport planes and trucks are, and where they needed to go. What Engle instead saw at this military command center was a single PowerPoint slide. Projected on the wall was the status of available transport aircraft, among other items. But the data was static, not updated on the spot.
"I would have expected more, frankly," Engle says.
But the U.S. military's battlefield collaboration and decision-support tools aren't well suited to a humanitarian operation, where information needs to be shared broadly. Tools like WebCOP are designed for use in command centers with high-bandwidth networks, whereas the ideal situational awareness tool for a humanitarian operation would be accessible to team leaders in the field with only sketchy network access.
This was essentially what the Strong Angel team tried to construct in Groove Virtual Office, and which Rasmussen and his collaborators from Groove transformed into a virtual emergency operations center after the tsunami.
There are quite a few ways to send email from the Unix command line and so many good reasons for doing so. You might want to provide feedback on automated processes so that you and fellow admins know about problems long before they start having an effect. Or you might want to monitor critical processes or system resources. Whatever the motivation, alerts delivered via email provide a much more efficient method of managing systems than manually scanning through log files and checking on running processes. Taking advantage of some very easy-to-use scripting techniques, you might use automated email to report on disk space, send performance reports, tell you that a critical process had to be restarted, show who is logging in (and how frequently), report unusual events, and watch over a wide range of issues that need a sysadmin's attention.
A good sysadmin rule of thumb to consider: Make the problems announce themselves and you won't have to go looking for them.
The various tools that you can use for sending email from the command line include these, though only some of these commands lend themselves to sending email from a script.
- mail and mailx
The mail and mailx commands are pretty much the same. In fact, one is often simply a symbolic link to the other. Years ago, mailx might have been the best command to use when automating email alerts, and it was easy to include a subject line with your messages.
Here are some mailx examples, one with the entire message composed with an echo command and one that is sending a file. The first of these just sends a message. The second adds the output of a sar command (the output of which has been stored in a file) to the message.
$ echo "This is my message for you-oo-oo" | mailx -s "Don't worry" firstname.lastname@example.org
$ mailx -s "`hostname` performance" email@example.com < /tmp/sar-report
You can also use sendmail to send messages, but adding a subject line is a bit harder. Here we're sending the output of a sar command.
$ ( echo "Subject: `hostname` performance"; echo; cat sar-report ) | sendmail -v firstname.lastname@example.org
And, if you don't want to see the sendmail commands going back and forth, add " > /dev/null" to the end of that to hide it from view.
$ ( echo "Subject: `hostname` performance"; echo; cat sar-report ) | sendmail -v email@example.com > /dev/null
You can use telnet to connect to a remote mail server and send email by replicating the commands that mail servers use to talk to each other, but this process is generally interactive and a lot more work than the other commands. You have to enter commands like "HELO myorg.com", "mail from: firstname.lastname@example.org", and "rcpt to: email@example.com". That's a great way to see how servers chat with each other, but not a good technique to use with scripts.
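For reference, a manual session looks roughly like this once you're connected (server responses omitted; the mail server host and the addresses are placeholders):

$ telnet mailserver.example.com 25
HELO myorg.com
MAIL FROM: <firstname.lastname@example.org>
RCPT TO: <email@example.com>
DATA
Subject: test message

This is the message body.
.
QUIT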
Another command that you can use is mutt. Mutt is an email client, but can be used to send email as well. In this case, it works a lot like mailx. Either of these commands would work. Note that mutt in these examples is a drop-in for mailx.
$ mutt -s "`hostname` performance" firstname.lastname@example.org < /tmp/sar-report
$ cat /tmp/sar-report | mutt -s "`hostname` performance" email@example.com
The script below uses mailx to send sar output via email. How it differs from the simple commands shown above is that it sends only the daily average from as many sar data files as happen to be sitting in the /var/log/sa directory. It then labels each line in the output with the appropriate day of the week. Using a script like this, you could see the average performance of a system for the past week or more every day if this command were run from cron.
#!/bin/bash
# Mail a labeled summary of daily sar averages. The recipient
# address is a placeholder; adjust it for your site.
recip=sysadmin@example.com
sys=`hostname`
report=/tmp/sar-report$$

# Day of week as 0-6, with 0 = Sunday
dayOfWk=`date +%w`
if [ $dayOfWk == 0 ]; then
    start=6
else
    start=`expr $dayOfWk - 1`
fi
next=$start

echo "            CPU     %user     %nice   %system   %iowait    %steal     %idle" > $report

# Walk the sar data files from oldest to newest, replacing each
# "Average" label with the day of the week it represents
for file in `ls -tr /var/log/sa | grep -v sar`
do
    if [ $next == 7 ]; then
        next=0
    fi
    case $next in
        0) day="Sunday...";;
        1) day="Monday...";;
        2) day="Tuesday..";;
        3) day="Wednesday";;
        4) day="Thursday.";;
        5) day="Friday...";;
        6) day="Saturday.";;
    esac
    sar -f /var/log/sa/$file | grep Average | sed "s/Average/$day/" >> $report
    next=`expr $next + 1`
done

cat $report | mailx -s "$sys Performance Summary" $recip
rm $report
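If you want this summary mailed out automatically, the script can be run from cron, for example every morning at 6:00 (the script path below is just a placeholder):

0 6 * * * /usr/local/bin/sar-summary.sh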
Once you get used to sending the results of automated checks via email, you can sit back and wait for the problems you didn't anticipate, knowing that the ones you're watching out for will announce themselves.
Ndiff is a tool that can be used to compare two Nmap scan files and highlight any changes between them. In order to compare the scans, the Nmap files must be saved in text or XML format. Ndiff will point out the differences between them for easy comparison by using plus and minus signs.
Let's say that we want to compare two scans of a single host. We will use the -oX option with a filename, which will save the Nmap output in an XML file.
As we can see from the first scan, the host has only two ports open, while in the second it has 5. Now let's try to compare these two results with Ndiff. The comparison can be done very easily just by using the command ndiff [filename.xml filename2.xml].
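With placeholder file names and a placeholder target address, the whole sequence might look like this:

$ nmap -oX filename.xml 192.168.1.10       # first scan, saved in XML format
$ nmap -oX filename2.xml 192.168.1.10      # second scan of the same host
$ ndiff filename.xml filename2.xml         # report only what changed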
The comparison output illustrates the differences between these two scans that we have conducted on the same host. The plus sign (+) highlights the differences of the second file in relation to the first, while the minus sign (-) indicates the differences of the first file in comparison with the second. Specifically, in the example above we can see that ports 135, 1111 and 3389 have the plus sign, which means that in the second scan these ports were found open while in the first scan they were closed.
Alternatively, we can use the -v option (verbose mode), which will display all the output of these two XML files and highlight the differences with the plus and minus signs as before.
Ndiff also provides the ability to produce the results in XML output with the --xml option. This option is useful in cases where we want to import the information from Ndiff into a third-party tool that uses this format.
Supercomputers, and the people who run them, are the rock stars of the science and engineering world, enabling discoveries and facilitating crucial insights on some of the most challenging problems facing humanity. One of the world’s most powerful computing systems is Stampede, a key resource of the Texas Advanced Computing Center (TACC) that was funded by the National Science Foundation (NSF). This open science research tool is also a cornerstone of the NSF’s strategy to provide American scientists with a first-class cyberinfrastructure.
“Sometimes, the laboratory just won’t cut it,” notes NSF science writer Aaron Dubrow.
“After all,” he continues, “you can’t recreate an exploding star, manipulate quarks or forecast the climate in the lab. In cases like these, scientists rely on supercomputing simulations to capture the physical reality of these phenomena – minus the extraordinary cost, dangerous temperatures or millennium-long wait times.
“When faced with an unsolvable problem, researchers at universities and labs across the United States set up virtual models, determine the initial conditions for their simulations – the weather in advance of an impending storm, the configurations of a drug molecule binding to an HIV virus, the dynamics of a distant dying star – and press compute.”
The Stampede supercomputer and others like it are an indispensable part of the scientific process. These beasts of computational burden rely on thousands of multicore processors to execute workloads in minutes or hours instead of weeks and months, and in doing so, help solve our biggest challenges and toughest scientific questions.
Stampede went into operation in January 2013. The 8.5 petaflop (peak) system currently ranks number 7 in the TOP500 list of fastest supercomputers, which it achieved with a measured LINPACK of 5.2 petaflops. At any given moment, Stampede is crunching hundreds of separate workloads at once. In its first year, Stampede completed nearly 2.2 million jobs by 3,400 researchers, and supported more than 1,700 distinct science projects. The range of research that’s been enabled includes more accurate DNA sequencing, cutting-edge astrophysics, novel biofuel development and colloidal gel simulations.
Built by Intel, Dell and Mellanox, Stampede is one of the first supercomputers to employ both standard Intel Xeon E5 CPUs as well as Intel Xeon Phi coprocessors. The advantage of the Phi is that it performs a lot of calculations using less energy.
Says TACC’s Dan Stanzione: “The Xeon Phi is Intel’s approach to changing these power and performance curves by giving us simpler cores with a simpler architecture but a lot more of them in the same size package.”
While high-performance computing has primarily been concerned with exponential performance increases, that is no longer a tenable strategy going forward. Getting to the next big goalpost – exascale – requires a shift in design to emphasize performance-per-watt and efficient data movement over pure performance. One way to conserve energy is to employ less-performant, manycore chips. The community’s embrace of accelerators, GPUs from AMD and NVIDIA and the Intel MIC coprocesssor, speaks to this new reality.
“The exciting part is that MIC and GPU foreshadow what will be on the CPU in the future,” Stanzione said. “The work that scientists are putting in now to optimize codes for these processors will pay off. It’s not whether you should adopt them; it’s whether you want to get a jump on the future.”
Phi integration on Stampede is evolving in stages. Currently the Phi coprocessors represent about 10-20 percent of the usage of the system. Researchers are using the Phi chips for the development of flu vaccines, atomic simulations in the domain of particle physics, and weather forecasting.
Sci-fi movies have warned us again and again: Sooner or later, our technology will destroy us.
The moment will come when machines become so smart, they will become a force for destruction rather than the engine of our general betterment.
Will it be the smart energy grid that pushes us over the line? There is growing concern that the automated, intelligent interplay between elements of the power grid could produce new and deeply hazardous vulnerabilities.
Consider first the upside. Smart grid technology enables two-way communication between disparate elements of the power generation, transmission and distribution chain. Constant feedback allows the system to detect and respond to local changes. As a result, lost power can be restored more quickly; systems can respond to peak demands; renewable sources can be integrated into conventional systems more easily.
“The downside is that with this higher degree of coordination comes a higher degree of vulnerability. If bad actors understand the new control paradigm, they can herd the grid to certain places, they can trick the grid operators or the automated equipment to respond in certain ways,” said Battelle Memorial Institute Research Leader Jason Black.
Within the complexity of the smart grid structure, these threats can take any number of forms.
Vulnerability begins at a mundane level, with the plain old physical attack, explosive or otherwise, targeted at the computing centers that run the smart grid. While power plants may have some level of physical security, data centers often do not have such protections. “Some of these major control centers are located in standard office buildings. They are not even located behind concrete barriers,” said Thomas Popik, chairman of the nonprofit Foundation for Resilient Societies, which conducts research into the U.S. power grid.
In addition to physical threats, more complicated attacks also are possible — attacks that seem to mirror Hollywood scenarios. Specifically, computer-based control systems also may be vulnerable to electromagnetic attack, the kind of mass shock wave that disrupts digital transmissions, as depicted in the movie Ocean’s Eleven. In that case, con artists use such a pulse to take a city’s grid offline for a few crucial moments.
“The same thing can happen to any vulnerable electronic component of the smart grid,” Popik said. Nor would the attackers be particularly noticeable. Such a disruptive device could easily fit into a standard van.
It’s a solvable problem, but the solutions have to be implemented early. One solution is a Faraday cage, an enclosure of conductive material that shields equipment from the pulse. You can cage individual pieces of equipment or enclose a whole room. Defense against electromagnetic attack also can be built into new energy facilities, adding 5 percent to the overall cost. Those who add such defenses as a retrofit typically find the costs to be about four times more.
The most widely recognized vulnerability in the smart grid lies in the software itself, the programming that directs the actions of the system. “We’ve tested around 30 different products from over 20 different vendors since April 2013 and we found 85 percent of those have low-hanging vulnerabilities,” said Adam Crain, a partner at Automatak. In examining energy industry software, Crain’s team has found a range of issues that may lead to possible exploitation.
While standards for security may be adequate, implementation is far less certain. Even with the security standards in hand, “now the coders have to take this complex standard — it’s 1,000 pages long — and translate it into software, and that is no easy task,” Crain said. While that software is then tested for functionality, it is seldom tested for security. Bad actors can slip in through security gaps and spread mass damage relatively easily.
“One of the things we found was that the master stations, the control centers, were vulnerable,” Crain said. All it takes is one unsecured power pole to get to the control center: Because everything is interconnected, even a small gap opens the door back to the master control system, giving a bad actor access to literally the entire system.
Where are the weak links? Virtually everywhere. Power poles, capacitors, voltage regulators, power quality meters, smart readers in people’s homes, electrical vehicle charging stations. Name it. Anything that isn’t locked down is a potential source for exploitation, an open window into the beating heart of the network.
All these cyberthreats are in a sense built into the very nature of the smart grid. “There is a huge culture gap, especially in the electrical power space,” Crain said. “The grid was designed for physical reliability, resistance to storms. It was not designed for resistance to cyberattacks. When you mention this issue to people in the field, the best they can say is that, ‘This is why we have redundancy.’ But redundancy doesn’t help you if that redundant asset has the same software and the same vulnerability.”
Now let’s start to think about the impact of all of this on the emergency management community. There’s the obvious threat of mass blackouts, and we’ll come to that. But consider first the “smartness” of the smart grid and all that it implies for people on the front lines of emergency response.
The smart grid depends on the intelligence of the devices to which it is connected. Diverse elements within the power chain must have some degree of awareness, as it were, if they are to communicate effectively up and down the line. This native hardware intelligence poses real risk as the power system becomes increasingly smart. The more intelligent the devices, the more widespread the risk.
At the Northern California Regional Intelligence Center, Cyber Intelligence Analyst Donovan Miguel McKendrick points to the innocuous-seeming Philips Hue light bulb. The bulb’s color can be adjusted to meet a range of settings, a nice feature for changing the mood in your living room. As the manufacturer describes it, users can “[e]xperiment with shades of white, from invigorating blue to soothing yellow. Or play with all the colors in the hue spectrum. … Relive your favorite memories. Even improve your mood.”
Hue is controlled by a smartphone app. Plug it into the smart grid, however, and it becomes theoretically possible to control the light from outside the app via software hack. Then the system’s own intelligence becomes a point of entry for destructive players. “Now suppose that light bulb is installed in an emergency room and someone shuts it off during a procedure,” McKendrick said. “That’s a worst-case scenario.”
Knowing that such things could happen, one moves quickly to considering the possibility that they will.
“The main concern is that somebody is going to get into the system with the motivation and the skill set to do serious damage and to cause loss of life,” McKendrick said. He points to the example of DarkSeoul, a hacking organization credited with successful cyberattacks on South Korea’s banks, television broadcasters, financial companies and government websites.
“That is exactly the concern,” McKendrick said. “That somebody could use a piece of malware like that and target critical infrastructure in America. With the smart grid, it would not be very difficult.”
The same techniques could of course be used to turn off all the lights. As emergency managers begin to contemplate these unpleasant scenarios, it seems reasonable to ask just how big a threat we’re talking about. While the smart grid is by no means ubiquitous, it is rapidly gaining a place among the most prominent energy management models.
More than 26 percent of public utilities and 28 percent of investor-owned utilities (IOUs) are in the early planning stages of developing a smart grid, according to the latest Strategic Directions in the U.S. Electric Industry report from engineering and consulting firm Black & Veatch.
More than 36 percent of public utilities and 23 percent of IOUs are actively deploying physical infrastructure updates, while about 13 percent of public and 17 percent of IOUs are deploying IT infrastructure. Clearly there’s momentum here.
Let’s back up and consider the big picture. The problems are apparent: buggy software, physical vulnerability, the inherent interconnectedness of intelligent devices. Yet the technology exists to remediate the worst of these. So what’s the problem?
As is so often the case, one of the main problems is the human element.
While the expertise exists to address the technological challenges of the smart grid, it does not always exist in the right places, said Adam Cahn, CEO of Clear Creek Networks, a company that builds computer networks for utilities.
“The problem is that the power engineers who are responsible for the management of the physical electric grid are not experts in the data networks that support the grid. They lack the skills to properly configure network devices,” he said. Data network professionals on the other hand may have a solid grasp on the technology, but may not understand the subtleties of power generation.
For emergency management, there are other aspects of the human element that may be more directly controlled.
When it comes to the prevention of crises, emergency managers often assume the role of educator, whether it comes in the form of a smoke alarm campaign or hurricane response instructions. In the case of the smart grid, that same up-front effort could help prevent disaster or at least speed remediation.
“The holes are generally in the human component. Humans are the weakest link every time,” McKendrick said. “So a large part of the job is not just responding to the cyberthreat but also educating the humans.”
Emergency managers likely will face significant hurdles in trying to get their voices heard. “There’s no awareness about how serious it is,” McKendrick said. “People hear about Anonymous, they hear about these malicious threats. But every time it comes out in the media, they say it is the end of the world. Then when the world doesn’t end, people just tend to write it off. Then you have the majority of the populace that just ignores it altogether. ‘My job isn’t in IT, so what do I care?’”
To help raise awareness, emergency managers should build alliances well in advance of a crisis, Black said. It’s important to forge ties with local utilities, to understand the vulnerabilities, to share response plans. If first responders know in advance that a certain school is going to become an ad hoc shelter, it makes sense to tell the utilities that, so that the school can become a priority for the return of power.
While emergency managers might not be able to control malicious actions on the grid, they can play an active role in shaping the policies that are meant to safeguard the system, Popik said. In particular, he recommends leaders join the North American Electric Reliability Corp., which sets policy for the energy industry.
“They need to become involved in the political process and demand the regulation, security and reliability of the electric grid,” he said.
This story was originally published by Emergency Management.
Overview of Public Key Infrastructure, certificate management and associated components
The concept of public key infrastructure is one we must be conversant with, since it is a very important aspect of cryptography: it is the basis of most encryption techniques, so some basic knowledge of it is essential. Certificate management is another area we should not ignore when dealing with encrypted information. Here is an overview of certificate management and the components associated with it.
Certificate authorities and digital certificates
Let's start with certificate authorities and the digital certificates they issue.
In a public key infrastructure, a certificate authority (CA) is responsible for the creation and distribution of certificates to the end users and systems that need them in the environment; the CA is the clearing house for the associated public and private keys. A private organization may run a private certificate authority meant for its own users and servers. In such a case, it has to persuade any third parties who rely on its certificates to trust them, since in most cases people tend not to trust certificates an organization has issued on its own.
An organization might instead choose a public certificate authority so that third parties will trust its keys. A certificate authority carries a number of responsibilities. One is ensuring the validity of certificates: the CA makes sure that everyone registering for a certificate is legitimate, to avoid problems later.
Certificate authorities are also responsible for managing the servers that store and administer certificates. These servers are highly sensitive, since they are responsible for generating private keys, and nobody else should ever get their hands on them.
A certificate authority is also responsible for key and certificate lifecycle management. It must make sure that keys that are no longer good, or that may have been fraudulently distributed, are revoked.
Revocation checks are carried out against a certificate revocation list (CRL). For instance, before using a certificate, one should first check whether it has been revoked; if it does not appear on the CRL, the certificate can be used.
The Online Certificate Status Protocol (OCSP) is another method for checking whether a certificate has been revoked. It is a little easier and more streamlined to use than a CRL, and many organizations are starting to use it for their revocation checks.
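As a rough illustration, here is how an OCSP status check might look in Python using the `cryptography` and `requests` packages. The file names and the responder URL are placeholder assumptions; in practice the responder URL is read from the certificate's Authority Information Access extension.

```python
# Hedged sketch: querying an OCSP responder about a certificate's status.
# "cert.pem", "issuer.pem" and the responder URL are placeholders.
import requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Build a DER-encoded OCSP request identifying this certificate.
request = (
    ocsp.OCSPRequestBuilder()
    .add_certificate(cert, issuer, hashes.SHA1())
    .build()
)

# OCSP responders accept requests over plain HTTP POST.
reply = requests.post(
    "http://ocsp.example-ca.test",  # placeholder responder URL
    data=request.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"},
)

status = ocsp.load_der_ocsp_response(reply.content).certificate_status
print(status)  # e.g. OCSPCertStatus.GOOD or OCSPCertStatus.REVOKED
```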
A public key infrastructure (PKI) is a mixture of policies, procedures, people, hardware, and software working together to create a standard way to manage, distribute, store, and revoke certificates. An organization venturing into public key cryptography and building a PKI is taking on something very big: it needs to be planned from the very beginning, with all the processes in place, for it to be as successful as possible. The PKI is responsible for building these certificates and binding them to people or resources.
In a public key infrastructure, an entire lifecycle revolves around the keys. The first step is key creation. This is where a key is generated with a particular cipher and key size, so all of the exact technical details have to be decided when the keys are first made.
What follows next is the creation of a certificate. This is where the key is allocated to a person and bound into an X.509 certificate that includes the key and other details. The keys and certificates are then distributed to end users and published to the certificate servers. This process needs to be readily available to users while remaining as secure as possible.
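These first two lifecycle steps can be sketched in a few lines of Python with the `cryptography` package. The subject name, key size, and validity period below are illustrative assumptions, and the certificate is self-signed rather than CA-issued for brevity.

```python
# Illustrative sketch of the "create a key, then bind it into an X.509
# certificate" steps. Names and validity period are placeholders.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Step 1: key creation -- choose the algorithm and key size up front.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2: certificate creation -- bind the public key to a subject name.
subject = issuer = x509.Name(
    [x509.NameAttribute(NameOID.COMMON_NAME, "example.internal")]
)
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)                      # self-signed: subject == issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=90))
    .sign(key, hashes.SHA256())               # the issuer's signature
)
print(cert.subject)
```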
The key lifecycle also involves storage. A PKI creates a great many certificates and keys, and the private keys in particular are extremely valuable and must not fall into the wrong hands, so a proper storage mechanism is required. Ultimately, there may also be a need to revoke keys: a key may be compromised, the business may close down, people may leave the organization, or a key may simply reach the end of its validity period. For any of these reasons there must be a process in place to properly revoke keys and to make everyone aware of the revocation, for example through certificate revocation lists or similar mechanisms, so that people understand which keys have been revoked and which have not.
Finally, there is expiration. Like other credentials, keys may have only a short life. For instance, they may be created to last for just three months; after that time the key is no longer valid and new keys must be created. At this point the cycle goes back to the top and begins again.
As long as the PKI's processes and procedures take into account every single step of this lifecycle, the result should be a very successful public key infrastructure.
The idea of key recovery is to have processes in place to make sure that, should something happen to a key, there are ways of recovering data that had been encrypted with the lost key. One approach is to back up the private key. However, there should not be too many backups or versions of the private key, to keep it from falling into other people's hands; the private key should be backed up, but sparingly.
A well-run organization has a key recovery process defined from the beginning in case a key is lost. The goal is a process through which the organization can recover the data or the private key, so recovery is usually built into the public key infrastructure itself. It may run automatically every time a new set of keys is generated; one approach is simply to back up every key that is created, so that a lost key can always be recovered from safe storage.
If a certificate authority is used, some of this recovery capability is already built in, which makes the whole process much easier to implement. Key recovery is an extremely important consideration when building out a certificate authority or an entire public key infrastructure for an organization, and it is something that certainly deserves attention.
Public key cryptography is founded on asymmetric encryption. Creating public keys involves a great deal of mathematics and randomization; much of it rests on prime numbers, and the result is a public key that can be handed to anyone in the world. Looking at a public key and a private key side by side, it is very difficult to tell the two apart.
A private key is an encryption key that must not be shared with any other person. Its owner is the only one who maintains possession of it, and it is used to decrypt information that has been encrypted with the corresponding public key.
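A minimal sketch of that relationship, using RSA with OAEP padding in Python (the message and key size are arbitrary choices):

```python
# Anyone holding the public key can encrypt, but only the holder of the
# matching private key can decrypt.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"secret message", oaep)   # anyone can do this
plaintext = private_key.decrypt(ciphertext, oaep)          # only the key owner can
assert plaintext == b"secret message"
```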
Key registration is the PKI role that ensures the right people are associated with the right keys. It is this registration process that lines up exactly the right person with exactly the right certificate, and it ensures there is no mix-up or fraud with the certificates. Registration can be handled casually, for example over the phone, or it can involve a great many formal procedures.
Either way, key registration is a very important and detail-oriented process, since each specific key must be matched correctly with a specific end user.
Basically, when we talk about escrow we are talking about a third party holding something for us; in the context of cryptography, that something is the encryption keys. A third party stores a copy of the encryption key so that information can still be decrypted if the original key gets lost. The escrowed key must be kept in a very safe place so that it cannot be accessed by others. Key escrow also helps when it comes to the recovery of data.
In the context of key escrow, symmetric encryption means keeping the key somewhere secure, effectively locked in a safe, so that no one else can get access to it.
With asymmetric encryption, escrow means holding an additional copy of the private key that can be used to decrypt information. The process for retrieving a key from escrow is as important as the key itself: it must define the circumstances under which the key may be retrieved and who can access it. With the right process in place and the right intentions behind the escrow arrangement, it becomes a valuable part of maintaining the integrity and security of data.
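One simple way to picture the escrow idea is splitting a key so that no single custodian holds all of it. The sketch below uses basic n-of-n XOR splitting; real escrow arrangements would more likely use a threshold scheme such as Shamir's secret sharing.

```python
# n-of-n XOR splitting: each escrow agent gets one share, and all shares
# are needed to rebuild the original key.
import os

def split_key(key: bytes, shares: int) -> list[bytes]:
    # shares-1 random pads, plus one final share that XORs back to the key
    parts = [os.urandom(len(key)) for _ in range(shares - 1)]
    last = key
    for part in parts:
        last = bytes(a ^ b for a, b in zip(last, part))
    return parts + [last]

def recover_key(parts: list[bytes]) -> bytes:
    key = parts[0]
    for part in parts[1:]:
        key = bytes(a ^ b for a, b in zip(key, part))
    return key

key = os.urandom(32)
shares = split_key(key, 3)          # e.g. one share per escrow agent
assert recover_key(shares) == key   # all three together rebuild the key
```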
The most important aspect of any public key infrastructure is trust. Users need assurance that the certificates they are using can be trusted, meaning that the names associated with those certificates really belong to the people who will receive and decrypt the information being sent to them.
Depending on the type of infrastructure and the way the PKI has been built, a number of different trust models may be used. For instance, with a single certificate authority, everyone receives all of their certificates from one place and trusts that one CA to manage the process for everyone.
A large organization may need more than one certificate authority and may end up with a hierarchical trust model. A single root certificate authority issues certificates to intermediate servers, which in turn issue certificates to leaf certificate authorities and then to end users and resources. Geographical or structural requirements in the organization may call for this level of control, spreading trust from the root at the very top all the way down to the other certificate authorities.
There is also the mesh trust model, in which every certificate authority trusts all of the other certificate authorities. This works perfectly well with two or three CAs, but bringing in additional certificate authorities makes the trust model difficult to scale.
Another trust relationship is mutual authentication, in which the server authenticates to the client and the client authenticates to the server, with each trusting the other equally.
In short, the public key infrastructure and the other components related to encryption and certificates are central to cryptography. Their workings are closely interrelated, and they are concepts everyone in the field should be familiar with; mastering them is a solid way to build expertise and advance a career. | <urn:uuid:06fcc222-50a2-4801-8cda-677334b139bb> | CC-MAIN-2017-04 | https://www.examcollection.com/certification-training/security-plus-overview-of-public-key-infrastructure-certificate-management-and-associated-components.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968871 | 2,346 | 2.921875 | 3 |
Gary Hastings, a professor of physics at Georgia State University, has discovered a way to simulate the processes that occur during photosynthesis, a discovery that could lead to a more thorough understanding of how plants convert light into growth energy.
According to GSU, Hastings developed a way of interpreting measurements behind the molecular interactions during the photosynthesizing process, thus offering researchers enhanced ability to develop mathematical models of the process.
Hastings harnesses the power of his university’s IBM p5 supercomputer, URSA, which he said allowed for the processing of huge calculations in a matter of days instead of the months it would have required running on high-end desktops. The 36-node, 576-core URSA is based on the Power5+ processor and provides 18 TB of user disk storage with backups.
Although Hastings used the URSA system, his university recently upped its supercomputing might with the addition of CARINA, an IBM p7-755 with peak performance at 14 trillion calculations per second.
In an interview, Hastings said that his main questions revolved around looking at plants as a sort of solar-powered battery. He noted that “the process is remarkably efficient, much more so than in artificial materials…the question is: how do electrons get across [a plant’s membrane] with such efficiency?”
This research could help biologists better anticipate the function of new plant strains as well as find a unique application in biofuel research. | <urn:uuid:33d36365-3b68-442c-a778-8917a63a43bf> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/05/18/ursa_supercomputer_illustrates_plant_processes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957651 | 299 | 3.453125 | 3 |
The following IP addresses are reserved as private address space, as specified in RFC 1918. This means anybody can set up an internal network using addresses in these blocks, but packets originating from those addresses should never appear on the Internet.
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
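A quick way to test whether an address falls inside these blocks is Python's standard `ipaddress` module (note that `is_private` also covers a few other reserved ranges, such as loopback and link-local):

```python
# Check whether addresses fall inside private/reserved address space.
import ipaddress

for addr in ["10.1.2.3", "172.20.0.1", "192.168.1.10", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "-> private" if ip.is_private else "-> public")
```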
To allow these addresses to access the Internet you must NAT (Network Address Translate) these numbers via one or more public addresses which are provided by your ISP. | <urn:uuid:1489274e-9500-4c6e-9b0d-0cac7493c6d4> | CC-MAIN-2017-04 | https://support.net-ctrl.com/kb.php?artid=5 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00542-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.868163 | 148 | 2.6875 | 3 |
Black Box Explains...Speaker sound quality
A human with keen hearing can hear sounds within a range of about 20 Hz to 20 kHz. But most human speech is centered around the 1000 Hz range, so most old-fashioned analog telephone networks provided audio bandwidth only in this range. This range transmits most voice information but can fail to register voice subtleties and inflections.
Because these older analog phone systems had such a narrow bandwidth, headset manufacturers built their products to operate only in those particular frequencies.
When digital networks and fiber optic connections came into use, however, they provided a much wider bandwidth for voice transmission. This led to a corresponding increase in headset sound quality.
Today, quality headsets take advantage of increased network bandwidth and typically can reproduce sounds in the 300 Hz to 3500 Hz range. This makes voices far easier to understand and enables you to pick up all the nuances and inflections of your caller’s voice. | <urn:uuid:61f16a0b-fa99-4fc8-a6fd-4848b4a981d6> | CC-MAIN-2017-04 | https://www.blackbox.com/en-au/products/black-box-explains/black-box-explains-speaker-sound-quality | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00450-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913862 | 192 | 3.71875 | 4 |
First spotted almost three months ago, the Boonana Trojan stood out because of its capability to infect both computers running Windows and those running Mac OS X. The Trojan nestled itself in the system and allowed outside access to all files on it.
And if that wasn’t bad enough, it seems that it has some vulnerabilities that can be exploited by other attackers to collect information about the system or – according to a Symantec researcher – even be used to create a completely functional parallel botnet or take over the existing one.
The Boonana bots are designed to take part in a P2P network and to communicate with each other via a custom-designed communication protocol.
Apart from making the identification of infected hosts on a particular IP range almost trivial, the P2P protocol also contains an information-disclosure vulnerability which can be used to detect which operating system the computer is running.
According to Symantec, in December 2010, 84 percent of the infected systems were running Windows, and 16 percent a version of OS X.
Windows users are especially at risk, as the malware also installs a keylogger into the system and sends the collected data to the attacker. But all users are in danger of having their systems accessed not only by the original attacker, but by others as well.
A vulnerability in the P2P protocol can also allow attackers who have identified the infected systems to install a backdoor on them and in that way gain further access. Also, the list of peers that each bot keeps and occasionally updates can lead the attacker to the other infected hosts, allowing him to repeat all these steps and gain access to them, too.
Malware is also a software application, and as such, it has vulnerabilities like any other legitimate software. Symantec’s researcher says that this fact proves that a single malware infection can open the door to further infections, and warns users to check their systems for the Boonana Trojan in particular, and malware in general. | <urn:uuid:04212847-08f9-4f88-b105-fcc2d21c2774> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2011/01/18/vulnerabilities-in-the-boonana-trojan-increase-the-danger/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00084-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951078 | 410 | 2.875 | 3 |
7.9 What are on-line/off-line signatures?
On-line/off-line signature schemes are a way of getting around the fact that many general-purpose digital signature schemes have high computational requirements. On-line/off-line schemes are created by joining together a general-purpose signature scheme (see Question 2.2.2) and a one-time signature scheme (see Question 7.7) in such a way that the bulk of the computational burden for a signature operation can be performed before the signer knows the message that will be signed.
More precisely, let a general-purpose digital signature scheme and a one-time signature scheme be fixed. These schemes can be used together to define an on-line/off-line signature scheme which works as follows:
1. Key pair generation. A public/private key pair KP/KS for the general-purpose signature scheme is generated. These are the public and private keys for the on-line/off-line scheme as well.
2. Off-line phase of signing. A public/private key pair TP/TS for the one-time signature scheme is generated. The public key TP for the one-time scheme is signed with the private key KS for the general-purpose scheme to produce a signature SK(TP).
3. On-line phase of signing. To sign a message m, use the one-time scheme to sign m with the private key TS, computing the value ST(m). The signature of m is then the triple (TP, SK(TP), ST(m)).
Note that steps 2 and 3 must be performed for each message signed; however, the point of using an on-line/off-line scheme is that step 2 can be performed before the message m has been chosen and made available to the signer. An on-line/off-line signature scheme can use a one-time signature scheme that is much faster than a general-purpose signature scheme, and this can make digital signatures much more practical in a variety of scenarios. An on-line/off-line signature scheme can be viewed as the digital signature analog of a digital envelope (see Question 2.2.4).
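To make the construction concrete, here is a hedged Python sketch in which Ed25519 (from the `cryptography` package) stands in for the general-purpose scheme and a Lamport signature stands in for the one-time scheme. Verification is omitted for brevity; this is an illustration, not a production implementation.

```python
# On-line/off-line signing: Ed25519 plays the general-purpose scheme
# (keys KP/KS) and a Lamport one-time signature plays the one-time
# scheme (keys TP/TS).
import hashlib
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def lamport_keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[hashlib.sha256(x).digest() for x in pair] for pair in sk]
    return sk, pk

def lamport_sign(sk, message):
    digest = hashlib.sha256(message).digest()
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]  # reveal one preimage per bit

# Key pair generation (once): KP/KS for the whole scheme.
KS = Ed25519PrivateKey.generate()

# Off-line phase: make a one-time key pair and certify TP under KS.
TS, TP = lamport_keygen()
TP_bytes = b"".join(h for pair in TP for h in pair)
cert = KS.sign(TP_bytes)                       # this is SK(TP)

# On-line phase (fast): only Lamport signing touches the message.
m = b"the message chosen later"
signature = (TP, cert, lamport_sign(TS, m))    # the triple (TP, SK(TP), ST(m))
```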
For more information about on-line/off-line signatures, see [EGM89].
- 7.1 What is probabilistic encryption?
- 7.2 What are special signature schemes?
- 7.3 What is a blind signature scheme?
- 7.4 What is a designated confirmer signature?
- 7.5 What is a fail-stop signature scheme?
- 7.6 What is a group signature?
- 7.7 What is a one-time signature scheme?
- 7.8 What is an undeniable signature scheme?
- 7.9 What are on-line/off-line signatures?
- 7.10 What is OAEP?
- 7.11 What is digital timestamping?
- 7.12 What is key recovery?
- 7.13 What are LEAFs?
- 7.14 What is PSS/PSS-R?
- 7.15 What are covert channels?
- 7.16 What are proactive security techniques?
- 7.17 What is quantum computing?
- 7.18 What is quantum cryptography?
- 7.19 What is DNA computing?
- 7.20 What are biometric techniques?
- 7.21 What is tamper-resistant hardware?
- 7.22 How are hardware devices made tamper-resistant? | <urn:uuid:b4c5f3d9-7cc5-42b1-a2c8-8ed099c3a801> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-are-on-line-off-line-signatures.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00478-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.876301 | 760 | 2.875 | 3 |
When Google introduced its Go programming language in 2009, they surely didn’t hope for it to be used for writing malware but, as these things go, it was only a matter of time until it happened.
Symantec researchers have recently analyzed a piece of malware found in the wild that contains some components written in Go.
The malware is a Trojan dubbed “Encriyoko”, which in this particular instance came disguised as a tool for rooting Android devices.
Once installed and executed on the victim’s PC, the file in question (GalaxyNxRoot.exe) would drop two Go-based executables: an information-stealing Trojan and a downloading component.
The first aims at collecting information about the targeted computer and the processes running on it and sending that information to a remote server, while the second downloads an encrypted DLL file from another remote server.
When this DLL file is decrypted and loaded, it tries to encrypt all source code files, images and audio files, archives, documents and many other files. If it manages to do so, it saves them all into one file (vxsur.bin) in the Temp folder.
“Restoration of the encrypted files will be difficult, if not impossible,” the researchers point out.
I assume they mean when the user doesn’t have the key to do it – if so, could it be that this Trojan is part of a ransomware attack? Or are the attackers’ intentions simply to wreak havoc? | <urn:uuid:c2c4535e-79f4-4c31-9de9-ec65a4342910> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/09/19/google-go-programming-language-used-for-creating-destructive-trojan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00112-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932173 | 317 | 2.515625 | 3 |
This fall, Oahu is poised to be the first place in the United States where electricity generated by ocean waves is connected to a power grid — a milestone in the budding wave energy industry, officials say.
Sometime in September, private firm Northwest Energy Innovations, with backing from the Navy and the University of Hawaii, is slated to start testing its prototype Azura device in waters a kilometer offshore from Marine Corps Base Hawaii at Kaneohe Bay.
It's not the first time a company has tested pioneering wave-energy technologies in those waters, a spot known as the Wave Energy Test Site, or WETS. But it's the first time that electricity generated there will be transmitted back to the base's grid via cable, officials say.
That base grid, in turn, is connected to Oahu's islandwide power grid used by energy customers. The Azura prototype could produce up to 20 kilowatts in peak wave conditions, enough to power just several houses, UH specialists say. But at this early stage, what's key is that wave-generated electricity is entering a U.S. grid at all, they add.
"It's a relatively small … but important step" in the nascent wave energy industry, said Patrick Cross, senior project specialist at UH-Manoa's Hawaii Natural Energy Institute.
The push to use ocean waves as a viable renewable energy source is about 30 years behind the wind power industry, and it still hasn't been commercialized anywhere in the world, said Steve Kopf, founder and senior partner with Northwest Energy Innovations.
"The name of the game right now is validating that your technology has the potential to provide cost-effective commercial power," Kopf said. "Nobody's quite there yet on wave, but we're chipping away at it. We believe we can get there" — where it's competitive with other renewable-energy sources.
Efforts to develop wave energy technology are at such an early stage that companies must still rely heavily on outside investment, and they're experimenting with "wildly different approaches" to find the best way to convert the sea's natural motion into electricity, Cross said.
Last week UH officials announced that the Navy has provided $9 million to support testing of the industry technology at WETS.
The Navy dollars will be directed to the Applied Research Laboratory, a university-affiliated research center run by the Navy and UH which has been met with sharp resistance by some of the school's students and faculty, starting with campus sit-ins to protest the idea in 2005.
Critics of the venture are concerned about the potential for classified weapons research and a shift away from core values, while proponents have argued that a university-affiliated research center would bring millions in research funding and prestige to the school.
The wave-energy funds will help the UH Natural Energy Institute survey WETS with divers and remote-operated vehicles, as well as to monitor local wave conditions and measure the energy produced by the private-sector devices tested there, Cross said. The institute will further study any effects from noise there or other potential environmental impacts, he added.
Meanwhile the Natural Energy Institute will run its own tests to confirm that Azura works as predicted in the firm's computer simulations, Kopf said.
The 50-foot-long device aims to harness power from the heave (up-and-down motion) and pitch (forward push) of waves far enough offshore that they don't break. If the tests go well, the company aims to test larger, more cost-effective models in deeper waters offshore at two more moorings being created within WETS.
The company has an agreement in place with Hawaiian Electric Co. for its wave electricity to enter the grid, Kopf said. It will then join test sites in Portugal, England and Scotland where electricity connects to local grids from wave test sites, he added.
In 2015 and 2016 up to four other companies could test their approaches to converting waves into electricity at the Oahu site, largely using millions of dollars of funding from the Navy and U.S. Department of Energy, Cross said.
"There are so many radically different approaches to how to do it," Cross said. "In the big picture of things, it's yet another renewable energy source. The potential worldwide for wave energy to meet the needs of communities is huge. However, there are enormous challenges as well."
©2014 The Honolulu Star-Advertiser | <urn:uuid:84c9b5cf-b21d-470b-8911-8693124de5cd> | CC-MAIN-2017-04 | http://www.govtech.com/local/Wave-Energy-Comes-Ashore-in-Oahu.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00166-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956267 | 912 | 2.71875 | 3 |
Morris Worm: Life After Death
14 Sep 2002
The "Slapper" worm successfully uses 14-year-old technology
Kaspersky Lab, an international data-security software-development company, warns about the detection of a new dangerous Internet-worm called "Slapper", which infects computers running Linux operating system and uses the source code spreading technology that was used in the notorious Morris Worm in 1988.
To date, Kaspersky Lab has received no user reports that this malicious program has been detected "in the wild". However, a detailed analysis of the worm confirms its high potential to cause a global virus outbreak, and it therefore poses a threat to Linux users.
To find a victim, "Slapper" scans computers connected to the Internet and chooses those that are running the Linux operating system and have an Apache Web server installed. After detecting such a computer, the worm stealthily uploads a copy of itself by exploiting the OpenSSL security breach (a buffer overflow). The main distinctive feature of "Slapper" is that the uploaded copy is in source code form, not an already compiled executable package. After the upload is completed, the worm uses the locally installed C compiler (gcc) to produce an executable copy of itself and then launches it. This original method makes "Slapper" compatible with all Linux variants, regardless of the distribution manufacturer and kernel version. The method was invented in November 1988 and was first applied in the notorious Morris Worm, which succeeded in infecting more than 6,000 organizations worldwide (including the NASA Research Institute), resulting in $96 million in losses. Until now, this method of spreading source code had never been used again.
"It is quite possible that "Slapper" will initiate a new wave of multi-platform malware development, which will be able to infect not only Linux, but Windows, Unix and other operating systems simultaneously. This is obvious because C compilers can be found on every commonly used platform as well as security breaches through which malware will "worm" on victim computers," said Eugene Kaspersky, Head of Anti-Virus Research for Kaspersky Lab. "The worm's other side effect will be the appearance of its numerous clones. To create a modified version, a person will only need to apply the necessary changes to the source code that will be available everywhere in the Internet. With this in mind we have already started the development of the applicable add-on to the heuristic technology integrated in Kaspersky Anti-Virus that will allow us to catch even unknown Slapper-style worms," he added.
In addition, "Slapper" also poses a threat to the data confidentiality on the infected computers. The worm contains backdoor-features (unauthorized remote administration) that can allow a malicious person to perform certain unwanted actions, such as the execution of remote commands, data theft, implication in distributed DoS-attack, etc.
Protection against "Slapper" already has been added to the daily update of KasperskyTM
More details about the "Slapper" can be found in the Kaspersky Virus Encyclopedia | <urn:uuid:f770b229-338f-4973-8c2f-6a6a3407f04d> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2002/Morris_Worm_Life_After_Death | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00166-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934729 | 635 | 3.28125 | 3 |
Today (October 11, 2016) marks the seventh annual Ada Lovelace Day (ALD), which recognizes the achievements of women in science, technology, engineering, and math (STEM) careers by looking back at one of the most important pioneers in computing.
Let’s take a look at a few reasons why Ada Lovelace is awesome.
Her mom encouraged her to learn math.
Ada Lovelace was born in 1815, during an era when women did not actively seek careers in science, technology, engineering, or math (STEM), owing to cultural stigma that discouraged women from pursuing these paths. Her mother, Lady Wentworth, didn’t care. She actively encouraged her daughter’s interest in mathematics. She also didn’t want Lovelace to turn out like her late husband, Lord Byron. Wait. What?
Yeah. Lovelace’s dad was Lord Byron, and he was a jerk. A few months after Lovelace was born, Lord Byron fled the country and died eight years later. Lady Wentworth understandably remained bitter, and pushed Lovelace’s natural interest in mathematics and logic to prevent her from becoming anything like her father.
She worked on the first computer ever.
Lovelace described herself as an analyst who studied what she referred to as “poetical science.” Her mathematical talents led her to form an ongoing, working relationship and friendship with “the father of computers,” British mathematician Charles Babbage. Lovelace and Babbage worked together on the Analytical Engine, one of the first mechanical computers, and the first with the same logical structure that would dominate computer design in the electronic era.
She was the first computer programmer.
To be clear, she wasn’t the first female computer programmer. She was the first programmer. Full stop. In 1843, Lovelace translated an article about the Analytical Engine from Italian into English, and then published it appended with her own notes about computer science. Lots of notes. More notes, in fact, than the actual article. Most importantly, the notes (inventively titled “Notes”) contained what many people consider to be the “first computer program.”
Unfortunately, Babbage never finished the Analytical Engine, so she never had the chance to run the algorithms she wrote. “Notes” also described how code could be created for the machine to handle letters and symbols along with numbers, and theorized a method for the engine to repeat a series of instructions, a process that is known today as looping.
Alan Turing took on Ada Lovelace about artificial intelligence.
Yeah, Alan Turing, the father of the Turing Test, very specifically opposed Lovelace’s famous 1843 statement that computers “can never take us by surprise,” dismissing artificial intelligence. Turing famously rebutted her assertion in his seminal work, “Computing Machinery and Intelligence,” saying that machines would one day learn independently. Think about that. She was thinking about machine learning in the 1840s. That’s wild.
She saw computers in her math.
Lovelace envisioned computers going beyond simple calculations back when computers couldn’t even do that much. Her viewpoint of “poetical science” led her to predict unique capabilities for the Analytical Engine, seeing it as more than a tool that could crunch numbers, even if those capabilities couldn’t be realized during her lifetime. She thought of the machine not as a tangible item, but more as a concept, with her imagination being the only limitation in how far the machine could go.
During her lifetime, Ada Lovelace made huge strides toward developing what has been referred to as the “first computer program.” Her work gave her the standing to be revered as one of the first intellectual minds in computer science. Unfortunately, her work went largely unnoticed during her lifetime. More than century after her death, author B.V. Bowden discovered and republished her work in his 1953 book, Faster than Thought.
After “Notes” came to light, Lovelace was quickly recognized as one of the first computer scientists, receiving numerous posthumous honors for her work — and encouraging a new generation of women to create new and innovative paths in STEM.
Whether your inspiration for forging a path in a career in IT is Ada Lovelace, inventor of the spanning-tree protocol (STP) Radia Perlman, or any other IT great, we at CBT Nuggets want to encourage you to learn as much as you can and train with us. Get started now!
Not a CBT Nuggets subscriber? Start your free week now. | <urn:uuid:1e91bfee-7c8d-43ed-8503-8e432aee98e9> | CC-MAIN-2017-04 | https://blog.cbtnuggets.com/2016/10/ada-lovelace-day-and-women-in-stem/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00471-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970668 | 982 | 3.734375 | 4 |
Using the TCP monitor in Anturis Console, you can set up monitoring of general availability and response time for any service running on a server connected to the internet. It also enables you to set up a notification when a certificate for a secure TLS/SSL connection is about to expire. You can send requests either from one of the components in your infrastructure, or use one of the available Public Agents that are maintained by Anturis in different geographical locations.
Transmission Control Protocol (TCP) is used along with the Internet Protocol (IP) to send data between computers connected to a network. Together, they form the Internet protocol suite, also known as TCP/IP. In this model, the TCP layer is responsible for dividing a message from an application program into packets, numbering the packets, and forwarding them to the IP layer that handles the actual delivery. On the destination computer, TCP reassembles the packets and forwards the message to the intended target application. So, while actual delivery is carried out by the IP layer, TCP is responsible for error-checking and controlling the order of a data stream.
This model works for most types of data in any network, because other protocols are designed within the framework of TCP/IP. For example, web browsers use HTTP on top of TCP for communication with web servers. The same is true for SMTP, POP3, and IMAP in the case of communication between mail clients and servers.
For secure communication, the Transport Layer Security (TLS) protocol, previously known as the Secure Sockets Layer (SSL), is layered on top of TCP. TLS/SSL are cryptographic protocols for secure communication over computer networks. They are based on the exchange of X.509 certificates and public keys for encrypting and decrypting messages. Digital certificates are issued by a certificate authority (CA) trusted by both parties involved in the communication. A certificate binds the public key to a person or organization for a predetermined period of time (until the certificate expires).
As a transport layer protocol, TCP uses ports, which are application-specific endpoints in network communication. When added to an IP address, the port number completes the address for a TCP/IP connection. Port numbers are bound to specific processes on the destination computer. Port numbers are 16-bit unsigned integers ranging from 0 to 65535. Some well-known ports are reserved for the most common services. For example, an HTTP service application (like a web server) listens on port 80, while an FTP service is bound to port 20 for data transfer and port 21 for commands. You should not use any of the well-known ports (0 to 1023) or ports registered by companies and other users (1024 to 49151) for custom or private purposes.
By sending regular requests to a TCP port, you can track the time it takes for a response to return (also known as round-trip delay time, latency, or timeout). This helps you monitor both the reachability and the performance of your critical network services. Long timeouts can greatly affect the quality of services you provide to your customers. For example, you may want to monitor the availability of your Microsoft System Center Operations Manager (SCOM) server on port 5723, and make sure that responses are not delayed. The sooner you are able to detect a possible issue, the faster you will be able to react to it. If the server uses TLS/SSL security, it is also important to monitor the certificate expiration date.
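A stripped-down version of such a check can be written in a few lines of Python. The host and port below are placeholders, and a real monitor would add retries, scheduling, and alerting.

```python
# Time a TCP handshake to a host/port and, for TLS services, read the
# server certificate's expiry date.
import socket
import ssl
import time
from datetime import datetime

host, port = "example.com", 443  # placeholder target

start = time.monotonic()
sock = socket.create_connection((host, port), timeout=5)
latency_ms = (time.monotonic() - start) * 1000
print(f"TCP connect to {host}:{port}: {latency_ms:.1f} ms")

# For TLS-protected ports, also check when the certificate expires.
context = ssl.create_default_context()
with context.wrap_socket(sock, server_hostname=host) as tls:
    not_after = tls.getpeercert()["notAfter"]
    expires = datetime.utcfromtimestamp(ssl.cert_time_to_seconds(not_after))
    print("certificate expires:", expires)
```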
©2017 Anturis Inc. All Rights Reserved. | <urn:uuid:5b759843-1347-4cf3-92b4-fbe7618ded4c> | CC-MAIN-2017-04 | https://anturis.com/monitors/tcp-monitor/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00497-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915838 | 716 | 3.171875 | 3 |
HTTPS was initially used to prove to Internet users that the website and web server with which they are communicating are indeed the ones they want to communicate with, but later this use was extended to keeping user communication, identity and web browsing private.
But a group of researchers has, unfortunately, proven that HTTPS is a lousy privacy tool, and that anyone who can view, record and analyze visitors’ traffic can identify – with 89 percent accuracy – the pages they have visited and the personal details they have shared.
The group, consisting of researchers from UC Berkeley and Intel Labs, captured visitors’ traffic to ten popular healthcare (Mayo Clinic, Planned Parenthood, Kaiser Permanente), finance (Wells Fargo, Bank of America, Vanguard), legal services (ACLU, Legal Zoom) and streaming video (Netflix, YouTube) websites.
“Our attack applies clustering techniques to identify patterns in traffic. We then use a Gaussian distribution to determine similarity to each cluster and map traffic samples into a fixed width representation compatible with a wide range of machine learning techniques. Due to similarity with the Bag-of-Words approach to document classification, we refer to our technique as Bag-of-Gaussians (BoG),” they explained in a whitepaper.
“This approach allows us to identify specific pages within a website, even when the pages have similar structures and shared resources.”
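The following is a loose sketch of that idea (not the authors' code) using scikit-learn: Gaussian clusters are fitted over a traffic feature such as burst size, and each browsing trace is mapped to a fixed-width vector of soft cluster memberships. The training data and cluster count here are placeholder assumptions.

```python
# Bag-of-Gaussians-style features: cluster traffic bursts with a Gaussian
# mixture, then represent each trace as a normalized histogram of soft
# cluster memberships, suitable for a standard classifier.
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder training data: burst sizes pooled from labeled traces.
training_bursts = np.abs(np.random.randn(5000, 1)) * 700

gmm = GaussianMixture(n_components=32).fit(training_bursts)

def bag_of_gaussians(trace_bursts):
    # Soft-assign each burst to the clusters, then sum and normalize:
    # the result is one fixed-width feature vector per trace.
    weights = gmm.predict_proba(np.asarray(trace_bursts).reshape(-1, 1))
    vec = weights.sum(axis=0)
    return vec / vec.sum()

features = bag_of_gaussians([310.0, 1448.0, 912.0])  # one trace -> one vector
```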
Depending on which websites they interact with, this type of attack can have many consequences for Internet users: it might reveal medical conditions they have or procedures they are considering, expose legal problems and the actions they might take, or show which financial products they use and which videos they watch, all information they would likely want kept hidden from anyone but themselves.
Who can leverage such an attack? Well, anyone who has access to those web pages and can capture the victims’ traffic – in practice this means ISPs (whether working for the government or not), employers monitoring online activity of their employees, and intelligence agencies.
Fortunately, they have thought of several defense techniques which, if implemented, can drastically reduce the accuracy of such an attack. Also, they pointed out, there are other things that can affect the attack’s effectiveness.
“To date, all approaches have assumed that the victim browses the web in a single tab and that successive page loads can be easily delineated. Future work should investigate actual user practice in these areas and impact on analysis results. For example, while many users have multiple tabs open at the same time, it is unclear how much traffic a tab generates once a page is done loading. Additionally, we do not know how easily traffic from separate page loadings may be delineated given a contiguous stream of user traffic,” they noted.
“Lastly, our work assumes that the victim actually adheres to the link structure of the website. In practice, it may be possible to accommodate users who do not adhere to the link structure.” | <urn:uuid:c8abd5fe-5843-492c-aba0-049aed073d9c> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/03/06/https-cant-be-trusted-to-obscure-private-online-activity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00497-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941823 | 617 | 2.890625 | 3 |
Nearly all living creatures have some kind of natural defense, whether it's speed, agility, power, a shell, quills or claws. It's hard out there in the ecosystem, so you have be prepared to fight or flee for your life.
But on a pound-for-pound basis, few can match the defensive weaponry that evolution has bestowed upon the bombardier beetle. As National Geographic contributor Ed Yong explains:
These insects deliberately engineer explosive chemical reactions inside their own bodies, so they can spray burning, caustic liquid from their backsides. The liquid can reach up to 22 miles per hour, at temperatures of around 100 degrees Celsius. It’s painful to humans ... and potentially lethal to smaller predators like ants.
And, probably, Ant-Man.
If you want to get all sciency about it, Yong's article will tell you about hydroquinones and p-benzoquinones and stuff. We'd rather focus on the notion of the beetle spraying liquid out of its butt! Ha ha ha!
This story, "See why the bombardier beetle is the most badass insect in the world. Literally." was originally published by Fritterati. | <urn:uuid:e71334be-243d-4694-925b-e08f8a5aba71> | CC-MAIN-2017-04 | http://www.itnews.com/article/2922510/see-why-the-bombardier-beetle-is-the-most-badass-insect-in-the-world-literally.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00039-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93752 | 242 | 2.734375 | 3 |
A few months ago it was starting to seem like you couldn’t go a week without a new attack on TLS. In that context, this summer has been a blessed relief. Sadly, it looks like our vacation is over, and it’s time to go back to school.
Today brings the news that Karthikeyan Bhargavan and Gaëtan Leurent out of INRIA have a new paper that demonstrates a practical attack on legacy ciphersuites in TLS (the attack is called “Sweet32” and has its own website). What they show is that ciphersuites using ciphers with a 64-bit block length — notably 3DES — are vulnerable to plaintext recovery attacks that work even if the attacker cannot recover the encryption key.
While the principles behind this attack are well known, there’s always a difference between attacks in principle and attacks in practice. What this paper shows is that we really need to start paying attention to the practice.
So what’s the matter with 64-bit block ciphers?
Block ciphers are one of the most widely used cryptographic primitives. As the name implies, these are schemes designed to encipher data in blocks, rather than a single bit at a time.
The two main parameters that define a block cipher are its block size (the number of bits it processes in one go), and its key size. The two parameters need not be related. So for example, DES has a 56-bit key and a 64-bit block. Whereas 3DES (which is built from DES) can use up to a 168-bit key and yet still has the same 64-bit block. More recent ciphers have opted for both larger blocks and larger keys.
When it comes to the security provided by a block cipher, the most important parameter is generally the key size. A cipher like DES, with its tiny 56-bit key, is trivially vulnerable to brute force attacks that attempt decryption with every possible key (often using specialized hardware). A cipher like AES or 3DES is generally not vulnerable to this sort of attack, since the keys are much longer.
However, as they say: key size is not everything. Sometimes the block size matters too.
You see, in practice, we often need to encrypt messages that are longer than a single block. We also tend to want our encryption to be randomized. To accomplish this, most protocols use a block cipher in a scheme called a mode of operation. The most popular mode used in TLS is CBC mode, in which each plaintext block is XORed with the previous ciphertext block (or, for the first block, with a random Initialization Vector) before being enciphered.
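To make the chaining explicit, here is a hedged Python sketch of CBC built by hand on top of 3DES with the `cryptography` package (in recent library versions TripleDES has moved to a "decrepit" module and is deprecated). Real code should use a library-provided, authenticated mode; padding is omitted here.

```python
# Hand-rolled CBC over a 64-bit block cipher (3DES), for illustration only.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 8  # 3DES block size in bytes (64 bits)

def cbc_encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    assert len(plaintext) % BLOCK == 0        # padding omitted for brevity
    ecb = Cipher(algorithms.TripleDES(key), modes.ECB()).encryptor()
    iv = os.urandom(BLOCK)
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        block = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = ecb.update(block)              # C_i = E_K(P_i XOR C_{i-1})
        out += prev
    return iv, out

iv, ct = cbc_encrypt(os.urandom(24), b"exactly sixteen!")
```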
The nice thing about CBC is that (leaving aside authentication issues) it can be proven (semantically) secure if we make various assumptions about the security of the underlying block cipher. Yet these security proofs have one important requirement. Namely, the attacker must not receive too much data encrypted with a single key.
The reason for this can be illustrated via the following simple attack.
Imagine that an honest encryptor is encrypting a bunch of messages using CBC mode. Following the description above, this involves selecting a random Initialization Vector (IV) of size equal to the block size of the cipher, then XORing the IV with the first plaintext block (P), and enciphering the result to produce the first ciphertext block (C = E(K, IV ⊕ P)). The IV is sent (in the clear) along with the ciphertext.
Most of the time, the resulting ciphertext block will be unique — that is, it won’t match any previous ciphertext block that an attacker may have seen. However, if the encryptor processes enough messages, sooner or later the attacker will see a collision. That is, it will see a ciphertext block that is the same as some previous ciphertext block. Since the cipher is deterministic, this means the cipher’s input (IV ⊕ P) must be identical to the earlier input (IV′ ⊕ P′) that created the matching block.
In other words, we have IV ⊕ P = IV′ ⊕ P′, which can be rearranged as P ⊕ P′ = IV ⊕ IV′. Since the IVs are random and known to the attacker, the attacker has (with high probability) learned the XOR of two (unknown) plaintexts!
What can you do with the XOR of two unknown plaintexts? Well, if you happen to know one of those two plaintext blocks — as you might if you were able to choose some of the plaintexts the encryptor was processing — then you can easily recover the other plaintext. Alternatively, there are known techniques that can sometimes recover useful data even when you don’t know both blocks.
The main lesson here is that this entire mess only occurs if the attacker sees a collision. And the probability of such a collision is entirely dependent on the size of the cipher block. Worse, thanks to the (non-intuitive) nature of the birthday bound, this happens much more quickly than you might think it would. Roughly speaking, if the cipher block is b bits long, then we should expect a collision after roughly 2^(b/2) encrypted blocks.
In the case of a 64-bit blocksize cipher like 3DES, this is somewhere in the vicinity of 2^32, or around 4 billion enciphered blocks.
(As a note, the collision does not really need to occur in the first block. Since all blocks in CBC are calculated in the same way, it could be a collision anywhere within the messages.)
Whew. I thought this was a practical attack. 4 billion is a big number!
It’s true that 4 billion blocks seems like an awfully large number. In a practical attack, the requirements would be even larger — since the most efficient attack is for the attacker to know a lot of the plaintexts, in the hope that she will be able to recover one unknown plaintext when she learns the value (P ⊕ P’).
However, it’s worth keeping in mind that these traffic numbers aren’t absurd for TLS. In practice, 4 billion 3DES blocks works out to 32GB of raw ciphertext. A lot to be sure, but not impossible. If, as the Sweet32 authors do, we assume that half of the plaintext blocks are known to the attacker, we’d need to increase the amount of ciphertext to about 64GB. This is a lot, but not impossible.
The Sweet32 authors take this one step further. They imagine that the ciphertext consists of many HTTPS connections, consisting of 512 bytes of plaintext, in each of which is embedded the same secret 8-byte cookie — and the rest of the session plaintext is known. Calculating from these values, they obtain a requirement of approximately 256GB of ciphertext needed to recover the cookie with high probability.
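These figures follow from simple back-of-envelope arithmetic, assuming 8-byte blocks and the standard birthday approximation:

```python
# Rough numbers behind the figures above: 2**64 possible 64-bit blocks,
# each 8 bytes on the wire.
import math

N = 2**64                                       # distinct 64-bit blocks
for n in [2**32, 2**35]:                        # blocks observed under one key
    p = 1 - math.exp(-n * (n - 1) / (2 * N))    # birthday collision probability
    print(f"{n} blocks = {n * 8 / 2**30:.0f} GiB, collision prob ~ {p:.2f}")
# 2**32 blocks =  32 GiB -> p ~ 0.39
# 2**35 blocks = 256 GiB -> p ~ 1.00
```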
That is really a lot.
How does the TLS attack work?
While the cryptographic community has been largely pushing TLS away from ciphersuites like CBC, in favor of modern authenticated modes of operation, these modes still exist in TLS. And they exist not only for use with modern ciphers like AES, but are often available for older ciphersuites like 3DES. For example, a connection made to Google at the time showed 3DES among the offered ciphersuites (the original post includes a screenshot of the session).
Of course, just because a server supports 3DES does not mean that it’s vulnerable to this attack. In order for a particular connection to be vulnerable, both the client and server must satisfy three main requirements:
- The client and server must negotiate a 64-bit cipher. This is a relatively rare occurrence, but can happen in cases where one of the two sides is using an out-of-date client. For example, stock Windows XP does not support any of the AES-based ciphersuites. Similarly, SSL3 connections may negotiate 3DES ciphersuites.
- The server and client must support long-lived TLS sessions, i.e., encrypting a great deal of data with the same key. Unfortunately, most web browsers place no limit on the length of an HTTPS session if Keep-Alive is used, provided that the server allows the session. The Sweet32 authors scanned and discovered that many servers (including IIS) will allow sessions long enough to run their attack. Across the Internet, the percentage of vulnerable servers is small (less than 1%), but includes some important sites.
- The attacker must be able to capture the session's traffic, and must be able to make the client transmit large amounts of known plaintext within it, for example by running malicious JavaScript in the victim's browser.
So what do we do now?
While this is not an earthshaking result, it’s roughly comparable to previous results we’ve seen with legacy ciphers like RC4.
In short, while these are not the easiest attacks to run, it’s a big problem that there even exist semi-practical attacks that undo the encryption used in standard encryption protocols. This is a problem that we should address, and these attack papers help to make those problems more clear. | <urn:uuid:c88e8394-4b25-4377-a526-5f2874d15269> | CC-MAIN-2017-04 | https://blog.cryptographyengineering.com/2016/08/24/attack-of-week-64-bit-ciphers-in-tls/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00039-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947035 | 1,797 | 2.8125 | 3 |
PDF and Flash files are ubiquitous with hundreds of millions of files on the web. They contain an enormous amount of useful, important and interesting information most of which is not available on-line in any other format. The audience for these files will include people with disabilities. Electronic files are particularly important for people with disabilities that make it difficult or impossible for them to access hardcopy. This may be because of a vision impairment, or a physical impairment that prevents them holding a book, or because they find reading difficult and benefit from having the text read to them. Electronic files can also provide alternative presentations of material, for example captioning for people with hearing impairments. There are moral, legal and financial reasons for making electronic information, including PDF and Flash, accessible to as wide an audience as possible.
Historically PDF and Flash were not accessible. In 2001 Adobe made changes to enable the files to be accessible. It has worked very hard since then to extend the document formats and the various readers and development tools to make it easier to create and distribute files that are accessible. As well as working on the technology Adobe has been very active on the various standards bodies, in particular ISO, and industry committees such as AIA to ensure the formats are fully defined and supported.
Even with all this effort there are still an enormous number of files being distributed that are not accessible. This is partly due to ignorance on the part of the files' creators, and partly because it has been difficult to create well-formed files. Adobe has made some announcements recently that tackle both these issues.
A new set of training materials is available made up of a set of documents on accessibility, including one describing how to create accessible PDF from Microsoft Word. This give clear guidance on how to author Word documents and the steps required to turn them into accessible PDF. The steps are clearly laid out and include description of some actions that are required even though it might be thought that they would have been done automatically. This clarity is important as it means that a user will not think they have done something wrong, or missed something out, when they have to touch-up the file at the end.
Although PDF can be made accessible the preferred format for many people with vision impairments is DAISY (the digital talking book standard). The latest version of Adobe InDesign CS4 includes a function to export a file in the DAISY DTBook format. Being able to generate standard print format, large print format, accessible PDF files and DAISY files from a single source with ease means that the document can be provided in the most convenient and usable format for a specific user. This is a better solution than trying to create a one-size-fits-all solution.
Flash CS4 Professional offers improvements to the FLVPlayback video component that make the default player controls accessible automatically, without any coding required by the developer. At present very few Flash videos are controllable from the keyboard; instead they require careful mouse positioning, which is impossible for people with vision impairments and difficult for people with limited hand control. In future any Flash video will be controllable by a small number of standard shortcut keys.
Colour Blind Support
People who are colour blind can find it difficult, or impossible, to distinguish colours that look distinctly different to people with full colour vision. Colour Universal Design (CUD) filters simulate what a person who is colour blind will see; the designer can then modify hues and tones to help the user distinguish the different areas of colour. At the same time the designer should consider distinguishing the areas of colour by other means such as patterns, borders or fonts.
Adobe has integrated support for Colour Universal Design (CUD) filters in Illustrator and Photoshop to help authors create accessible images.
These announcements show Adobe's continued commitment to improving the accessibility of its products and the accessibility of the output of these products. | <urn:uuid:975cab31-4e49-46be-b8f7-5a1f5ed8401c> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/adobe-accessibility-news/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00525-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946107 | 784 | 3.375 | 3 |
This article is the first of two providing a basic overview of a number of dynamic hooking techniques. Part 1 covers techniques that can be used in user mode and part 2 will cover techniques that work in kernel mode.
Hooking is the process of intercepting a program’s execution at a specific point in order to take another action. This may be simply to trace execution at interesting points, or to redirect and modify execution. At the point at which you wish to intercept, you place your hook.
A hook can be placed by modifying a program’s code on disk or even building it in at compile time, but this article focuses on dynamic hooking, where hooks are placed at runtime in memory. This allows hooks to be applied to any software whether we have the source code or not and whether we are able to modify the program’s files or not.
This article assumes that you have sufficient access to modify a program’s memory to place a hook. Practically speaking, this could be through attaching a debugger, injecting code into a process’s memory space or by having compromised an application by exploiting a vulnerability.
Hooking can be useful for a wide range of purposes, including tracing and debugging, monitoring API usage, and redirecting execution to modify a program's behaviour (for example, returning fake results).
The simplest way to visualise a hook is to consider a simple example. If you wish to hook a specific call of a function in a program, the target address of that call instruction can be changed to point to your own code which you have somehow injected into the program’s memory space. You can then choose whether to handle the call completely yourself, possibly providing a fake result, or you may choose to redirect execution to the original function, perhaps also changing the arguments or altering the return value.
So, while originally the call would look like this:
...
...
call func1
...
...

func1:
    ...
    return
It is altered to look like this:
...
...
call hook
...
...

hook:
    ...
    optional call to original
    ...
    return

func1:
    ...
    return
If you wanted to use the same hook function for multiple hooks, you could create a lookup table of return addresses and check this at the start of the hook to work out where the hook call originated, and then take a different action depending on this information. Of course this assumes you are not on a platform with technologies like ASLR enabled, which may randomise addresses making this simplistic method impossible. Nevertheless, this simple example demonstrates what a hook is.
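To make the simple example concrete, here is a minimal C sketch of patching a call site in this way on x86/x86-64 Linux. It assumes the instruction at call_addr is a 5-byte near call (opcode 0xE8 followed by a 32-bit relative displacement), that the hook lies within +/-2 GB of the call site, and, as noted above, that addresses are predictable; patch_call and make_writable are illustrative names, not library functions.

#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Make the page(s) containing the 5-byte instruction writable,
 * keeping them executable so the program can continue to run. */
static int make_writable(void *addr)
{
    long page = sysconf(_SC_PAGESIZE);
    void *base = (void *)((uintptr_t)addr & ~(uintptr_t)(page - 1));
    return mprotect(base, (size_t)page * 2,
                    PROT_READ | PROT_WRITE | PROT_EXEC);
}

int patch_call(uint8_t *call_addr, void *hook)
{
    if (call_addr[0] != 0xE8)            /* not a near call */
        return -1;
    if (make_writable(call_addr) != 0)
        return -1;
    /* The displacement is relative to the next instruction. */
    int32_t rel = (int32_t)((intptr_t)hook - (intptr_t)(call_addr + 5));
    *(int32_t *)(call_addr + 1) = rel;   /* unaligned write, fine on x86 */
    return 0;
}

After the patch, the original call instruction transfers control to the hook instead of func1, exactly as in the second diagram above.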
The Import Address Table (IAT) is loaded into memory from PE executables. It allows the memory address of functions imported from DLLs to be located. By locating the IAT in memory, you can patch it to redirect certain API function calls to hooking code.
IAT hooking happens in user mode and is a relatively easy way to hook every call to a specific API function or set of functions. Any time the program makes that API call, your hook code will run. However, DLLs can also be loaded dynamically at runtime, and when this happens there will be no IAT for that DLL. Furthermore, while the fact that this is a user mode technique makes it easier to implement, it also makes it easier to detect. As such, IAT hooking is not without its limitations.
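As a sketch of those mechanics (under the assumption that the target module and function are already loaded), the following C code walks the current executable's import descriptors on Windows and swaps a single IAT entry; hook_iat is an invented helper name, while the PE structures come straight from windows.h.

#include <windows.h>
#include <string.h>

/* Replace the IAT entry for `target` (imported from `dll_name`)
 * with `hook`; returns the original pointer so the hook can
 * forward calls, or NULL if nothing was patched. */
void *hook_iat(const char *dll_name, void *target, void *hook)
{
    BYTE *base = (BYTE *)GetModuleHandle(NULL);
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    if (dir.VirtualAddress == 0)
        return NULL;

    IMAGE_IMPORT_DESCRIPTOR *imp =
        (IMAGE_IMPORT_DESCRIPTOR *)(base + dir.VirtualAddress);
    for (; imp->Name != 0; imp++) {
        if (_stricmp((const char *)(base + imp->Name), dll_name) != 0)
            continue;                    /* not the DLL we want */
        IMAGE_THUNK_DATA *iat =
            (IMAGE_THUNK_DATA *)(base + imp->FirstThunk);
        for (; iat->u1.Function != 0; iat++) {
            if ((void *)iat->u1.Function != target)
                continue;
            DWORD old;
            VirtualProtect(&iat->u1.Function, sizeof(void *),
                           PAGE_READWRITE, &old);
            void *orig = (void *)iat->u1.Function;
            iat->u1.Function = (ULONG_PTR)hook;   /* install the hook */
            VirtualProtect(&iat->u1.Function, sizeof(void *), old, &old);
            return orig;
        }
    }
    return NULL;
}

A caller would typically resolve target first, for example with GetProcAddress(GetModuleHandleA("user32.dll"), "MessageBoxA"), and keep the returned original pointer in order to call through to the real function.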
Under Unix-like operating systems (including Linux), calls to shared libraries can also be hooked by making use of the LD_PRELOAD environment variable. By writing your own shared library with the functions you wish to hook defined by name you can hook these functions by loading that library at runtime with the target program. The LD_PRELOAD environment variable specifies a list of libraries to load first when a program is executed, so if you put the path to your library in this variable your own function will run instead.
If you also wish to redirect execution to the original function you are hooking, you can do so with the dlsym() function call which resolves a function name in a module to a memory address. Using this, the address of the original function can be located and used to make a call to it.
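As a minimal example (assuming a Linux system with glibc), the shared library below interposes on open(2), logs each path, then forwards to the original function located with dlsym():

/* hook.c
 * Build: gcc -shared -fPIC -o libhook.so hook.c -ldl
 * Run:   LD_PRELOAD=./libhook.so ./target_program
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <sys/types.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)  /* resolve the next "open" in link order: libc's */
        real_open = (int (*)(const char *, int, ...))
                        dlsym(RTLD_NEXT, "open");

    fprintf(stderr, "[hook] open(%s)\n", path);

    if (flags & O_CREAT) {              /* open() has a mode arg here */
        va_list ap;
        va_start(ap, flags);
        mode_t mode = va_arg(ap, mode_t);
        va_end(ap);
        return real_open(path, flags, mode);
    }
    return real_open(path, flags);
}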
An inline hook overwrites the start of the function you want to hook to redirect execution. This allows you easily to catch every call to that function, no matter where or when the call happens. Inserting an inline hook obviously destroys some of the early logic of the original function, so if you wish to call the original function as well, the hook code must compensate for the instructions that were overwritten. An inline hooker should therefore save the instructions that were overwritten when placing a hook. This may not be trivial to do on architectures like x86 where instructions are variable length, and saving the bytes that were overwritten may not preserve the meaning of the code. In such cases, automated inline hookers may need to disassemble the start of the function properly to preserve the original function’s meaning.
Inline hooking can be used in user mode, although similar techniques can also work in kernel mode.
Microsoft released a framework called Detours to help place inline hooks on Win32 API functions. Microsoft actually places a two-byte dummy instruction that does nothing at the start of functions (the instruction is "MOV EDI, EDI") to allow space to overwrite with a jump instruction harmlessly. The Detours package provides an API to enable custom hooks to be placed on API functions in this way.
In fact, the dummy instruction is only big enough to replace with a short jump. A long jump (to code further away) would take up too many bytes and still overwrite part of the function. The intention is that a short jump is placed to jump to five bytes before the function, which is set aside as spare padding space. In these five bytes, the long jump can be placed to redirect to your hook code.
The advantage of using the short jump first is that whether in its “MOV EDI, EDI” form or its short jump form, those two bytes are always a single instruction. This means that, if multiple threads are executing the function as you are hooking it, a thread will never end up executing from the middle of an instruction you just inserted for your hook; rather it will either hit your new short jump, or already be beyond it, safely executing the rest of the function. | <urn:uuid:d6ea9415-b343-4357-a081-7a0c17ac21a8> | CC-MAIN-2017-04 | https://www.mwrinfosecurity.com/our-thinking/dynamic-hooking-techniques-user-mode/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00525-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919019 | 1,364 | 3.765625 | 4 |
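Putting the two pieces together, a hedged sketch of that hot-patch sequence for a 32-bit Windows function with the described padding might look like the following C code; hotpatch is an illustrative name, and production code (such as Detours itself) would first verify that the expected MOV EDI, EDI bytes are really present before patching:

#include <windows.h>
#include <stdint.h>

int hotpatch(uint8_t *func, void *hook)
{
    uint8_t *pad = func - 5;   /* 5 spare padding bytes before func */
    DWORD old;
    if (!VirtualProtect(pad, 7, PAGE_EXECUTE_READWRITE, &old))
        return -1;

    /* Long jump in the padding: E9 <rel32 to hook>, where the
     * displacement is relative to the next instruction (pad + 5). */
    pad[0] = 0xE9;
    *(int32_t *)(pad + 1) = (int32_t)((intptr_t)hook - (intptr_t)func);

    /* Replace MOV EDI, EDI with a short jump to the padding
     * (EB F9 = jmp -7). One 16-bit store, so another thread either
     * sees the old instruction or the new one, never half of each. */
    *(volatile uint16_t *)func = 0xF9EB;

    VirtualProtect(pad, 7, old, &old);
    FlushInstructionCache(GetCurrentProcess(), pad, 7);
    return 0;
}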
There's a new big dog on the block. On June 20, this year's first update of the top 500 supercomputers list (which is updated twice a year) revealed a new first-place entry, an NEC-built vector processing system that is 5 times faster than the IBM system that was formerly ranked No. 1.
In fact, the supercomputer, called Earth Simulator, has as much computing power as the next 12 systems on the list combined.
As of just three years ago, the computing power of every one of the world's top 500 computers would have to be combined to reach Earth Simulator's 35-teraflop capacity, showing how fast the front edge of this technology curve is moving.
The system was built for the Earth Simulator Center in Yokohama, Japan.
The computer's 35 teraflops of performance, 5,120 CPUs and 10 terabytes of RAM are used to model a virtual earth down to a 1-km resolution. The simulation will be studied by researchers to examine the effects of global warming on climate change, acid rain and other types of air pollution, tectonic plate movement, and other earth-wide phenomena.
The world's now-second-fastest system, ASCI White, located at the Lawrence Livermore National Laboratory, is used by the U.S. Department of Energy to model the effects of nuclear explosions.
The fastest PC-based supercomputer (at No. 35 on the list) is an AMD Athlon MP-based cluster at the University of Heidelberg, in Germany. | <urn:uuid:8ccacbe9-c974-4c7e-9ef3-85192c71e3ef> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Desktops-and-Notebooks/NEC-Builds-Worlds-Fastest-System | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00341-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946081 | 327 | 2.921875 | 3 |
For prospective borrowers who have no credit history (a common problem for immigrants, whose credit starts anew when they move to the U.S.), economists and startups are using metadata from smartphones to see how reliable a borrower is in other areas of their lives, and so to help determine their likelihood of paying back a loan.
A recent article in the New Scientist cites research conducted by Brown University economist Daniel Björkegren and the Entrepreneurial Finance Lab, which involved combing through cellphone data of 3,000 borrowers from a Haitian bank to identify such trends as how often they pay their cellphone bills, how quickly they return important phone calls, and travel behavior based on location data.
The researchers used an algorithm to process this data and its relation to the user's credit, finding that the bank examined in the study could have reduced loan defaults by 43%, according to the New Scientist report.
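To illustrate the general shape of such an algorithm (not the actual model from the Björkegren/EFL study), repayment scoring over phone-metadata features can be as simple as a logistic regression; every feature name and weight below is invented for the example:

#include <math.h>
#include <stdio.h>

typedef struct {
    double on_time_bills;   /* fraction of phone bills paid on time   */
    double callback_speed;  /* how quickly important calls returned   */
    double mobility;        /* regularity of travel patterns          */
} features;

/* Estimated probability of repayment via logistic regression. */
static double repay_probability(features f)
{
    const double bias = -1.0;                 /* invented weights */
    double z = bias + 2.0 * f.on_time_bills
                    + 1.0 * f.callback_speed
                    + 0.5 * f.mobility;
    return 1.0 / (1.0 + exp(-z));
}

int main(void)
{
    features applicant = { 0.9, 0.7, 0.6 };
    printf("estimated repayment probability: %.2f\n",
           repay_probability(applicant));
    return 0;
}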
This use of metadata is apparently catching on – the article mentioned three startups that use mobile and web data in similar ways, including one that guesses a user's approximate credit based on the credit scores of their connections on Facebook and Twitter. And since this data could theoretically be used to both identify more reliable borrowers and reduce the number of risky loans, it's probably not long before larger banks embrace this approach. This data could very well provide additional context for those with long, established credit histories.
One important concern raised briefly in the New Scientist report is this practice's potential impact on privacy, which seems quite substantial. Yanhao Wei from the University of Pennsylvania told the publication, "you may give people the choice to give data or deny data, but in those cases, denying itself is a bad sign." Even the increasingly rare privacy-conscious web user will have difficulty withholding their data if doing so will make it more difficult for them to buy a home, start a business, or send their children to college.
Also interesting is the broader impact this use of data could have on the economy. If smartphone users are aware that banks are taking their location data into account, maybe they'll think twice before making an impromptu trip to a casino. And even when people spend their money in reckless ways using cash, banks could still figure out what they were doing with it.
Knowledge is the DNA of a government organization. And knowledge management (KM) is the art and science of organizing, preserving, connecting and sharing that knowledge. The objective is to transform an organization into a learning culture that fosters critical thinking and facilitates better decision-making.
When knowledge is readily reliable, accessible and shared throughout an organization, knowledge workers can perform their jobs efficiently. The late Peter Drucker first discussed the idea of a knowledge worker in 1959 as a person who works primarily with information, or develops and uses knowledge in the workplace.
They make fewer mistakes, avoid redundancy, communicate better with their associates, and can draw upon all the expertise and capabilities of the organization. New knowledge workers can get up to speed quickly. When they can easily tap into institutional knowledge as a basis for action, they can better execute mission-critical transactions, provide better service, and ultimately develop innovative solutions to serve the organization's associates, stakeholders and the public. This makes knowledge management a powerful tool for the public CIO -- who is uniquely positioned to foster KM initiatives through judicious use of technology.
Technology not only provides the means of access to the institution's knowledge, it can also help people make connections they would not ordinarily make, as well as inspire new knowledge. It can frame information in a manner that fosters new ways of thinking. When knowledge is easily available, contextual and relevant, it enhances routines. Through technology, individual knowledge sharing and group learning are supported.
Enterprisewide technology allows business solutions across silos and streamlines workflow processes -- two critical components of a KM infrastructure. Technology facilitates knowledge sharing by offering a wide range of information processing support, including tools for content management, collaboration, publishing, personalization and taxonomy development. It delivers information to local or remote users in a variety of formats, such as wireless or intranet.
Plugging the Knowledge Drain
The need for effective KM in government agencies is greater today. According to a May 2005 Forrester Research briefing, The Retiring Workforce Is Creating a Knowledge Void in Government and Regulated Industries:
46 percent of U.S. government employees are 45 or older.
45 percent of U.S. public employees will be at retirement age within the next five years.
For many government agencies, the turnover of knowledge workers due to retirement, reorganization and job mobility threatens the agency's ability to execute its mission. Mandated downsizing in the 1990s, the growth of outsourcing and increasing retirements in the work force have resulted in a loss of institutional memory, according to a 2001 U.S. General Accounting Office report. When expertise walks out the door, those remaining don't have the knowledge and experience to do those jobs; they must reinvent the wheel or risk failure.
KM can prevent this loss of institutional memory and expose new and existing workers to the expertise residing within their agency. It can help government preserve institutional knowledge by capturing and maximizing hard-won knowledge -- best practices, lessons learned and stories told -- and by connecting prodigies with apprentices.
The very process of collecting an organization's knowledge creates a culture that encourages individual knowledge sharing and learning. Assigning junior and mid-level associates to harvest knowledge from subject-matter experts, collaborative teams and workgroups preserves and develops resources for a future generation of knowledge workers. At the same time, it embeds a learning and knowledge-sharing culture in the organization.
Technology delivers the infrastructure to embed, preserve and influence knowledge. For example, many federal government CIOs have successfully piloted and implemented KM practices as part of the e-government initiative. The Federal CIO Council has established the Knowledge Management Working Group, an interagency body whose purpose is to "bring together guidance on the content, process and technology needed to ensure that the federal community makes full use of its collective knowledge, experience and abilities." Its Web site offers a wealth of educational resources, information, speeches and presentations on KM topics.
Over the past 25 years I have watched KM evolve more by trial and error than by design, and have observed a number of practices that can be key contributors to successful KM implementation within an organization.
These have been distilled into 10 key strategies for facilitating KM within your agency.
Better Knowledge Management
1. Identify high-risk business processes likely to produce negative outcomes if errors are made or critical employees leave. The critical business processes of an organization are logical places to seek "pain points" needing solutions.
CIOs should focus on identifying critical business decision points, monitoring workers going through their day, learning about workarounds -- how people expedite completion of a task bypassing protocols -- and identifying where performance can be impaired if the right information is not readily available.
2. Take a user-centric approach to knowledge management. Technology often starts with a structure or methodology requiring users to learn how to take advantage of it. Good KM allows user experience to drive all KM implementations, and tailors projects to the way jobs are done.
CIOs should examine how users employ existing tools and resources, explore user needs and desires for new tools and means of access, and employ these findings in all phases of the design and implementation of a KM initiative.
3. Apply business case analysis methods and information to a KM initiative to predict its usefulness. The use of authoritative, third-party research can help document return on investment for senior management.
Though it seems obvious that fast and reliable access to information leads to less time wasted and quicker problem solving, the inverse also proves true: The inability to find needed information quickly comes at considerable and demonstrable cost. According to the March 2005 IDC market research report, The Hidden Costs of Information Work, an organization that employs 1,000 knowledge workers:
loses $5.3 million annually in time wasted by employees not finding information. Not finding information is defined as poor search, lack of integrated access to all the enterprise's collections of information and lack of good content;
an average worker spends 3.5 hours per week searching and not finding needed information at a cost of more than $5,000 per worker annually; and
an average worker spends 3 hours per week recreating content at a cost of more than $4,500 per worker annually.
4. Build consensus and ownership. KM initiatives are typically an iterative process, requiring feedback from stakeholders at every major stage of the process. CIOs can align theory and practice by gaining input from stakeholders, including experts, management, technologists and workers closest to business processes.
Consensus building starts by identifying the subject specialists who use knowledge to do their job, identifying big-picture users -- those who understand best practices and business processes -- and gaining senior-level sponsorship to ensure objectives remain aligned with business requirements.
5. Identify information users who "do." This means gathering information about how knowledge supports business goals and concluding with a knowledge Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis. To decide which information is valuable, determine what people really want to know rather than what management thinks they need to know.
6. Identify the knowledge experts who "know." CIOs need to develop a formal process for identifying institutional experts, and discover roles and responsibilities other than those listed in a job description (e.g., a person's job is described as "IT specialist," but her expertise includes professional photography and photo archiving for her region's agency).
To harvest institutional knowledge, identify employees a year prior to retirement -- capture those workers' expertise and codify it as stories, best practices, etc. Useful techniques for capturing retiring expert knowledge include exit interviews, mentoring and job shadowing.
Job shadowing captures key learning by having a KM team member work directly with a person on the job. It allows learning by both the KM team member, who audits the knowledge, and the expert, who becomes engaged in thinking about new ways of doing things. Develop a strategy for using or reusing this knowledge (e.g., new employee orientation programs or training).
Monitor users to learn the knowledge experts they rely upon, and identify how knowledge flows through the organization. Several expert locator software packages, such as AskMe, Tacit Software and Sopheon, are available to help agencies identify their experts. Social network analysis software, such as KNETMAP, BranchIt and InFlow provide a visual representation of social networks within organizations and uncover relationships that can be the basis for new communities.
7. "Connect people vs. collect information" is a false argument. KM pundits often argue whether the resulting knowledge from Communities of Practice (CoPs) is more valuable than codifying knowledge through the collection of best-of-breed documents. In practice, implementing KM requires multiple approaches.
Etienne Wenger, originator of the concept of CoPs, defines the practice as a composition of members informally bound by the same problems and learning together through activities that attempt to solve a set of problems. CoPs are not defined by a business unit or a department/regional location, nor are they necessarily temporary, like teams formed around a project. Rather they appear as "islands" of knowledge where the emphasis is on problem solving and shared learning.
Connecting people produces new relationships, solutions, CoPs and sources of knowledge. Collaborative tools help facilitate group communication across multiple locations or virtual communities.
Codifying, managing and utilizing expertise in the form of best practices, lessons learned or storytelling harvests institutional knowledge for the next generation of knowledge workers. This is particularly useful when expertise is retiring or rapidly walking out the door.
8. Help people find information more easily by structuring it. A taxonomy provides entry and connection points to users: It classifies, structures and manages information, making it easier to find what is there. A taxonomy that anticipates user needs and responses, together with a well designed underlying information architecture, is the infrastructure behind good search results. Reducing time spent searching for information creates a more productive organization.
A taxonomy should drive each Web site, enabling users to access the information that is there and mine it for key insights. No two people think alike, nor do they have the same set of search skills. Therefore knowledge repositories should provide multiple ways for people to access information. Consult with CoPs, experts, stakeholders and customers to analyze topics and select keywords for a taxonomy that will reflect the way people perform their tasks.
9. Use technology judiciously to embed KM in the organization. Just-in-time delivery of knowledge changes the existing ways of doing things; it is not a separate activity requiring additional time and effort. When access to knowledge is easy, it becomes intuitive or instinctive, and improves performance.
Integrate and utilize current investments in technology to reach each user. Help a workgroup solve a business problem by providing the collaboration tools that will allow them to communicate more efficiently. Extend the reach of knowledge to local, mobile and remote end-users.
10. Develop metrics and usage statistics to measure success. Test to find out what works and what doesn't. Metrics are critical to evaluate project success/failure, to provide feedback for improvement or to plan the next phase of a project. Measuring techniques must be linked to business objectives and should be developed in the earliest stages of a project. Useful metrics, and techniques for gathering them, include:
less time to complete transactions;
faster response times to key issues;
greater customer satisfaction;
Web site usage measurement;
data mining queries; and
feedback from stakeholders.
Preserving the government's enormous data resources, experience and expertise is a critical aspect of the public mission. The implementation of a strategic plan for KM will have an important and lasting positive impact on the ability of the government to do its job.
And as a final note, keep in mind that, according to Richard Saul Wurman's Rules of Organizing Information in Information Architects, "Most information is useless. Give yourself permission to dismiss it." | <urn:uuid:b944eafd-d1f7-46dd-94ae-31dcc52b78e6> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/pcio/100560509.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00067-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933013 | 2,453 | 2.671875 | 3 |
During an emergency letting your family know that you are safe can bring your loved ones great peace of mind. That's why the Red Cross has developed a communication tool to help families and individuals notify loved ones that they are safe during an emergency.
The Red Cross Web site now features a secure section called Safe and Well where people in the disaster-affected area can register their well-being and where friends and family elsewhere can access that information.
Disaster victims and people outside the impacted area can both be part of the communication process: those within the affected area register their well-being on the site, while friends and family elsewhere search for their loved ones' messages.
As with any other Red Cross service or product, Safe and Well safeguards the privacy of disaster victims and protects their information according to privacy law standards. Messages - but not locations - will be viewable by friends and family. The Red Cross recommends that people affected should determine how best to communicate their contact information and whereabouts to family members. During the initial hours of this disaster it is important to stay connected with loved ones. | <urn:uuid:28c8713f-e198-4649-b6b0-90fe165e9051> | CC-MAIN-2017-04 | http://www.govtech.com/products/Red-Cross-Offers-Tool-to-Keep.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00003-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962785 | 195 | 2.515625 | 3 |
Beginning in 2008, a cyberbullying training program for junior high and high school personnel will be offered by the Illinois Attorney General's Office. AG Lisa Madigan announced the new 90-minute training program at the third annual Illinois Internet Crimes Against Children (ICAC) task force year-end meeting Monday.
With the rise of social networking and instant messaging, bullies have moved from the playground to the Internet, out of the sight and earshot of teachers and parents. Cyberbullying occurs when a child is harassed or targeted by peers online or via other digital devices and often can lead to violence. Nearly 60 percent of children state that someone has made mean or hurtful comments about them online. The Attorney General's 90-minute training program will demonstrate what cyberbullying is, how it happens and how educators can recognize signs and take action to thwart this practice.
"Protecting our children against dangers they face on the Internet is one of our greatest challenges," Madigan said. "By keeping up with the ways that children use technology to communicate with each other, our teachers, school administrators and police officers can work together to prevent bullying and violence."
Madigan's office operates the ICAC task force as part of a national program that was created to help state and local law enforcement agencies enhance their investigative response to offenders who use the Internet and digital devices to sexually exploit children.
"From con artists to cyberbullies to online predators, the Internet unfortunately is popular with people who are looking for their next victim, many of them children," Madigan said. "We rely on our dedicated ICAC law enforcement officers to identify, arrest and convict these offenders." | <urn:uuid:746e2654-890d-4233-9b4b-8e3a1faff890> | CC-MAIN-2017-04 | http://www.govtech.com/security/Illinois-Attorney-General-Announces-Cyberbullying-Training.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00489-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958523 | 334 | 2.8125 | 3 |
Raytec IR lighting technology is being used to improve security and safety at Malaga Cathedral, on Spain's Costa del Sol. The baroque cathedral is one of the city's main tourist attractions and is situated close to a number of other historic buildings in the city center.
When planners decided to improve surveillance at the site with a new system, they specified that it be designed to have minimal visual impact. The use of IR lighting was a key part of the solution. "Lighting is an essential ingredient for surveillance systems. Without adequate lighting at night the images will be grainy and substandard," said David Lambert, Director of Sales and Marketing, Raytec. "But in tourist areas it may not be acceptable to have much or additional visible lighting which may detract from the ambience of the scene."
To solve the problem, the IR units were chosen because they don't cause light pollution and allow high-performance images to be captured at night. "As the light from the illuminators is invisible to the human eye, they increase security around Malaga Cathedral without detracting from the character of the building, and their neat look fits in well within the architecturally sensitive environment," Lambert said.
The Border Gateway Protocol (BGP), defined in RFC 1771, provides loop-free interdomain routing between autonomous systems. (An autonomous system [AS] is a set of routers that operate under the same administration.) BGP is often run among the networks of Internet service providers (ISPs).
Note: In order to get AS information, you need to configure your router to include AS info. AS information collection is resource intensive, especially when configured for origin-AS. If you are not interested in monitoring peering arrangements, disabling AS collection may improve NetFlow Analyzer performance.
Enter the global configuration mode and issue the following commands to enable BGP routing and establish a BGP routing process:
router bgp autonomous-system
Enables the BGP routing process, which places the router in router configuration mode.

network network-number
Flags a network as local to this autonomous system and enters it in the BGP table.
BGP supports two kinds of neighbors: internal and external. Internal neighbors are in the same autonomous system; external neighbors are in different autonomous systems. Normally, external neighbors are adjacent to each other and share a subnet, while internal neighbors may be anywhere in the same autonomous system.
To configure BGP neighbors, issue the following command in router configuration mode:
neighbor ip-address remote-as autonomous-system
Specifies a BGP neighbor.
The following example shows how BGP neighbors on an autonomous system are configured to share information.
router bgp 109
network 22.214.171.124
network 126.96.36.199
neighbor 184.108.40.206 remote-as 167
neighbor 220.127.116.11 remote-as 109
neighbor 18.104.22.168 remote-as 99

In the example, a BGP router is assigned to autonomous system 109, and two networks are listed as originating in that autonomous system. Then the addresses of three remote routers (and their autonomous systems) are listed. The router being configured will share information about networks 22.214.171.124 and 126.96.36.199 with the neighboring routers. The first neighbor listed is in a different autonomous system; the second neighbor remote-as router configuration command specifies an internal neighbor (with the same autonomous system number); and the third neighbor remote-as router configuration command specifies a neighbor in a different autonomous system.
If you have configured BGP on your network and want NetFlow to report on autonomous systems (AS info), issue the following commands on the router in global configuration mode:

ip flow-export destination ip-address udp-port
Exports the NetFlow cache entries to the specified IP address. Use the IP address of the NetFlow Analyzer server and the configured NetFlow listener port. The default port is 9996.

ip flow-export version {5 | 7} [origin-as | peer-as]
Exports NetFlow cache entries in the specified version format (5 or 7). If your router uses BGP, you can specify that either the origin or peer ASs are included in exports – it is not possible to include both.
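For context on where those AS numbers end up, the C struct below shows the standard on-the-wire layout of a NetFlow version 5 flow record, whose src_as and dst_as fields carry the origin or peer AS selected by the command above (all fields are in network byte order; this is an illustrative definition, not code from NetFlow Analyzer):

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t srcaddr;    /* source IP address                        */
    uint32_t dstaddr;    /* destination IP address                   */
    uint32_t nexthop;    /* IP address of next-hop router            */
    uint16_t input;      /* SNMP index of input interface            */
    uint16_t output;     /* SNMP index of output interface           */
    uint32_t dPkts;      /* packets in the flow                      */
    uint32_t dOctets;    /* total Layer 3 bytes in the flow          */
    uint32_t first;      /* SysUptime at start of flow               */
    uint32_t last;       /* SysUptime when last packet was seen      */
    uint16_t srcport;    /* TCP/UDP source port                      */
    uint16_t dstport;    /* TCP/UDP destination port                 */
    uint8_t  pad1;
    uint8_t  tcp_flags;  /* cumulative OR of TCP flags               */
    uint8_t  prot;       /* IP protocol number                       */
    uint8_t  tos;        /* IP type of service                       */
    uint16_t src_as;     /* origin or peer AS of the source          */
    uint16_t dst_as;     /* origin or peer AS of the destination     */
    uint8_t  src_mask;   /* source address prefix mask bits          */
    uint8_t  dst_mask;   /* destination address prefix mask bits     */
    uint16_t pad2;
} netflow_v5_record;
#pragma pack(pop)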
What began as a Silicon Valley startup, Tesla has quickly grown to be one of the most successful and appealing brands in the auto industry. They’re even branching out to other industries, like space travel and home energy.
While they weren’t the first company to create an electric car, their success in the market is unparalleled.
It wasn’t just a case of, “If you build it, they will come.” Other big-name auto companies had tried to make a car without emissions and failed. But with soaring greenhouse gasses, and gas-powered cars being one of the biggest contributors, it was clear that a market existed if a company could figure out how to make an electric car widely available.
So, what was the key to their success? We believe it’s that they understood just how powerful yes can be. Yes to revitalizing an industry. Yes to making electric cars available to the masses. Yes to harnessing new forms of electric energy. Instead of seeing challenges as roadblocks, they saw the opportunity no one else did.
A Brief History
The origins of the modern electric car date back as far as 1832 with Scotsman Robert Anderson’s invention. In the early 1900s, a surprising one-third of cars on the road were electric. It was also around this time that Ferdinand Porsche created the first hybrid vehicle.
So, what happened to electric cars in the 184 years since? Electric cars had a short battery life and couldn't withstand rough rural roads, and their biggest challenger was the release of Henry Ford's Model T in 1908, which solved those problems. Because the Model T was also affordable (it cost around $650, almost one-third the price of an electric roadster at $1,750), development on the electric model fell by the wayside.
A Fresh Start
In the late 1990s, more people and companies were becoming environmentally conscious and hybrids were growing in popularity, opening the door for Tesla, which set out to build a luxury car that could drive 200 miles on a single charge.
Tesla itself began as a simple question, “Can we build a viable electric car?” The answer to that question hinged on finding a way to increase the short driving range that plagued many early models.
Building on the technology of earlier hybrid and electric models, they found a way to use lithium-ion batteries, which were already widely available—proving that success doesn’t always mean saying yes to something new, but saying yes to a new angle.
Using these batteries, they were able to take advantage of hybrid technology of converting energy from braking back into power, while fueling rocket-like acceleration.
YES to Economy of Scale
Tesla’s history of yes taught them to do the same in all corners of the business. While they had investors early on, they needed to generate cash flow to continue operations.
One way of doing this was choosing to sell cars themselves rather than going through dealerships, which cut fees and sped up the time from assembly line to road.
They also started taking pre-orders to get cash flow, even before the cars were built.
But perhaps the smartest move was financing the more economic vehicles with early, high-end models. In 2008, Tesla successfully launched their high-end Roadster, which paved the way for the next iteration, the Model S. That helped finance the Crossover, which finally led to the Model 3, a car affordable and roomy enough for the average driver.
YES to the Relentless and Unyielding Pursuit of Innovation
While Tesla set out to make sustainable transportation a reality, their attitude of yes is also giving rise to other technologies, like driverless cars, energy, and battery innovation.
They’re working alongside SolarCity, so that people who own both a Tesla and solar panels on their homes can not only offset their car’s electric charge, but if fully utilized, can actually supply energy back into the grid.
They’re taking home energy one step further by creating battery packs that store and supply energy to individual homes, taking them off the grid altogether. Plus, they’re partnering with Panasonic to create scalable battery packs for commercial and utility-scale projects. | <urn:uuid:55098bdc-86d0-4791-b350-995a1e05e7b1> | CC-MAIN-2017-04 | https://www.citrix.com/blogs/2016/10/12/how-an-electric-car-company-was-built-on-yes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00149-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.971098 | 877 | 3 | 3 |
education to culture. One city official said Gothenberg is a leader in GIS because it has consistently focused on manageable projects, eventually leading to citywide adoption. In addition, the city's elected officials actively supported the strategic development of GIS, providing financial and political support.
Although the technology is being used in even the most remote parts of the world, applications will depend upon multiple factors including a country's economic and political history. In developing nations, GIS can help define what the landscape of a country looks like now and what it may become. ESRI's Thomas said Thailand is using GIS as a tool for urban planning. In Latin America, particularly in Colombia, GIS is focused on crime analysis and public safety. And the Israeli police use GIS in the public safety and defense arena.
Japan recently launched an aggressive e-government campaign that is using GIS data for management and planning. The island-nation is tracking development to maintain its natural and historical beauty as populations migrate from urban centers, potentially changing the traditional character of the land.
But, one of the most daunting efforts at mapping a country whose character is radically changing is taking place in Eastern Europe. The former Soviet Union was a palette of state-owned land with meaningless or nonexistent boundaries. Scrambling to create economic opportunities, Russia looked to private ownership of land as a resource. "Now they realize that one way to leverage money is to sell property and then tax it," said Kevin Daugherty, special projects manager for ESRI, who spent the last decade working with the Russian government. "Land reform in the former Soviet Union is one of the biggest things going for economic growth. GIS is at the core of that effort."
In the former Soviet Union, there had been no businesses for land or aerial photography or surveying, "You need the infrastructure as well as the technology to even begin to construct a parcel of land," Daugherty said. Early projects included some failed procurements as Russians learned about the world marketplace. The changes also required new legislation and political buy-in -- still a challenge in Russia as local jurisdictions continue to distrust the federal government.
Today, Russia is buying GIS for other uses, such as crime analysis and police work, utility and forestry management, and emergency management. Daugherty said the Russian Central Bank uses GIS to analyze the flow of money and tax generation.
Creating New Opportunities
GIS applications are key to development for many emerging nations, helping facilitate changes that might otherwise take centuries, according to URISA's Gentes. "Land records in the old Soviet Union and Latin America are often sketchy and confused," he said. "GIS is allowing them to go from 16th century, archaic land deeds to precise-to-the-inch recording of actual boundaries and land management issues."
Gentes, who is also the mayor of Round Lake, Ill., observed the power of GIS in an emerging economy during 9-11 when URISA members were at an annual conference in Jamaica. Stranded there during the air travel ban, he investigated and was impressed by the country's use of GIS.
"The prime minister has given a lot of ownership to the minister of the interior, who has a complete shop that manages GIS throughout the entire country," Gentes said. "They have a very sophisticated operation. The political leadership saw the value and they realized what GIS could do for them."
Gentes said he had a similar experience in his own Illinois village and now insists that all decisions are made with the aid of a GIS map.
Political buy-in and executive support is critical to getting value from GIS. "Educating leaders on the value of GIS, can help them see why a project is worthwhile," he said. "I see the value inherently, | <urn:uuid:4de7dc9d-db17-4e9c-8ca3-85a2cbbb5b38> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/Picture-the-World.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00479-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97388 | 782 | 2.6875 | 3 |
College students work to address vaccine storage problems with temperature monitoring
Thursday, May 16th 2013
Vaccines work best when kept within a narrow temperature range, but many medical facilities fail to meet these standards, often erring on the side of keeping their refrigerators too cold. Since poor temperature management can damage vaccines and reduce their effectiveness, researchers are looking for ways to ensure temperature stability in vaccine storage areas. A recent project from engineering students at Rice University paired temperature sensor equipment with a controller that regulates power to make refrigeration more consistent.
The Rice University project was prompted by Dr. Patrick McColloster, an associate professor of family and community medicine at Baylor College of Medicine, whose research in vaccine storage has shown that many medical providers fail to implement proper temperature controls. In a 2011 study of Houston medical facilities, McColloster found that many were freezing their vaccines, which reduces their effectiveness.
His findings were supplemented by a U.S. Department of Health and Human Services study that found vaccine refrigeration problems around the country. In that study, vaccines at 76 percent of the providers audited were exposed to inappropriate temperatures for at least five cumulative hours during a two-week study period.
"The problem isn't that the vaccines are wasted or thrown away because of improper temperature management," said Amanda Walborn, a member of the SAFE Vaccine team, which developed the refrigeration control solution. "It's that the vaccine gets damaged and nobody knows it. And it gets administered anyway."
Solving the storage problem
The problem of inconsistent temperatures could be addressed by installing laboratory-grade refrigerators at medical providers' facilities, but doing so is cost prohibitive for most smaller practices, McColloster explained. Many of these providers use commercial refrigerators, which lack sophisticated temperature monitoring and control features.
The Centers for Disease Control and Prevention's vaccine handling guide mandates most vaccines be kept between 2 degrees C and 8 degrees C. Vaccines are often exposed to temperatures outside this range due to a lack of temperature regulation in the refrigerator unit. Since nurses and doctors are constantly opening refrigerators throughout the day, they often turn down the refrigerator's thermostat to compensate for the outside heat, McColloster found. This approach can cause the vaccines to freeze when the door is left closed for extended periods.
"They set it to the lowest setting to keep it cold enough," explained Andres Martin de Nicolas, another student on the SAFE Vaccine team. "But if they leave it there overnight and during weekends, the temperature will drop too much. It's very easy to overlook."
The SAFE Vaccine team's solution addresses this issue by connecting temperature sensors inside the refrigerator to a controller mounted on the outside. Based on the sensor input, the controller regulates the amount of power the refrigerator draws, thus impacting the amount of cooling.
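The article doesn't publish the team's control logic, but an external regulator of this kind typically reduces to a hysteresis loop held inside the CDC's 2-8 degrees C band. The POSIX C sketch below illustrates the idea; read_temp_c() and set_compressor() are hypothetical hardware-interface functions, and the 4-6 degrees C band is an invented setpoint, not the SAFE Vaccine team's actual design:

#include <stdbool.h>
#include <unistd.h>

extern double read_temp_c(void);        /* sensor inside the fridge  */
extern void   set_compressor(bool on);  /* gate power to the unit    */

#define TARGET_HIGH_C 6.0  /* start cooling above this temperature   */
#define TARGET_LOW_C  4.0  /* stop cooling below this temperature    */

int main(void)
{
    bool cooling = false;
    for (;;) {
        double t = read_temp_c();
        /* Hysteresis keeps the compressor from short-cycling and
         * keeps the setpoint well away from freezing. */
        if (!cooling && t > TARGET_HIGH_C)
            cooling = true;
        else if (cooling && t < TARGET_LOW_C)
            cooling = false;
        set_compressor(cooling);
        sleep(5);
    }
}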
Some of the students plan to keep working on the project over the summer to create a functional prototype that incorporates features such as a power backup system, essential for avoiding temperature swings in a blackout or brownout. While McColloster's original project plan called for the construction of a new refrigerator, the students determined it would be easier, more precise and more cost effective to develop an external temperature monitoring system.
Medical facilities can implement similar technology to keep their vaccines at a stable temperature by deploying temperature sensors like the ones from ITWatchDogs. Temperature sensor data can be used to create regulating systems such as the SAFE Vaccine team's device or to enable remote monitoring that allows facility owners to respond quickly in the event of unusual temperature conditions. By leveraging sensor data, medical providers can improve vaccine storage and reduce the prevalence of vaccines being rendered ineffective due to poor temperature conditions. | <urn:uuid:89582ca9-243e-4ba3-8bbe-bf9aa86b028a> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/healthcare/college-students-work-to-address-vaccine-storage-problems-with-temperature-monitoring-440954 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00387-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948804 | 750 | 2.78125 | 3 |
In July 2004, the U.S. Department of Health and Human Services (HHS) announced a landmark, 10-year plan to build a Nationwide Health Information Network (NHIN) to link health records nationwide. The plan's success, however, depends on eliminating paper medical files and creating an electronic health record (EHR) for every American.
Developing EHRs -- which, like traditional patient files, would contain detailed information about an individual's medical care and health history -- is a high-visibility issue for the HHS because of skyrocketing health-care costs and the dramatic impact they have on public-sector budgets.
Studies over the past several years point to health IT as a tool for improving quality of care, reducing errors and delivering significant cost savings, according to the HHS, and the potential value of the interoperable exchange of health information among disparate entities is substantial.
National implementation of fully standardized interoperability of health information between providers and other health-care organizations could save $77.8 billion annually -- approximately 5 percent of the projected $1.7 trillion spent on health care in the United States in 2003 -- according to the HHS Office of the National Coordinator for Health Information Technology (ONCHIT).
Other studies estimate that 20 percent to 30 percent of healthcare spending in the United States -- up to $300 billion each year -- goes to treatments that don't improve health status, are redundant, or are not appropriate for the patient's condition, said ONCHIT.
It's clear that EHRs and a national health IT strategy can play a huge role in streamlining the delivery of health care to patients across the country.
Roughly two years later, however, widespread adoption of EHRs is far from smooth. Making the transition is clearly a difficult process, but regional efforts within states appear to be accomplishing the most.
Texas and Massachusetts are strong advocates for EHRs, and have begun implementation in various health care facilities. The role of state government in this transition, however, remains unclear.
Though the push toward a NHIN is decidedly top down, prodding physicians, hospitals and health systems to adopt EHRs is necessarily grass-roots.
The best person to persuade a roomful of skeptical doctors of EHRs' efficiency gains and enhanced access to health information is another physician. It's similar to the spread of e-government: State and local governments look to see the root of a particular trend, then talk to the officials from that jurisdiction to pick their brains.
"I think this is probably what President Eisenhower faced when trying to grow an interstate highway system that all of us can use and travel on," said Pat Wise, vice president for Healthcare Information Systems at the Healthcare Information and Management Systems Society (HIMSS).
Designing a highway system capable of carrying all sorts of traffic -- local, regional and national -- requires significant advance planning, Wise said, and EHR adoption is no different.
"You have to map out, in some fashion, how these documents should travel," she said. "And clearly, not only do some people want to travel from the East side of the nation to the West and from the North to the South, but the larger number of people just want to travel in and around the city themselves. They don't even want to go to another state."
This is because most health-care referral patterns are local, Wise explained, and physicians, hospitals and health systems in cities have begun building health information exchanges (HIEs), often funded by federal grant money. According to the HIMSS's latest data, approximately 137 HIEs dot the country, and more crop up every month.
The push behind the HIEs' creation is to improve and simplify health care. HIEs serve as the mechanism for physicians, hospitals and health systems to exchange patient information electronically -- information that, so far, hasn't been readily shared by health-care providers.
"There are very few locales that can do it effectively yet, but they're all kind of at the starting line," she said. "They don't know what their strategies are. This is too new. There are not a lot of success stories out there. There's not a lot of, 'Do step one, then step two, then step three and then step four,' for locales to follow. They're all kind of feeling their way."
The Massachusetts e-Health Collaborative (MAEHC) kicked off its pilot with a conference called "EHR in Your Office -- Let's Get Started!" to introduce physicians to EHRs. The conference focused on the fundamentals of implementation, the daily effect they make in the lives of clinicians and staff, and factors involved in making the transition. The pilot is made possible by a $50 million fund from Blue Cross Blue Shield Massachusetts, said Micky Tripathi, president and CEO of the MAEHC.
Hospital systems serving three regions in the state were selected to participate in the pilot. Physicians, health-care providers, administrators and selected staff from Brockton Hospital, the Good Samaritan Medical Center, Anna Jacques Hospital and North Adams Regional Hospital attended the October training conference.
Over the long term, the pilot will equip the practices of more than 600 physicians with EHR software, support tools and data-exchange capabilities to support patient care, Tripathi said.
In 2004, the Massachusetts physician community formed the MAEHC -- composed of 33 organizations -- to bring together the state's major health-care stakeholders and begin establishing an EHR system. The sole government entity is the Massachusetts Executive Office of Health and Human Services (EOHHS), which is on the board of directors and has representation in the MAEHC's executive committee.
"It's incredibly valuable to have the Executive Office of Health and Human Services at the table," Tripathi said. "So much of what we're doing is what I would call a public good issue. There are certain things that are good for society to get done, and if you depend on the individual actors to do those things, it's not in any actor's individual interest to do it. I would put EHRs in that category for physicians -- particularly in small offices."
Because state government was created to consider societal benefits and the public good, the MAEHC needs that perspective as it starts making EHRs a reality in Massachusetts. In addition, the EOHHS is part of the MAEHC for a much more practical reason.
Because of the amount of money the state spends under its Medicaid program -- MassHealth -- it is an important individual stakeholder, Tripathi said, and stands to benefit as much as any commercial health plan.
"To the extent that any payer has issues with this and is also going to be a beneficiary of this, MassHealth is one of those payers," he said, adding that in a way, by sitting at the table almost as the CEO of a health plan, the EOHHS has a direct interest in this. "Similarly public health is another important angle in all this. There's no other organization that would speak for those issues, except for state and local public health organizations."
The infrastructure that's ultimately created to support statewide EHRs will also play a huge role in how public health agencies obtain and share information in times of emergency, and the EOHHS can give the MAEHC an important perspective on shaping that infrastructure.
East Texas EHRs
In Tyler, Texas, a Regional Health Information Organization (RHIO) called Access Medica is orchestrating the rollout of EHR software to 29 physicians in six independent clinics during the first phase of a regional deployment. The second stage will see 50 to 60 more physicians get the software by mid-2006.
Access Medica was founded in 2005 to provide health-care IT services to physicians, hospitals and patients of east Texas. The nonprofit RHIO includes members of the Physicians Contracting Organization of Texas and other independent practices.
The RHIO's infrastructure is being designed with the ultimate goal of connecting more than 600 regional physicians across 10 counties, said Dr. Kenneth Haygood, CEO of Access Medica. When complete, he said, it will be the first operational RHIO model in Texas.
Though the need to push for these sorts of IT changes in health care is widely accepted by the federal government, and increasingly by state governments, Haygood said, the actual movement in the EHR arena is much closer to the ground.
"If you look at places around the country where these efforts are taking hold and happening, it's really from physicians groups and medical practice associations -- cooperating sometimes with their local hospitals -- really coming together to make the investments in putting together these health information networks, or RHIOs," Haygood explained.
A RHIO can act as a neutral, central collaborator for all the diverse parties across the health-care community, whether that's physicians, medical clinics, hospitals, labs, governments and other entities with an interest in EHRs. One thing Access Medica could use is more funding, Haygood said, which isn't necessarily coming from government sources.
"We're really in a fortunate position because we have a lot of physicians here who realize the importance of making these changes," he continued. "They know that, one way or the other, they've got to make these changes for the benefit of their patients, for the benefit of their practices and to take the health care they provide to a higher level."
As a result, Haygood said, doctors have personally invested in moving toward EHRs. Access Medica has applied for several grants at the state and federal levels, and is waiting for approval.
"We're hoping for funding from places like that because the rate at which we could make progress is very dependent on our level of funding," he said. "As it is right now, we're just
able to keep moving forward with our initial group of doctors."
In Michigan, doctors can turn to the Virtual Community Health Center, owned and operated by the nonprofit Michigan Primary Care Association (MPCA), for help with making the transition to EHRs.
The VirtualCHC is a hosted service that offers a range of medical applications -- practice management, electronic claims interface, general ledger and EHRs -- on a subscription basis to physicians belonging to the MPCA, said Bruce Wiegand, the MPCA's IT director.
Five years ago, the MPCA tapped a federal grant to design and develop an infrastructure that used the Internet as the backbone for providing a centralized medical billing system for the state, Weigand said, and that infrastructure evolved into the VirtualCHC.
It's a way for physicians to get the benefits of modern medical software without having to buy it themselves.
"Seventy percent of EHR implementations to date have failed on two different fronts," he said. "On the IT side, you need a lot more IT than you could ever imagine to maintain EHRs. A lot of doctors went into EHRs thinking it was just paperless. What they ended up doing was putting paper under glass."
The problem is that making the move to EHRs necessitates a radical overhaul of the way physicians run their medical practices -- including electronic interfaces between a doctor's office, pharmacies, labs and clinics; patient management software; and billing systems.
"We're trying to corral all these disparate systems into a single data center, which makes it easier to secure, provides better IT oversight, and we can better manage these interfaces between systems," Weigand said. "We're trying to suck the cost of IT out of health care."
A high-speed Internet connection is all that's needed, he said, and a primary care doctor can tap into the VirtualCHC for all his or her medical software applications needs. In part, the evolution to EHRs is hampered by the stand-alone medical software systems that physicians use.
"You have all these islands of needs," Weigand said. "Most EHRs don't do billing. Most billing systems don't do clinical. So you have to interface the two, and typically you're dealing with two different vendors with two different types of databases."
Need Is There
The medical community and its stakeholders, including the public sector, see the need to move to EHRs. But, as with any technological evolution, not everybody's on board.
Weigand said some older doctors, perhaps a year or two away from retiring, aren't rushing to modernize their practices. On the other hand, he said, younger doctors can't imagine a medical practice without EHRs and related technology.
"It's a tough nut to crack because there are so many variables," he said. "But now you have the feds pushing it. The states are pushing it. Pay for performance is coming down the pike. The reality is that the doctors moving to EHRs quicker are the ones that are going to survive and prosper."
Pay for performance is a new approach to the way the federal government pays for health care under Medicare.
The idea is to use Medicare reimbursements to reward innovative approaches to health care to get better patient outcomes at lower costs, according to the Centers for Medicare and Medicaid Services (CMS). To test pay for performance, the CMS announced in January 2005 that 10 large physician groups across the United States agreed to participate in the first pay-for-performance initiative for physicians under the Medicare program.
The problem is that Medicare's physician payment rates for a service are the same regardless of its quality, its impact on improving patient's health or its efficiency, said Herb Kuhn, director of the Center for Medicare Management, in testimony at a hearing before the Senate Committee on Finance in July 2005.
Kuhn said ample evidence shows that by anticipating patient needs -- especially for patients with chronic diseases -- health-care teams that work more closely with patients can intervene before expensive procedures and hospitalizations are required.
Kuhn told the committee that, as a result of Medicare's payment practices, patients often end up seeing more physicians and using more medical services -- but without obtaining positive results. This, of course, means the CMS spends more on medical care for Medicare members.
"Providers who want to improve quality of care find that Medicare's payment systems may not provide the flexibility to undertake activities that, if properly implemented, have the potential to improve quality and avoid unnecessary medical costs," Kuhn told the committee. "Linking a portion of Medicare payments to valid measures of quality and effective use of resources would give providers more direct incentives to implement the innovative ideas and approaches that actually result in improvements in the value of care that people with Medicare receive."
Private insurers also back pay for performance, and the trend toward this way of practicing medicine will drive physicians and hospitals to improve their electronic records and reporting capabilities to demonstrate positive outcomes in treating patients.
Physicians that lag behind, that can't show positive outcomes -- that they're tackling diabetes, that they're tackling asthma, and that they're managing their patients -- risk losing substantial payments from private insurers and the federal government, Weigand continued.
"There's no other way to do it than to have this stuff electronic -- not in a Word document. You can't pull data out of a progress note, but you can out of an EHR," said Weigand. States have more vested interest today than just six months ago, he added, but the push to EHRs is still so new that the public sector is unsure of the outcome. One thing that is -- patient information must be available to who needs it, in real time. States have begun exploring ways to map information sharing across geographical and political boundaries, but it's not an easy task.
"Another barrier to this has been how incredibly decentralized health care is, as opposed to almost every other industry you look at in the country," said the MAEHC's Tripathi. "The individual entities at the end are these one- and two-physician practices way out there who, up until now, have had absolutely nothing compelling them to be electronically connected with anything else." | <urn:uuid:b83da425-86d8-4fc9-8299-cd4950c41aba> | CC-MAIN-2017-04 | http://www.govtech.com/security/Moving-Medicine-Forward.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00507-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965715 | 3,309 | 2.703125 | 3 |
In the business world, email continues to be the most widespread form of communication. While consumer email traffic has slowed due in part to preference of other forms of communication (i.e. social media sites, instant messaging, text messaging, etc.), business email traffic is soaring. According to the Email Statistics Report for 2014-2018 by The Radicati Group, 108.7 billion business emails were sent and received per day in 2014. That boils down to an average of 121 emails per user.
With this in mind, the question becomes how do we protect ourselves from the risk associated with this fundamental source of communication? After all, signing up for any online activity, whether it is social media sites, shopping, or banking, an email address is required.
A virus is a piece of code that is capable of copying itself and acting in a detrimental way by corrupting your system or destroying data. Unfortunately, one of the most popular and easiest ways in which viruses are spread is through email.
A worm is also self-replicating; however, it is a computer program that infiltrates an operating system with the intent of spreading malicious code. Worms cause harm by consuming bandwidth, deleting files, or sending documents via email. It harnesses the infected user’s machine and contacts to send out copies of the original code to other computers. Usually, the user is completely unaware that these viral emails are being sent from their machine. Even more unnerving, some worms have the ability to spoof the From address into producing an email that looks like it is actually being sent by the user. This causes their contacts to have no idea of the threat until it’s too late, proving that knowing the sender is not enough proof that the email is safe.
Phishing is when a threat poses as a trustworthy entity and attempts to acquire sensitive information such as usernames, passwords, and credit card details. They can go as far as including real logos and brand colors in order to appear like they come from the actual organization. It can be tricky to spot a fake, so if you believe that the email has the possibility of being authentic, it’s better to call the company directly by obtaining their number off of a paper statement or invoice and proceeding to verify the email legitimacy with an authorized representative.
The use of antivirus software is not always enough in terms of complete protection; we need to take it upon ourselves to be diligent when it comes to our email safety. By only opening email from trusted sources, only opening attachments that are expected, utilizing the email preview options, and scanning attached files with antivirus software before opening, you can help protect your information and hardware from these malicious attacks.
Below are a few ways to spot a phishing email: (Click image for larger view)
Spam refers to the unsolicited commercial advertisements that are distributed online trying to sell products or circulate internet hoaxes. It is a huge time waster, and will not only clog your email accounts, but also your networks and servers.
There are many ways to help reduce the amount of spam you receive and the risk that can be involved with it. First of all, you should be very cautious about where and with whom you post your email address. Maybe not every online shopping site needs it, especially if you are just browsing. Only subscribe to websites and newsletters that you actually need, and consider creating a generic email account for those specific subscriptions; this way you can keep your important emails separate.
If you do not know the vendor or you did not sign up to receive emails for them, that is considered unsolicited email, and you should never open it. It could be a scam, or there could be worms or viruses attached; if it was supplied without being requested, you shouldn’t open it. If you accidentally do open an email like this, do not click on any of the links offering to unsubscribe or remove you from the mailing list. Many email services have an option for reporting spam without having to open the message.
Email is not as safe and secure as many believe it to be. Remember to identify and understand the intent of the email and any attachments that may have come with it. Executable type attachments have the potential to be infected; they should never be opened unless you specifically requested or expected it. If you do not need the attachment or email, then do not open it, just delete it. Take steps to secure your mail client by enabling antivirus screening and remembering to always run your updates and patches for that system. Some mail clients provide update sites where you can have your system automatically scanned then receive a list of the specific updates needed.
Keep in mind that new susceptibilities are frequently discovered, so take precautions and keep your operating system up-to-date to help you combat these potential risks. | <urn:uuid:33f11327-0093-43eb-a3e6-36d07843a094> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/practice-safe-email-with-these-tips | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00507-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956912 | 980 | 2.8125 | 3 |
What you should know about the Internet Standards process.
What you should know about the Internet Standards process.
By Pete Loshin
All about Internet Standards
Some of the solutions that researchers and developers have come up with since 1969 to do interoperable internetworking have been quite clever, some of them have been pretty simple. All of them that are considered to be Internet standards are published in a series of documents called Request for Comments or RFCs. Though these are the most "famous" Internet documents, they are far from the only ones.
The first and possibly most important thing to remember about Internet standards is that while all Internet standards are documented in RFCs, not all RFCs document Internet standards. Only a relative few RFCs document actual Internet standards; many of the rest document specifications that are on the standards track, meaning they are at some point in the process that takes a specification and turns it into a standard. Non-standard, non-standards track RFCs may be published for information purposes only, or may document experimental protocols, or may simply be termed historical documents because their contents are now deemed obsolete (this is the RFC "state" which we'll come back to later).
There are several important published document series related to Internet standards and practices. They include:
- These are Requests for Comments, and are an archival document series. RFCs never change. They are intended to be always available, though RFC status can change over time as a specification moves from being a proposed standard to a draft standard to an Internet standard to an historical RFC.
- The STD (for "standard") series of documents represents Internet standards. A particular STD may consist of one or more RFCs that define how a particular thing must be done to be considered an Internet standard. An STD may consist of a single document, which is the same as some single RFC. However, the STD number stays the same if that RFC is deprecated and replaced by a newer RFC, but the document to which the STD points changes to the newer RFC.
- This series consists of "for your information" documents. According to RFC 1150, "F.Y.I. Introduction to the F.Y.I. Notes", the series is designed to give Internet users information about Internet topics, including answers to frequently asked questions and explanations of why things are the way they are on the Internet.
- The Best Current Practices series are defined in RFC 1818, "Best Current Practices", which says that BCP documents describe current practices for the Internet community. They provide a conduit through which the IETF can distribute information that has the "IETF stamp of approval" on it but that does not have to go through the arduous process of becoming a standard. BCPs may also cover meta-issues, such as describing the process by which standards are created (see below or RFC 2026 for more on the Internet standards process).
- There are a bunch of different documents that, over time, have been treated with more or less respect. This includes RTRs (RARE Technical Reports), IENs (Internet Engineering Notes), and others. We won't be covering these, as they rarely come up in discussions of current issues. Also, while the STD, FYI, and BCP document series contain RFCs, these other documents are not necessarily RFCs.
Part 2: RFCs and Internet Drafts
RFCs and Internet-Drafts
Some readers may wonder why Internet-Drafts (I-Ds) are not included in the list above with all the rest, but I-Ds are quite distinct from RFCs. For one thing, anyone can write and submit an I-D; RFCs are published (from I-Ds) only after an I-D has been through a sequence of edits and comments. For another thing, I-Ds expire six months from the time they are published. They are considered works in progress, and each one is supposed to state explicitly that the document must not be cited by other works. Where the RFC series is archival, I-Ds are ephemeral working documents that expire if no one is interested enough in them to move them forward through the standards process.
This is a critical distinction. Networking product vendors often claim that their product, protocol, service, or technology has been given some kind of certification by the IETF because they have submitted an I-D. Nothing could be further from the truth (though even publication as an RFC may mean little if it is published as an Informational RFC).
I-Ds become RFCs only after stringent review by the appropriate body (as we'll see in Part 4). Some more differences:
- RFCs are numbered, I-Ds are not (they are given filenames, by which they are usually referenced). RFC numbers never change. RFC 822 will always be the specification for Internet message format, written in 1982. If substantial errors are found in an RFC, a new RFC may have to be written, submitted, and approved; you can't just go back and make edits to an RFC.
- RFCs are given a "state" or maturity level (where they are on the standards track, or some other indicator) as well as a "status" that indicates a protocol's status as far as requirements level. We'll come back to these topics in the next section. I-Ds, on the other hand, are just I-Ds. The authors may make suggestions about what kind of RFC the draft should eventually become, but if nothing happens after six months, the I-D just expires and is supposed to simply vanish.
- RFCs are usually the product of IETF working groups, though I-Ds can come from anywhere and anyone.
Part 3: RFCs states and status
RFC State and Status
RFCs can have a state, meaning what kind of an RFC it is; and a status, meaning whether or not the protocol specified in the RFC should be implemented (or how it should be implemented). Valid RFC states include:
- Standard Protocol
- These are Internet Standards with a capital "S", which means that the IESG has approved it as a standard. If you are going to do what the protocol in the RFC does, you have to do it the way the RFC says to do it. Very few RFCs represent full Internet Standards.
- Draft Standard Protocol
- Draft standards are usually already widely implemented, and are under active consideration by the IESG for approval. A draft standard is quite likely to eventually become an Internet Standard, but is also likely to require some modification (based on feedback from implementers as well as from the standards bodies) and the authors are supposed to be prepared to deal with that (by making the changes that have to be made).
- Proposed Standard Protocol
- Proposed standards are being proposed for consideration by the IETF in the future. A proposed standard protocol must be implemented and deployed, so it can be tested and evaluated, to be given proposed standard state. Proposed standards almost always get revised (sometimes revised significantly) before advancing along the standards track.
- Experimental Protocol
- Experimental RFCs describe protocols that are not intended for general use, and that are, well, considered experimental. In other words, don't try this at home.
- Informational Protocol
- Informational RFCs are often published without the intention of putting the protocol on the standards track, but rather because they provide useful information for the Internet community. For example, the Network File System (NFS) protocol was published as an Informational so that implementers other than Sun Microsystems could build NFS clients and servers.
- Historical Protocol
- Though most of the other designations are included in first page of the RFC, an RFC that was once on the standards track can be redefined as historical if the protocol is no longer relevant, was never accepted, or was proved flawed in some way.
The status of a particular protocol relates to how necessary it is to implement. The status levels include:
- Required Protocol
- A required protocol must be implemented on all systems.
- Recommended Protocol
- All systems should implement a recommended protocol, and should probably have a very good reason not to implement it if they don't. It's not really optional, but it's not entirely required either.
- Elective Protocol
- A system may choose to implement an elective protocol. But if it does implement the protocol, the system has to implement it exactly as defined in the specification.
- Limited Use Protocol
- Probably not a good idea to implement this type of protocol, because it is either experimental, limited in scope or function, or no longer relevant.
- Not Recommended Protocol
- Not recommended for general use. In other words, there's probably no good reason for you to implement this protocol.
Part 4: Turning I-Ds into Standards
Turning I-Ds into Standards
You can't tell the players without a scorecard, and there are a number of different players in the Internet standards game. Before you can truly understand how the process works, it helps to know who is involved.
It might be nice if there were a nice, orderly org chart that laid out the different entities involved in the standards process. On the other hand, the standards process is an organic, human one that sometimes, over the years, adapts to market or political forces. The figure gives an idea of the entities involved.
- Internet Society (ISOC)
- ISOC is the umbrella organization to all Internet standard activity. Positioned as the professional organization for the Internet and TCP/IP networking, ISOC sponsors conferences, newsletters, and other activities pertaining to the Internet.
- Internet Architecture Board (IAB)
- The IAB was first formed in 1983, when it was known as the Internet Activities Board, and then reconstituted as a component of ISOC, the Internet Architecture Board, in 1992. Its early history is documented in RFC 1160 and its current charter in RFC 1601. The IAB chooses the steering groups' members and provides oversight to the Internet standards process, publishes RFCs and assigns Internet-related numbers.
- Internet Engineering Task Force (IETF)
- Though often portrayed as a very formal entity, the IETF consists of anyone who shows up (either in person or by mailing list) and participates in IETF activities. IETF activities are organized by areas (active IETF areas are listed at the IETF website), and within each area are more focused working groups. Each area has one or two area directors, and each working group has one or two chairs as well as an area advisor; these individuals guide the work of the groups.
- Internet Engineering Steering Group (IESG)
- The IESG consists of the IETF area directors and the IETF chair, and this is the body that has final say over whether a specification or protocol becomes a standard or not.
- Internet Corporation for Assigned Names and Numbers (ICANN)
- ICANN is the controversial new entity with shaky finances that was formed last year to take over the functions of the RFC Editor and the Internet Assigned Numbers Authority (IANA). Both those functions had previously been carried out by the late Jon Postel, whose untimely death highlighted the need for a new structure to handle the publication of RFCs as well as maintaining lists of protocol and address numbers that have been assigned or reserved for all the different mechanisms, specifications, and protocols defined by the IETF.
The Internet Research Task Force (IRTF) and Internet Research Steering Group (IRSG) fulfill similar functions for long-term planning and research, but the IRTF is not generally an open organization as the IETF is, and these two entities have less immediate impact on Internet issues, so they generally perform their functions in the background.
RFC 2026, "The Internet Standards Process - Revision 3," documents the process by which the Internet community standardizes processes and protocols. On its face, the process is simple: a group or individual submits their draft for publication as an Internet-Draft. This is the first step. At this point, the document is publicly posted on the Internet and a notification of its publication is posted to the IETF-announce mailing list (IETF mailing lists are archived at the IETF website). Most I-Ds don't progress beyond this point.
Assuming that there is enough interest in the draft to generate discussion, the authors may be called upon to incorporate edits. Once there is consensus among those who are working on the draft (usually work is done in working groups, much of the work taking place in the context of the working group mailing list), a "Last Call" will be issued for further comments on the draft. Any further comments may be incorporated into the draft after the Last Call period (usually some number of weeks), at which point the draft can be submitted to the IESG for approval and publication as an RFC.
Of course, the draft may be published as an experimental or informational RFC; but if it makes it onto the standards track, it starts out as Proposed Standard. Over time, the specification may advance along the standards track (depending on whether it is accepted by the community and implemented, how well it works, and whether or not something better comes along).
A standards track specification may need to go through a revision process as it progresses. Thus, the same specification may be rewritten several times over a period of years before it becomes an Internet standard. There are only about 50 full Internet standards; most of the protocols that we take for granted as being "standards," including HTTP, DHCP, MIME, and many others, are actually either Proposed Standards or Draft Standards. Lynn Wheeler maintains a web page that lists all current RFCs along with their status. It is an interesting and instructive list.
Few people really understand the distinctions between different types of Internet standard (and non-standard) documents-not so much because the concepts are so complicated but rather because they are relatively obscure. And networking vendors often misrepresent standards activities when they announce them. Armed with the information in this article, you can easily determine whether the latest and greatest protocol just released by Novell, Microsoft, or Cisco is actually a new Internet standard or just documented in yet another Internet-Draft.
Part 5: Finding RFCs
The IETF publication mechanism does not provide the best interface or search engine for locating RFCs. However, it should be considered canonical. This list is not comprehensive, as there are probably scores if not hundreds of websites and FTP servers serving some or all RFCs. However, these are good places to go to when you need to locate a particular RFC or to find out more about what might be in an RFC or I-D.
- Internet Standards Archive. This is a good easy site, and they've got good search facilities for RFCs and I-Ds. A good "go-to" site for RFCs.
- Lynn Wheeler's RFC Index. This is another excellent resource if you're trying to figure out what's current, what is a standard, and what is not.
- The NORMOS Standards Repository. Another good site, it has particularly good search capabilities; very flexible. It also returns all the hits (unlike the Internet Standards Archive above).
- Invisible Worlds RFC Land. This seems to be a pretty cool site. Carl Malamud and others had a neat idea about XML-tagging RFCs. There's a lot of graphics and very involved programming underneath the website, so I'd like it better if it were simpler without all that stuff, and they still need to finish the XML-ification (at least, that's how it seems), but this site is recommended as well.
- The RFC Editor Page. This is the official place, and there's lots of good information here as well.
- The RFC Editor's Search through the RFC Database Page. This used to be nothing more than a simple listing, but has become virtually overnight one of the best resources on the web for RFCs. It is canonical, and you can download the whole RFC database from here too.
Pete Loshin (email@example.com) began using the Internet as a TCP/IP networking engineer in 1988, and began writing about it in 1994. He runs the website Internet-Standard.com where you can find out more about Internet standards. | <urn:uuid:b215c213-4521-483c-8926-9305216382b6> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/616051/What-you-should-know-about-the-Internet-Standards-process.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00075-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951322 | 3,384 | 2.625 | 3 |
Less than two months into its mission, NASA scientist say the Curiosity rover has found evidence of a "vigorous" thousand-year water flow on the surface of Mars.
On Thursday, scientists said they have found new evidence that water, one of the keys to sustaining life, once raced across an area not far from where Curiosity landed in August.
The evidence came in the form of an outcropping of rocks that appears to have been heaved up and covered with streambed gravel. The rover drove by the outcropping, which had earlier been spotted by one of NASA's Mars orbiters.
"We found interesting outcrops," said John Grotzinger, a project scientist with NASA's Jet Propulsion Laboratory and a geology professor at the California Institute of Technology. "It looks like someone came along with a jackhammer and lifted up a sidewalk... This is a rock that was formed in the presence of water... a vigorous flow in the surface of Mars. We are really excited about this because this is the reason we came to this site."
Scientists said they believe the water moved at about three feet per second, at a depth between ankle and hip deep. While they could not say when the water flow started or stopped, they estimated that it lasted more than 1,000 years.
This isn't the first time that NASA has reported discovering evidence of water on Mars.
Just over a year ago, NASA announced that the Mars Reconnaissance Orbiter had taken images showing evidence of past water flows on the Martian surface. NASA scientist said the planet might hold frozen water just beneath its surface.
William Dietrich, Curiosity's science co-investigator, called this latest discovery a turning point.
"This is the first time we're actually seeing water-transported gravel on Mars," he said. "This is a transition from speculation about the size of streambed material to direct observation of it."
Dietrich noted that scientists now have visual evidence of particle collisions during the transportation of water, a finding that shows there was a "vigorous" water flow and not a small amount of water seepage.
"Now that we're down on the ground with Curiosity we can see the textural evidence, the individual pebbles, the rounding that gives us a sense of that," he said. "It wasn't a single burst of water that ran down the canyon all in a day. There are too many things that point away from that. How long would it take? This is opening the door to answering that question."
Grotzinger said the discovery signifies the real beginning of the rover's science mission.
Now that scientists have proof that water once flowed across Mars, they can begin to look for signs of other elements, such as carbon, that are also needed to support life.
"Now we look at more rocks," said Grotzinger. "We get more context. We need to recreate the environment with even greater detail, with an understanding of the chemistry going on at that time, to understand if this was an area where an organism once lived. Was this an inhabitable environment? That is still to be determined."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "NASA says Curiosity rover finds evidence of water on Mars" was originally published by Computerworld. | <urn:uuid:6909f050-44c7-45ae-aac5-010cc4c3db68> | CC-MAIN-2017-04 | http://www.itworld.com/article/2721666/hardware/nasa-says-curiosity-rover-finds-evidence-of-water-on-mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00561-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969948 | 745 | 3.90625 | 4 |
In 2012, Gartner predicted that enterprise adoption of virtualization would increase 14 percent by the end of the year. The trend toward virtual environments and the accompanying technologies is showing no signs of cooling off and is predicted to continue its growth at astonishing rates. By becoming the new industry standard, virtualization has had a substantial impact on data center architecture and management.
There has been an explosion in the development and use of virtual machines (VMs) as data demands continue to grow. In virtual environments, a software program simulates the functions of physical hardware, creating new levels of hardware utilization, flexibility and cost savings. The rapidly growing popularity of virtualization empowers organizations to run multiple applications at once, creating a need for unheard-of levels of storage. This sudden rise in demand has warranted a fresh approach to storage: specifically, a solution that offers effective management, flexibility and efficiency.
The Benefits of Virtualization
Enterprises have realized a number of benefits from virtualizing their servers—namely, cost savings and flexibility. Virtualization enables organizations to make more efficient use of the data center’s hardware. The majority of the time, physical servers in a data center are simply idling. By implementing virtual servers in the hardware, the organization can optimize the use of its central processing units (CPUs), as well as its hardware. This solution gives enterprises an ideal use for virtualization’s benefits and cost efficiencies.
Virtualization also allows for increased flexibility. It gives organizations the convenience of reducing the need for physical machines in their infrastructure and of moving to virtual machines. In the event that an organization decides to change hardware, the data center administrator could simply move the virtual server to the newer, advanced hardware, achieving improved performance for a smaller cost. Before virtual servers, administrators needed to install the new server and then reinstall and move all the data stored on the old server—a much more convoluted approach. Moving a virtual machine is considerably simpler than moving a physical machine.
Virtualizing at Scale
The increases in popularity of virtualization are widespread. But there has been a significant spike in demand for virtualization by data centers hosting a large number of servers—somewhere in the range of 20–50 or above. These organizations can achieve considerable levels of the cost-efficiencies and flexibility benefits defined above. Moreover, servers are far easier to manage once virtualized. The pure physical challenge of administrating a number of physical servers can become arduous for data center staff; virtualization enables administrators to run the same number of servers on fewer physical machines, simplifying data center management.
Keeping Pace With Demand
Regardless of the benefits of virtualization, the increasing adoption of virtual servers is placing stress on traditional data center infrastructure and storage devices.
In a way, the problem stems directly from the popularity of virtual machines. The original VM models used local storage in the physical server, making it impossible for administrators to move a virtual machine from one physical server to one with a more powerful CPU. The introduction of shared storage—either network-attached storage (NAS) or a storage-area network (SAN)—to the VM hosts solved this problem, introducing the ability to stack on several virtual machines. This configuration eventually evolved to today’s server virtualization scenario, where all physical servers and VMs are connected to a unified storage infrastructure.
The setback to this approach? Data congestion.
A single point of entry can quickly lead to failure. Since data is moving through a single access point, data gets gridlocked during episodes of excessive demand. Considering that the amount of VMs and data are only expected to increase, it is clear that storage architecture must be improved. Infrastructure must keep up with the pace that data growth has set.
Proceeding With Caution
Organizations converting their data centers to virtualization will all face these growing pains. The early adopters of virtualized servers have already experienced the problems associated with single entry points and are working toward moderating their impact.
Fortunately, there is hope for organizations looking to maximize the benefits of virtualization. They are able to avoid data congestion created by traditional scale-out environments by eliminating the single point of entry. Today’s NAS or SAN storage solutions inevitably have a single access point that regulates the flow of data, leading to congestion during heightened demand. Alternatively, organizations should opt for a solution that has several entry points and distributes data uniformly across all servers. Even if multiple users are accessing the system at any given time, it will be able to maintain optimal performance while reducing lag time.
Currently, this is the most direct solution, but the next generation of storage infrastructure has some intriguing new alternatives to offer.
Computing and Storage Integration
The next generation of storage infrastructure has introduced a new strategy to combat the storage challenge of scale-out virtual environments. This new approach involves actually running VMs inside the storage node themselves (or running the storage inside the VM hosts)—subsequently turning it into a compute node.
This approach essentially flattens out the entire infrastructure. For instance, if an organization is using shared storage in a SAN, normally the VM hosts from the highest storage layer, ultimately reconstructing it into a unified, single entry storage system. To solve the data gridlock problems associated with this approach, many organizations are moving away from the traditional two-layer architecture that has both the virtual machines and storage running on the same layer
Despite the challenges faced during the early developmental stages of virtualization, it has proven to be quite successful. The flexibility, efficacy and cost savings that accompany infrastructure virtualization have made a lasting impression on enterprises. If organizations continue to learn from the mistakes of those before them, they will be able to develop an effective scale-out virtual environment that enhances performance and decreases infrastructure expenditures.
Leading article photo courtesy of NeoSpire
About the Author
Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions designed to be cost effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets with the highest possible availability and scalability requirements. Previously, he worked with system and software architecture on several projects for Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile- and fixed-network operators. | <urn:uuid:086a0d16-aef4-4c10-8bb2-715bd2244f79> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/frontier-virtualization-storage-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00472-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922732 | 1,306 | 2.71875 | 3 |
With the Federal Communications Commission's (FCC) approval of a radio spectrum dedicated to wireless medical monitoring devices, hospitals will get some much-needed network bandwidth, patients will become untethered and device manufacturers can reduce their costs by standardizing.
The new radio spectrum, approved last month, will be used for Medical Body Area Networks (MBANs) -- low-power wideband networks consisting of multiple body-worn sensors that transmit a variety of patient data to a control device. The devices can be attached to a patient in a hospital or post-care settings for ongoing evaluation.
The approval also opens up a whole new industry of home monitoring devices for chronically-ill people. For example, 75% of healthcare costs are related to the treatment of chronic conditions, according to Lynne Dunbrack, an analyst with market research firm IDC Health Insights. And, by 2030, the number of Americans over the age of 65 is expected to double, further increasing the number of chronically ill people.
"To have a device that can be worn and transmit is a vast improvement, as opposed to patients using a device and hand-keying it into an excel spreadsheet and bringing it into their physician," Dunbrack said. He noted that patients often "shave" points to make it appear as if they're taking better care of themselves than they are.
"So having the veracity of a machine-to-machine reading is certainly helpful," she said.
While hospitals today use a very limited number of wireless devices to monitor patients, they're most often used only for the most critical cases, as the equipment is expensive and adds data traffic to already clogged Wi-Fi networks.
Today, Wi-Fi networks in hospitals are saturated with data as clinicians use laptops and tablets to view and input patient data, employees bring their own wireless devices to work, and patients and family members use their own devices for entertainment and communication.
In almost all cases, patients are still tethered to monitoring devices by wires that can take five to 10 minutes to detach in order to allow them to move, and that can cause infection if they become contaminated. Other patients, outside of intensive care units, often aren't monitored at all because of the issues surrounding the devices.
MBANs provide a cost-effective way to monitor every patient, so clinicians can provide real-time and accurate data, allowing them to intervene and in some cases save lives. In some cases, the monitors are cheap enough that they're disposable.
The FCC has allocated 40MHz of spectrum at 2360-2400MHz for MBAN use. Wireless devices that operate on MBAN spectrum can be used to actively monitor a patient's health, including blood glucose and pressure monitoring, delivery of electrocardiogram readings and even neonatal monitoring systems.
Patient monitoring device manufacturers are heralding the previously-contested radio spectrum's approval because it will allow them to make products that only use one spectrum and do not need to account for other data moving across a common Wi-Fi network.
How a Medical Body Area Network works.
Anthony Jones, chief marketing officer for patient care and clinical informatics at Philips Healthcare, said his company is looking to release wireless upgrades to current wired products within a year.
One new product Philips is considering releasing in the next year is a wireless respiration monitor that would adhere to a patient's abdomen. Respiratory rate is a key indicator of patient health, but it's often only taken visually by nurses who watch for chest movement, Jones said.
"You end up getting a lot of incidences [that], if caught early, are relatively minor, but if not caught early, the patient can deteriorate very quickly and end up back in the ICU or worse, actually dying in the hospital from something that could have been prevented, like a heart attack," Jones said. "Take a patient in the general ward who suffers a heart attack. He has less than a 5% survival rate in the hospital."
But with the pressure to get patients discharged from a hospital more quickly these days, Jones said an even larger market for MBAN wireless devices will be at-home monitoring equipment for post-treatment and chronically ill patients.
Only a tiny fraction of ambulatory patients are monitored at home, Jones said.
"Vital signs are called 'vital' for a reason. They're good indicators of what's going on with the patient," he said.
A depiction of an MBAN wireless respiratory monitor
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is firstname.lastname@example.org.
Read more about healthcare it in Computerworld's Healthcare IT Topic Center.
This story, "'Body Area Networks' should free hospital bandwidth, untether patients" was originally published by Computerworld. | <urn:uuid:d3141e65-6aa2-4712-b5be-04a51c914651> | CC-MAIN-2017-04 | http://www.itworld.com/article/2727474/networking/-body-area-networks--should-free-hospital-bandwidth--untether-patients.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00380-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955388 | 1,018 | 2.8125 | 3 |
In the age of the digitally innovative classroom, nearly all school districts have some kind of Internet blocking and filtering mechanism to keep kids safe from inappropriate online content. It is also likely that almost all students have figured out ways to get around these blocks and filters to get to content that they want.
This poses the question, “Is blocking the only answer?” We propose a new way of thinking: Monitoring online behavior in addition to blocking and filtering content is more beneficial to schools, teachers, and students. Here’s why.
monitoring software aids in qualifying for federal funding
In order to qualify for federal E-Rate funding for technology, school districts must comply with the Children’s Internet Protection Act (CIPA). The act states that protection measures must include blocking or filtering content that is obscene, illegal, or harmful to minors.
In addition, schools subject to CIPA have two certification requirements: their Internet safety policies must include monitoring the online activities of minors; and they must educate minors about appropriate online behavior. This includes education on interacting with other individuals on social networking websites and in chat rooms, cyberbullying awareness and response to online bullying or abuse.
This is where Impero Education Pro monitoring software steps in. In addition to blocks and filters that prevent access to indecent content, the software’s comprehensive view of all student screens enables teachers to manage online behavior in real time. Our advanced monitoring software will also allow specific websites to be blocked or allowed when required, so students can be provided with access to websites, such as YouTube (which can be great for educational purposes), in a controlled environment. This all-inclusive software allows schools to easily comply with CIPA rules.
monitoring software prevents and addresses cyberbullying
Monitoring software works by using categories, such as lists of words or phrases, to capture and identify inappropriate activity on desktop and laptop computers and other digital devices. When a match is captured, an automatic screenshot or video recording is logged; this gives school staff the context of any potentially concerning activity, such as the word or phrase that triggered the match, the logged-in user, or an IP address.
When students use certain keywords, the software alerts the teacher. This can identify cyberbullying and present a way to confront the situation. As new slang terms emerge, keyword lists can be updated on a regular basis.
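As a rough illustration of the mechanism (not Impero's actual implementation), here is a minimal Python sketch of keyword-based detection. The keyword list and the `capture_screenshot` helper are hypothetical:

```python
import datetime

# Hypothetical keyword list; real products ship curated categories
# (bullying, self-harm, emerging slang) that are updated regularly.
KEYWORDS = {"kys", "loser", "nobody likes you"}

def check_activity(username, typed_text, capture_screenshot):
    """Return an alert record if the captured text matches a keyword."""
    lowered = typed_text.lower()
    hits = [kw for kw in KEYWORDS if kw in lowered]
    if not hits:
        return None
    # Log enough context (who, when, what) for staff to review later.
    return {
        "time": datetime.datetime.now().isoformat(),
        "user": username,
        "matched": hits,
        "text": typed_text,
        "screenshot": capture_screenshot(),  # hypothetical capture helper
    }
```

A production system would also record the device and IP address, throttle duplicate alerts, and match phrase variants rather than exact substrings.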
In addition to keyword detection, Impero monitoring software provides students with a confidential way of reporting questionable online activities through its Confide function. Student submissions are anonymously sent to authorities, and this allows the safe exposure of a predator without fear of further harassment to the victim.
monitoring software saves time for teachers
Impero’s student monitoring software acts like a digital classroom assistant by saving teachers time and helping them manage their students online. The software’s tools allow teachers to prevent access to inappropriate websites, including the monitoring of usage patterns to identify popular sites and applications.
The software prevents unauthorized use of proxy sites, enforces acceptable usage policies, and restricts Internet, application, and hardware usage. All of this is monitored through a single interface that provides live thumbnails of all network computers.
Additionally, if a student disrupts the class, teachers can turn off screens and lock keyboards, disable Internet access, USB ports, sound and printers, and broadcast the educator’s screen to all or selected students. Talk about a time-saving powerhouse!
monitoring software teaches students responsible online behavior
According to the cognitive domain of Bloom’s Taxonomy, there are six levels of thinking, and the highest of these is evaluation. Student behaviors at the evaluation level include assessing the effectiveness of whole concepts in relation to values, outputs, efficacy, and viability; critical thinking; strategic comparison and review; and judgment against external criteria.
When Internet sites are simply blocked, the student is given no opportunity to evaluate and strategize, other than figuring out how to hack through to banned sites. By monitoring students’ online activity, combined with providing procedures and communicating about problems, the teacher creates opportunities for this highest level of thinking.
the best way to learn Internet usage
The Impero software team believes that monitoring online usage is the best way to help students learn to use the Internet safely. Research has shown that blocking measures have little impact when students are determined to access content.
Now is the time to adopt a different approach and add monitoring of online behavior instead of only blocking. This allows schools to be proactive and react appropriately in the event of protocol breaches. In addition, this affords teachers more time, promotes higher-level thinking in students, and provides schools with better tools to comply with CIPA.
Impero Education Pro software provides schools with the ability to proactively monitor the online activities of digital devices while they are being used in classrooms. To find out more about this solution, go to the product features page here. Impero offers free trial product downloads, webinars, and consultations. Call us at 877.883.4370 or email us at email@example.com today for more information. | <urn:uuid:d714b184-f6a5-4b16-856f-164bbcbfef85> | CC-MAIN-2017-04 | https://www.imperosoftware.com/the-key-benefits-of-adding-monitoring-software-to-your-school-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00104-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920509 | 1,042 | 2.609375 | 3 |
March 16th, 2015 - by Walker Rowe
This document explains some of the technical details of DNS. Specifically, it:
- explains how DNS works;
- reviews how your ISP assigns you to a DNS server, and which servers they most commonly use;
- provides examples of third-party DNS providers you can use to protect from DDOS attacks;
- explains the A, AAAA, and MX DNS records;
- highlights security issues with DNS open resolvers; and
- discusses how governments use DNS for censorship.
DNS (the Domain Name System) translates domain names into IP addresses. This allows a user to look up a company by a simple name such as coke.com, instead of by a complicated number like nnn.nnn.nnn.nnn.
A company can run their own DNS servers, but most companies use DNS servers operated by their ISP or connect to one of the public DNS servers, such as Google or OpenDNS.
How DNS Works
ICANN is the organization that oversees the assignment of domain names and IP addresses worldwide. It delegates blocks of IP addresses to regional registries and delegates top-level domains (TLDs) to registry operators around the world. For example, the .com domain is operated by Verisign. In Russia, the top-level domain .ru is run by The Coordination Center for TLD RU, which is a pretty boring name.
When a person registers a new domain, they point it at an IPv4 and, optionally, an IPv6 address. When a user types the domain into their browser, their computer sends a DNS lookup to a DNS server. The DNS server tries to resolve the IP address by looking in its cache. If the address is not found, it queries other DNS servers; the request can go all the way up to the root and TLD DNS servers. Usually it does not have to go that far, because a DNS server nearby already has the IP address in its cache.
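In application code you rarely see this recursion happen: the operating system's stub resolver, and the recursive DNS servers behind it, do the work. Here is a minimal Python example that triggers the whole chain using only the standard library:

```python
import socket

# One call sets off the chain described above: the stub resolver checks
# its cache, then asks a recursive DNS server to resolve the name for us.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443):
    kind = "A (IPv4)" if family == socket.AF_INET else "AAAA (IPv6)"
    print(kind, sockaddr[0])  # duplicates possible: one row per socket type
```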
DNS records carry a time-to-live (TTL), often a few hours. So if you change an IP address, it usually takes up to the TTL for the cached records to expire and for the new IP address to take effect everywhere.
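You can inspect the TTL of an answer yourself. The sketch below assumes the third-party dnspython package (pip install dnspython):

```python
import dns.resolver  # third-party: pip install dnspython

answer = dns.resolver.resolve("example.com", "A")
print("records:", [record.address for record in answer])
# The TTL counts down on cached answers; run this twice in a row against
# the same resolver and the second value will usually be smaller.
print("seconds until this answer expires:", answer.rrset.ttl)
```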
DNS Settings on your Computer or Smartphone
When you use your cell phone, or connect to the internet in your office, the DNS settings on your device are set to a DNS server selected by your internet service provider (ISP) or wireless carrier. The DNS settings might instead point to the internet router in your office, which amounts to the same thing, since the router forwards DNS lookups to the DNS server assigned by the ISP.
You can override the DNS settings on your PC or cell phone if you want to pick specific DNS servers. Many ISPs and wireless carriers use Google or OpenDNS instead of their own DNS server since Google and OpenDNS have cached replicas around the world. Presumably they could return a DNS query a few milliseconds faster than other servers. That would reduce latency for people in different parts of the world who are looking for your website.
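If you want to compare resolvers programmatically rather than change system settings, you can direct queries at a specific server. Another dnspython sketch, using Google's public resolver as the example:

```python
import dns.resolver  # third-party: pip install dnspython

resolver = dns.resolver.Resolver(configure=False)  # ignore the system settings
resolver.nameservers = ["8.8.8.8"]                 # Google Public DNS

for record in resolver.resolve("example.com", "A"):
    print(record.address)
```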
Dyn.com and Cloudflare
Companies that have more than one data centre usually use a third-party high-availability DNS provider like Dyn.com. Such providers fail their customers over to a redundant site in the case of an outage by changing the DNS records. They also provide assistance during DDOS (distributed denial of service) attacks. A change at Dyn.com takes effect almost immediately, since the provider controls the authoritative records and serves them with short TTLs, so the company does not have to wait hours to get back online.
Cloudflare is another company that helps when a website is under a DDOS attack. They do this by temporarily proxying your web traffic through their enormous data centres and blocking the requests that look like they are coming from attackers.
DNS A Record
There are different types of DNS records, and one domain usually has several. The A record gives the IPv4 address of the domain’s website; the AAAA record is its IPv6 equivalent.
On Linux systems you can use the dig command to query a DNS server. Or you can run it online at different websites.
For example, here is the A record for anturis.com.
In the screen below, I selected the DNS servers at Google. The website shows me that the IP address for Anturis is 18.104.22.168. This means that I could enter https://18.104.22.168 to open the Anturis website instead of the domain https://anturis.com, skipping the DNS lookup. (Anturis forwards all its http traffic to https, so if you enter http://18.104.22.168 nothing will answer.)
DNS MX Record
The MX record for a domain gives the FQDN of the domain’s email server. When an email server sends email, it queries the DNS MX records for the recipient’s domain. It then sends mail to the MX record with the lowest preference number, which is the highest priority. It tries the server with the next preference value if the first one is busy or not available. Email is sent using SMTP, which is an interactive conversation, so there is a limited number of SMTP connections available per mail server; for this reason it is often the case that the first server is busy.
The MX record contains a domain name, not an IP address. Most of the time, domain names in the MX record have a dot (“.”) at the end. The trailing dot marks the name as fully qualified (anchored at the root of the DNS tree), so it cannot be mistaken for a name relative to a local domain. Mail servers resolve the name, dot and all, to a valid IP address.
Looking at the MX records for anturis.com, you can see that Anturis uses Google for email.
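The same lookup can be done in code. This sketch assumes the third-party dnspython package (pip install dnspython), which the article itself does not mention; note the trailing dots in the output, as discussed above:

```python
import dns.resolver  # third-party: pip install dnspython

answers = dns.resolver.resolve("anturis.com", "MX")
# The lowest preference value marks the most preferred mail server.
for rdata in sorted(answers, key=lambda r: r.preference):
    print(rdata.preference, rdata.exchange)  # e.g. "1 aspmx.l.google.com."
```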
OpenDNS and Encrypted DNS
DNS has two security issues: open resolvers and the fact that DNS lookups are not encrypted.
When you are using a VPN connection such as IPsec to connect one data centre to another, or to connect to your company’s network with your laptop, DNS lookups can still travel in clear text. Depending on the configuration, they may go around the VPN tunnel (a so-called DNS leak). This means a hacker who is snooping on your traffic can see which addresses you are looking up.
Here is how to illustrate that. I have connected to a VPN, so my traffic is going out over IP address 10.0.1.196 per the routing table. As you can see, the VPN company I have connected to has set my DNS servers to Google's public resolvers (8.8.8.8 and 8.8.4.4).
Now, if I type “wsj” in the browser looking for The Wall Street Journal, my computer sends out a DNS lookup, asking for the IPv6 IP address (AAAA record) for “wsj”.
I used Wireshark to snoop the traffic going out of my laptop. You can see that the DNS lookup goes out in clear text; if this traffic were encrypted, you would not be able to read the text in the packet.
If you want your computer to send out DNS lookups using encryption, you can install DNSCrypt from OpenDNS. You can only use it with OpenDNS's servers, since encryption is not part of the DNS standard.
The Open Resolver Problem
A DDoS attack is designed to take down a website. Often this is done by abusing open DNS resolvers as amplifiers: the attacker sends small DNS lookups whose source IP address is spoofed (in other words, set to the victim's address), and the DNS servers send their much larger responses to the victim. The DNS server does not check the source address; this is a legacy of an older DNS design that lots of DNS servers still have not fixed.
You can read more about this at the Open Resolver Project.
Some countries, including Turkey and China, have poisoned DNS servers to block websites. They do this by adding fake DNS records that return the wrong IP address, which is how sites like The New York Times are blocked in China. In China, all internet traffic is required to use government-approved DNS servers.
Turkey’s censorship is not so sophisticated. There, when the government tried to block certain websites last year, people went around painting the IP addresses of Google’s DNS servers on the walls of buildings: 8.8.8.8 and 8.8.4.4.
So this is a basic summary of the technical details, options, and issues around DNS. The main takeaway is that most people use Google for DNS, like they use Google for everything else, although you do have other options for your company. The other key message is that you should partner with another company to provide DDoS protection for your site.
Speech recognition, executed without mistakes, is the Holy Grail of translation. If we could talk into our smartphones and have them vocalize a flawless translation into any language of our choosing, be it Pashto or Portuguese, the language barrier would no longer exist. But like the Babel Fish in “The Hitchhiker’s Guide to the Galaxy,” perfect translation is still a thing of fiction.
Today, we’re still working on mastering the component parts of speech recognition translation: transcribing something spoken into text (speech-to-text), translating the text from one language to another (text-to-text), and converting that translation back into spoken form (text-to-speech). After years of slow progress, we’re finally making major strides in all three areas, edging ever closer to the mythical Babel Fish.
Speech Recognition Goes Mobile
Speech recognition has roots that go back to the days of Thomas Edison’s phonograph. Machine translation, which began to evolve in the 1950s, is newer. The two technologies working effectively together, however, is a much more recent development.
Until the 1990s, machine translation lived inside mainframe computers and was limited largely to R&D labs. Personal computers took off with the dot-com boom in the late 1990s, bringing online machine translators like Yahoo! Babelfish to the desks of individual users.
Mobile translation technologies, in turn, helped the government in its campaigns in the Middle East. Faced with a chronic shortage of qualified translators, agencies reached for newly mobile translation technologies to fill the gap. In the early 2000s, DARPA deployed the Phraselator on the battlefields of Iraq. The Phraselator was a handheld, one-way speech recognition device: a soldier would speak a phrase into the machine in English, and it would play back a pre-recorded translation in Arabic. The device, however, only generated accurate translations about 50 percent of the time.
In 2006, IBM took the idea a step further and equipped the U.S. joint forces command with 35 laptops that came equipped with the company’s Multilingual Automatic Speech-to-Speech Translator (Mastor), a two-way program in which a soldier could speak into the laptop in one language (English, for example) and it would output a spoken Iraqi Arabic translation a couple of seconds later.
IBM’s technology was an improvement on the Phraselator, and it reflected how automatic speech recognition and machine translation were improving over time. We’re at the point today where both technologies are good enough to gain traction on the consumer side, in the form of Apple’s Siri and the Google Droid’s Talk to Me app, to name a couple. Microsoft’s chief research officer recently presented in China the latest breakthroughs in Microsoft’s speech recognition technology, which reduce the error rate by over 30%.
How Machine Learning is Helping Speech Recognition Evolve
On the government side, federal contractor SAIC is making big strides into speech recognition. SAIC works with Arabic content suppliers to automatically translate spoken material into text, essentially a form of closed captioning. Their machine translation technologies are excellent, but where there are gaps, a human specialist will step in to perfect the interpretation. That person will listen to the original audio content and input a better translation into the machine translator, helping its engine, and automatic speech recognition, improve in the process.
That kind of machine learning is what’s really driving the evolution of translation. As human specialists continue to feed high-quality translations into speech recognition technologies and translation memory systems, speech-to-text, text-to-text, and text-to-speech translations will only grow more refined. When a human spots and corrects a mistake in translation, the technology is, in effect, learning from its mistakes. The automatic speech recognizer will have better information to work from when it issues its translations, whether they’re vocalized or put into text. This is especially important for localization efforts — understanding cultural nuances in order to communicate clearly and avoid faux pas that aren’t obvious in print.
How Quality Scores Fit In
SAIC is a good example of some of the exciting things coming down the line with this kind of machine learning as we move into 2013 and beyond. SAIC’s engine automatically recognizes speech, machine translates language and trains its translation memory to generate higher-quality results through a system of automated quality scores and quality metrics.
This kind of scoring is becoming increasingly important in the government’s intelligence efforts, specifically for real-time intelligence, a form of instant information gathering. Machine translation is king when it comes to real-time intelligence, because humans take too long to turn things around. The government uses a number of sensors for intelligence, from drones to Web crawlers that look for suspicious language. If a sensor finds something suspicious in another language, the next step is for machine translation to quickly digest the content and get a gist of what’s being said.
Because there’s so much content, humans receive summaries of large amounts of data at once. At this point, the quality metric jumps in to score data, flagging which content might need humans to jump in for interpretation. The scoring engine could flag keywords, authors and locations of origin, leading to further analysis from human intelligence officials.
A Way to Go Before the Babel Fish
As exciting as developments are, there’s still a lot to be learned by automatic speech recognition and translation. Just take a look at these poorly-translated YouTube Christmas songs to get an idea of how much machines still need to learn.
It is difficult to perfect translation technology because everyone speaks differently, even if they’re speaking the same language. We use different tones, tempos, accents and registers. These things confuse the automatic speech recognition engine, and it takes a lot of quality data to help the engine understand that it’s the same language being spoken.
Moreover, language is evolving over time. I don’t use the same phrases today that I did five years ago. It’s hard for machines to keep up with that evolution, and I don’t think that they ever will. That’s why 10-50 percent of translation will always be left to human experts, while we make the technology as intelligent as possible. In the future, if the Babel Fish does come to fruition, human minds will be inside of it.
Disclaimer: Neither Aaron Davis nor Lingotek has any financial interest in any company or product mentioned in this post.
NASA's virtual institute powers solar system exploration
- By Kathleen Hickey
- Dec 05, 2013
Virtual institutes — bringing individuals together in a collaborative virtual setting to solve complex problems — are rapidly gaining steam, with the latest being NASA’s Solar System Exploration Research Virtual Institute (SSERVI).
SSERVI, a collaborative effort of nine research teams from seven states, focuses on questions concerning space science and human space exploration. The teams, in cooperation with international partners, will address scientific questions about the moon, near-Earth asteroids, the Martian moons Phobos and Deimos and their near space environments.
Not much is required to collaborate. Full participation among teams requires only the following:
- An H.323 standards-based video teleconferencing system with an HD camera, connected to a high-speed network capable of at least 2 Mbps.
- A computer with Flash installed that is connected to a high-speed network to run Adobe Acrobat Connect Pro.
- A dedicated operator to administer the systems and set up the conference rooms 30 minutes prior to the start of any meeting.
NASA is no stranger to virtual institutes. Last March, the agency announced the opening of the Aeronautics Research Institute at the Ames Research Center to create new tools and technologies for reducing air traffic congestion and environmental impacts, improving safety and designing aircraft with unconventional capabilities.
The NASA Astrobiology Institute (NAI), another NASA virtual institute, has been around for a while. It was founded in 1998 and is dedicated to the field of astrobiology and to providing a scientific framework for flight missions. Astrobiology is the study of the origins, evolution, distribution and future of life in the universe.
An NAI presentation identified several more collaboration tools based on a needs assessment survey:
- Desktop video
- Web-based emailing lists
- Web-based photo directory
- Web-based, searchable information repository/knowledge management system
- Scientific visualization/imaging capabilities
- Room-based videoconferencing system
- Wireless data sharing tools
- Web-based document sharing
- Desktop data sharing
- Live chats/real-time online meetings
Of those tools, perhaps the most complicated is a Web-based searchable information repository/knowledge management system.
According to Chris Mattmann, a principal investigator for the big data initiative at NASA Jet Propulsion Laboratory in Pasadena, Calif., “NASA in total is probably managing several hundred petabytes, approaching an exabyte, especially if you look across all of the domain sciences and disciplines, and planetary and space science," he told InformationWeek. "It's certainly not out of the realm of the ordinary nowadays for missions, individual projects, to collect hundreds of terabytes of information."
NASA uses Apache Tika, an open-source tool for detecting and extracting metadata and structured text from documents, to decipher the 18,000 to 50,000 file formats found online, Mattmann said. Using open-source tools saves the agency money and is good for the government, he added.
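For a sense of what Tika usage looks like in practice, the project ships Python bindings. This minimal sketch assumes the third-party tika package (pip install tika, which talks to a local Tika server and needs Java); the file name is made up, and this is not necessarily how NASA's own pipeline invokes it:

```python
from tika import parser  # third-party: pip install tika

# Detect the format, then extract metadata and plain text from a supported file.
parsed = parser.from_file("mission_report.pdf")
print(parsed["metadata"].get("Content-Type"))
print((parsed["content"] or "")[:500])  # first 500 characters of extracted text
```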
Other agencies that have built virtual collaboration spaces include Veterans Affairs, the National Institutes of Health, Interior’s U.S. Geological Survey and the National Weather Service.
The VA for Vets Virtual Collaboration Workspace is a site available 24/7 to veterans and military members to help them find civilian jobs. The site provides opportunities for veterans and military members to interact with employment coordinators, representatives, human resource professionals and coaches.
The USGS’ National Institute of Invasive Species Science is a virtual consortium that uses a variety of Web applications for mapping and modeling invasive species to predict and reduce the effects of harmful non-native plants, animals, and diseases in natural areas and throughout the United States.
NWS’ Virtual Lab (VLab), built on an open-source Java portal framework, “enables NWS employees and their partners to share ideas, collaborate, engage in software development and conduct applied research.” According to NWS’ VLab website, the virtual collaboration will “reduce the time and cost of transitions of NWS field innovations to enterprise operations; minimize redundancy and leverage complementary, yet physically separated, skill sets; forge scientific and technical solutions based on a broad, diverse consensus; and promote an NWS culture based on collaboration and trust.”
Not every agency is a fan of virtual workspaces, however. The Internal Revenue Service has prohibited collaborative tools, whether third-party or agency hosted, for transmission of unclassified Federal Tax Information, due to potential malware infections, loss of data confidentiality and network attacks. Instead, “agencies should use agency-controlled Virtual Private Networks that provide FIPS 140-2 or later compliant cryptography,” the agency advises.
Kathleen Hickey is a freelance writer for GCN.
Cloud Computing: Google Speeds Up Web-Page Downloads with SPDY Protocol
Pronounced Just As It Appears
SPDY (derived from the word "speedy" rather than an acronym) is a TCP-based application-level protocol for transporting Web content. It is proposed by Google and is being developed as one of their Chromium open-source projects. A white paper on SPDY states that it is intended to augment, rather than replace, HTTP.
SPDY (pronounced "speedy") could become a staple of the next wave of improvements in the Internet and World Wide Web. This Google-led, open-source, TCP-based application-level protocol transports Web content faster than it has ever traveled before. SPDY is currently in its second year as an experiment run by Google's Chromium group, with the simple goal of reducing the latency of Web pages.

Faster page delivery is achieved by prioritizing and multiplexing the transfer of several files so that only one connection per client is required. The average Web-page download entails 28 connections, so there is ample opportunity for a hiccup somewhere that slows down page loading. Also, in SPDY all transmissions are encrypted and compressed by design, in contrast to HTTP, where the headers are not compressed.

Google Chrome and Chromium browsers already use SPDY when communicating with Google services, such as Google Search, Gmail and Chrome sync, and when serving Google's ads. Google acknowledges that SPDY is enabled in the communication between Chrome and Google's SSL-enabled (Secure Sockets Layer-enabled) servers.

So far, SPDY is limited to the application layer: it does not require kernel changes, and applications do not have to be rewritten. New Web servers and clients are needed, however, so SPDY is still years away from general use. Chromium software engineer Mike Belshe and Cotendo Vice President of Product Strategy Ido Safruti offered an information session on SPDY at the recent O'Reilly Velocity conference in Santa Clara, Calif.; highlights are in this eWEEK slide show.
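To make the multiplexing idea concrete, here is a toy Python sketch of one connection carrying several prioritized streams. It illustrates only the scheduling concept, not SPDY's actual frame format:

```python
import heapq

frames = []  # (priority, arrival_order, stream_id, payload); lower value sends first
order = 0

def queue_frame(priority, stream_id, payload):
    global order
    heapq.heappush(frames, (priority, order, stream_id, payload))
    order += 1

queue_frame(2, stream_id=3, payload=b"<binary image data>")  # image: low priority
queue_frame(0, stream_id=1, payload=b"<html>...</html>")     # the page: high priority
queue_frame(1, stream_id=2, payload=b"body { ... }")         # stylesheet: in between

# All frames share one TCP connection instead of ~28 separate ones.
while frames:
    priority, _, stream, payload = heapq.heappop(frames)
    print(f"send: stream={stream} priority={priority} bytes={len(payload)}")
```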
Cloud computing promises to displace wholly owned information technology hardware and software. How does this assertion stand up over time?
When the price of a durable item remains stable, the market for that item tends toward equilibrium between the item’s cost of ownership and its rental or lease rate. When the price is near that equilibrium, other factors influence the decision to buy or rent. For instance, a person may choose to buy, lease or rent a car. In New York City, a person may dispense with a car entirely, relying instead on taxis or mass transit. The “hassle factor”—the cost of parking and insurance—overwhelms the value of independent means of transportation. As another example, economists speak of the balance between the cost of home ownership and the price of renting an apartment. An apartment isn’t a home, but they provide similar enough functions to make a price comparison meaningful. For some people, moving from a home to an apartment or condominium makes sense. Renting reduces the hassle factor.
These scenarios rely on stable prices over time. If the item’s price changes, the balance between ownership and renting shifts. When the price of a home steadily rises, the home becomes a good investment, despite the hassle factor of home ownership. Seeing the value of a new car drop in the first year of ownership, some people choose to lease. Others may buy, holding the car for much longer, and put up with the hassle factor of having an older car.
Information technology faces similar economics. When the price of computing was very high, most users rented (using time-sharing). The lower price of computing over time led more businesses to buy, and those businesses could afford greater amounts of computational power.
Increasingly powerful and affordable computing environments addressed larger sets of data and more complex algorithms. The range of problems itself didn’t change, but the cost of solving those problems dropped as the cost of technology steadily declined. Problems that had been impossible to solve became expensive to solve, and eventually became inexpensive to solve. For instance, American Airlines created a powerful competitive advantage by deploying the Sabre online reservation system in the ’60s. Now, any airline that doesn’t have an online reservation system isn’t a real airline. Companies across many industries applied computing following two mandates: Wherever there were 50 people doing the same thing, automate; and wherever there were 50 people waiting for one person to do something, automate.
Companies with large computing infrastructures discovered their available spare capacity. A company with a million servers running at 98 percent utilization realized it had the equivalent of 20,000 machines idle. The executives knew capacity wasn’t waste, but simply the consequence of varying workload. They also realized they could “rent out” small chunks of their available computing resource, as long as they could fence those users from the internal processing the company needed to run. These companies re-created the time-sharing model and monetized their idle spare capacity.
Smaller organizations valued cloud computing. It eliminated a barrier to entry for software development firms. Software start-ups once raised capital to purchase technology for development and test. With cloud, they could rent just the capacity they needed for the time they required. Larger firms could get additional capacity to deal with workload spikes, and then release that capacity when the demand lessened. Companies of all sizes rediscovered that most of their IT workload could run on a generic computational resource. They realized renting was cheaper than owning, especially when they considered the hassle factor.
This appealing economic model deteriorates as the underlying price of technology drops. Would any venture capital firm fund a start-up that intended to deliver cloud computing? Today’s cloud computing providers rely on the sunk cost of their existing infrastructure. The initial capital expense is an insurmountable barrier to entry, as long as that cost remains high.
The cost of computing continues to drop, eroding that barrier to entry. Today’s start-up can acquire multicore computing platforms for a few hundred dollars. A midsized company can acquire computing capacity at one-eighth to 1/64th the price charged five years ago. This represents the continuing impact of Moore’s Law, which lowers the unit cost of computing by half every nine, 12 or 18 months, compounded across a five-year horizon. (Network capacity tends to double at unit cost over nine months, disk storage over a year and processors at the longer end of the scale.)
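The compounding is easy to check. This short sketch computes the five-year cost factor implied by each doubling period mentioned above; the exact ratios depend on which period you assume:

```python
# Unit cost halves every `months`; compound over a 60-month (five-year) horizon.
for months in (9, 12, 18):
    factor = 0.5 ** (60 / months)
    print(f"halving every {months} months -> cost falls to about 1/{round(1 / factor)}")
```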
The Upside of the Hassle Factor
Today’s public cloud provider will presumably continue to grow, but not exponentially. In five years, the cost of their marginal capacity will be miniscule compared to today’s prices. Public cloud consumers will realize the hassle factor associated with owning technology is trivial compared with the business risk of depending on someone else’s availability, security, recoverability, privacy, service levels and overall care of their Internet-connected generic technology. Companies will rediscover the benefits of having a captive IT supplier staffed by its own employees. When a public cloud fails, all it can give its customers is more capacity at a lower price, later. An executive has more choices when managing his or her own IT staff should they fail to deliver what the business needs. That’s the upside of the hassle factor.
Is there any long-term viable strategy for a public cloud vendor? The first challenge would be to ride the price/performance improvements as rapidly as they arrive, so the business would have to continually invest in new IT—which is costly. Firms such as Google, Microsoft and Amazon, which have already invested heavily in IT, have developed brilliant innovations to contain costs: modular designs with minimal site preparation, power and cooling added as needed, and standardized containers delivered and wired in on demand. Amazon also builds modular data centers to minimize construction costs and optimize heating, cooling, cable runs and energy consumption. These businesses strive relentlessly for margin performance by monetizing unused capacity—but the core business drives their IT procurement strategy.
A public cloud vendor has no funding source to support that level of IT investment. If a public cloud vendor could identify a core set of long-term customers that guaranteed to spend at least some amount annually, that could anchor the public cloud vendor similarly to a large department store anchoring a shopping mall. All the existing public cloud vendors have such a customer—their parent company. But a customer of cloud who promises to spend a minimum amount annually, regardless of actual utilization, isn’t buying cloud computing—they’re outsourcing.
The cost/benefit analysis between an external supplier and a captive supplier comes down to this: Can the business run its data center efficiently enough to compete with an outsourcer? The outsourcer has the same capital costs, software costs, personnel costs and also must make a profit. If a business is inefficient, cloud is only half the solution. The whole solution is either outsourcing or running the data center more efficiently.
Public cloud vendors exploit the temporary difference between the decreasing cost of computing and the increasing demand. As that cost continues to drop, those businesses that need computing will find it increasingly affordable. More and more complex problems will be tractable with owned resources. The benefits of ownership will outweigh the apparent simplicity of public cloud. Cloud computing will continue—but as private and community cloud, not public cloud.
For some companies, the great migration to public cloud will flow in reverse. For most companies, it will stop before it even begins. As the market dries up, the end game will evolve as W. Chan Kim and Renée Mauborgne describe in Blue Ocean Strategy. Expect to see frantic attempts at service differentiation and price wars as public cloud providers collapse into a “red ocean.”
Note: This article follows the “NIST Definition of Cloud Computing” as defined in NIST SP 800-145, from http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf. The key elements of this definition are on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service. Public cloud refers to cloud capabilities available to the general public.
Vendor Network Architectures, Part III: Nortel
Alexander Graham Bell, a Scotsman by birth, emigrated with his family to Brantford, Ontario, Canada in 1870. Bell subsequently moved to Boston and taught speech to the hearing impaired. During a return to Canada in the summer of 1874, Bell conceived the idea of the telephone, and was granted a U.S. patent for that work in 1876. The patent rights were assigned to the National Bell Telephone Company (U.S.) and the Bell Telephone Company of Canada. To maintain these patent rights, Bell Telephone of Canada started its own telephone manufacturing company in 1882, named the Mechanical Department, which incorporated as the Northern Electric and Manufacturing Company in 1895. In 1899, Bell Canada added a cable and wire manufacturing company to their portfolio, which was formally merged with the manufacturing company in 1914, under the name of the Northern Electric Company, and jointly owned by Bell Canada and the U.S. firm Western Electric. In 1949, the U.S. Justice Department required AT&T to split off its Western Electric and Bell Laboratories subsidiaries, and as a result, the Western Electric stock holdings in Northern Electric were sold to Bell Canada.
Northern continued its research into telephony, adding electromechanical switching systems, television, microwave, and video switching to its areas of expertise. Research into digital speech processing followed, with the company releasing the Digital Multiplex Switch (DMS) system for central office switching, and the SL-1, a fully digital PBX, in the early 1970s. By the early 1980s, Northern Telecom had established manufacturing facilities in the United States and expanded as a worldwide supplier of both central office and enterprise switching systems. In 1995, in celebration of the one-hundredth anniversary of the founding of Northern Electric, the company's name was changed to Nortel.
Given their strong history in both central office and premises switching systems, it should come as no surprise that Nortel targets their switching products to both the enterprise and carrier markets. Their overall architecture is called ACE (the Architecture for the Converged Enterprise), and it consists of three key products: the Business Communications Manager (BCM), the Communications Server (CS) 1000 and 2000, and the Multimedia Communications Server (MCS) 5100 and 5200.
The Business Communications Manager is an IP-enabled, single-platform communications system for small and medium sized businesses and branch offices. It primarily addresses the enterprise market, but can also serve as a small site solution for hosted services. The BCM scales from 10 to over 200 digital or IP-based stations, and includes a large suite of applications, including routing, fax, voice messaging, interactive voice response, multimedia call center, and wireless capabilities within a single system.
The Communication Server 1000 is a server-based IP-PBX, which combines the benefits of a converged network and IP applications with over 450 telephony features. The CS 1000 addresses the enterprise markets from either a private, customer-managed solution, or as part of a managed service solution from a service provider. It can handle up to 15,000 IP clients per call server, with support for ISDN, H.323 and SIP signaling.
On the carrier side, the Communication Server 2000 softswitch is a single platform for very large enterprises. It is designed as the basis of a multi-customer hosted solution, or as a foundation for IP-centrex services. It provides local, long distance and tandem call services, and functions as the intelligent core of a multiservice network. Each CS 2000 can handle up to 250,000 lines, with support for H.248, H.323, MGCP and SIP protocols.
The Multimedia Communications Servers deliver SIP-based multimedia and collaborative communications applications to augment existing voice and data capabilities. The applications provided include video calling and conferencing, picture caller ID, white boarding and file exchange, co-web browsing, instant messaging, plus personal call management features. The MCS 5100 is an enterprise version, supporting up to 60,000 active subscribers, while the MCS 5200 is a service provider version for hosted services, with support for 100,000 subscribers.
Further details on the Nortel architecture and products can be found at www.nortel.com. Our next tutorial will continue our examination of vendors' softswitch architectures.
Copyright Acknowledgement: © 2005 DigiNet ® Corporation, All Rights Reserved
Mark A. Miller, P.E. is President of DigiNet® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies and Internet Technologies Handbook, both published by John Wiley & Sons.
Navy tests satellite connection in the Arctic
For the first time, military users have demonstrated they can transfer large megabyte data files over stable satellite connections in the Arctic.
During the Navy's 2014 Ice Exercise, the Mobile User Objective System (MUOS) satellites provided nearly 150 hours of secure data connections.
Satellite communications in the Arctic are becoming increasingly important as the polar ice sheet shrinks and shipping traffic increases. Most geosynchronous satellites can’t reach users in the Arctic.
MUOS is a next-generation narrowband tactical satellite communications system designed to significantly improve ground communications for U.S. forces on the move. MUOS gives military users more communications capability over existing systems, including simultaneous voice, video and data.
From March 17 to 27, MUOS provided over 8,800 minutes of service to Ice Camp Nautilus. Navy users at the camp could connect to both secure and classified communication systems and send data files.
"Last year we proved the constellation's reach, but this is the first time MUOS has been used for secure government exercises," said Paul Scearce, director of Military Space Advanced Programs at Lockheed Martin, the MUOS prime contractor and systems integrator. "This means users could traverse the globe using one radio, without needing to switch out because of different coverage areas. This goes far in increasing the value that MUOS provides mobile users, not just in traditional theaters of operation, but those at the furthest extents of the planet."
Lockheed Martin first demonstrated the MUOS constellation's ability to reach arctic users in tests during 2013. Those tests marked a significant gain in signal reach from the required latitude of 65 degrees north—roughly Fairbanks, Alaska.
This expansion in coverage, inherent with the system, comes at a time when governments are focusing on arctic security.
"We downloaded multiple files—up to 20 megabytes—nearly at the top of the world," said Dr. Amy Sun, Narrowband Advanced Programs lead at Lockheed Martin. "We sent a steady stream of photos, maps and other large data pieces securely through the system, something that could never be done by legacy communication satellites."
It's the beginning of 2009 and gas prices are on their way back up. Perhaps more importantly, 2020 is not that far off. That is the year -- according to AB 32 -- that California is expected to reduce its carbon emissions to 1990 levels, growth in the state's population notwithstanding.
That challenge was the focus of the Target 2030 Conference held last week in Sacramento. The conference brought together policymakers and technology companies to discuss and demonstrate ways in which that challenge can be met.
A reality apparent to those in attendance was that the car is not going away anytime soon, so the question became how to reduce cars' impact on the environment and how to manage congestion. There was a lot of talk about measuring performance with a new metric: person-miles traveled as opposed to vehicle-miles traveled.
What the industry needs, representatives said, is a level playing field, a consistent regulatory environment and government to spur innovation without picking winners. Some capital would be nice, too.
A number of vehicles running on a variety of fuels were available for ride and drive at the conference. The California Department of General Services was there with one of its flex-fuel vehicles. Other vehicles on hand included a utility truck and a bus.
Solazyme, a San Francisco-based biofuel manufacturer, demoed a Jeep running on its biodiesel produced from algae. The company has also tested its fuel in a Mercedes-Benz diesel.
Solazyme's biodiesel is a "drop-in replacement" for traditional diesel, and it burns with a lower carbon footprint than conventional diesel, a company spokesman said. In fact, the car used in the demo was purchased used online from Nevada and has run on Solazyme biodiesel for about a year. The Jeep gets 21-22 mpg on the highway.
Solazyme can grow its algae in the dark in a matter of days, much faster than algae grown in sunlight, and that algae is 50 to 80 percent oil. The only byproducts are algae and water, both of which can be reused.
The algae can be fed a variety of feedstocks, including switchgrass, wood chips, molasses or any kind of sugar.
The company recently raised $70 million in investments and has passed proof of production, according to a company spokesman. It has done tests of the fuel with the Department of Defense and recently began producing jet fuel. Just like traditional crude oil, Solazyme's crude can be refined into an oil similar to olive oil, into cosmetics, or into fuel, which is the lowest-grade use of the crude.
The company is planning to achieve economies of scale allowing it to sell its crude for between $40-80 a barrel in 24-36 months, said Genet Garamendi, Solazyme vice president of corporate communications. It announced a distribution deal with Chevron back in 2008 and is currently searching for a dedicated refinery facility.
Propel, another promising company profiled at the conference, is building a network of alternative fueling stations and provides a system that tracks and displays the reductions in carbon emissions from the use of alternative fuels purchased at the company's fueling stations. In doing so, Propel's CleanDrive system opens the door for users to comply with growing governmental standards for the use of renewable fuels and carbon emissions reduction, such as California's 2020 Targets.
As states such as California continue to develop carbon emissions reduction targets, products such as CleanDrive will be essential to help track and report on emissions reductions.
With all that, perhaps the biggest step toward meeting California's climate change goals would come if the legislature mandated the use of clean diesel instead of gasoline in cars driven on the state's roads.
We assert that if this decision is not made explicitly with an objective analysis, the outcomes for the development program will be less than optimal. Yet, there is a sweet spot between the two extremes that yields optimal business results. It's called FTD, or federated technology development.
In this article, last of a three-part series (see Part I, Part II), we look at some examples where FTD has taken place. In many cases, this has been the product of visionaries who applied their intuition. The application of FTD makes this process more predictable and less dependent on a strong visionary to make it happen.
FTD patterns happened with the Unix operating system, not once but twice: first during the operating system's inception and, once again, during the open source software, or Linux, revolution.
Grid computing technology constitutes another example of FTD in mid-flight.
It is not driven by a single company, but by multiple players through standards-making bodies. Organizations with an understanding of the FTD dynamics behind the evolution of grid technology can translate this understanding into specific strategic plans and, hopefully, competitive advantage.
The Unix operating system was developed at Bell Labs by Ken Thompson and other researchers in 1969. Anti-trust considerations affecting Bell Labs' parent company at the time, AT&T, prevented the company from building products out of this technology.
Partly because of this, the OS was made available to universities and research institutions in source form for a nominal fee.
The Unix code base, because of its relatively small size, was ported to literally dozens of different machines and became the subject of research for countless number of papers and Ph.D. theses. Each instance can be counted as an example of an FTD parallel jump.
Unix's compact size was at the root of its competitive advantage. The OS succeeded because the action of licensing it to universities unwittingly triggered FTD dynamics.
Students in these projects eventually graduated and worked or founded companies that built new computers: Apollo, Sun Microsystems, Multiflow, Convex, Alliant, NeXT, Apple and many others.
Unix became the OS of choice and, at this point, its place in history was assured. The diversity of Unix flavors, often criticized, is a testament to the dynamism of concurrent technology development in myriad organizations.
A similar phenomenon, from an FTD perspective, occurred around 1991. This time initiated by a single person, Linus Torvalds, instead of a company. In case you missed it, Torvalds developed a kernel for a Unix-like system that eventually became Linux.
The FTD dynamics were triggered by the emergence of open source software. The continued development and adoption of Linux was assured by the participation of thousands of individual contributors, organizations and companies; with each instance of participation becoming an instance of an FTD touch point.
Grid computing is interesting from an FTD perspective in that it's hard to productize in a traditional sense: You can't go to the store and purchase a grid.
Every grid is unique and cannot be shoehorned into a line of products. This suggests that FTD dynamics can better capture the essence of the grid evolution and point more easily to useful business development strategies than a traditional product approach.
Yet a product-centric view of the universe gives an incomplete picture of the dynamics behind grid evolution. A key factor that begs consideration, especially in an enterprise context, is that grid adoption will be driven by business processes and the IT processes supporting the business.
Once these processes are understood, it will be possible to determine which products and technology building blocks will be most appropriate. At this point it will be possible to assess desirable features and incorporate specific features into product planning.
Today, grid adoption is slow because the customer community that would potentially benefit from the technology has no awareness of how useful the technology can be. Because of this, even if grid-ready products were available, they would not sell because the processes that would provide the context for grid deployment are not there.
Hence a strategy to grow the grid market needs to encompass the business processes in the business segments targeted. The strategy must be holistic, integrating various sources of expertise. It must also be process based, linking various players through FTD touch points.
Grid computing will make strides to the extent the various parties are able to work independently, but cooperatively through mutually agreed standards. This captures the essence of FTD.
Enrique Castro-Leon is currently an enterprise architect and technology strategist for Intel Solution Services with 22 years at Intel. He has taught at the Oregon Graduate Institute and Portland State University, and has authored over 30 papers. Castro-Leon holds Ph.D. and M.S. degrees in Electrical Engineering and Computer Science from Purdue University.
Understanding Two-factor Authentication
By David Strom | Posted 2008-06-26
There’s a lot to consider before you implement two-factor authentication, because it touches your enterprise infrastructure, applications and networks.
The notion of using something whose only purpose is to help identify you to computing systems is older than the Web, but it’s gaining traction as the number of phishing and hacking exploits rises. Called two-factor authentication (the first factor is something you know, like a user name and password, while the second is something you have, like a token), this type of security can help enterprise IT managers safeguard their applications. Two-factor authentication can be token- or nontoken-based.
Token methods use a small electronic device, roughly the size of a large USB thumb drive or key fob, with a small LCD screen and a button. When a user presses the button, the screen displays a sequence of numbers for 30 to 60 seconds. The sequence must be typed into the application during that time period. This is called a one-time password. If a user mistypes the sequence, he or she must press the button to get a new sequence.
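Most tokens of this kind implement, or closely resemble, the HMAC-based one-time-password scheme standardized in RFC 4226, with RFC 6238 later covering the time-based variant. Whether the specific vendors below use exactly this algorithm is not stated in the article, but a minimal time-based sketch using only Python's standard library looks like this:

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238-style time-based one-time password (toy sketch)."""
    counter = int(time.time() // interval)          # 30-second time step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Real tokens ship a base32-encoded shared secret; this one is made up.
print(totp(b"12345678901234567890"))
```

The server holds a copy of the shared secret, computes the same code for the current time step, and accepts the login if the two match.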
There are many token vendors, including CryptoCard, Positive Networks, RSA and Secure Computing. They have been around a long time, and millions of tokens are now in use in a wide variety of organizations.
The University of Minnesota distributed more than 5,500 Secure Computing SafeWord tokens in a project begun about a year ago. “A number of users have given us positive feedback because they don’t have to remember as many passwords now,” says Mark Powell, manager of the Office of Information Technology Data Security. The university has custom-branded the tokens with its colors and logo, calling them “M Keys” and setting up a Web site to help students and faculty use the tokens.
Token-based systems have their implementation quirks, mainly in how applications process authentications and interact with enterprise authentication services, such as RADIUS and Active Directory. “Some of our users had to upgrade to newer versions of desktop software or had to change the desktop software configurations to work with the M Keys,” Powell says.
In 2011, President Obama called for a STEM moonshot: placing 100,000 new STEM teachers in K-12 classrooms across the nation by 2021.
“A call like this is not achievable by any organization alone,” said Talia Milgrom-Elcott, founder of 100Kin10 (“not your grandfather’s coalition”), which was created to answer the president’s call.
The coalition has three parts to achieve this goal. The first part is to bring organizations across sectors to work together and make commitments to the goal, from recruiting future teachers with strong STEM backgrounds to training current teachers. They began with 28 partners in 2011 and now have more than 230 partner organizations, including museums, Federal agencies, associations, and more.
“Helping to collaborate and learn from each other, [provide] access to research and expertise [partners] couldn’t find on their own,” said Milgrom-Elcott, describing part two. 100Kin10 connects two or more partners through collaboration grants and creates forums for partners to address problems, and find solutions, through their Solution Lab.
“Many organizations are confronting the same challenges in their own context,” said Milgrom-Elcott. “We have an opportunity to identify those shared challenges and bring people together to solve them.”
For part three, this coalition maps out the grand challenges their partners face and “bring people together against those [challenges] to solve problems and often to ideate, offer solutions, [and] co-fund products or solutions that partners could use,” said Milgrom-Elcott.
This past spring, it was announced that 100Kin10 is on track to hit the 100,000 goal by 2021, as validated by the American Institutes for Research.
“[We] want to get to 100K the right way, by addressing these big challenges [and] actually solving these challenges,” emphasized Milgrom-Elcott.
Another way they work toward solutions is through challenge grants. Their current grant, “Early Childhood STEM Learning Challenge,” addresses the need to create active-learning STEM opportunities for preschool to third grade. These are “critical years for getting kids engaged and excited, inspired and competent in STEM,” said Milgrom-Elcott. “Elementary teachers themselves experience very bland STEM education [and are] often nervous about their own math and science skills.”
“[We are] looking for big ideas–[they] don’t have to be expensive or complex–but have the potential to be highly impactful,” said Ayeola Kinlaw, the director of the Funders’ Collaborative for 100Kin10.
Partners need to address the following:
- Identify and propose a solution to a clear problem.
- Focus the solution on the 100Kin10 goals.
- Identify a community, articulate the specific needs and challenges the community faces, and detail how the proposed solution will solve those challenges.
- Prototype and test solutions within that community.
Submissions must be completed by Oct. 26.
So! Everything is turned in, edited, spreadsheets are checked, R code is checked. I've even assembled a playlist to go along with the middle school dance floor clustering tutorial.
2. Some algorithms just feel natural using the "drag the formula down to fill the cells" approach that you have in Excel. It's like an artisanal apply() function. ;-) For example, when looking at the error-correction formulas for Holt-Winters, you can do a single time period, then a second one, and then drag everything down. It feels a bit like induction (there's a rough code equivalent of this recurrence after this list).
3. Spreadsheets are great for teaching predictive modeling/forecasting, data mining/graphing, and optimization modeling. While many of the techniques are opaque in R when you use packages, if you do them by hand in R, they're actually pretty clear. Except for optimization. If you want to teach other modeling techniques plus optimization in R then you're kinda screwed, because all the optimization hooks in R just take a full-on constraint matrix and a right hand side vector. Contrast this with Excel Solver where you get to build constraints individually. It's totally better for teaching. Now, that said, Python has some nice hooks into optimization modeling that would be similar to Excel. Since spreadsheets are so nice for viewing data, then prepping data, objective functions, and constraints, and then optimizing, it means that algorithms such as modularity maximization using branch and bound plus divisive clustering can be taught there, and it's actually easier to see than it would be in nearly any other environment. Plus, if you're careful you can actually cluster data better than even Gephi's native Louvain method implementation can. Bam!
4. Quite simply, I didn't need to teach any code in the book. Yes, in two places I have the reader record a macro of some clicks and then press the macro shortcut key a couple of times, but that's it. And actually watching this loop run using keypresses is in itself a valuable lesson for those who don't intuitively get how something like a Monte Carlo simulation works (that one is also sketched in code below).
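For the curious, here is roughly what that Holt-Winters "drag down" computes, written as a Python recurrence. This is the additive-seasonality variant with one common initialization; the data and smoothing parameters are made up:

```python
def holt_winters_additive(y, m, alpha=0.5, beta=0.3, gamma=0.2):
    """One-step-ahead fits for additive Holt-Winters; m = season length."""
    level = sum(y[:m]) / m                      # seed level: first season's mean
    trend = 0.0
    season = [y[i] - level for i in range(m)]   # seed seasonal offsets
    fitted = []
    for t, obs in enumerate(y):
        s = season[t % m]
        fitted.append(level + trend + s)        # forecast made before seeing obs
        prev = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
        season[t % m] = gamma * (obs - level) + (1 - gamma) * s
    return fitted

demand = [12, 15, 20, 14, 13, 17, 23, 16, 15, 19, 26, 18]
print(holt_winters_additive(demand, m=4))
```

And the keypress-loop intuition behind Monte Carlo is just this, with the distribution and trial count invented for illustration:

```python
import random

random.seed(42)
trials, stock = 10_000, 450
stockouts = 0
for _ in range(trials):
    month = sum(random.gauss(100, 20) for _ in range(4))  # simulated 4-week demand
    if month > stock:
        stockouts += 1
print(f"estimated stockout probability: {stockouts / trials:.3f}")
```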
So there are a few things I really enjoyed about using spreadsheets to teach data science. Where did the spreadsheets fail?
1. Visualization. Visualization in Excel is nice when there's native support for the particular type of graph you want. But if you want a fan chart or a correlogram with critical values marked, then things get slightly annoying. You can often graph what you need by doing formatting cartwheels. Grrrrr.
3. Spreadsheets are occasionally slow. While Solver is awesome for teaching, its simplex and evolutionary algo implementations aren't going to blind anyone with speed. That's why in the book I recommend using OpenSolver plugged into Excel any time the reader can.
Anyway, I think that on balance the book is extremely powerful as a teaching tool, especially for a particular type of student...a student like me. Someone who has a deep-seated fear of script-kiddie-ness. Someone who needs to touch and see the data in order to believe. I am the Doubting Thomas of data scientists, but once I do work through a problem piece by piece, then I'm able to internalize a confidence in the technique. I know when and how to use it. Then and only then am I happy to stand on the shoulders of R packages and get work done.
Yesterday, I attended a special conference-within-a-conference about Green Displays, hosted by IMS Research. It turns out that this topic runs deep and wide, but there was one particular aspect that I thought would be worth sharing. Consider the following comparison between two Sony 46″ LCD HDTVs.
The Qualia 005 was the first LCD HDTV to use LEDs for the backlight. It consumed 612 Watts of electricity, was 5″ thick, and weighed 130 pounds. Oh, and it sold for a mere $15,000.
Compare that with one of Sony’s current offerings: the 46″ Bravia KDL-46EX520. This also uses LEDs as a light source, but is rated at just 103 Watts. That’s about the same as a single 100 Watt incandescent lightbulb, and it is an 83% reduction in power consumption. At the same time, the weight has dropped to 31 pounds, and the case is just 1.65″ thick. That means a 67% reduction in thickness, and 76% reduction in weight.
It’s obvious that energy will be saved because the set will draw less electricity when operating, but there are energy implications for the other specifications. For example, you can fit three of them in the same space as the other model. That means more pieces per container, which means lower shipping costs. And the lower weight also lowers the shipping costs. And all those are savings that can eventually find their way to the consumer in the form of a lower purchase price. Clearly, not all the savings in the new model come from lower energy costs, but it’s one factor that makes it possible to sell the newer model for $989, which is about 93% less than the first model.
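The percentages quoted above are easy to verify from the two spec sheets:

```python
qualia = {"watts": 612, "inches": 5.0, "pounds": 130, "dollars": 15000}
bravia = {"watts": 103, "inches": 1.65, "pounds": 31, "dollars": 989}
for spec in qualia:
    cut = (1 - bravia[spec] / qualia[spec]) * 100
    print(f"{spec}: {cut:.0f}% reduction")  # 83%, 67%, 76%, 93%
```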
And whether you care about environmental green or not, that’s a lot of folding green that you save.
DEP runs in two modes: hardware-enforced DEP for CPUs that support it, and software-enforced DEP for CPUs that don’t. Software DEP is performed by the operating system and, as such, carries a (small) performance hit.
It may make sense to disable DEP in virtual machines (especially test VMs) to eke out a little more performance. Read on for an explanation of how to do this.
Software DEP configuration is controlled through switches in the Boot.ini file; an example entry appears after the list below.
The four options for setting the DEP mode are:
- OptIn - Enables DEP only for OS components, including the Windows kernel and Windows drivers. Administrators can enable DEP for selected executable files with the Application Compatibility Toolkit (ACT).
- OptOut - Enables DEP for the OS and all processes, including the Windows kernel and Windows drivers. However, administrators can disable DEP on selected executable files with the Control Panel System applet.
- AlwaysOn - Enables DEP for the OS and all processes, including the Windows kernel and Windows drivers. All attempts to disable DEP are ignored, and all DEP configuration options are disabled.
- AlwaysOff - Disables DEP. Attempts to enable DEP selectively are ignored, and the DEP GUI is disabled.
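In Boot.ini these modes map to the /noexecute switch. A sketch of an entry with DEP fully disabled is below; the ARC path and OS description will vary by system:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /noexecute=AlwaysOff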
In Windows Server 2008 and Vista, you use bcdedit to set the DEP mode. The DEP configuration can be viewed using the bcdedit /enum osloader /v command. To configure DEP, use the /set nx switch. For example, to set the currently booted OS to DEP AlwaysOff, you would use the command:
bcdedit /set nx AlwaysOff
You configure DEP in other operating systems from the Performance Settings on the Advanced tab of the System Control Panel applet. | <urn:uuid:82f8657f-242a-49a1-adb1-c1895285b651> | CC-MAIN-2017-04 | http://www.expta.com/2008/01/dep-and-virtual-machines.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00416-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.859826 | 383 | 2.671875 | 3 |
Data Center Virtualization
What are data center virtualization services?
Data Center Virtualization Services are a means of optimizing data storage space and reducing digital footprint by storing data at locations geographically separated from applications. Data can be accessed, analysed and modified without knowing the physical storage location or other parameters of the data. Data center virtualization services use abstraction methods to hide the technical aspects of stored data, such as location, storage structure, storage technology, language, etc., from the data values. | <urn:uuid:2cb6cfcf-e6b6-4576-9e5d-1d478299356f> | CC-MAIN-2017-04 | https://www.hcltech.com/technology-qa/what-are-data-center-virtualization-service | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00324-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.825009 | 98 | 2.75 | 3 |
Hi everyone. Please help me understand the difference between top-down approach languages (e.g., COBOL) and bottom-up approach languages (e.g., C++), because in every language control starts from the top. Please help me clear up this confusion.
In the top-down model an overview of the system is formulated, without going into detail for any part of it. Each part of the system is then refined by designing it in more detail. Each new part may then be refined again, defining it in yet more detail until the entire specification is detailed enough to validate the model.
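A minimal sketch of that top-down style (a hypothetical report program of my own; the design idea, not the language, is the point): the overview is written first and every part is left as a stub to be refined in later passes.

def produce_report():        # the overview, written first
    data = gather_data()
    summary = summarize(data)
    publish(summary)

def gather_data():           # stub: refined in a later design pass
    raise NotImplementedError

def summarize(data):         # stub: refined in a later design pass
    raise NotImplementedError

def publish(summary):        # stub: refined in a later design pass
    raise NotImplementedError

A bottom-up version of the same program would instead begin with small, fully specified utilities and compose them upward, as described next.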
By contrast in bottom-up design individual parts of the system are specified in detail. The parts are then linked together to form larger components, which are in turn linked until a complete system is formed. | <urn:uuid:32c2ac45-1f71-4ea3-9134-cb7718287a85> | CC-MAIN-2017-04 | http://ibmmainframes.com/about11239.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00352-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922767 | 158 | 2.546875 | 3 |
2.1.8 What are interactive proofs and zero-knowledge proofs?
Informally, an interactive proof is a protocol between two parties in which one party, called the prover, tries to prove a certain fact to the other party, called the verifier. An interactive proof usually takes the form of a challenge-response protocol, in which the prover and the verifier exchange messages and the verifier outputs either "accept" or "reject" at the end of the protocol. Apart from their theoretical interest, interactive proofs have found applications in cryptography and computer security such as identification and authentication. In these situations, the fact to be proved is usually related to the prover's identity, such as the prover's private key.
It is useful for interactive proofs to have the following properties, especially in cryptographic applications:
- Completeness. The verifier always accepts the proof if the fact is true and both the prover and the verifier follow the protocol.
- Soundness. The verifier always rejects the proof if the fact is false, as long as the verifier follows the protocol.
- Zero knowledge. The verifier learns nothing about the fact being proved (except that it is correct) from the prover that he could not already learn without the prover, even if the verifier does not follow the protocol (as long as the prover does). In a zero-knowledge proof, the verifier cannot even later prove the fact to anyone else. (Not all interactive proofs have this property.)
A typical round in a zero-knowledge proof consists of a "commitment" message from the prover, followed by a challenge from the verifier, and then a response to the challenge from the prover. The protocol may be repeated for many rounds. Based on the prover's responses in all the rounds, the verifier decides whether to accept or reject the proof.
Let us consider an intuitive example called Ali Baba's Cave [QG90] (see Figure 2.8). Alice wants to prove to Bob that she knows the secret words that will open the portal at R-S in the cave, but she does not wish to reveal the secret to Bob. In this scenario, Alice's commitment is to go to R or S. A typical round in the proof proceeds as follows: Bob goes to P and waits there while Alice goes to R or S. Bob then goes to Q and shouts to ask Alice to appear from either the right side or the left side of the tunnel. If Alice does not know the secret words (for example, "Open Sesame"), there is only a 50 percent chance she will come out from the right tunnel. Bob will repeat this round as many times as he desires until he is certain Alice knows the secret words. No matter how many times the proof repeats, Bob does not learn the secret words.
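A toy simulation makes the soundness of the cave protocol concrete (this sketch is my own, not part of the original FAQ; the probabilities are the point, not the code):

import random

def run_rounds(knows_secret, rounds):
    for _ in range(rounds):
        side_entered = random.choice(["left", "right"])   # prover's commitment
        challenge = random.choice(["left", "right"])      # verifier's shout
        if not knows_secret and side_entered != challenge:
            return False   # a cheating prover is caught this round
    return True            # survived every challenge

trials = 100000
cheats = sum(run_rounds(False, 10) for _ in range(trials))
print(run_rounds(True, 10))       # honest prover: always True
print(float(cheats) / trials)     # cheater: about 2**-10, roughly 0.001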
There are a number of zero-knowledge and interactive proof protocols in use today as identification schemes. The Fiat-Shamir protocol [FS87] is the first practical zero-knowledge protocol with cryptographic applications and is based on the difficulty of factoring. A more common variation of the Fiat-Shamir protocol is the Feige-Fiat-Shamir scheme [FFS88]. Guillou and Quisquater [GQ88] further improved Fiat-Shamir's protocol in terms of memory requirements and interaction (the number of rounds in the protocol). | <urn:uuid:4364ca8e-ed1a-493f-a031-3ac490160135> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-are-interactive-proofs-and-zero-knowledge.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00381-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927673 | 702 | 3.328125 | 3 |
Functional programming in Python, Part 1
Making more out of your favorite scripting language
This content is part of the series: Charming Python
We'd better start with the hardest question: "What is functional programming (FP), anyway?" One answer would be to say that FP is what you do when you program in languages like Lisp, Scheme, Haskell, ML, OCAML, Clean, Mercury, or Erlang (or a few others). That is a safe answer, but not one that clarifies very much. Unfortunately, it is hard to get a consistent opinion on just what FP is, even from functional programmers themselves. A story about elephants and blind men seems apropos here. It is also safe to contrast FP with "imperative programming" (what you do in languages like C, Pascal, C++, Java, Perl, Awk, TCL, and most others, at least for the most part).
Personally, I would roughly characterize functional programming as having at least several of the following characteristics. Languages that get called functional make these things easy, and make other things either hard or impossible:
- Functions are first class (objects). That is, everything you can do with "data" can be done with functions themselves (such as passing a function to another function).
- Recursion is used as a primary control structure. In some languages, no other "loop" construct exists.
- There is a focus on LISt Processing (for example, the name Lisp). Lists are often used with recursion on sub-lists as a substitute for loops.
- "Pure" functional languages eschew side-effects. This excludes the almost ubiquitous pattern in imperative languages of assigning first one, then another value to the same variable to track the program state.
- FP either discourages or outright disallows statements, and instead works with the evaluation of expressions (in other words, functions plus arguments). In the pure case, one program is one expression (plus supporting definitions).
- FP worries about what is to be computed rather than how it is to be computed.
- Much FP utilizes "higher order" functions (in other words, functions that operate on functions that operate on functions).
Advocates of functional programming argue that all these characteristics make for more rapidly developed, shorter, and less bug-prone code. Moreover, high theorists of computer science, logic, and math find it a lot easier to prove formal properties of functional languages and programs than of imperative languages and programs.
Inherent Python functional capabilities
Python has had most of the characteristics of FP listed above since Python 1.0. But as with most Python features, they have been present in a very mixed language. Much as with Python's OOP features, you can use what you want and ignore the rest (until you need it later). With Python 2.0, a very nice bit of "syntactic sugar" was added with list comprehensions. While list comprehensions add no new capability, they make a lot of the old capabilities look a lot nicer.
The basic elements of FP in Python are the functions map(), reduce(), and filter(), and the operator lambda. In Python 1.x, the apply() function also comes in handy for direct application of one function's list return value to another function. Python 2.0 provides an improved syntax for this purpose. Perhaps surprisingly, these very few functions (and the basic operators) are almost sufficient to write any Python program; specifically, the flow control statements (if/elif/else, for, while, and even def) can all be handled in a functional style using exclusively the FP functions and operators. While actually eliminating all flow control commands in a program is probably only useful for entering an "obfuscated Python" contest (with code that will look a lot like Lisp), it is worth understanding how FP expresses flow control with functions and recursion.
Eliminating flow control statements
The first thing to think about in our elimination exercise is the fact that Python "short circuits" evaluation of Boolean expressions. This provides an expression version of if/elif/else blocks (assuming each block calls one function, which is always possible to arrange). Here's how:
Listing 1. "Short-circuit" conditional calls in Python
# Normal statement-based flow control
if <cond1>:   func1()
elif <cond2>: func2()
else:         func3()

# Equivalent "short circuit" expression
(<cond1> and func1()) or (<cond2> and func2()) or (func3())

# Example "short circuit" expression
>>> x = 3
>>> def pr(s): return s
>>> (x==1 and pr('one')) or (x==2 and pr('two')) or (pr('other'))
'other'
>>> x = 2
>>> (x==1 and pr('one')) or (x==2 and pr('two')) or (pr('other'))
'two'
Our expression version of conditional calls might seem to be nothing but a parlor trick; however, it is more interesting when we notice that the lambda operator must return an expression. Since -- as we have shown -- expressions can contain conditional blocks via short-circuiting, a lambda expression is fully general in expressing conditional return values. Building on our example:
Listing 2. Lambda with short-circuiting in Python
>>> pr = lambda s:s
>>> namenum = lambda x: (x==1 and pr("one")) \
....                    or (x==2 and pr("two")) \
....                    or (pr("other"))
>>> namenum(1)
'one'
>>> namenum(2)
'two'
>>> namenum(3)
'other'
Functions as first class objects
The above examples have already shown the first class status of functions in Python, but in a subtle way. When we create a function object with the lambda operation, we have something entirely general. As such, we were able to bind our objects to the names "pr" and "namenum", in exactly the same way we might have bound the number 23 or the string "spam" to those names. But just as we can use the number 23 without binding it to any name (in other words, as a function argument), we can use the function object we created with lambda without binding it to any name. A function is simply another value we might do something with in Python.
The main thing we do with our first class objects is pass them to our FP built-in functions map(), reduce(), and filter(). Each of these functions accepts a function object as its first argument.
- map() performs the passed function on each corresponding item in the specified list(s), and returns a list of results.
- reduce() performs the passed function on each subsequent item and an internal accumulator of a final result; for example, reduce(lambda n,m:n*m, range(1,10)) means "factorial of 9" (in other words, multiply each item by the product of previous multiplications).
- filter() uses the passed function to "evaluate" each item in a list, and return a winnowed list of the items that pass the function test.
We also often pass function objects to our own custom functions, but usually those amount to combinations of the mentioned built-ins.
By combining these three FP built-in functions, a surprising range of "flow" operations can be performed (all without statements, only expressions).
Functional looping in Python
Replacing loops is as simple as was replacing conditional blocks. for can be directly translated to map(). As with our conditional execution, we will need to simplify statement blocks to single function calls (we are getting close to being able to do this generally):
Listing 3. Replacing loops
for e in lst:  func(e)     # statement-based loop
map(func,lst)              # map()-based loop
By the way, a similar technique is available for a functional approach to sequential program flow. That is, imperative programming mostly consists of statements that amount to "do this, then do that, then do the other thing." map() lets us do just this:
Listing 4. Map-based action sequence
# let's create an execution utility function
do_it = lambda f: f()

# let f1, f2, f3 (etc) be functions that perform actions
map(do_it, [f1,f2,f3])    # map()-based action sequence
In general, the whole of our main program can be a map() expression with a list of functions to execute to complete the program. Another handy feature of first class functions is that you can put them in a list. Translating while is slightly more complicated, but is still possible to do directly:
Listing 5. Functional 'while' looping in Python
# statement-based while loop
while <cond>:
    <pre-suite>
    if <break_condition>:
        break
    else:
        <suite>

# FP-style recursive while loop
def while_block():
    <pre-suite>
    if <break_condition>:
        return 1
    else:
        <suite>
    return 0

while_FP = lambda: (<cond> and while_block()) or while_FP()
while_FP()
Our translation of while still requires a while_block() function that may itself contain statements rather than just expressions. But we might be able to apply further eliminations to that function (such as short circuiting the if/else in the template). Also, it is hard for <cond> to be useful with the usual tests, such as while myvar==7, since the loop body (by design) cannot change any variable values (well, globals could be modified in while_block()). One way to add a more useful condition is to let while_block() return a more interesting value, and compare that return for a termination condition. It is worth looking at a concrete example of eliminating statements:
Listing 6. Functional 'echo' loop in Python
# imperative version of "echo()"
def echo_IMP():
    while 1:
        x = raw_input("IMP -- ")
        if x == 'quit':
            break
        else:
            print x
echo_IMP()

# utility function for "identity with side-effect"
def monadic_print(x):
    print x
    return x

# FP version of "echo()"
echo_FP = lambda: monadic_print(raw_input("FP -- "))=='quit' or echo_FP()
echo_FP()
What we have accomplished is that we have managed to express a little program that involves I/O, looping, and conditional statements as a pure expression with recursion (in fact, as a function object that can be passed elsewhere if desired). We do still utilize the utility function monadic_print(), but this function is completely general, and can be reused in every functional program expression we might create later (it's a one-time cost). Notice that any expression containing monadic_print(x) evaluates to the same thing as if it had simply contained x. FP (particularly Haskell) has the notion of a "monad" for a function that "does nothing, and has a side-effect in the process."
After all this work in getting rid of perfectly sensible statements and substituting obscure nested expressions for them, a natural question is "Why?!" Every one of my descriptions of FP can be achieved in Python. But the most important characteristic, and the one most likely to be concretely useful, is the elimination of side-effects (or at least their containment to special areas like monads). A very large percentage of program errors, and of the problems that drive programmers to debuggers, occur because variables obtain unexpected values during the course of program execution. Functional programs bypass this particular issue by simply not assigning values to variables at all.
Let's look at a fairly ordinary bit of imperative code. The goal here is to print out a list of pairs of numbers whose product is more than 25. The numbers that make up the pairs are themselves taken from two other lists. This sort of thing is moderately similar to things that programmers actually do in segments of their programs. An imperative approach to the goal might look like:
Listing 7. Imperative Python code for "print big products"
# Nested loop procedural style for finding big products
xs = (1,2,3,4)
ys = (10,15,3,22)
bigmuls = []
# ...more stuff...
for x in xs:
    for y in ys:
        # ...more stuff...
        if x*y > 25:
            bigmuls.append((x,y))
            # ...more stuff...
# ...more stuff...
print bigmuls
This project is small enough that nothing is likely to go wrong. But perhaps our goal is embedded in code that accomplishes a number of other goals at the same time. The sections commented with "more stuff" are the places where side-effects are likely to lead to bugs. At any of these points, the variables bigmuls, x, and y might acquire unexpected values in the hypothetical abbreviated code. Furthermore, after this bit of code is done, all the variables have values that may or may not be expected and wanted by later code. Obviously, encapsulation in functions/instances and care regarding scope can be used to guard against this type of error. And you can always del your variables when you are done with them. But in practice, the types of errors indicated are common.
A functional approach to our goal eliminates these side-effect errors altogether. A possible bit of code is:
Listing 8. Functional approach to our goal
bigmuls = lambda xs,ys: filter(lambda (x,y):x*y > 25, combine(xs,ys))
combine = lambda xs,ys: map(None, xs*len(ys), dupelms(ys,len(xs)))
dupelms = lambda lst,n: reduce(lambda s,t:s+t, map(lambda l,n=n: [l]*n, lst))
print bigmuls((1,2,3,4),(10,15,3,22))
We bind our anonymous (lambda) function objects to names in the example, but that is not strictly necessary. We could instead simply nest the definitions. For readability we do it this way; but also because combine() is a nice utility function to have anyway (it produces a list of all pairs of elements from two input lists). dupelms() in turn is mostly just a way of helping out combine(). Even though this functional example is more verbose than the imperative example, once you consider the utility functions for reuse, the new code in bigmuls() itself is probably slightly less than in the imperative version.
The real advantage of this functional example is that absolutely no variables change any values within it. There are no possible unanticipated side-effects on later code (or from earlier code). Obviously, the lack of side-effects, in itself, does not guarantee that the code is correct, but it is nonetheless an advantage. Notice, however, that Python (unlike many functional languages) does not prevent rebinding of the names bigmuls, combine, and dupelms. If combine starts meaning something different later in the program, all bets are off. You could work up a Singleton class to contain this type of immutable bindings (as, say, s.bigmuls and so on); but this column does not have room for that.
One thing distinctly worth noticing is that our particular goal is tailor-made for a new feature of Python 2. Rather than either the imperative or functional examples given, the best (and functional) technique is:
print [(x,y) for x in (1,2,3,4) for y in (10,15,3,22) if x*y > 25]
I've shown ways to replace just about every Python flow-control construct with a functional equivalent (sparing side-effects in the process). Translating a particular program efficiently takes some additional thinking, but we have seen that the functional built-ins are general and complete. In later columns, we will look at more advanced techniques for functional programming; and hopefully we will be able to explore some more of the pros and cons of functional styles. | <urn:uuid:49efd36a-14da-4935-a390-99087d7021de> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/linux/library/l-prog/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00197-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.875191 | 3,499 | 3.203125 | 3 |
Widespread interest and excitement about cloud computing have emanated from businesses, government agencies, Information Technology (IT) executives, and other organizations seeking more dynamic, resilient and cost-effective Information Technology systems than previous generations of technology allowed.
Although the term cloud may connote a transitory or short-lived quality to the type of computing it describes, the benefits of cloud computing to customers are very tangible. Businesses around the world are adopting cloud computing in recognition of its potential to usher in a new era of responsiveness, effectiveness and efficiency in Information Technology service delivery.
Cloud computing is a model of computing founded on the delivery of services, software, and processing capacity over private or public networks. The focus of cloud computing is the user experience, and its essence is to decouple the delivery of computing services from the underlying technology.
Cloud computing is also remarkably user-friendly: the technology behind the cloud remains invisible to the end user. Cloud computing is an emerging approach to shared infrastructure in which large pools of systems are linked together in private or public networks to provide Information Technology (IT) services. The need for such environments is fueled by dramatic growth in connected devices, real-time data streams and the adoption of service-oriented architectures and Web applications, such as mashups, open collaboration, social networking and mobile commerce.
With cloud computing, Information Technology (IT) professionals can devote more energy to enhancing the value of using Information Technology for their enterprises and less on the day to day challenges of Information Technology. Cloud computing supports massive scalability to meet periods of peaks in demand while avoiding extended periods of under-utilized IT capacity. With the click of a mouse, services can be quickly expanded or contracted without requiring expenditure in deploying extra resources. The benefits include lower cost of ownership, which drives higher profitability, enabling you to invest much more in expanding your business. Cloud computing also yields significant cost savings in the real estate required for the data center as well as power and cooling costs.
Many application programs such as QuickBooks, Peachtree, Windows Server, etc. can be easily hosted in the cloud. Cloud computing fosters business innovation by enabling organizations to explore quickly and cost effectively the potential of new, IT-enabled business enhancements that can grow with unprecedented scale.
Cloud computing represents the next phase in the logical evolution in the delivery of Information Technology (IT) services, building on previous innovations that include grid, on-demand, and utility computing. | <urn:uuid:4e5a2907-d189-4cad-9d8b-270f4ea17ee3> | CC-MAIN-2017-04 | http://www.myrealdata.com/blog/178_cloud-computing-%E2%80%93-a-logical-evolution-in-it | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00013-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928959 | 496 | 2.828125 | 3 |
As we prepare for the New Year, we wanted to leave you with a piece of logic taken out of an older PIC16C series microcontroller. We want you to guess which micro(s) this gate (well, the pair of them) would be found in. After the New Year, we'll write up the actual micro(s) and give the answer :).
An AND gate outputs a high (logic '1') only when all inputs to the gate are high. For our example, we're discussing the 2-input AND. It should be noted that this AND is built from a NAND plus an inverter, and that a bare NAND would require two fewer transistors than an AND.
The truth table is simple: all inputs must be '1' to get a '1' on the output (Y); if any input is a '0', Y = '0'.
The photo above shows the schematic layout using P and N type FETs. A P-FET is conducting between the source and the drain when a logic '0' is presented on its gate. The N-FET is the exact opposite (a '1' conducts).
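A switch-level sketch in Python of the behavior just described (my own illustration, not extracted from the die): the parallel P-FETs pull the NAND output high when either input is '0', the series N-FETs pull it low only when both inputs are '1', and the inverter restores the AND sense.

def nand(a, b):
    # Either P-FET conducting (an input at 0) pulls the node high;
    # both series N-FETs must conduct (both inputs at 1) to pull it low.
    return 0 if (a == 1 and b == 1) else 1

def inverter(x):
    return 1 - x

def and_gate(a, b):
    return inverter(nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b))   # 1 only when a = b = 1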
As seen above, there are 2 signals we labeled 'A' and 'B' routed in the Poly layer of the substrate (under all the metal). This particular circuit is not on the top of the device and had another metal layer above it (Metal 2 or M2). So technically, you are seeing Metal 1 (M1) and lower (Poly, Diffusion).
It's quickly obvious that this is an AND gate but it could also be a NAND by removing the INVERTER and taking the '!Y' signal instead of 'Y'.
The red box to the left is the NAND leaving the red box to the right being the inverter creating our AND gate.
The upper green area contains the P-FETs, and the lower green area contains the N-FETs.
After stripping off M1, we now can clearly see the Poly layer and begin to recognize the circuit.
This is a short article, and we will follow up after the New Year begins. This is a single AND gate but was part of a pair; from the pair, this was the right side. We call them a pair because they work together to provide the security feature on some of the PIC16C's. We're asking you to guess which ones :)
If you have Photoshop, here is a link to a Photoshop image we created for you; by controlling the layer opacity you can virtually remove the top metal and see how the poly and M1 layers connect.
Happy Holidays and Happy Guessing! | <urn:uuid:17b8b6c1-eee4-4e3b-8e10-40a3d2d93284> | CC-MAIN-2017-04 | http://blog.ioactive.com/2007/12/and-gates-in-logic.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00435-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943865 | 543 | 3.59375 | 4 |
CRIME is short for “Compression Ratio Info-Leak Made Easy.” In their presentation, Rizzo and Duong reminded us that HTTPS provides confidentiality, integrity and authenticity; however, CRIME decrypts portions of an HTTPS message, such as a cookie, which can lead to the victim’s session being hijacked.
As previously stated, the CRIME attack takes advantage of compression. Compression is used at many levels and the speakers discussed TLS compression, SPDY header compression and HTTP response gzip compression.
If we compare plain text, encrypted text and encrypted compressed text, you might think that we are going from least secure to most secure. But that's wrong. If an adversary can trick you into compressing and encrypting a message of his choice, you could be revealing sensitive information in another part of the message, such as a cookie header.
For TLS compression or SPDY header compression, you must use DEFLATE. DEFLATE uses LZ77 to scan the input, look for repeated strings and replace them with back-references to the last occurrence. DEFLATE also uses Huffman coding to replace common sequences with shorter codes.
The resulting compression will make the encrypted message shorter. If the attacker can inject known information into the message before compression and encryption, then they can find out if the added information is a match (i.e., the compressed message will be shorter). If the message stays the same length, then they know the added information was not a match and they try again.
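A minimal sketch of that length oracle, using DEFLATE from Python's standard zlib module (the cookie value here is hypothetical; real attacks recover the secret byte by byte and average over many requests to smooth out ties):

import zlib

SECRET = b"Cookie: session=s3kr3t0xdeadbeef"

def request_size(attacker_bytes):
    # Attacker-controlled bytes and the secret share one compression
    # context, as they did under TLS and SPDY compression.
    return len(zlib.compress(attacker_bytes + SECRET))

print(request_size(b"Cookie: session=s"))   # correct next byte: shorter
print(request_size(b"Cookie: session=z"))   # wrong next byte: longer (or tied)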
Through repeated use, the attacker can eventually match the session cookie. Voila. The session is compromised.
For a better explanation of the SSL compression attack, please take a look at these references:
- Explaining the CRIME weakness in SPDY and SSL by Billy Hoffman
- DETAILS ON THE “CRIME” ATTACK by Tom Ritter
- CRIME by Adam Langley
In order to implement CRIME, the attacker needs two things. First, they need to sniff the victim's network traffic. This can be done in many ways: for example, they share a LAN with the victim, they have hacked a router, or they are a network administrator.
Second, they need to load code into the victim’s browser. This can be done by tricking the victim into visiting a compromised or malicious website or by injecting the code into the victim’s legitimate HTTP traffic.
The good news is the risk of CRIME attacks on SSL has been mitigated. Chrome and Firefox have disabled TLS compression and SPDY header compression. Internet Explorer, Safari and Opera did not support TLS compression or SPDY, so no changes were required.
Regarding SPDY moving forward, the plan is SPDY/4 will not be susceptible to CRIME.
Rizzo and Duong concluded that the problem is not over, however. TLS compression may also be a problem in the future with other non-browser implementations. They also think HTTP gzip may be a bigger problem than either SPDY or TLS compression.
They reminded everyone that compression is everywhere. I assume that means: watch out. | <urn:uuid:53e6d33e-514a-47c1-8bf9-be5decd6e91d> | CC-MAIN-2017-04 | https://www.entrust.com/summarization-of-crime-attack-on-ssl/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00069-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919122 | 651 | 2.828125 | 3 |
Young I.,University of Guelph |
Young I.,Public Health Agency of Canada |
Rajic A.,University of Guelph |
Rajic A.,Public Health Agency of Canada |
And 6 more authors.
Journal of Food Protection | Year: 2010
Provincial broiler-chicken marketing boards in Canada have recently implemented an on-farm food safety program called Safe, Safer, Safest. The purpose of this study was to measure broiler chicken producers' attitudes toward the program and food safety topics and use of highly recommended good production practices (GPP). Mailed and Web-based questionnaires were administered to all producers registered in British Columbia, Ontario, and Quebec in 2008. The response percentage was 33.2% (642 of 1,932). Nearly 70% of respondents rated the program as effective in producing safe chicken, and 49.1% rated the program requirements as easy to implement. Most respondents (92.9%) reported that they do not raise other poultry or keep birds as pets, and 79.8% reported that they clean and disinfect their barns between each flock cycle. Less than 50% of respondents reported that visitors wash their hands or change their clothes before entering barns, 38.4% reported that catching crews wear clean clothes and boots, and 35.8% reported that a crew other than from the hatchery places chicks. Respondents who rated the program requirements as effective or easy to implement were more likely to report the use of five of six highly recommended GPP. Only 21.1% of respondents indicated that Campylobacter can be transmitted from contaminated chicken meat to humans, and 26.6% believed that antimicrobial use in their industry is linked to antimicrobial resistance in humans. Continuing education of producers should focus on improving their awareness of these issues, while mandatory GPP should include those that are known to be effective in controlling Campylobacter and Salmonella in broiler chicken flocks. Copyright © International Association for Food Protection.
Parmley E.J.,University of Guelph |
Soos C.,University of Guelph |
Soos C.,Environment Canada |
Breault A.,Canadian Wildlife Service Pacific Yukon Region |
And 10 more authors.
Journal of Wildlife Diseases | Year: 2011
Surveillance for avian influenza viruses in wild birds was initiated in Canada in 2005. In 2006, in order to maximize detection of highly pathogenic avian influenza viruses, the sampling protocol used in Canada's Interagency Wild Bird Influenza Survey was changed. Instead of collecting a single cloacal swab, as previously done in 2005, cloacal and oropharyngeal swabs were combined in a single vial at collection. In order to compare the two sampling methods, duplicate samples were collected from 798 wild dabbling ducks (tribe Anatini) in Canada between 24 July and 7 September 2006. Low pathogenic avian influenza viruses were detected significantly more often (P < 0.0001) in combined oropharyngeal and cloacal samples (261/798, 33%) than in cloacal swabs alone (205/798, 26%). Compared to traditional single cloacal samples, combined samples improved virus detection at minimal additional cost. © Wildlife Disease Association 2011.
Furtula V.,Environment Canada |
Farrell E.G.,Environment Canada |
Diarrassouba F.,Agriculture and Agri Food Canada |
Rempel H.,Agriculture and Agri Food Canada |
And 2 more authors.
Poultry Science | Year: 2010
Veterinary pharmaceuticals are commonly used in poultry farming to prevent and treat microbial infections as well as to increase feed efficiency, but their use has created public and environmental health concerns. Poultry litter contains antimicrobial residues and resistant bacteria; when applied as fertilizer, the level and effects of these pharmaceuticals and antimicrobial-resistant bacteria in the environment are of concern. The purpose of this study was to investigate poultry litter for veterinary pharmaceuticals and resistance patterns of Escherichia coli. Litter samples were collected from controlled feeding trials and from commercial farms. Feed additives bacitracin, chlortetracycline, monensin, narasin, nicarbazin, penicillin, salinomycin, and virginiamycin, which were present in the feed on commercial farms and added to the feed in the controlled trials, were extracted in methanol and analyzed by liquid chromatography-mass spectrometry techniques. Sixty-nine E. coli were isolated and identified by API 20E. The susceptibility of the isolates to antibiotics was determined using Avian plates and the Sensititre automated system. This study confirmed the presence of antimicrobial residues in broiler litter from controlled environments as well as commercial farms, ranging from 0.07 to 66 mg/L depending on the compound. Concentrations of individual residues were higher in litter from controlled feeding trials than those from commercial farms. All E. coli isolates from commercial farms were multiresistant to at least 7 antibiotics. Resistance to β-lactam antibiotics (amoxicillin, ceftiofur), tetracyclines, and sulfonamides was the most prevalent. This study concluded that broiler litter is a source of antimicrobial residues and represents a reservoir of multiple antibiotic-resistant E. coli. © 2010 Poultry Science Association Inc.
Power B.A.,Azimuth Consulting Group |
Tinholt M.J.,SNC - Lavalin |
Hill R.A.,Azimuth Consulting Group |
Fikart A.,Azimuth Consulting Group |
And 4 more authors.
Integrated Environmental Assessment and Management | Year: 2010
The Crown Land Restoration Branch (CLRB) of the British Columbia Ministry of Agriculture and Lands is responsible for managing thousands of historic and abandoned mine sites on provincial lands (referred to as Crown Contaminated Sites). For most of these sites, there is limited information available regarding the extent of potential contamination or potential human health and ecological risks. Given the large number of sites, the CLRB sought a system for prioritizing investigation and management efforts among them. We developed a Risk-Ranking Methodology (RRM) to meet this objective, which was implemented in 2007/2008 with an emphasis on historic mine sites because of the significant number of sites and related potential risk. The RRM uses a risk-based Preliminary Site Investigation to gather key information about the sites. The information for each site is analyzed and summarized according to several attributes aimed at characterizing potential health and ecological risks. The summary information includes, but is not limited to, generic comparisons of exposure with effects levels (screening quotients) for human and ecological exposure pathways. The summary information (more than 25 attributes) is then used in a workshop setting to evaluate relative rankings among sites, and also to identify subsequent management actions for each site. Application of the RRM in 2007/2008 was considered successful, because there was confidence in the process, the content and the outputs. A key challenge was keeping the number of attributes to a manageable level. Ranking was based on discussion and consensus, which was a feasible approach given the relatively small number of sites that need to be ranked each year, and facilitated transparency in the ranking process. We do not rule out the future possibility of developing a quantitative function to capture trade-offs among attributes. © 2009 SETAC. | <urn:uuid:3582da4e-a784-4d8f-84ef-0a4a8de803fc> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/british-columbia-ministry-of-agriculture-and-lands-1004684/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00555-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939054 | 1538 | 2.859375 | 3 |
Secure Sockets Layer (SSL) technology has been and is still at the heart of secure web communication for just about as long as there has been a commercial internet. Based on algorithms developed by RSA, SSL certificates are available from numerous certificate authorities. Verisign built an Internet powerhouse on the basis of its SSL business. Consumers have grown up knowing that when they see that closed lock in their browser they can be confident that their communication is secure.
Lately, though, the fact that the lock was there has not always meant all was well. There have been incidents of forged certificates, hacking into Certificate Authorities, and the planned phase-out of the 1024-bit RSA certificates. The fear is that with the pure computational power available today, the venerable 1024-bit certs could be brute-forced. NIST has mandated the move to a new 2048-bit RSA standard.
Some are questioning whether the new 2048-bit certs are strong enough. But a bigger issue is that the larger the key, the more time and cycles it takes to use on both servers and endpoints. Moving up to 2048 bits is going to make an impact on speed and resources. Worse, if we have to move to 3072 bits, the load on servers and browsers grows further. As you go up the ladder in bits, the load becomes increasingly heavy.
Yes, the good old SSL is a little bit long in the tooth. Seems like a good refresh might be in order. The good news is that a refresh is coming, too. In addition to upgrading to the 2048-bit RSA standard, we now have two new standards for SSL. One is the DSA standard developed by the NSA. It uses a different algorithm than the RSA certificate, but in terms of size, the new NIST standard would require a 2048-bit certificate.
Another option is the Elliptic Curve Cryptography (ECC) algorithm. At only 256 bits, it is much smaller and therefore easier on the resources of both servers and endpoints. You can read a lot about it and about Symantec's new SSL certificates coming out that support it in a good article by Ellen Messmer here. The chart below shows some of the advantages of ECC.
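To make the size difference concrete, here is a rough sketch using the third-party Python "cryptography" package (the package and curve choice are my own illustration): a P-256 ECC key is generally considered comparable in strength to a 3072-bit RSA key.

from cryptography.hazmat.primitives.asymmetric import ec, rsa

ecc_key = ec.generate_private_key(ec.SECP256R1())
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
print(ecc_key.curve.key_size)   # 256
print(rsa_key.key_size)         # 3072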
By the way, those new Symantec certs support all three algorithms. Other certificate authorities are offering some "hybrid" ECC support. Symantec, as the owner of the Verisign business, is way out in front there, though. It has already seeded the ECC root in most of the major browsers, and most web servers have support for it as well. The question will be how quickly the ECC standard is adopted by other authorities and users of SSL certificates. But it would seem the advantages of ECC lead to its inevitable adoption.
But there is more news on the SSL front as well. The major certificate authorities have formed a new industry association to bring some leadership to the SSL certificate industry. Not a standards body, the new CA Security Council, according to this article on Dark Reading, "plans to serve as a research, security advocacy, and education organization for the SSL CA world, its founders say. It plans to support the work of the CA/Browser Forum and other standards bodies, and to help develop enhancements to SSL and the security and operation of the CA process."
By the way, there is a great FAQ on SSL over at the CASC site. If you are not up on SSL technology, have sort of just gone with the flow and want an easy to understand explanation, it is a good place to start.
In the meantime, that wily old padlock looks like it is getting another chance. With new technology and Certificate Authorities working together, we can probably all sleep easier. | <urn:uuid:8162ae80-2248-44c1-8707-ddf5b4c854a1> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2224044/opensource-subnet/ecc-and-the-ca-security-council--making-ssl-and-the-web-safe-today-and-tomorrow.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00279-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964203 | 765 | 2.546875 | 3 |
Privacy by Design (PbD) is a data privacy and protection concept developed by our friendly Canadian neighbors.
In general, PbD espouses the embedding of data privacy elements into organizations’ technologies and business practices. The goal is to bake privacy into the data life cycle, thereby foregoing the inefficient ad hoc privacy bolt-ons that we’re all familiar with. It’s great in theory. But is it possible in practice? If possible, is it even feasible?
Maybe…but first, let’s outline some more information about PbD. As stated on privacybydesign.ca:
Privacy by Design is a concept that was developed by the former Information and Privacy Commissioner of Ontario, Dr. Ann Cavoukian, back in the 90’s, to address the ever-growing and systemic effects of Information and Communication Technologies, and of large–scale networked data systems.
At the time, the notion of embedding privacy into the design of technology was far less popular – taking a strong regulatory stance was the preferred course of action. Since then, things have changed considerably and the Privacy by Design approach is now enjoying widespread popularity.
Privacy by Design advances the view that the future of privacy cannot be assured solely by compliance with legislation and regulatory frameworks; rather, privacy assurance must ideally become an organization’s default mode of operation.
Organizations that want to implemented Privacy by Design can follow the seven PbD Foundational Principles:
1. Proactive not Reactive; Preventative not Remedial. The PbD approach must be implemented using proactive rather than reactive measures. Properly structured, PbD anticipates and prevents privacy-invasive events before they occur.
2. Privacy as the Default Setting. Sensitive, personal and other similar data is automatically protected by an organization’s IT system and business practices. To be effective, no action is required on the part of the individual to protect his/her privacy – it is built into the system.
3. Privacy Embedded Into Design. Privacy is embedded into the design and architecture of IT systems, and business practices become an essential component of the organization. By embedding privacy, it becomes essential to the core functionality.
4. Full Functionality – Positive Sum, not Zero Sum. PbD reconciles all of an organization’s legitimate data privacy interests and objectives in a positive-sum, win-win manner — not through a zero-sum approach, where unnecessary trade-offs are made. PbD avoids the false dichotomy if privacy versus security, and demonstrates that it is possible to have both without sacrificing the functionality of the other.
5. End-to-End Life Cycle Protection. By virtue of PbD being baked into the environment, its concepts should thus be integrated into the entire life cycle of the data. Employing end-to-end privacy and data security capabilities prevent breach events or data loss because the data is handled appropriately from cradle to grave.
6. Visibility and Transparency. The component parts and operations of an organization’s data protection program remain visible and transparent to both end users and providers. Allowing visibility and transparency is a fundamental data privacy concept, creates accountability and builds trust.
7. Respect for User Privacy. PbD requires architects and operators to respect the interests of the individual by providing strong privacy defaults, adequate notice and consent functions, and empowering user-friendly options. PbD must bet user-centric. (For more information on these principles, click here.)
So back to my original question, is PbD even possible?
In fact, PbD’s refusal to be pigeonholed into one single aspect of data privacy program governance is why it’s valuable. It can be leveraged to provide a true organization-wide tool that, if used properly, will be tremendously effective at modernizing a company’s privacy program. And, it’s still highly relevant considering how far U.S. businesses are behind the data privacy power curve.
While PbD is possible in practice, what about its feasibility? Does it require an unrealistic expenditure of resources? Will the implementation of PbD be at odds with other security initiatives? Is PbD overkill if existing compliance programs are in place? The short answer to all of these questions is: It doesn’t have to.
In business, cash rules. Accordingly, one significant obstacle to creating legitimate privacy programs is the perceived high cost of doing so. It certainly doesn’t make sense to spend a bunch of money without have a useful return, so what are businesses to do? One insight is that a lot of existing, in-house knowledge and capabilities can be re-purposed to align with data privacy initiatives. (IT, HR, legal and other departments probably know more about data privacy than you think.) Harnessing such talent will reduce the costs otherwise needed to go find it outside the company. Also, I wouldn’t be doing my job if I didn’t take another kick at data breaches. The cost of data breaches can be crazy, especially when compared to what it would have cost to create a privacy program focused on minimizing unauthorized disclosures of data.
PbD also integrates very nicely with the steps needed to establish a dedicated enterprise privacy program. (Read more about that here). So, if progress is already being made in creating enterprise privacy practices, PbD can acts as facilitator — not a hindrance. Along those lines, PbD already has a lot of momentum in practice. For example, the FTC has adopted many PbD concepts in the guidance it provides. International data sharing partners are far along in implementing PbD principles to bridge privacy gaps. And the U.S. commercial sector has seen PbD’s advantages.
Finally, and what I consider the most relevant point, is that PbD does not have to constrain business operations or adversely effect information security. There’s a long held tenet in the business world that in order to have a successful privacy campaign, information security must suffer. This is refuted directly by the fourth principle of PbD: Privacy and security can co-exist symbiotically! This is something I’ll discuss in a future post. So rather than discuss the privacy versus security debate at length, I’ll say that the PbD principles are themselves a roadmap leading to the harmonious integration of privacy and security. Perhaps this symbiosis is inevitable because it has to be. | <urn:uuid:c2999448-4784-4c85-844a-2e08f543de99> | CC-MAIN-2017-04 | https://lunarline.com/blog/2015/09/privacy-design-data-protection-useful-useless/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00491-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926017 | 1,351 | 2.703125 | 3 |
Necessity is the mother of invention – Plato
Gartner’s “Hype Cycle for Emerging Technologies 2012” predicts that cloud computing will reach a plateau of productivity in 2-5 years’ time. The key enabling technologies for this are fast wide-area networks, powerful and inexpensive server computers, and high-performance virtualization for commodity hardware. However, a lack of standardization to guide development, deployment and integration efforts around technical challenges like interoperability, portability and reusability, and business concerns like security/compliance, regulation/jurisdiction and vendor lock-in are cited as major barriers for wider adoption and success.
The NIST (National Institute of Standards and Technology) cloud computing standards roadmap report, published in 2011, documents the fact that broad standards are already available in support of certain functions and requirements for cloud computing. While most of these standards were developed in support of pre-cloud computing technologies, such as those designed for web services and the Internet, they also support the functions and requirements of cloud computing. Other standards are now being developed in specific support of cloud computing functions and requirements, such as virtualization.
The NIST report further goes on to state that, from a standardization point of view, the cloud interfaces presented to cloud users can be broken down into two major categories, with interoperability determined separately for each category.
The interface that is presented to (or by) the contents of the cloud encompasses the primary function of the cloud service. This is distinct from the interface that is used to manage the use of the cloud service.
Now, if we have to understand this in Infrastructure as a Service (IaaS) cloud offering parlance, the NIST report elucidates that the functional interface is a virtualized Central Processing Unit (CPU), memory and input/output (I/O) space typically used by an operating system (and the stack of software running in that operating system [OS] instance).
The cloud user utilizes the management interface to control their use of the cloud service by starting, stopping, and manipulating virtual machine images and associated resources. It should be clear from this that the functional interface for an IaaS cloud is very much tied to the architecture of the CPU being virtualized. This is not a cloud-specific interface, and no effort is being put into a de jure standard for this interface since de facto CPU architectures are the norm.
The self-service IaaS management interface, however, is a candidate for interoperability standardization.
From a functional viewpoint, Platform as a Service (PaaS) is a set of libraries and components to which the application is written, mostly to take advantage of existing application platform standards such as those found in J2EE or .NET.
A SaaS application leverages the standards designed for web services and the internet.
Apart from interoperability, there is a lot of focus on cloud portability as the means to prevent being locked into any particular cloud or service provider. Portability is generally the ability to move applications and data from one computing environment to another. Standards are fundamental to achieve portability.
Security, ensuring the confidentiality, integrity, and availability of information and information systems, forms the third aspect where a standardized approach is warranted to alleviate the high-priority concerns and perceived risks related to cloud computing.
Forrester predicts IaaS will become more standardized by 2015, which is somewhat in line with Gartner’s hype cycle prediction. There’s a lot of effort taking place which is worth looking at.
DMTF’s (Distributed Management Task Force, Inc.) Virtualization Management (VMAN) Virtualization Profiles have achieved ANSI adoption. As DMTF defines it, the VMAN standard comprises two components: the Open Virtualization Format (OVF) specification, which provides a standard format for packaging and describing virtual machines and applications for deployment across virtualization platforms, and the Virtualization Profiles, which standardize many aspects of the operational management of a virtualized environment. Together, these components deliver broadly supported interoperability and portability standards to virtual computing environments for deploying pre-configured solutions across heterogeneous computing networks. [4]
Next in line from the DMTF stable is CIMI (Cloud Infrastructure Management Interface). Version one has been released, and the specification standardizes interactions between cloud environments to achieve interoperable cloud infrastructure management between service providers and their consumers and developers. CIMI is developed as a self-service interface for infrastructure clouds which allows users to dynamically provision, configure and administer their cloud usage. [4]
Coming to data portability, SNIA (Storage Networking Industry Association) is behind CDMI, which defines the functional interface that applications will use to create, retrieve, update and delete data elements from the cloud. As part of this interface the client will be able to discover the capabilities of the cloud storage offering and use this interface to manage containers and the data that is placed in them. In addition, metadata can be set on containers and their contained data elements through this interface. [5]
Service-Oriented Cloud Computing Infrastructure Framework (SOCCI), made available by the Open Group, is for enterprises that wish to provide infrastructure as a service in the cloud and SOA. It outlines the concepts and architectural building blocks necessary for infrastructures to support SOA and cloud initiatives.
On the open source side, OpenStack, the initiative with the largest vendor community, is creating a lot of de facto standards for operating systems that will be deployed on the cloud.
Open Cloud Computing Interface (OCCI), published by the Open Grid Forum, is a RESTful boundary protocol and API that acts as a service front-end to a provider’s internal management framework. OCCI describes APIs that enable cloud providers to expose their services. It allows the deployment, monitoring and management of virtual workloads (like virtual machines), but is applicable to any interaction with a virtual cloud resource through defined http(s) header fields and extensions. [6] OCCI endpoints can function either as service providers or service consumers, or both. Further, the OCCI working group and the OpenStack team are working together to deliver an OCCI implementation in OpenStack.
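To make the self-service flavor of OCCI concrete, here is a hedged sketch of a resource-creation call following the OCCI HTTP rendering, written with Python's requests package (the endpoint is hypothetical, and header details may differ between providers and spec versions):

import requests

headers = {
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    "X-OCCI-Attribute": "occi.compute.cores=2, occi.compute.memory=4.0",
}
resp = requests.post("https://cloud.example.com/compute/", headers=headers)
print(resp.status_code, resp.headers.get("Location"))  # Location of the new resource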
The nonspecific web and internet technology standards enabling the cloud are TCP/IP, HTTP, HTML, SSL, TLS, XML, JSON, DNS, etc.
Another point worth mentioning here is SDN (Software-Defined Networking) and how it will impact cloud computing. The SDN approach enables virtual networking with elastic resource allocation; in engineering terms, it lets the network react to application requirements.
SDN separates the control plane from the data plane in network switches and routers. Under SDN, the control plane is implemented in software in servers separate from the network equipment, and the data plane is implemented in commodity network equipment. The Open Networking Foundation has specified the OpenFlow protocol standard as an implementation of SDN.
Now, what we get by combining all these is a shared pool of configurable computing resources, e.g., networks, servers and storage, that can be rapidly provisioned, orchestrated and released in a standardized way.
The industry is already warming up to this prospect, which is evident from the early steps taken in this direction. Notable is CISCO’s ONE (Open Network Environment) that brings together CISCO, OpenStack and OpenFlow.
A few questions still remain to be answered. How do industry behemoths VMware and Microsoft plan to integrate standardization into their next product plans? What role could TSPs/carriers play in shaping standardization for cloud computing?
1. Hype Cycle – http://www.gartner.com/it/page.jsp?id=2124315
2. NIST – http://www.nist.gov/itl/cloud/index.cfm#
3. Forrester – http://blogs.forrester.com/james_staten/11-12-02-when_will_we_have_iaas_cloud_standards_not_till_2015
4. DMTF – http://dmtf.org/
5. CDMI – http://www.snia.org/cdmi
6. SOCCI – http://www.opengroup.org/soa/source-book/socci/intro.htm
7. Cisco ONE – http://www.theregister.co.uk/2012/06/14/cisco_one_sdn_openflow_openstack/
In St. Charles County, Mo., a couple visited the local library for reference assistance. Planning to move into the county, the couple were concerned about finding a home within a subdivision and school district that would allow their children to mingle with others of Asian background. Using a geographic information system (GIS), the librarian showed them which areas were most populated by people of Asian descent, and then layered the data with the school district map, allowing the couple to determine which neighborhoods and schools would be the best location.
In a growing number of libraries, patrons use GIS to determine the best places to open small businesses, launch community projects or apply for grant applications. GIS software has been used for some time by local government, natural resource agencies and utilities, but lately librarians are discovering and implementing this technology and integrating it into their patron services.
Geographic data in digital form is becoming more common in libraries. Many librarians were first exposed to GIS by the TIGER files distributed with the U.S. Census Bureau's 1990 census data. More than 1,300 government-depository libraries receive information from the federal government, including the TIGER files, which can be used to produce street maps of the entire United States.
In 1992, the Association of Research Libraries initiated the GIS Literacy Project, convinced that libraries could harness the power and capabilities of GIS. By teaching librarians the skills needed to provide access to spatial data, the association enabled participating libraries to design GIS programs suited for their particular needs.
As desktop GIS software becomes less costly and easier to use, and as increasing amounts of data become available, more librarians are bringing the technology into their operations. According to GIS vendor ESRI, its software is used in 300 libraries of various types, 100 of which are public libraries.
Public libraries are facing globalization of information, increased competition for public funding, the rapid pace of technological improvements in computing and telecommunications, demographic changes and increasing alternatives to library services. To remain competitive, libraries have responded by developing initiatives to provide quality services that meet and anticipate the needs of their user communities.
"The time of passive sitting and waiting for a patron is gone. We compete with other information providers, and we'd better be very proactive," says Anna Sylvan, GIS/government information resources coordinator of the St. Charles City-County Library District.
Libraries are appropriate facilities for the management and distribution of GIS maps and data. They are neutral, unbiased institutions, and are an established nationwide infrastructure. People who need access to information automatically think of libraries. It is libraries that users depend on for their data needs, and for resources that can interpret data. In addition, librarians are proficient in collection development, cataloging, access and preservation issues. All this makes for a strong case to provide GIS services in libraries.
Patterns of Use
The St. Louis Public Library, in partnership with the Southern Illinois University at Edwardsville, began providing patrons with GIS services in 1992 with the development of the St. Louis Public Library's Electronic Atlas. The library's GIS atlas replaced the federally developed Urban Atlas, an electronic atlas of St. Louis city and county that had not been updated since 1970.
The atlas is accessible on a Pentium workstation in the library's public service area and provides 35 thematic city and county maps containing selected data elements from the 1990 census. These maps were created and saved as files using ESRI's ArcView software and are estimated to meet 80 percent of patron needs. The atlas also allows patrons to match addresses and map their own data. Use of a color printer is available at no charge.
The system offers a simplified, customized user interface so that users of varying expertise can try GIS. Many patrons use the system quite easily. Staff assist those patrons who require help working with the system. Those patrons with more complex inquiries are referred to the Illinois and Missouri state data centers, which offer full GIS service. Conversely, the centers refer users to the library when the library is able to meet their needs.
"There are advantages in limiting patron options," said Ann Watts, the St. Louis Public Library's special coordinator. "We don't want a full-blown GIS. We have it available administratively, and we use it for internal planning. But we have removed certain degrees of functionality from the public workstation because, if you don't, patrons lose their file or they rename your files."
Watts explained that the library has steered away from the fully-functional GIS a university library might offer for assistance. "We're trying to create a reference problem-solving tool which makes it very different from use in an academic institution," she said.
The St. Louis Public Library (SLPL) does not purchase data sets, but uses public domain data customized by an outside consultant. The typical patrons are from the not-for-profit community and students working in the health-care field. Others are small-business developers working on business plans, and people doing grant applications.
SLPL reference librarians use the GIS as an integral part of their services. "We have no plans to be mapping beyond our immediate area. That's where our patron needs are and we are entirely patron driven," said Watts. "To me, GIS is a very traditional reference service."
St. Charles Goes Spatial
In 1995, Anna Sylvan was hired into her current position despite having no background in GIS. She realized that if she wanted to develop GIS within the library district, she would have to rely on herself, because no one within the county was really interested in the project.
"We had to come to a very definite understanding within the district as to what it is that we expect GIS to do for us," commented Sylvan. "Because we have financial restraints and a lack of GIS professional know-how or expertise, we came to the conclusion that GIS for us would be to supply information based on public domain data primarily."
The library also required that data be formatted for ArcView, because staff did not have the technical or financial means to do it themselves.
According to Sylvan, the library couldn't afford to purchase commercial data sets, which can cost several thousand dollars each. Instead, she turned to CARES (Center for Agricultural Resources and Environmental Systems), a consortium developed at the University of Missouri-Columbia for GIS information.
"CARES received a huge grant to develop a user-friendly application with custom programming that would offer people an easy way to access information on demography and physical attributes on a county level," Sylvan explained. "The director said they chose St. Charles County as a pilot test and he would gift me with that set." The library also received street-address data free of charge from ESRI and was able to purchase, at a reasonable cost, a commercial data set of the businesses in St. Charles County.
The St. Charles County Library uses ArcView 3.08 on a single stand-alone Pentium workstation running Windows NT. The computer has 48MB of RAM, a 2GB hard drive, a 21-inch monitor and a color printer. In July, patrons began using the workstation to view maps created from data sets provided by the library.
The library promotes its GIS through in-house brochures, publicizing it in local papers and by spreading the word to the Chamber of Commerce and businesses within the county.
"If all goes well, and we do expect it to, within one year, probably fiscal year 1999, we are definitely moving toward putting GIS on the Internet," added Sylvan. "This means that the county is going to be offering interactive online services with GIS capabilities to anybody who is going to hit our site."
According to Sylvan, to use GIS successfully and effectively as a reference tool in libraries, first state the mission and then decide what you want to use the GIS for. "You can't have everything for everybody," she cautioned. "It's just simply too massive, too expensive. I was able to argue that GIS is nothing more than using the same type of information that we get in different formats, just simply used in a geospatial way."
The biggest hurdle to offering GIS in a library is having data readily available, converted to a format that libraries can use. "In public libraries, we don't have programmers, nor do we have ArcInfo [a sophisticated GIS software package], which costs $20,000," Sylvan said. "We don't have people who can create data and I think that is the biggest limitation.
"We have to really develop a very cohesive, coherent plan for promoting our services," she said. "Public libraries will survive only if they have the support of the community, because with that goes the tax base and everything else."
Just as important, Sylvan said, GIS requires teamwork in the library, networking with other librarians and partnerships with outside entities. Librarians need to be entrepreneurial. While data sets are usually expensive to produce, by talking with data producers, especially in local, regional and state government agencies, librarians are often able to get the data free of charge by pointing out that agencies will then be able to refer users to the library instead of burdening themselves with answering queries.
A Director's Tool First
GIS presents a new way of thinking for libraries and their patrons. Among librarians, an understanding of GIS technology has grown steadily. More and more libraries are providing GIS services. However, libraries still have a way to go in making GIS available to the public.
Boston's Simmons College offers one of the country's leading library sciences programs. Peter Hernon teaches GIS at Simmons and feels that the technology will take on a stronger, more visible role in libraries once librarians take advantage of GIS in their own strategic planning. For instance, librarians can use GIS to analyze patterns of patron use in public libraries and their branches. According to Hernon, librarians can employ GIS to answer a host of questions:
* From what areas of the community are the patrons coming to use the various branch libraries?
* What are the characteristics of the populations served by each library?
* Do the materials used vary by area served?
By using the volumes of data available in their computerized circulation systems, together with GIS, librarians can find the answers to many of these questions. Information generated by GIS can assist librarians in making more effective decisions about the location of library facilities, the number of branch locations required and the provision of appropriate services for different populations.
"GIS needs to be marketed not to reference librarians, but library directors, as a management tool," said Hernon. "A lot of libraries are not going to adopt GIS until they understand that it has a management application to it that offers real value to libraries, and that library directors could be using GIS for their own planning processes. Then we will see more widespread use of GIS in libraries."
Pat Newcombe is the reference librarian at Western New England College School of Law.
Barring bad weather, NASA said the space shuttle Discovery, mounted atop the space agency's 747 Shuttle Carrier Aircraft, will make a series of low passes, at about 1,500 feet, around parts of Washington, D.C., on April 17 between 10 and 11 a.m. Eastern Daylight Time.
The exact route and timing of the flight, which has the blessing of the Federal Aviation Administration, depends on weather and operational constraints, NASA said. The aircraft/shuttle combo is expected to fly near a variety of landmarks including the National Mall, Reagan National Airport and National Harbor.
After it's done taking a tour of the area, the aircraft will land at Dulles Airport, which is next door to the Smithsonian National Air and Space Museum's Udvar-Hazy Center, where Discovery will be towed and ultimately displayed.
The other retiring shuttles, Endeavour and Atlantis, will make their retirement trips later this year, with Endeavour taking the piggyback 747 flight from Florida to Los Angeles this fall. Atlantis will be transported from the Orbiter Processing Facility to the Kennedy Space Center Visitor Complex in November, NASA said.
Software engineers at Intel are exploring new ways for computers to perceive the human voice, gestures and head-and-eye movements to supplement the traditional ways that people use the keyboard and mouse.
BARCELONA -- Software engineers at Intel are exploring new ways people can use the human voice, gestures and head-and-eye movements to operate computers.
Intel's Barry Solomon uses hand gestures in a demonstration of a perceptual computing toolkit being used by independent developers. (Photo by Matt Hamblen/Computerworld)
In coming years, their research is expected to help independent developers build computer games, help doctors control computers used in surgery, and assist firefighters when they enter flaming buildings.
"We don't really know what this work will become, but it's going to be fascinating to watch it play out," said Craig Hurst, Intel's director of visual computing product management, in an interview at Mobile World Congress. "So far, what we've seen has gone beyond what we thought of originally."
Intel's visual computing unit, created two years ago, has grown to become a top priority for the chip maker, Hurst said. Last fall, the unit released several software toolkits that are used by independent developers to create a raft of new and sometimes unusual applications.
One of the toolkits, called the Perceptual Computing SDK (software developer kit), was distributed to outside developers building applications that will be judged by Intel engineers. Intel is planning to award $1 million in prizes to developers in 2013 for the most original application prototype designs, not only in gaming design, but also in work productivity and other areas.
Barry Solomon, a member of the visual computing product group, demonstrated how the Intel software is being used by developers on Windows 7 and Windows 8 desktops and laptops. With a special depth-perception camera clipped to the top of his laptop lid and connected over USB to the computer, Solomon was able to show how the SDK software rendered his facial expressions and hand gestures on the computer screen, accompanied by an overlay of lines and dots to show the precise position of his eyes and fingers. A full mesh model can then be rendered.
With that tracking information easily available, a developer can quickly insert a person's face and hands into an augmented reality scenario. Or, the person can be quickly overlaid onto a green screen commonly seen in video applications to make a weather or news report. The person's gestures could be used by a developer to interact with functions in a game or productivity application.
A company called Touchcast is building a green-screen application that will be available later in 2013. The prototype camera Intel uses in its perceptual computing demonstrations with the SDK, called the Creative Interact Gesture camera, will also go on sale later this year.
Hurst said Intel's role in building the SDKs for developers is to "reduce the barriers" to making creative new applications. Voice software company Nuance worked with Intel on the speech recognition capabilities, while SoftKinetic provided depth recognition software for the camera and augmented reality software.
The depth of field is not room-sized like Microsoft's Kinect games for Xbox 360. Intel's version reaches from six inches to three feet from the camera attached to the laptop lid, Solomon said. Eventually, Intel expects to provide a perceptual computing toolkit for smartphones and tablets, which people interact with differently than desktops and laptops.
The basis for perceptual computing concepts that Intel is building with independent developers and partners has been around for years, but Hurst said that faster processors and better cameras, as well as consumer demand, "have risen to the point that it's become interesting."
It's possible that a surgeon could some day use a hand gesture to move through pages on a computer screen, rather than touching the screen and risking hand contamination. Voice commands could have the same advantage.
"There's a lot of nuance that you don't get from the keyboard and a mouse," Hurst said.
Hurst also said that the various apps that developers build by using Intel's free tools will mushroom. At some point, Intel may decide to charge for the tools, but so far the company wants to build a community of developers globally.
Hurst predicted that Intel's tools will get plenty of competition from other companies in the computing world. "Once developers see how easy it is to get access to these development capabilities, there will be an explosion in the ecosystem," he said. "This work is a very high priority for Intel."
In a remote corner of the massive MWC trade fair in hall 8.1, Intel is showing one of the prototype perceptual computing apps that an independent German developer built. The app allows hand gestures and voice commands to be used to move through an on-computer catalog of photos.
Solomon smiled as he pointed out that the prototype was built by a small game controller company that took the enigmatic name of "4tiitoo."
The company's name is a playful reference to "42," the number that had a central role in the science fiction novel The Hitchhiker's Guide to the Galaxy, where it is described as the "Answer to the Ultimate Question of Life, the Universe, and Everything."
It's an ambitious name for an ambitious idea.
Matt Hamblen covers mobile and wireless, smartphones and other handhelds, and wireless networking for Computerworld. Follow Matt on Twitter at @matthamblen.
This story, "Intel demos perceptual computing software toolkit," was originally published by Computerworld.
We describe a resource discovery and communication system designed for security and privacy. All objects in the system, e.g., appliances, wearable gadgets, software agents, and users have associated trusted software proxies that either run on the appliance hardware or on a trusted computer. We describe how security and privacy are enforced using two separate protocols: a protocol for secure device-to-proxy communication, and a protocol for secure proxy-to-proxy communication. Using two separate protocols allows us to run a computationally-inexpensive protocol on impoverished devices, and a sophisticated protocol for resource authentication and communication on more powerful devices.
We detail the device-to-proxy protocol for lightweight wireless devices and the proxy-to-proxy protocol which is based on SPKI/SDSI (Simple Public Key Infrastructure / Simple Distributed Security Infrastructure). A prototype system has been constructed, which allows for secure, yet efficient, access to networked, mobile devices. We present a quantitative evaluation of this system using various metrics.
Malicious DNS tunnelling is a big problem in cybersecurity. The technique involves the use of the Domain Name System (DNS) protocol to smuggle sensitive corporate or personal information out of a network, and to enable malware command and control communications in and out.
Indeed, as the Infoblox Security Assessment Report revealed recently, two in five enterprise networks showed evidence of DNS tunnelling in the second quarter of 2016.
However, all is not necessarily as it seems. Close scrutiny of apparent DNS tunnelling traffic repeatedly reveals a quantity of anomalous activity that appears harmful but is, in fact, sent intentionally by users and services on the enterprise network, and tends not to be malicious in nature.
Exploiting the DNS protocol
Typically DNS queries are very small data packets, and their intended purpose is not to transport any data other than that needed to perform name-resolution services. And, although the introduction of authentication mechanisms such as DNSSEC and DKIM may have changed the landscape over recent years, their primary intent is also only to serve up information on a domain name, rather than transporting any other data.
But, there is sufficient flexibility in the DNS protocol that unrelated data can be inserted into a DNS query and then sent in to or out from a targeted network.
DNS signalling, the most basic form of this technique, typically involves using a cryptographic hash function to encode information into query strings or response records. Performance tends to be quite slow though, as the restrictive size of DNS packets mean a large number are required even for a small amount of data.
This is taken a step further by DNS tunnelling which, by employing surprisingly basic techniques, uses DNS queries to encode other protocols such as http, ftp or SMTP, over a DNS session.
For the sake of simplicity, and given their essential similarity, both of these techniques can be viewed under the header of DNS tunnelling.
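For illustration, a minimal sketch of the encoding side of DNS tunnelling is shown below. The domain is a hypothetical attacker-controlled zone, and real tools such as iodine layer framing, compression, and encryption on top of this basic idea.

```python
# Minimal sketch of how data can ride inside DNS queries (illustrative
# only; the domain and chunk size are assumptions, and real tools add
# framing, compression and encryption on top of this).
import binascii

ATTACKER_DOMAIN = "example-tunnel.com"  # hypothetical controlled domain
MAX_LABEL = 63                          # DNS limits each label to 63 bytes

def encode_queries(data: bytes):
    """Split data into hex chunks and emit one DNS name per chunk."""
    hexed = binascii.hexlify(data).decode()
    chunks = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    # Each lookup of seq.chunk.example-tunnel.com ultimately reaches the
    # attacker's authoritative name server, smuggling the chunk out as a
    # perfectly ordinary-looking hostname query.
    return [f"{seq}.{chunk}.{ATTACKER_DOMAIN}" for seq, chunk in enumerate(chunks)]

for name in encode_queries(b"credit-card-batch-0042"):
    print(name)
```

Because the queries are resolved recursively on the victim's behalf, no direct connection to the attacker is ever needed, which is exactly why DNS is such an attractive smuggling channel.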
“Legitimate” DNS tunnelling
Within an enterprise, the use of DNS for legitimate communications can often set off false alarms with networking and security teams on the lookout for malicious DNS tunnelling. Most companies that employ this unsanctioned use of DNS tend not to advertise the fact and this can present a challenge to security teams looking for insidious use of the protocol. After all, legitimate and malicious use can look practically identical at first glance.
Of course, those using DNS in this way are generally taking creative shortcuts rather than deliberately abusing their organisation’s networks.
It all started around twenty years ago, when paywalls in certain hotels and airports blocked direct access to the internet via standard protocols such as HTTP. It was noticed, however, that DNS wasn’t blocked and tools including NSTX, Dnscat and iodine were subsequently released allowing web sessions and email to be tunnelled through a user’s DNS connection. Over the years these tools have evolved to provide full VPN services over DNS, with dozens of examples freely available on GitHub and elsewhere.
Not a wise use of the protocol
As well as setting off false alarms and raising concerns around theft-of-service, DNS tunnelling, even as a means of legitimate communication, is not a wise use of an organisation’s DNS protocol. Indeed, using DNS to transport data is misusing the protocol to deliberately circumvent measures put in place by the network operator.
It could be used to proxy past workplace productivity filters designed to block Facebook or personal email services, for example, or for something more sinister that could represent a risk to the whole company.
However, it appears there are a large number of commercial products which use DNS signalling as a means of providing data transfer services.
For example, at around the same time that DNS tunnelling was becoming popular as a technique, some manufacturers of customer premises equipment (CPE) were experiencing issues in sending updates out to their various consumer-grade Wi-Fi routers or cable and DSL modems across consumer and SMB networks.
It transpired that there was some inconsistency on the various types of traffic allowed through certain ISPs, and setting up proper connections through NAT-based routers was proving less than straightforward. DNS was seen as being a viable alternative and it wasn’t long before some of the CPE companies were using the protocol to perform software updates and other maintenance tasks with their installed base.
Today, most enterprise-grade networks will handle such tasks using proper communications and authentication channels. Internal departments and branch offices can often have cheaper CPE equipment, however, meaning that these signals are being transported over DNS – even in an enterprise network.
Elsewhere, the need for nearly continuous communication with their customers has seen some anti-virus (AV) vendors set up file hash identification routines via DNS. While this is undeniably a quick and effective way of determining whether a suspect file is infected or not, it can potentially open up a network to malicious communications.
Essentially, the main problem with DNS tunnelling techniques is that they circumvent controls put in place by a network team, opening up security, compliance and operational concerns while, at the same time, overloading the DNS protocol and anomaly detection systems put in place to examine DNS traffic.
Businesses are increasingly trying to protect their DNS as its importance becomes clearer, and are beginning to realise how much extraneous DNS traffic is running on their networks.
It's perhaps optimistic to expect the practice to stop completely, but efforts could be made to persuade IT vendors and manufacturers to become less reliant on it, and ultimately make it less difficult to secure this valuable and vulnerable protocol.
Trust, applied to the cloud, means that, even though organizations no longer have physical custody of their files, by embedding security into the document itself they have the means to secure sensitive documents so that they can be shared and still remain private.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Employees are increasingly turning to consumer grade file sharing services such as Dropbox for business activities, and even if that use is sanctioned by IT, custody remains a challenge because, although the enterprise still owns the data, custody moves to the cloud provider. It is difficult, if not impossible, to maintain visibility and control over data in the cloud and prove chain of custody. Complicating the situation, data can be compromised without IT’s knowledge, since they may not even be aware that documents are being stored and shared in the cloud.
What’s needed is a trustworthy cloud. Trust in the physical world is achieved through relationships and contracts, and enforced using oversight and punitive action in response to a breach of trust. Building on the concept of trust, trustworthiness is a model that uses carefully designed and implemented technology, policies and reputation networks to achieve data security. Applied to the cloud, it means that even though organizations no longer have physical custody of their files, by embedding security into the document itself they have the means to secure sensitive documents so that they can be shared and still remain private.
Trustworthiness uses low-level cryptographic algorithms to enforce policies, revoke access rights and monitor access activities. It is defined and controlled exclusively by the data owner without any intervention from the cloud service provider. In a trustworthy cloud scenario, authorized users have visibility into groups and documents—limited by their role—but in a manner that doesn’t weaken the cryptography or open the system to additional attacks. This approach prevents the misuse of cloud data from going undetected by creating a comprehensive audit trail of who is accessing files.
When content is stored in a trustworthy cloud, policies set up by the data owner are enforced by a solution provider without the solution or cloud provider ever having access to the data itself. This is called zero knowledge and relies on advanced federated key management technology.
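A minimal sketch of that idea follows, using the Python cryptography library's Fernet recipe as a stand-in for a real federated key-management scheme; the upload call is a hypothetical placeholder, not any vendor's API.

```python
# Hedged sketch of the "zero knowledge" idea: encrypt on the client, so
# the storage provider only ever holds ciphertext. Real deployments use
# federated key management rather than a single local key; Fernet is
# just a convenient stand-in for the cryptography involved.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # stays with the data owner, never uploaded
f = Fernet(key)

document = b"Q3 board minutes - confidential"
ciphertext = f.encrypt(document)   # this is all the cloud provider stores

# cloud_store.put("minutes.docx", ciphertext)   # hypothetical upload call
assert f.decrypt(ciphertext) == document        # only key holders can read it
```

The essential property is that neither the storage provider nor the security vendor ever sees the key, so a subpoena, breach, or insider at the provider yields only ciphertext.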
Zero knowledge-based document sharing enables collaboration across organizational boundaries using any cloud storage provider, since federated cryptography is attached to the content rather than depending on the cloud container. For IT, it provides the ability to accommodate the growing popularity of BYOC (bring your own cloud) for business document sharing, while maintaining the visibility and control required for Governance, Risk Management, and Compliance. As an added benefit, the Trustworthy Cloud does not force users to adopt new tools or impose changes to an organization’s existing security and audit infrastructures.
Implementing a Trustworthy Cloud
A trustworthy cloud establishes a provable separation of authority between the custodian of the information (the service provider) and the content owner and others who may have varying degrees of authorization to view or modify this information. The aggregate cryptographic algorithms and protocols provide strong guarantees of data privacy and chain of custody.

In this context, privacy is the ability of participants to control disclosure of sensitive business data. Confidentiality, meanwhile, refers to the commitment by the service provider to refrain from accessing or disclosing the data. A trustworthy cloud solution replaces the conventional need to rely on confidentiality on the part of a cloud service provider with the ability to rely on technological controls to enforce data privacy. This is made possible by implementing an emerging approach that places security on the content instead of on the container itself.

A trustworthy cloud has the all-inclusive ability to establish an electronic chain of custody record. This indelible record captures where the data originated, who may have accessed or modified it during its lifetime, and where and when there was a transfer of possession, no matter where it resides. In addition, content owners or the parties with fiduciary responsibilities for its lifecycle management can specify, monitor, and enforce fine-grain retention, disposition, and hold policies on data that is not in their possession. In practice, these records and policies can be carried as metadata that is based on the content but may be stored and encrypted separately from it.
A trustworthy cloud also provides scalable security federation to enable the secure sharing of documents across organizational trust boundaries (e.g. outside their firewall) in a manner that is as simple for employees as using consumer solutions such as Dropbox. This capability is especially important for companies in regulated industries such as Healthcare, Accounting, Pharmaceutical and Finance where data privacy and provenance control are mandated.
For example, a trustworthy cloud would enable a publicly traded company to comply with SOX 404 even when a third party cloud provider possesses the data. In HIPAA- or FDA-regulated environments, a trustworthy cloud would allow an organization that uses public cloud services to meet the requirements of HIPAA/HITECH or Business Associate Agreement (BAA) contracts.
With the rapid authorized and unauthorized use of public cloud sharing by their employees, organizations can no longer afford to ignore the data privacy issues these services engender. A trustworthy cloud approach that enforces security on the content itself eliminates the cloud container as a potential point of compromise. This enables organizations to implement and enforce “zero knowledge” encryption that is transparent to employees, and prevents both the cloud service provider and the security vendor from accessing business information.
AlephCloud is a provider of cloud content privacy solutions.
IBM Research Creates Programming Model to Mimic Human Brain Power
IBM scientists unveiled a new programming model to support chips that mimic the way the human brain works.

IBM Research has created a new programming model to support its chips that mimic the workings of the human brain, known as Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, chips. In an effort partially funded by the Defense Advanced Research Projects Agency (DARPA), IBM on Aug. 8 announced a breakthrough software ecosystem designed for programming silicon chips that have an architecture inspired by the function, low power and compact volume of the brain. The technology could enable a new generation of intelligent sensor networks that mimic the brain's abilities for perception, action and cognition, IBM said. IBM's long-term goal is to build a chip system with 10 billion neurons and a hundred trillion synapses, while consuming merely 1 kilowatt of power and occupying less than two liters of volume, Big Blue said. To get there, IBM researchers knew they had to deliver not only the new hardware, but a new software paradigm. IBM said the new programming model is dramatically different from traditional software. Indeed, IBM's new programming model breaks the mold of sequential operation underlying today's von Neumann architectures and computers. It is instead tailored for a new class of distributed, highly interconnected, asynchronous, parallel, large-scale cognitive computing architectures.
Cognitive computing, of course, is nothing new to IBM. Its Watson supercomputer is perhaps the best example of cognitive computing the company has to offer. "Cognitive computing systems are not based on programs that predetermine every answer or action needed to perform a function or set of tasks; rather, they are trained using artificial intelligence (AI) and machine learning algorithms to sense, predict, infer and, in some ways, think," IBM says on its Web page defining cognitive computing. The new software ecosystem consists of the following components:
- Simulator: A multithreaded, massively parallel and highly scalable functional software simulator of a cognitive computing architecture comprising a network of neurosynaptic cores.
- Neuron Model: A simple, digital, highly parameterized spiking neuron model that forms a fundamental information processing unit of brainlike computation and supports a wide range of deterministic and stochastic neural computations, codes and behaviors. A network of such neurons can sense, remember and act upon a variety of spatio-temporal, multimodal environmental stimuli.
- Programming Model: A high-level description of a "program" that is based on composable, reusable building blocks called "corelets." Each corelet represents a complete blueprint of a network of neurosynaptic cores that specifies a based-level function. Inner workings of a corelet are hidden so that only its external inputs and outputs are exposed to other programmers, who can concentrate on what the corelet does rather than how it does it. Corelets can be combined to produce new corelets that are larger, more complex or have added functionality.
- Library: A cognitive system store containing designs and implementations of consistent, parameterized, large-scale algorithms and applications that link massively parallel, multimodal, spatio-temporal sensors and actuators together in real time. In less than a year, the IBM researchers have designed and stored more than 150 corelets in the program library.
- Laboratory: A novel teaching curriculum that spans the architecture, neuron specification, chip simulator, programming language, application library and prototype design models. It also includes an end-to-end software environment that can be used to create corelets, access the library, experiment with a variety of programs on the simulator, connect the simulator inputs/outputs to sensors/actuators, build systems and visualize/debug the results.
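To make the corelet concept from the list above concrete, here is a purely hypothetical sketch; the class and function names are invented for this illustration and bear no relation to IBM's actual Corelet Language.

```python
# Purely illustrative model of corelet composition; every name here is
# invented for this sketch and is not IBM's Corelet Language.
class Corelet:
    """A black box exposing only input and output connectors."""
    def __init__(self, name, n_inputs, n_outputs):
        self.name, self.n_inputs, self.n_outputs = name, n_inputs, n_outputs

def compose(name, first, second):
    """Wire first's outputs to second's inputs, yielding a bigger corelet."""
    assert first.n_outputs == second.n_inputs, "connector counts must match"
    return Corelet(name, first.n_inputs, second.n_outputs)

edge_detector = Corelet("edges", n_inputs=1024, n_outputs=256)
motion_tracker = Corelet("motion", n_inputs=256, n_outputs=16)

# A programmer reuses both without knowing their internal neuron wiring:
tracker = compose("edge-motion-tracker", edge_detector, motion_tracker)
print(tracker.name, tracker.n_inputs, tracker.n_outputs)
```

The point of the pattern is the information hiding: only external connectors are visible, so larger systems can be assembled from library parts without exposing the underlying neurosynaptic wiring.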
While CODECs, like H.264 and MJPEG, get all the attention, a camera's 'quality' or compression setting has a big impact on overall quality. In this training, we explain what this level is, what options you have and how you should optimize it.
Let's start by reviewing these two images from two cameras - (A) and (B):
Here's a question, which image comes from the higher-resolution camera? Please answer before continuing.
If your gut feel is that this is a trick question, you are right. Indeed, with the information presented, the best answer is likely that it cannot be determined. In this case, the technically correct answer is 'neither: they are the same resolution'. We used the same camera for each image and simply lowered the quality level for the 'B' image, while keeping everything else the same, including resolution (720p) and CODEC (H.264).
The fact that two exact shots with the same resolution can look significantly different has a number of important implications. Inside, we dig in.
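The effect is easy to reproduce yourself with any image tool. Here is a minimal sketch using Python's Pillow library and JPEG's quality parameter; the input file name is a placeholder for any 720p capture you have on hand.

```python
# Same resolution, different "quality" setting: a minimal reproduction of
# the A/B comparison above using JPEG's quality parameter (Pillow library;
# frame.png is a placeholder for any 1280x720 snapshot).
from PIL import Image
import os

frame = Image.open("frame.png").convert("RGB")   # e.g. a 1280x720 capture

for quality in (90, 30):
    out = f"frame_q{quality}.jpg"
    frame.save(out, "JPEG", quality=quality)
    print(out, frame.size, os.path.getsize(out), "bytes")

# Both outputs report the same 1280x720 resolution, but the low-quality
# file is far smaller and shows visible compression artifacts, like image B.
```

The same trade-off applies to H.264 quantization levels in IP cameras: resolution alone tells you nothing about how heavily the encoder is discarding detail.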
Do you know that the everyday objects around us now possess the capability to talk? This is as much about stuff you typically find in your home—coffee makers, refrigerators, microwaves, lamps, fans—as it is about industrial heavy-duty machines—engines, oil drill rigs, construction equipment, etc.—virtually anything with any form of connectivity to the Internet. The embedding of sensors into the things we have been using all this while means we have a whole new plethora of “intelligent" machines endowed now with the capability to provide us feeds and feedback. Everything is now connected by the Internet of Things.
So what is it after all? The Internet of Things or Industrial Internet of Things is the massive network that connects all these things or physical objects, helping us to “talk" to devices, turn them on/off, monitor them, and even predict how they would/should function over a time period. IoT, as it stands, serves to blur the lines between the animate and the inanimate. The phenomenon that it is applies to almost everything and everyone from banking and insurance to energy/utilities, and further even to agriculture and manufacturing. All of a sudden, things that were seemingly always a part of our day-to-day life are now digital, implying that we can connect and manage them better. As we talk to these objects and they talk back, the world around us inevitably becomes an extension of one's own self.
IDC estimates that the IoT will encompass ~30 bn installed/connected units by 2020, and the opportunities this creates for consumers are phenomenal—imagine something as trivial as your alarm clock signaling the coffee maker to start brewing a fresh cup in 30 minutes or a main door lock triggering a forced turn-off of all lights in the house. Thinking even broader, a flight delay could well trigger airport retailers to spring into action and dole out offers to the flight's understandably harried passengers.
Businesses, too, stand to gain substantially from the multiple avenues opened up by IoT—they could create more efficiencies and optimized throughput for business processes, personalize offerings based on customer needs/preferences, even predict and prescribe actions at every step in the value chain, and all in real time. Who wouldn't want an inventory that monitors itself and issues a re-order request when the quantity drops below a threshold, or, maybe, a retail shop shelf that transmits a personalized offer to a user's smartphone based on a fetched-in-real-time customer preference/sentiment or her/his most recent store activity? In another parallel, a service provider could well have a remote ops center connected to machinery equipment, collecting all vital signals coming in, and further analyzing them to correlate patterns and undertake predictive maintenance.
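As a hedged sketch of the self-monitoring inventory idea, the rule can be as simple as a threshold check on each sensor reading; the reorder call below is a placeholder for whatever ERP hook a real deployment would use.

```python
# Sketch of a self-monitoring inventory rule: a smart-shelf reading
# triggers a re-order once stock drops below a threshold. The reorder
# function is a hypothetical stand-in for a real ERP integration.
REORDER_THRESHOLD = 25
REORDER_QUANTITY = 100

def place_reorder(sku, quantity):
    print(f"re-order issued: {quantity} units of {sku}")  # hypothetical ERP call

def on_shelf_reading(sku, units_detected):
    """Called each time the smart shelf reports its current stock level."""
    if units_detected < REORDER_THRESHOLD:
        place_reorder(sku, REORDER_QUANTITY)

on_shelf_reading("SKU-4417", 31)   # above threshold: nothing happens
on_shelf_reading("SKU-4417", 19)   # below threshold: re-order issued
```

Real systems add debouncing, pending-order tracking, and analytics on top, but the core pattern is exactly this event-driven rule evaluated on streaming sensor data.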
The broader applications would, of course, include, among several others:
- Smart cities
- Smart grids
- Waste management
- Emergency alerts
- Energy management (identify leakages/efficient usage)
- Real-time security surveillance
- Pollution monitoring
- Predictive maintenance of equipment
- Smart lighting (or heating/air conditioning)
- Climate—temperature/weather monitoring for associated weather-based actions
- Remote-controlled appliances
- Utilities monitoring
- Fire/smoke detection
- Patient care
- Elderly surveillance
- Intelligent tags to monitor drug usage in real time
- Remote diagnostics
- Medical Equipment monitoring
- Smart parking
- Monitoring package movements
- Traffic routing
Foundations of IoT
IoT rests on three foundational building blocks:
- Sensors that detect and transmit unique data for an event; could include location, temperature, speed, machine reading, usage levels, or virtually anything
- Connectivity—critical to ensuring that information reaches from the producer (the thing/machine/device) to the consumer (another thing/device/machine/user)
- People and processes—could constitute location-based services, intelligent operations, emergency response, security surveillance systems, and the like
Challenges of IoT

Nothing is devoid of challenges and issues, and IoT is faced with some, too:

- Privacy concerns regarding what should/should not be allowed to be monitored
- Deemed lack of personal touch, as every entity is an IP address after all
- Standardization across communications from devices, systems, et al, across heterogeneous countries, jurisdictions, and environments
- Managing vast amounts of data
- Change management and adoption issues with people given the pervasive nature of IoT
Nevertheless, IoT, by the sheer nature of its applications and the volume, variety, and velocity of data produced, has clear and present implications for service providers (BPM/technology/analytics). IDC predicts the market will grow from $1.3 trillion back in 2013 to ~$3 trillion in 2020 (a CAGR of 13%).
Sensors, RFIDs, and QR devices produce data in real time and hence demand 24×7 delivery, the ability to respond quickly, and, importantly, skilled domain experts who understand business process nuances.
Again, wireless connectivity is everywhere, but the key will be the proficiency and effort needed to synthesize and analyze this plethora of big data with all its dynamics and nuances, and produce a discerning output. A service provider with appropriate software and mechanisms in place to "listen" to the steady stream of bits and bytes from a machine could well connect the dots and produce meaningful data → information → insight → action paths.
Regardless of the thing in question, IoT backed by a competent service provider enables seamless control and intelligent operations—rule-based behavior and exception handling, easy prediction of working conditions based on known past behavior, and proactive and customized targeting of any issues on a case-by-case basis.
In sum, more and more devices getting online effectively means a digital nervous system that can be harnessed by all kinds of digital technologies—mobility, cloud, and the like—further implying that service providers need to ramp up fast on capabilities to become relevant in this increasingly connected world.
Mind maps have been used for many years as a way of structuring information in a visual way.
Spark Space was set up to create mind map technology that would help people with dyslexia create complex documents.
What people have not realised is that it could and I believe should be used by everyone. This is just another example of technology that has been created for a niche user group but has the potential to help a much wider audience.
1. When did I see it?
I went to BETT, a vast exhibition for the education sector, and visited the special educational needs (SEN) area to see whether there were any products that could be useful to people with disabilities outside the education arena.
It was in fact a good place to see a large range of assistive technologies and related products.
2. What is it?
Spark-Space is a mind-mapping tool with a difference. It has a word processor built into it and it can be used to create complete documents. I have used it to produce this short report. The mind map you can see should be read in clockwise fashion. So this report is structured:
- When did I see it?
- What is it? (this section)
- What is it designed for?
- Why is it great?
- What it still needs?
- Why should everyone have it?
The text you are reading now sits behind the 'What is it?' node.
2.1. What was it designed for?
The Spark-Space founder has dyslexia and found it very difficult at school to write good essays or reports. She was later diagnosed with severe dyslexia alongside a high IQ.
To help others overcome similar situations she designed and developed, with her husband, a 3D Mind mapping tool. The idea is that many dyslexics can brainstorm a concept come up with lots of ideas but cannot then structure them in a meaningful linear fashion. The mind map allows the visual grouping and reordering of ideas. Colours, shapes and icons can be used to further illustrate and clarify the concept.
The individual ideas in the concept can then be described by short sections of text using the built-in document processor. These bite-size chunks of text are manageable by many people with dyslexia.
When each of the ideas has been expanded with the text the whole concept can be viewed as a single document or report, with each idea being a heading or sub-heading. The complete report can then be exported to Microsoft Word or a web format.
It has been shown that people with dyslexia find it much easier to produce document in this manner than trying to write in a linear fashion. It is also true that they find it easier to read a document if they can see its structure as a mind map.
2.2. Why is it great?
For anyone with dyslexia the ability to map ideas visually has been shown to be a real help. But Spark-Space has much more:
- It automatically creates a structured document based on the map.
- It enables text to be written in bite/idea size chunks.
- It allows the easy restructuring of the document. For example I could decide that the idea 'What it needs?' should be moved to just before the conclusion, I could do that by a few mouse clicks on the map rather than a laborious process of cut and paste in the document.
- It enables very quick navigation to parts of the document. Whilst I was writing this document I had some thoughts on the product, so I added another idea for them, so anytime I had another thought I just double-clicked on the idea and added the thought.
- Text to voice is built-in. A single mouse click reads the text back. Important for dyslexia but convenient for all.
- Having developed the ideas the document can be exported to Microsoft Word or similar word processors for wider distribution.
- Complex maps with many ideas on them can be hard to view. Spark-Space enables sub-trees to be hidden, and also has a facility to turn the map in three dimensions, bringing areas of immediate interest to the fore.
- A mind map is a very good technique for presenting to a group, people can see it and discuss it very easily. The ability then to dive into the text of an idea makes it a very powerful tool for a team to develop and review a document.
2.3. What it still needs?
When I see a novel idea my mind immediately starts thinking how it could be improved or extended, this is especially true when the idea has been designed for a niche market but has value to a much wider audience.
The word processor built-in to the product is adequate for most processes but does not have the scope of a full function word processor such as Microsoft Word or Open Office. When Spark-Space was originally developed in 2001 it would have been very difficult to integrate Word into it because of the internal document format. However, now that both Word and Open Office have XML formats, as does Spark-Space, a tighter integration would be very interesting. The ideas map could become an alternative view of the document like a very powerful version of the outline view.
3. Why should everyone have it?
Most people, unless they have severe vision impairment, find visual representations quick and easy to absorb. Spark-Space is an extra tool for developing documents that provides:
- A fast method of documenting a brainstorm.
- An easy way to structure ideas.
- A quick method of navigation and filling in the ideas.
- A way to present the document to a team.
As I research accessibility I keep coming across products that have been developed for a special user group but could have a much wider market. Spark-Space is such a product: it is undoubtedly useful to people with dyslexia, but it could also be used to improve the productivity of people who create documents, whilst at the same time producing higher-quality documents.
I look forward to it being integrated into standard word processors and becoming a pervasive tool.
Depledge D.P., Center for Medical Molecular Virology
Kundu S., Center for Medical Molecular Virology
Jensen N.J., National VZV Reference Laboratory
Gray E.R., Center for Medical Molecular Virology
and 8 more authors
Molecular Biology and Evolution, 2014
Immunization with the vOka vaccine prevents varicella (chickenpox) in children and susceptible adults. The vOka vaccine strain comprises a mixture of genotypes and, despite attenuation, causes rashes in small numbers of recipients. Like wild-type virus, the vaccine establishes latency in neuronal tissue and can later reactivate to cause Herpes zoster (shingles). Using hybridization-based methodologies, we have purified and sequenced vOka directly from skin lesions. We show that alleles present in the vaccine can be recovered from the lesions and demonstrate the presence of a severe bottleneck between inoculation and lesion formation. Genotypes in any one lesion appear to be descended from one to three vaccine-genotypes with a low frequency of novel mutations. No single vOka haplotype and no novel mutations are consistently present in rashes, indicating that neither new mutations nor recombination with wild type are critical to the evolution of vOka rashes. Instead, alleles arising from attenuation (i.e., not derived from free-living virus) are present at lower frequencies in rash genotypes. We identify 11 loci at which the ancestral allele is selected for in vOka rash formation and show genotypes in rashes that have reactivated from latency cannot be distinguished from rashes occurring immediately after inoculation. We conclude that the vOka vaccine, although heterogeneous, has not evolved to form rashes through positive selection in the mode of a quasispecies, but rather alleles that were essentially neutral during the vaccine production have been selected against in the human subjects, allowing us to identify key loci for rash formation. © 2013 The Author.
Primer: Virtual Servers
By David F. Carr | Posted 2006-01-14
Virtual servers can improve efficiency in your data center by replacing lots of small servers with fewer, bigger, easier-to-manage machines.
What is a virtual server? Virtual servers make it possible to place multiple applications on a single physical server, yet run each within its own operating system environment, known as a virtual machine. So, when one virtual server crashes or is rebooted, the others continue operating without interruption.
Is it new? Early virtual machines appeared on IBM mainframes back in the 1960s. What's new is the realization that virtualization software for Intel-based servers can now support enterprise applications. Products such as VMware's ESX Server, originally used for development and testing, are finding increasing acceptance in data center environments.
What are other vendors doing? Microsoft Virtual Server 2005, while not yet widely deployed for enterprise-level production applications, represents a first step toward incorporating virtual server support into the Windows server architecture.
While there are hardware-based partitioning schemes in which the processors on a server are divided among virtual machines, software-based systems are attracting the most attention. Software virtual machines have the advantage of flexibility because the fraction of server resources they use is independent from the underlying hardware and can be easily reconfigured. Software-based approaches, though, have higher overhead than hardware approaches because of the layer of software added to supervise and coordinate the guest operating systems.
What are the benefits? The common practice in data centers of assigning each application its own server avoids clashes between programs, such as requirements for different versions of operating system libraries. But it can also result in low utilization of each server. By running many applications on a single server but each within a virtual machine, a data center can achieve more efficient use of server resources. "If they have to reboot their transportation planning server, they don't want to also take down the human-resources server," says Jaime Gmach, president of Evolving Solutions, a Hamel, Minn., consulting firm.
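A back-of-the-envelope calculation shows how much capacity the one-application-per-server practice leaves idle. All of the numbers here are hypothetical, chosen only to illustrate the consolidation arithmetic:

```python
import math

# Hypothetical fleet: one application per physical server, each mostly idle.
servers = 12
avg_utilization = 0.08        # 8% average CPU use on each dedicated box
target_utilization = 0.60     # consolidation target, leaving spike headroom

total_work = servers * avg_utilization               # 0.96 "whole servers"
hosts_needed = math.ceil(total_work / target_utilization)
print(f"{servers} dedicated servers -> {hosts_needed} virtualized hosts")
# 12 dedicated servers -> 2 virtualized hosts
```

Even with generous headroom for load spikes, a dozen lightly loaded machines collapse onto two virtualized hosts, which is the efficiency gain the article describes.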
How does it work? The virtual server breaks dependencies between application, operating system and underlying hardware. An older application can be ported to a virtual Windows NT environment that simulates the 1990s hardware on which it was designed to run. "When you virtualize it onto a more modern machine that's faster, you're going to see huge performance gains," says Al Muller, a consultant with Callisma and co-author of Virtualization with VMware ESX Server. Virtualization makes sense for simple applications and network services, such as backup, e-mail, Web and domain name servers. Muller and Gmach say they would hesitate, however, to move a critical enterprise application onto a virtual server.
How do the offerings differ? VMware provides two options, GSX Server and ESX Server. The difference is the virtual machine monitor: the software that controls how server resources are assigned to each virtual machine. GSX Server runs on top of a full copy of Windows or Linux; ESX Server runs a stripped-down proprietary operating system, known as a hypervisor, to minimize the overhead of running multiple operating systems.
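To illustrate what the monitor's resource-assignment job amounts to, here is a toy proportional-share allocator. It is a sketch of the general idea, not VMware's or Microsoft's actual scheduling algorithm:

```python
def allocate_cpu(shares, interval_ms=1000):
    """Toy proportional-share allocator: split a scheduling interval among
    virtual machines in proportion to their configured shares. Real monitors
    also handle reservations, limits, and idle credit, ignored here."""
    total = sum(shares.values())
    return {vm: interval_ms * s // total for vm, s in shares.items()}

# Three guests with weighted shares contending for one CPU-second.
print(allocate_cpu({"erp": 4, "web": 2, "test": 1}))
# -> {'erp': 571, 'web': 285, 'test': 142} (integer rounding loses a few ms)
```

Whether the monitor sits on a host operating system (GSX Server, Virtual Server 2005) or runs on the bare metal as a hypervisor (ESX Server), this apportioning of CPU time, memory, and I/O among guests is the work it performs.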
Like GSX Server, Microsoft Virtual Server 2005 is hosted on a full copy of the operating system, here Windows 2003. Another option is the Xen open-source project from England's University of Cambridge. Achieving a similar effect by different means is SWsoft's Virtuozzo, used by many Web hosting firms to parcel out resources to developers who in turn resell that capacity to customers. | <urn:uuid:0c0bd28d-91b1-4f1d-b261-9b3bfcf5ee0c> | CC-MAIN-2017-04 | http://www.baselinemag.com/it-management/Primer-Virtual-Servers | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00042-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924281 | 728 | 2.765625 | 3 |
Once the exclusive domain of a small number of geniuses, hacking has gone mainstream as an element of national defense.
The United States, after suffering massive data breaches, has established a four-star Cyber Command to coordinate the military's response in the digital domain. NATO established the Cooperative Cyber Defense Center of Excellence in Estonia after that nation was the target of extensive cyber attacks.
When Georgian government systems came under cyber attack during the Russian offensive in Abkhazia and South Ossetia, the nation shifted critical Internet assets to a private hosting company in Atlanta, USA. Those systems subsequently came under attack as well.
At what point does hacking (read, “computer network attack”) rise to the level of warfare? Could United Nations Article 51 be invoked to engage collective self-defense against an attacker? How well informed are the political leaders who will decide how a nation pursues its cyber objectives? What role should we play as cyber-citizens? We’ll examine some of the skirmishes that have set the stage for all-out cyberwarfare, and explore why we haven’t yet fought the “big one.”
[Embedded video: G. Mark Hardy’s talk from Shmoocon 2013] | <urn:uuid:e28fedf0-2f1e-4b40-bae4-c0050545da1f> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/03/19/hacking-as-an-act-of-war/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00042-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929237 | 245 | 2.953125 | 3