Banks and their customers are facing a new wave of phishing attacks that is making it more difficult than ever to protect customers from identity theft and fraud. The increasing sophistication of phishing scams makes it harder for consumers to discern the difference between a legitimate bank e-mail message and a fraudulent one, according to industry experts.
One new type of phishing attack is particularly hard to identify. The technique can result in stolen personal data even if the recipient of the fraudulent e-mail is not fooled by it. When a bank customer simply opens the e-mail, a program attached to the e-mail by the phisher silently runs a script - even if the customer deletes the message without clicking on any embedded links. When that customer attempts to visit his or her bank's legitimate Web site - during that session or a future session - the malicious code redirects the person being phished to a fraudulent Web site.
Even a savvy Web-banking customer is vulnerable to this type of attack. Banks are educating customers on how to identify a fraudulent e-mail, but financial institutions can't do much to protect clients from simply opening fraudulent e-mail, according to Alex Shipp, senior antivirus technologist, MessageLabs (New York), a provider of e-mail security services. "It is difficult because banks don't own their clients' computers," Shipp says. "They can't do much to protect customers, but what they can do is, as soon as they learn about these sites, they can take them down," he continues. "It's more of a reactive thing; there is not much they can do proactively."
Recently, three Brazilian banks, including Unibanco (Sao Paulo), were the target of this scheme, according to Shipp. And MessageLabs expects to see more phishing attacks of this type, he says. Shipp points out that this particular scam only works on machines running Microsoft Windows, although Mac and Linux users can be affected if they also run Windows. He suggests using only Windows systems that have had all available security patches installed.
Did You See That Masked Man?
Another phishing technique that has flourished is actually a combination of hacking and spamming. As with a traditional phishing attack, the assailant sends a fraudulent e-mail to consumers. However, this technique directs recipients to a legitimate bank Web site. With a false sense of security, users are more likely to enter personal information, which is then harvested by the fraudster, according to Susan Larson, vice president of global content, SurfControl (Scotts Valley, Calif.), a Web and e-mail filtering solutions provider.
In this type of scam, the phishers take advantage of security holes in financial institutions' Web sites, Larson explains. "Anyone doing any e-commerce is at risk," she adds. "The customers think they are on the [legitimate] site, [but the data] is really going to a fraudulent site."
SunTrust (Atlanta; $199 billion in total assets) customers were the target of this type of phishing. As soon as SunTrust became aware of the threat, the bank corrected the security flaw in its Web site, according to Hugh Suhr, a SunTrust spokesperson. The bank has a fraud alert section on its Web site and warns customers that it does not solicit personal information through e-mail. "We never ask for confidential information via e-mail," Suhr says. SunTrust also is taking proactive steps to combat phishing, but Suhr says he cannot divulge which technologies are being leveraged - for security reasons, of course. | <urn:uuid:a501b599-e4f5-460b-88d8-d7b58b705bb9> | CC-MAIN-2017-04 | http://www.banktech.com/channels/14067/d/d-id/1290070 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00531-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953784 | 736 | 2.53125 | 3 |
New biometric software developed by Michigan State University researchers and funded by the Gates Foundation promises to increase vaccination rates in developing countries by allowing for better record-keeping.
It also may concern some privacy advocates and explode a head or two among anti-vaxxers.
From the MIT Technology Review:
Billions of dollars a year are spent vaccinating children in developing countries, but about half as many immunizations are administered as could be because of unreliable vaccination records. Biometric researchers from Michigan State University have developed a fingerprint-scanning system for children under five years old that could replace ineffective paper vaccination records.
Until now, biometrics experts believed fingerprints of babies and toddlers were too unreliable because image sensors are designed for the ridges and valleys of adult fingertips. The Michigan State University researchers developed software that makes it feasible to accurately match fingerprints of children under five with off-the-shelf equipment. They intend to present a paper detailing their work at a biometrics conference later this month. Paper-based vaccination records are easily lost and don’t reliably provide health workers with up-to-date information on patient history. Fingerprints are a better biometric trait than the iris of the eye or palm and footprints because they are easier to record from young children and the sensors are small and work quickly.
The researchers acknowledge that the software needs refining to increase the accuracy of print matching, which was 70 percent in one field trial conducted in Benin, West Africa. They believe 95 percent is attainable.
They also believe the software could be useful in other applications such as countering insurance fraud.
While the latter prospect may concern some privacy advocates, the researchers note that it has not been an issue among the parents they are looking to serve first. | <urn:uuid:3711b687-457c-4d6a-97c7-24b8374b43ec> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2602582/software/software-should-save-lives-by-increasing-vaccinations-in-developing-countries.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00439-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962126 | 358 | 2.796875 | 3 |
The Next Tech Wave
How machine learning changes the game
- By Konstantin Kakaes
- Jul 15, 2013
What do video surveillance, speech recognition and autonomous vehicles have in common? They're all getting better amazingly quickly -- and needing less and less human help to do so. (FCW illustration)
In the past decade, computer scientists have made remarkable progress in creating algorithms capable of discerning information in unstructured data. In controlled settings, computer programs are now able to recognize faces in photographs and transcribe spoken language. Cars laden with laser sensors can create 3-D representations of the world and use those representations to navigate safely through chaotic, unpredictable traffic.
In the coming decade, improvements in computational power and techniques will allow programs such as voice and face recognition to work in increasingly robust settings. Those technological developments will affect broad swathes of the American economy and have the potential to fundamentally alter the routines of our daily lives.
There is not one single reason for these improvements. Various approaches have proven effective and have improved over the years. However, many of the best-performing algorithms share a common trait: They have not been explicitly programmed by humans. As David Stavens, a Stanford University computer scientist, wrote about Junior, an autonomous car that earned Stanford second place in the Defense Advanced Research Projects Agency's 2007 Urban Challenge: "Our work does not rely on manual engineering or even supervised machine learning. Rather, the car learns on its own, training itself without human teaching or labeling."
In a wide range of examples, techniques that rely on self-supervised learning have leapfrogged traditional computer science approaches that relied on explicitly crafted rules. Supervised learning — in which an algorithm is first trained on a large set of data that has been annotated by a human and then is let loose on other, unstructured data — is effective in cases where an algorithm benefits from some initial structure. In both cases, the rules the computer ultimately used were never explicitly coded and cannot be succinctly described.
Such so-called machine learning algorithms have a long history. But for much of that history, they were more interesting for their theoretical promise than on the basis of real-world performance. That has changed in the past few years for a variety of reasons. Chief among them are the availability of large datasets with which to train learning algorithms and cheap computational power that can do such training quickly. Just as important, though, are developments in methodology that make it possible to use that data — millions of images tagged online by, say, Flickr users, or linguistic data stretching to the billions of words — in advantageous ways.
The new generation of learning techniques holds the promise not only of matching human performance on tasks that have heretofore been impossible for computers, but of exceeding it.
The market for speech recognition is huge and will only grow as the technology improves. Call centers alone account for tens of billions of dollars in annual corporate expenditure, and the mobile telephony market is also worth billions. Nuance, the company behind Apple's Siri voice-recognition engine, announced in November 2012 that it is working with handset manufacturers on a telephone that could be controlled by voice alone.
According to Forbes, Americans spend about $437 billion annually on cars and buy 15.1 million automobiles each year. According to the General Services Administration's latest tally, federal agencies own nearly 660,000 vehicles. As technologies for autonomy improve, many and eventually most of those cars will have detectors and software that will enable them to drive autonomously, which means the potential market is enormous.
The impact of image-analysis technologies such as facial recognition will also be transformative. Government use of such technologies is already widespread, and commercial use will increase as capabilities do. Video surveillance software is already a $650 million annual market, according to a June report by IMS Research.
Just as the commercial stakes for those and other applications of machine learning are high, so too are the broader questions the new capabilities raise. How does the nature of privacy change when it becomes possible not only to record audio and video on a mass scale but also to reliably extract data — such as people's identities or transcripts of their conversations — from those recordings? The difficult nature of the questions means they have largely escaped public discussion, even as the debate over National Security Agency surveillance programs has increased in recent weeks following Edward Snowden's disclosures.
Li Deng, a principal researcher at Microsoft Research, wrote in a paper in the May issue of IEEE Transactions on Audio, Speech and Language Processing that there are no applications today for which automated speech recognition works as well as a person. But machine learning techniques, he said, "show great promise to advance the state of the art."
There are many machine learning techniques, including Bayesian networks, hidden Markov models, neural networks of various sorts and Boltzmann machines. The differences between them are largely technical. What the techniques have in common is that they consist of a large set of nodes that connect with one another and make interrelated decisions about how to behave.
Those complicated networks can "learn" how to discern patterns by following rules that modify the way in which a given node reacts to stimuli from other nodes. It can be done in a way that simply seeks out patterns without any human-crafted prompting (in unsupervised learning) or by trying to duplicate example patterns (in supervised learning). For instance, a neural network might be shown many pairs of photographs along with information about when a pair consisted of two photographs of the same person and when it consisted of photographs of two different people, or it might be played many audio recordings paired with transcriptions of those recordings.
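To make the idea concrete, here is a minimal sketch, in Python with NumPy, of a supervised network with a single hidden layer learning a toy problem. It is purely illustrative and bears no relation in scale or sophistication to the speech and vision systems described in this article.

```python
import numpy as np

# Minimal illustrative sketch: a tiny supervised neural network with one
# hidden layer, trained on the XOR problem. Nodes in each layer adjust their
# connection weights to better match the labeled examples.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 nodes; a "deep" network would stack many such layers.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros(1)

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    out = sigmoid(h @ W2 + b2)        # network output

    # Backward pass (gradient of squared error)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Weight updates -- this is the "learning" the article describes
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]] after training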
Deep neural networks have, since 2006, become far more effective. A shallow neural network might have only one hidden layer of nodes that could learn how to behave. That layer might consist of thousands of nodes, but it would still be a single layer. Deep networks have many layers, which allow them to recognize far more complex patterns because there is a much larger number of potential ways in which a given number of nodes can interconnect.
But that complexity has a downside. For decades, deep networks, though theoretically powerful, didn't work well in practice. Training them was computationally intractable. But in 2006, Geoffrey Hinton, a computer science professor at the University of Toronto, published a paper widely described as a breakthrough. He devised a way to train deep networks one layer at a time, which allowed them to perform in the real world.
In late May, Google researchers Vincent Vanhoucke, Matthieu Devin and Georg Heigold presented a paper at the IEEE International Conference on Acoustics, Speech and Signal Processing describing the application of deep networks to speech recognition. The Google researchers ran a three-layer system with 640 nodes in each layer. They trained the system on 3,000 hours of recorded English speech and then tested it on 27,327 utterances. In the best performance of a number of different configurations they tried, the system's word error rate was 12.8 percent. That means it got slightly more than one word in 10 wrong. There is still a long way to go, but training a network as complicated as this one would have been a non-starter just a few years ago.
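A word error rate such as that 12.8 percent figure is typically computed as the minimum number of word substitutions, insertions and deletions needed to turn the recognizer's output into the reference transcript, divided by the number of reference words. A small illustrative sketch (not the evaluation code used in the paper):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # Standard edit-distance dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat",
                      "the cat sat in the hat"))  # 2 errors / 6 words = 0.33
```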
Nevertheless, speech-recognition technologies have already had a dramatic impact on call and contact centers. As the technology improves, agencies that interact with the public on a massive scale — such as the Social Security Administration, the National Park Service and the Veterans Health Administration — will have to decide to what extent they wish to replace human operators with automated voice-recognition systems.
On June 17, Stanford associate professor Andrew Ng and his colleagues presented a paper at the annual International Conference on Machine Learning describing how even larger networks — systems with as many as 11 billion parameters — can be trained in a matter of days on a cluster of 16 commercial servers. They do not yet know, they say, how to effectively train such large networks but want to show that it can be done. They trained their neural network on a dataset of 10 million unlabeled YouTube video thumbnails. They then used the network to distinguish 13,152 faces from 48,000 distractor images. It succeeded 86.5 percent of the time.
Again, that performance is not yet at a level that is of much practical use. But the remarkable thing is that Ng and his team achieved it on a dataset that wasn't labeled in any way. They devised a program that could, on its own, figure out what a face looks like.
Current commercial facial recognition systems include NEC's NeoFace, which won a competition run by the National Institute of Standards and Technology in 2010. NeoFace matches pictures of faces against large databases of images taken from close up and under controlled lighting conditions. NeoFace can work with images taken at very low resolution, with as few as 24 pixels between the subject's eyes, according to NEC. In the NIST evaluation, it identified 95 percent of the sample images given to it.
In a May 2013 paper, Anil Jain and Joshua Klontz of Michigan State University used NeoFace to search through a database of 1.6 million law enforcement booking images, along with pictures of Dzhokhar and Tamerlan Tsarnaev, who are accused of setting off the bombs at the Boston Marathon in April. Using the publicly released images of the Tsarnaev brothers, NeoFace was able to match Dzhokhar's high school graduation photo from the database of millions of images. It was less successful with Tamerlan because he was wearing sunglasses.
Eigenfaces allow computers to deconstruct images of faces into characteristic components to enable recognition technologies. (Image: Wikimedia Commons)
Jain and Klontz make the point that, even today, facial recognition algorithms are good enough to be useful in a real-world context. The methods for automatically detecting faces, though, are likely to get much better with machine learning. NeoFace and other commercial tools work in part by deconstructing faces into characteristic constituents, called eigenfaces, in a way roughly analogous to the grid coordinates of a point. A picture of a face can then be described as a distinct combination of eigenfaces, just as any physical movement in the real world can be broken down into the components of up-down, left-right and forward-backward.
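A rough sketch of that decomposition idea, using principal component analysis on stand-in data, appears below. It is illustrative only and is not NEC's algorithm; real systems work on carefully aligned face images rather than random vectors.

```python
import numpy as np

# Illustrative sketch of the eigenface idea: treat each face image as a long
# vector of pixel values, find the principal components ("eigenfaces") of a
# collection of faces, and describe any face as a small set of coordinates
# along those components.

rng = np.random.default_rng(0)
n_faces, n_pixels = 50, 32 * 32
faces = rng.random((n_faces, n_pixels))      # stand-in for real face images

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Singular value decomposition yields the eigenfaces as right singular vectors.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:10]                          # keep the 10 strongest components

def describe(face):
    """Project a face onto the eigenfaces -- its coordinates in 'face space'."""
    return (face - mean_face) @ eigenfaces.T

def match(face, gallery):
    """Return the index of the gallery face whose coordinates are closest."""
    coords = describe(face)
    gallery_coords = np.array([describe(g) for g in gallery])
    return int(np.argmin(np.linalg.norm(gallery_coords - coords, axis=1)))

probe = faces[7] + 0.05 * rng.random(n_pixels)   # a slightly altered view
print(match(probe, faces))                        # ideally prints 7
```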
But that approach is not very adaptable to changes in lighting and posture. The same face breaks down into very different eigenfaces if it is lit differently or photographed from another angle. However, it is easy for a person to recognize that, say, Angelina Jolie is the same person in profile as she is when photographed from the front.
Honglak Lee, an assistant professor at the University of Michigan in Ann Arbor, wrote recently with colleagues from the University of Massachusetts that deep neural networks are now being applied to the problem of facial recognition in a way that doesn't require any explicit information about lighting or pose. Lee and his colleagues were able to get 86 percent accuracy on the Labeled Faces in the Wild database, which contains more than 13,000 images of 5,749 people. Their results compared favorably to the 87 percent that the best handcrafted systems achieved.
But the deep learning systems remain computationally demanding. Lee and his colleagues had to scale down their images to 150 by 150 pixels to make the problem computationally tractable. As computing power grows, there is every reason to believe that machine learning techniques applied on a larger scale will become still more effective. At present, it might seem that facial recognition programs are of interest only to law enforcement and intelligence agencies. But as the systems become more robust and effective, other agencies will have to decide whether and how to use them. The technology has broad potential but also threatens to encroach fundamentally on privacy.
In a sense, the machine learning algorithms for facial recognition are doing something analogous to speech recognition. Just as speech-recognition programs can't try to match sounds against all possible words that might have generated those sounds, the new generation of face-recognition techniques doesn't attempt to match patterns. Instead, the learning methodology allows it to discern global structure in a way loosely analogous to human perception.
The pace of such progress can perhaps best be seen in the case of autonomous cars. In 2004, DARPA ran a race in which autonomous cars had to navigate a 150-mile desert route. None of the 21 teams finished. The best-performing team, from Carnegie Mellon University, traveled a little more than 7 miles. In 2005, five teams finished DARPA's 132-mile course. Last year, Google announced that about a dozen of its autonomous cars had driven more than 300,000 miles.
Suddenly, DARPA's efforts to bring driverless vehicles to the battlefield look a lot closer to reality. Many elements must come together for this to work. As Chris Urmson, who led engineering for Boss, the Carnegie Mellon University vehicle that won the 2007 DARPA Urban Challenge, and who now heads Google's self-driving car project, has explained, autonomous vehicles combine information from many sources. Boss had 17 sensors, including nine lasers, four radar systems and a Global Positioning System device. It had a reliable software architecture broken down into a perception component, mission-planning component and behavioral executive. For autonomous cars to work well, all those elements must perform reliably.
But the mind of an autonomous car — the part that's fundamentally new, as opposed to the chassis or the engine — consists of algorithms that allow it to learn from its environment, much as speech recognizers learn to recognize words out of vibrations in the air, or facial recognizers find and match faces in a crowd. The capacity to effectively program algorithms that are capable of learning implicit rules of behavior has made it possible for autonomous cars to get so much more capable so quickly.
A 2012 report by consultants KPMG predicts that self-driving cars will be sold to the public by 2019. In the meantime, the Transportation Department's Intelligent Transportation Systems Joint Program Office is figuring out how the widespread deployment of technologies that will enable autonomy will work in the coming years. DOT's effort is focused on determining how to change roads in ways that will enable autonomous vehicles. Besides the technical challenges, it raises a sticky set of liability issues. For instance, if an autonomous car driving on a smart road crashes because of a software glitch, who will be held responsible — the car's owner, the car's passenger, the automaker or the company that wrote the software for the road?
Clearly, autonomy in automobile navigation presents a difficult set of challenges, but it might be one of the areas in which robots first see large-scale deployment. That is because although part of what needs to be done (perceiving the environment) is hard, another part (moving around in it) is relatively easy. It is far simpler to program a car to move on wheels than it is to program a machine to walk. Cars also need to process only minimal linguistic information, compared to, say, a household robot.
Groups such as Peter Stone's at the University of Texas, which won first place in the 2012 Robot Soccer World Cup, and Yiannis Aloimonos' at the University of Maryland are creating robots that can learn. Stone's winning team relied on many explicitly encoded rules. However, his group is also working on lines of research that teach robots how to walk faster using machine learning techniques. Stone's robots also use learning to figure out how to best take penalty kicks.
Aloimonos is working on an even more ambitious European Union-funded project called Poeticon++, which aims to create a robot that can not only manipulate objects such as balls but can also understand language. Much as autonomous vehicle teams have created a grammar for driving — breaking down, say, a U-turn at a busy intersection into its constituent parts — Aloimonos aims to describe a language for how people move. Having come up with a way to describe the constituent parts of motions, called kinetemes — for instance rotating a knee or shoulder joint around a given axis — robots can then learn how to compose them into actions that mimic human behavior.
This is all very ambitious, of course. But if machine learning techniques continue to improve in the next five years as much as they have in the past five, they will allow computers to become very powerful in fundamentally new ways. Autonomous cars will be just the beginning. | <urn:uuid:431fcfd8-ba91-4f7d-a983-c227f0218b49> | CC-MAIN-2017-04 | https://fcw.com/articles/2013/07/15/machine-learning-change-the-game.aspx?admgarea=TC_ExecTech | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00165-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957667 | 3,379 | 3.171875 | 3 |
The National Electronic Disease Surveillance System (NEDSS) will "revolutionize public health by gathering and analyzing information quickly and accurately," according to the Centers for Disease Control and Prevention (CDC).
This may be the case, but not without considerable adjustment and some aggravation, states have found.
The NEDSS initiative's goal is to promote the use of data and information standards to develop a more efficient way for the CDC to collect and disseminate information about diseases. The CDC is responsible for prevention and control of chronic-health diseases, environmental diseases and communicable diseases, and collects data to get a national picture of public health practices in the country.
Each state's health department is responsible for collecting data from its counties and passing it to the CDC. The problem, however, is that state and municipal governments use more than 100 different data systems, and data from different jurisdictions often use different vocabularies and codes.
The CDC hopes the initiative -- which sets forth guidelines not only for data collection, reporting and common vocabularies, but also standard architecture -- will generate a more standard case report that can be sent to state health departments.
The NEDSS base system, which has not yet been released, is the computer application the CDC will offer. The Java-based application developed by Computer Sciences Corp. will create a standard NEDSS compliant interface for jurisdictions that have yet to develop their own.
"That's basically a re-engineering of current systems and getting it over the Web," said Dr. Claire Broome, a senior adviser with the CDC. "It's for use at the state level, and that will be useful to our state partners. The thing that is really different about NEDSS is the capacity to receive and transmit standard electronic messages."
The NEDSS initiative also provides a single user interface with the core common data for the different systems. As diseases are diagnosed in laboratories around the country, an information system generates a Health Level 7 (HL7) standard message and sends it to a NEDSS "inbox." HL7 is a standards-developing organization geared toward creating interoperable clinical data exchange. Its information exchange model is widely used in the United States and internationally.
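For illustration only, the sketch below builds a drastically simplified, HL7 version 2-style pipe-delimited message of the kind a laboratory system might emit. The segments and fields shown are placeholders, and real NEDSS messaging follows specific HL7 implementation guides rather than this toy format.

```python
from datetime import datetime

# Deliberately simplified, illustrative sketch of a pipe-delimited HL7 v2-style
# lab report. Segment contents here are placeholders, not a real NEDSS profile.

def build_lab_report(patient_id, test_name, result, facility="EXAMPLE LAB"):
    timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        # MSH: message header (sending facility, receiver, timestamp, type)
        f"MSH|^~\\&|{facility}|STATE HEALTH DEPT|{timestamp}|ORU^R01",
        # PID: patient identification (identifier only in this sketch)
        f"PID|1||{patient_id}",
        # OBX: one observation/result segment
        f"OBX|1|ST|{test_name}||{result}",
    ]
    return "\r".join(segments)

message = build_lab_report("123456", "HEPATITIS A IGM", "POSITIVE")
print(message.replace("\r", "\n"))
```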
The system is designed to replace current CDC surveillance systems, including the National Electronic Telecommunications System for Surveillance (NETSS), the HIV/AIDS reporting system, and systems for vaccine-preventable diseases, tuberculosis and infectious diseases.
The CDC said NEDSS pilot projects have resulted in more than double the number of cases reported, and in some instances, the information provided was more complete.
Despite the CDC's confidence in these initial pilot projects, states are still unsure about key parts of the new system.
Some states expressed frustration over the NEDSS initiative, which was characterized as a "moving target."
"[The CDC] is close to being able to say, 'This is our final data model,' but they've gone through more than a dozen versions," said an anonymous source from a state department of health. "It's hard for us to say, 'OK, there's the model we're going to use, and we're going to set up this big database [based on that model].'"
The fact that the CDC has not set mandates to implement any of the standards has made it difficult to convince counties that NEDSS is the way to go, the source said.
"[Counties] keep coming back saying, 'Where is the department letter that says all of our systems should be NEDSS compliant?'" the source added.
One early problem with the NEDSS plan was that it didn't give the NEDSS initiative a "friendly face," said the source; this is crucial to the ability of state departments of health to sell the NEDSS initiative to counties.
The CDC's new approach to the initiative focuses more on functionality, calling for a Web-based standard with one interface.
"What we've realized is that to increase the functionality of a system, we have to offer some slick stuff through an Internet-based front end," the source said. "If I hit the submit button, I can get myself a nice little GIS report on where diseases have been occurring in my county."
Congress has provided federal grants for state and local governments to take advantage of NEDSS, and many jurisdictions are using the funds to modify existing systems to make them NEDSS compatible.
California considered adopting the base system, but chose to develop its existing system for compliance. The state will use the federal funds to "take care of infrastructure items that usually don't get covered through government categorical funding," said Ed Eriksson, the state's NEDSS project manager. "We want to be able to own the kinds of functionality that would be rolled out to counties.
"Usually, they're not going to fund a project manager or an integration-minded person who will think about how it's going to integrate with other applications and business processes," Eriksson continued. "We're using NEDSS to address some of these infrastructure items, such as security data modeling and coming up with a vocabulary that's based out of HL7 so a case or specimen has the same definition across all applications."
Other states, such as Alabama, are using NETSS and waiting for the base system to develop further.
"Why reinvent the wheel?" said Richard Holmes, director of surveillance for Alabama's Department of Health, adding that Alabama is "essentially sitting in a holding pattern waiting for [the CDC] to put us in the queue for release of the base system."
That type of delay is one reason other states chose to develop their own systems.
"The CDC tends to take its time developing things," said Carol Hoban, project manager for the Georgia Department of Health. Georgia developed its own Web-based system that is NEDSS compatible, also making sure its system would be tailored to the state's needs.
"The reporting needs vary from state to state, although there are national reporting standards," Hoban said.
National standards, like HL7, are driving NEDSS, the CDC said. Without them, results of disease outbreaks across multiple counties or states would be difficult to interpret because of the different vocabulary used by different jurisdictions.
"Total flexibility would not be useful," said the CDC's Broome.
Maine pondered before deciding to go with the base system, but with some modifications.
"We talked with the CDC and asked about porting it over from a Wintel platform to a UNIX platform," said Mike Wenzel, health program manager for the Maine Immunization Program. "We're also looking into using iHUB [middleware] as a translation methodology between different entities."
The NEDSS guidelines will become the backbone of Maine's public health infrastructure, Wenzel said.
"It will become a unifying set of standards for how we perform all our public health operations," he said. "In essence, we're going to de-silo our Bureau of Health applications into an Oracle-NEDSS kind of situation and call it the Maine Public Health Infrastructure." | <urn:uuid:abdbc3f6-74a0-434b-81c4-9acf0af11f31> | CC-MAIN-2017-04 | http://www.govtech.com/security/Streamlining-Standards-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00193-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957489 | 1,460 | 3.0625 | 3 |
HPC programmers who are tired of managing low-level details when using OpenCL or CUDA to write general purpose applications for GPUs (GPGPU) may be interested in Harlan, a new declarative programming language designed to mask the complexity and eliminate errors common in GPGPU application development.
GPUs are increasingly being used to provide a boost in computing power in HPC systems. Attaching NVIDIA Kepler or Intel Xeon Phi co-processing cards to a traditional CPU architecture can provide a big increase in the performance of parallel workloads. However, programming the GPUs can be difficult, as it requires different tools and a different skill set than traditional X86 development.
The idea with Harlan is to keep developers focused on the high-level HPC programming challenge at hand, instead of getting bogged down with the nitty gritty details of GPU development and optimization.
Eric Holk, a Ph.D. candidate at Indiana University, is the driving force behind the Harlan project. Harlan is a domain-specific language that uses a declarative approach to coordinating computation and data movement between a CPU and GPU, according to a paper that Holk and his colleagues presented at the September 2011 International Conference on Parallel Computing.
Harlan’s syntax is based on the language Scheme, and compiles to Khronos Group’s OpenCL, a GPU framework that competes with NVIDIA’s Compute Unified Device Architecture (CUDA). The language was designed to provide a “straightforward mechanism for expressing the semantics the user wants” for areas such as data layout, memory movement, threading, and computation coordination. In effect, it lets developers declare the “what,” and leaves the “how” up to the language, the researchers say in their paper.
The benefits of this approach will be even higher for hybrid applications that utilize a combination of GPUs and CPUs, since they introduce even more complexity for the developer, who has to take into account additional levels of memory hierarchy and computational granularity, the researchers say.
“Not only does a declarative language obviate the need for the programmer to write low-level error-prone boiler-plate code, by raising the abstraction of specifying GPU computation it also allows the compiler to optimize data movement and overlap between CPU and GPU computation,” Holk and his colleagues write in the paper, titled “Declarative Parallel Computing for GPUs.”
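To give a sense of the boilerplate in question, the sketch below uses PyOpenCL (Python bindings for OpenCL) to square an array on a GPU. It is not Harlan code; it simply shows the host-side plumbing - contexts, queues, buffers, kernel compilation and explicit copies - that a declarative language aims to take off the programmer's hands.

```python
import numpy as np
import pyopencl as cl

# Not Harlan code -- a sketch of the low-level OpenCL host plumbing that
# Harlan's declarative approach is meant to hide from the programmer.

a = np.arange(16, dtype=np.float32)

ctx = cl.create_some_context()                 # pick a device
queue = cl.CommandQueue(ctx)                   # command queue for that device

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] * a[gid];
}
""").build()

program.square(queue, a.shape, None, a_buf, out_buf)  # one work-item per element
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)               # copy the result back
print(result)
```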
In addition to Harlan, Holk and his colleagues are developing Kanor, another declarative language for specifying communication in distributed memory clusters. Kanor is unusual, Holk writes, in that it can automatically handle the low-level details when appropriate, but gives the programmer the option to step in and hand code the communications when necessary. This provides a "balance between declarativeness and performance predictability and tenability."
Harlan will provide a productivity boost, but don’t expect it to transform your average coder into a super coder. “It is important to emphasize at this point that we are not proposing a ‘silver bullet’ or ‘magic compiler’ that will somehow make GPGPU or hybrid cluster programming easy,” Holk and his colleagues write.
“Rather, we are seeking to abstract away many of the low-level details that make GPU/-cluster programming difficult, while still giving the programmer enough control over data arrangement and computation coordination to write high-performance programs,” they add.
Harlan will run on Mac OS and Linux. The Harlan project is hosted at GitHub, and has five contributors. | <urn:uuid:92d45be4-67ed-4242-bf5f-4fb680c47fd5> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/07/11/harlan_hides_complexity_for_gpgpu_programming/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00009-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925785 | 753 | 2.53125 | 3 |
Uh oh. Did you notice what today is?
That’s right: Friday the 13th.
According to Wikipedia, the superstition surrounding this day comes simply from the fact that Fridays and the number thirteen are both independently considered unlucky, and so in conjunction they are like a Big Mac of ill fortune.
Maybe you believe in luck, maybe not. But no matter where you fall on the superstition spectrum, we’ve got some advice for you: better safe than sorry.
It’s easy to feel that hacking and identity theft are things that happen to other people. But unfortunately, those “other people” numbered almost 12 million last year. Smartphones and mobile devices have made personal data that much more vulnerable, and so breaches are on the rise. According to Javelin Strategy and Research, the number of reports of identity fraud increased last year by one million.
It’s not luck–it’s statistics. Protect yourself with Keeper. | <urn:uuid:f3d988d9-bbba-4283-a40b-712eab54c5cb> | CC-MAIN-2017-04 | https://blog.keepersecurity.com/2013/12/14/avoiding-bad-luck-in-cyberspace/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00275-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941155 | 204 | 2.546875 | 3 |
Xie M.-M., University of Science and Technology of China
Xie M.-M., Key Laboratory of the Ministry of Land and Resources of China for Land Regulation
Bai Z.-K., University of Science and Technology of China
Bai Z.-K., Key Laboratory of the Ministry of Land and Resources of China for Land Regulation
And 6 more authors.
Meitan Xuebao/Journal of the China Coal Society | Year: 2011
Landsat TM thermal infrared (TIR) bands were used to estimate the spatial dynamics of surface temperature with the single-channel method in 1987, 1996, 2001, and 2005 in the Pingshuo opencast coal mine area in Shanxi Province, China. The land disturbance types, classified from remotely sensed data, include the opencast coal pit, soil-removed land, piled land, reclamation land, and industrial centers. An area-weighted method was proposed to evaluate the contribution rate of each disturbance type to the temperature increase. Results show that surface temperatures in the coal pit and industrial centers are the highest, 5-10 °C above the lowest temperature in the study area, and these two types contribute the most to the temperature increase. Surface temperature heterogeneity is high in piled land because of differences in waste types and piling structures, and its temperatures are at a middle level; piled land contributes markedly to the temperature increase in the first stage of mining. Temperatures of soil-removed land and reclamation land are the lowest in each period, and the contribution of reclamation land to the temperature increase decreases because of the climate-controlling effects of vegetation.
This is a short reverse proxy caching overview that explains what a proxy is and what a reverse proxy is all about.
In a normal proxy cache topology, the proxy server sits as an intermediary between clients and servers. The proxy receives all requests from clients and forwards them to the servers. The clients treat the proxy as if it were the server holding the content, and the server sees the proxy as the client asking for resources. A proxy server can be used to intercept communication from clients and evaluate or control requests for security reasons. Clients, on the other hand, sometimes use proxy servers to hide their own identity and location, because the server only sees the proxy's location and IP address and assumes the proxy is the real client.

We can say that a normal proxy is a proxy acting on behalf of the clients.

In a reverse proxy, the proxy server acts on behalf of the server.

A reverse proxy is used to replicate content to distant locations and, in other cases, to distribute content across servers for load balancing.
In a reverse proxy cache, the proxy server is published to the Internet with a public IP address. When clients want to access resources on the server, they are actually directed to the published reverse proxy server, normally through DNS resolution of the domain name. The reverse proxy appears to be, for example, the Web server itself. The reverse proxy then forwards the request to the real server that holds the resources, but only the first time: the next time a client asks for the same thing, the proxy has that content in its cache and can respond to the client without bothering the origin server.
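A minimal sketch of that behavior in Python is shown below; the upstream address is a placeholder, and real reverse proxies (nginx, Varnish and the like) also handle cache expiry, headers, error handling and non-GET requests, all of which are omitted here.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "http://127.0.0.1:9000"   # the real origin server (placeholder)
cache = {}                            # path -> cached response body

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path not in cache:
            # First request for this path: fetch it from the origin server.
            with urlopen(UPSTREAM + self.path) as upstream:
                cache[self.path] = upstream.read()
        # Later requests are answered from the cache, without
        # bothering the origin server.
        body = cache[self.path]
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CachingProxy).serve_forever()
```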
Caching in this way provides faster response times, higher availability, and the ability to better secure the origin server.

For example, it is common to place several reverse proxy servers on different continents for the same Web server. They sit closer to the clients and can respond with cached content faster than the real server could. And if someone tries to attack the website, the attack takes down only the proxy server, not the real server.
It’s something tech-savvy grandchildren have warned their elders about for years: Microsoft’s Internet Explorer has long been ridiculed as a “lame duck” Web browser, although version 11 was a step in a more modern direction.
Now, a grave and widespread vulnerability in the application makes the argument for switching to a different browser about more than just aesthetics.
The United States Computer Emergency Readiness Team (CERT) identified the security flaw and published a report Saturday.
The flaw allows Internet Explorer versions 6 through 11 to be exploited remotely, possibly causing a complete compromise of a user’s machine.
More than half of all computers run Internet Explorer, according to market share data.
Hackers can exploit the bug through Adobe Flash, according to CERT, and they’ve already begun taking advantage of it.
“Although no Adobe Flash vulnerability appears to be at play here, the Internet Explorer vulnerability is used to corrupt Flash content in a way that allows (address space layout randomization) to be bypassed via a memory address leak,” reads the report on the CERT website.
“This is made possible with Internet Explorer because Flash runs within the same process space as the browser. Note that exploitation without the use of Flash may be possible.”
After getting a user to view a malicious HTML document – either through an email attachment or by luring the user to a phony website and inviting them to click – online ne'er-do-wells can execute code on the victim's system without detection or security flags. The vulnerability allows hackers to bypass Windows authentication because their code is hiding behind existing memory addresses.
An intruder would have the same administrative rights as the user who was duped, according to Microsoft. So, child users who may have restricted system access would make less of an impact if pinched by exploiters.
Although Microsoft has not yet fixed the vulnerability, there are myriad workarounds. For starters, users can simply install a different browser. The two most popular options are Google Chrome and Mozilla Firefox.
Most browser- and client-based email applications, including Microsoft Outlook, Outlook Express, Windows Mail, Google Mail and Mozilla Thunderbird, open HTML attachments securely, disabling scripting functionality that could be used maliciously. Not all email applications do this, however. The sound advice is to be more vigilant of suspicious emails or links to potentially malicious websites.
According to cybersecurity firm FireEye Inc., IE users can also enable “Enhanced Protected Mode” to break the exploit or simply disable the Adobe Flash plug-in in their browser, but this greatly limits the online content users will be able to view.
According to Microsoft, any Internet Explorer version being used on a system that is running the Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 and Windows Server 2012 R2 operating systems is, by default, safe.
©2014 The Tribune-Democrat (Johnstown, Pa.) | <urn:uuid:d87f1ae6-af41-4562-86eb-2815c1a674af> | CC-MAIN-2017-04 | http://www.govtech.com/internet/Internet-explorer-vulnerability-makes-browser-switch-a-good-idea.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00449-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.893131 | 624 | 2.578125 | 3 |
A debris flow and flash flood warning system developed jointly by NOAA's National Weather Service and the U.S. Geological Survey will help protect Southern Californians from potentially devastating debris flows-commonly known as mud slides- and flash floods in and around burn areas created by the recent wildfires.
"Much of the smoke has cleared from the region's devastating wildfires last month, but the danger is not over," said Jack Hayes, Ph.D., director of the National Weather Service. "Moderate amounts of rainfall on a burned area can lead to flash floods and debris flows. The powerful force of rushing water, soil, and rock can destroy everything in its path, leading to injury or death."
"Our science can help determine the location, size and occurrence of potentially destructive debris flows and floods from last month's Southern California wildfires," said USGS Director Mark Myers. "The public, emergency managers and policymakers can use this information to prepare for and react to these potentially devastating natural hazards."
Post-wildfire debris flows are closely linked to precipitation and are therefore more predictable than other landslides. The USGS has developed precipitation thresholds that help identify potential debris flows in recent burn areas and provides this information to National Weather Service forecast offices in Southern California. Using a flash flood monitoring and prediction tool, weather forecast offices monitor rainfall, and if it approaches the thresholds developed for burn areas, incorporate wording about debris flow hazards into flash flood warnings and public information statements. Flash flood warnings are communicated to the public through the Emergency Alert System and NOAA Weather Radio-All Hazards, and directly to local emergency managers.
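Conceptually, the warning logic compares observed rainfall over several durations against the thresholds for a given burn area, as in the illustrative sketch below; the numbers used are invented for illustration, not the USGS's published thresholds.

```python
# Hypothetical thresholds: rainfall (inches) over a given duration (minutes)
# above which a debris-flow warning would be considered for a burn area.
THRESHOLDS = {15: 0.20, 30: 0.30, 60: 0.50}

def check_burn_area(rainfall_by_duration):
    """rainfall_by_duration maps duration in minutes -> inches of rain observed."""
    alerts = []
    for duration, observed in rainfall_by_duration.items():
        threshold = THRESHOLDS.get(duration)
        if threshold is not None and observed >= threshold:
            alerts.append(f"{observed:.2f} in over {duration} min exceeds "
                          f"threshold of {threshold:.2f} in")
    return alerts

# Example: a short, intense burst of rain over a recent burn scar.
print(check_burn_area({15: 0.25, 30: 0.28, 60: 0.55}))
```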
Even before the fires started, the agencies agreed to extend for another year the debris flow and flash flood warning system pilot project started in September 2005. In a 2005 report, the agencies outlined the initial plan for the project, identified the need to expand the warning system nationwide and focused on developing improved technologies to characterize flash flood and debris flow hazards.
The project will continue to cover San Luis Obispo, Santa Barbara, Ventura, Los Angeles, San Bernardino, Orange, Riverside, and San Diego counties, most of which were affected by the recent wildfires.
NOAA, an agency of the U.S. Commerce Department, is celebrating 200 years of science and service to the nation. From the establishment of the Survey of the Coast in 1807 by Thomas Jefferson to the formation of the Weather Bureau and the Commission of Fish and Fisheries in the 1870s, much of America's scientific heritage is rooted in NOAA.
NOAA is dedicated to enhancing economic security and national safety through the prediction and research of weather and climate-related events and information service delivery for transportation, and by providing environmental stewardship of our nation's coastal and marine resources. Through the emerging Global Earth Observation System of Systems (GEOSS), NOAA is working with its federal partners, more than 70 countries and the European Commission to develop a global monitoring network that is as integrated as the planet it observes, predicts, and protects. | <urn:uuid:4113a6a0-d41b-434c-8b24-e444edff592a> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/NOAA-USGS-Warning-System-to-Help.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00265-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931457 | 602 | 3.921875 | 4 |
Around the world, more and more companies and government organizations are reporting massive data breaches. These breaches don’t just cost them a significant amount of money—they can also jeopardize the hard-earned trust of their customers and employees.
With this in mind, businesses of all sizes are tuned into the importance of data security. When traditional virus and malware software don’t make the cut, how can developers and programmers ensure that their sensitive information is protected – or in the worst case scenario – that a data breach is discovered and dealt with quickly, before it escalates out of control?
Testing: Data Security’s New Best Friend
Each website has its own unique behavior, and the first hint of a problem or dip in performance is a change in that behavior. Through regular testing, your development team can learn how a site or application reacts to a variety of situations, including a security breach, enabling quick and decisive response to any sudden business risk.
There are a variety of factors that can be tested to determine the security level of your system. One of the first symptoms of malware or a virus is a slow or failed connection. Load tests and uptime tests can help determine if the site’s load speed has suddenly changed. A program of regular testing is important so you can test (and retest) to identify a potential data breach and diagnose any perceived performance degradations.
Regular testing will also help you to track a site’s behavior over time. By charting a site’s performance over the span of several months, your team should be able to recognize a change in normal behavior (which may signal a threat) before it poses any serious risk.
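As a minimal illustration of tracking behavior over time, the sketch below records response times for a placeholder URL and flags measurements far outside the established baseline; the URL, thresholds and intervals are assumptions for illustration, not recommendations.

```python
import time
import statistics
from urllib.request import urlopen

URL = "https://example.com/"      # placeholder for the site being watched
history = []                      # recent response times, in seconds

def measure():
    start = time.monotonic()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.monotonic() - start

def check_once():
    elapsed = measure()
    if len(history) >= 20:
        baseline = statistics.mean(history)
        spread = statistics.stdev(history)
        # Flag anything more than three standard deviations above normal.
        if elapsed > baseline + 3 * spread:
            print(f"ALERT: {elapsed:.2f}s vs normal {baseline:.2f}s")
    history.append(elapsed)
    if len(history) > 500:        # keep the baseline window bounded
        history.pop(0)

while True:
    check_once()
    time.sleep(60)                # one measurement per minute
```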
Penetration Testing, also called Network Threat Testing, is critical for determining how vulnerable your system may be to a cyber attack, and can help hone your team’s response time to such an event.
A company like Apica can simulate an Advanced Persistent Threat (APT) or Distributed Denial-of-Service (DDoS) attack and determine how and when your system detects the threat. Based on the results of the test, your team can then get to work improving any areas of concern.
It’s critical to lessen the time between the breach and the victim’s response (known as the Security Gap). Unfortunately it’s not always easy to tell if a system has been breached, and hackers are only becoming smarter and stealthier. Ongoing testing is one way to stay on top of system health and performance, and helps ensure your team understands the warning signs in the event of a data breach.
Monitoring to Improve Security
Developers and programmers should also track website performance in between testing cycles. A good monitoring service will track all components of the system at all times, including all desktops and mobile devices. During and between testing cycles, your site will be constantly monitored for even the most subtle attacks and performance dips.
If the site or application is ever down, or if performance is otherwise affected, the software should send instant alerts. Your team can then respond to threats instantly, from any location. Monitoring services will help you establish key objectives for future growth, so you can keep sight of your baseline as new changes are implemented.
Testing and monitoring services are essential — and surprisingly affordable — investments for today’s cyber-centric companies. With the rising number of cyber attacks carried out every year, it’s more important than ever to protect your information—and that of your employees and clients.
Are You Ready to Achieve Peak Web and Mobile Performance?
Start a 30-day full-featured trial of Apica LoadTest or a 30-day trial of Apica WPM.
Whether it happens with that key memo left unfinished, the last scene of a movie unwatched or an epic gaming battle interrupted, it's likely that, at one time or another, you've been left with a dead notebook battery at the worst possible moment. What can you do about it?
"Notebooks are not as efficient as they could be," says Robert Meyers, data center product manager for the Energy Star Program at the Environmental Protection Agency, "and they waste a lot of energy."
The payoff for being aware of how much power your system uses and how to control it can be huge -- because every watt saved can run the notebook that much longer. "The natural incentive is that greater efficiency translates directly into longer battery life," Meyers says.
In this article, I'll go through 11 ways you can cut down on your laptop's power usage. Some may be appropriate for your style of work and/or play, some not; but even if you follow one or two, it could give you those crucial extra operating minutes.
But first, it might be useful to look at which components are the most power thirsty in your device -- and how they are being improved.
What uses battery power?
While there's a lot of variation between an 11-in. Chromebook with an Intel Celeron processor and a 17-in. gaming laptop with an Intel Core i7 Extreme chip, each has a similar array of components that turn electricity into an interactive computing experience.
There are six components that are the major power users in a computing device. They are listed here roughly in order of power use, although that can vary based on the notebook itself. They have each been redesigned over the past decade for greater efficiency, but there's still work to be done.
1. Processor

The processor is a power hog, often using as much as half of the total power in a system. Smaller is better; as the size of the microscopic wires and electronic architecture within the chip shrinks with each generation, its power use declines.
A decade ago, the best Intel processors used the company's 90-nanometer (nm) production process, codenamed Dothan. Today, the company's Haswell chips have 22nm architecture -- less than one-quarter the size and roughly 100,000 times smaller than the width of a pencil point. Chips made with 14nm microarchitecture, a.k.a Broadwell, have been promised for later this year or in early 2015.
Meanwhile, current AMD processors are made using a 28nm process, including the new Kaveri laptop CPUs, but the company's Project SkyBridge promises a series of new chips for mobile devices using a 20nm manufacturing process.
2. Graphics processors
Graphics processors are often integrated into a notebook's system, but can significantly drain a battery as well. For example, Intel Graphics 4000 and 5000 integrated video chips typically range in power use from about 15 watts for the HD 4200 at the entry level to upwards of 50 watts for the Iris Pro 5200.
AMD's Radeon graphics engines also vary in how much power they pull. For instance, the mid-range HD 6290 graphics chip consumes about 18 watts at peak use, while the more sophisticated HD 8650G chip uses upwards of 35 watts.
Plus, many high-end engineering and gaming notebooks also have discrete graphics chips with dedicated memory from Nvidia or AMD that can consume a lot of power when they're being used.
3. Display

Displays have improved -- no doubt about it. The move in the late 2000s from CCFL backlighting to LED backlighting reduced a typical LCD's power drain by about 25%.
More recently, Panel Self-Refresh (PSR) technology can lower power use even further by stopping screen refresh if what's being displayed doesn't change. This can add as much as 20 minutes to a battery's run time, according to Ajay Gupta, director of commercial notebook products at HP. PSR is currently used on a limited number of devices, including the HP EliteBook Folio 1040 and the LG G2 smartphone.
In the long term, display power use could decrease by another 40% by using Organic Light Emitting Diode (OLED) screens that produce their own light and don't require backlighting. These screens are currently being used in phones like the Nokia Lumia Icon.
Traditional hard drives that use rotating magnetic discs are giving way to SSDs that store data on solid-state chips. Solid state storage still costs four to five times what a hard drive goes for, but uses a lot less power.
For instance, the 500GB Seagate Momentus Thin 2.5-in. mobile hard drive (starting at $50) uses 1.20 watts, while a 480GB Crucial SSD (about $236) consumes 0.28 watts, less than a quarter as much. And more lower-cost laptops -- including such lightweight models as the HP Chromebook 11 -- are shipping with SSDs.
According to Gupta, the next step is to stop making SSDs that mimic 2.5-inch hard drives in size and shape, and move to M.2 circuit board technology that puts all the components on a small circuit board, such as the one included in HP's EliteBook 840. This can reduce power use further, he says.
Every watt used inside a computer system turns into heat -- and so the system has to be cooled in order to keep running. The less power used, the less cooling is needed. As a result, current systems that use power more efficiently also use smaller fans that don't need to run as often (and so conserve power themselves).
6. AC adapter
The technology that turns a wall outlet's alternating current into the direct current that a notebook needs has made great strides: From being roughly 50% efficient 20 years ago to between 80% and 90% efficient today. Still, a lot of power is wasted, because for most computers the adapter still draws phantom current after the system's battery is fully charged.
Today, some adapters -- like that of the Lenovo ThinkPad X1 Carbon Touch -- are smart enough to shut themselves off when the battery is full. Hopefully, more are on their way.
According to HP's Gupta, a high-efficiency adapter could be made for a single voltage, like the 110 volts we use in the U.S., rather than switchable between 110- and 220 voltage for global use. Theoretically, it could hit 94% efficiency, he says. | <urn:uuid:6e953dde-ca0f-4e87-a139-8868936820e4> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2489462/windows-pcs/boost-that-battery--tips-and-tricks-for-laptops.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00506-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951047 | 1,325 | 2.53125 | 3 |
Opinion: Developers should use higher-level programming languages and tools to help cut back on errors.
At the apogee of software challenges are the rigorous demands of robotic deep-space missions. NASAs July 4 success in hitting a bullet with a bulletspecifically, hitting a comet nucleus with a probe about the size of a coffee table at a relative speed of 23,000 mphupdates the archive of these high-profile tests.
Ironically, such missions during the last few years have been used to argue both sides in the religious debate over use of higher-level programming languages and tools. Personally, I gravitate to the high endbut Ed Posts canonical essay, "Real Programmers Dont Use Pascal," took the other side with its anecdotes of missions to outer planets.
"Allegedly, one Real Programmer managed to tuck a pattern-matching program into a few hundred bytes of unused memory in a Voyager spacecraft that searched for, located, and photographed a new moon of Jupiter," Post wrote in his semiserious commentary in Datamation magazine in July 1983. Speaking of the then-prospective Galileo mission, Post described its planned gravity-assist maneuver and said, "This trajectory passes within 80 +/- 3 kilometers of the surface of Mars. Nobody is going to trust a Pascal program (or Pascal programmer) for navigation to these tolerances."
The hazards of such low-level programming were spectacularly revealed, though, in 1999 when the Mars Climate Orbiter crashed into that planet because of confusion between English and metric units of force. My first thought, on learning the cause of the crash, was: "Thats the kind of thing that Ada can prevent"Ada being, of course, quite Pascal-like. A properly written Ada program strongly associates explicit data types, such as force_newtons or force_pounds, with any numeric values. An Ada compiler refuses to use one type where another is specified. Errors are detected, moreover, when a program is compiled, rather than surfacing at run-time.
Java can be used in the same way, and Ada source code can even be compiled to Java bytecode to run in any Java virtual machine environmentalthough some purists would say that neither Java nor Ada is strongly typed because both languages enable forms of type conversion within a program. Personally, as long as all conversions are explicit, Im OK with that.
Arguably even better than a strongly typed language, though, is a high-level system model that lets designers and users reason about desired behaviorsand generate the code that actually runs the system as a side effect only after the model behaves satisfactorily. Thats what NASA did when preparing its Deep Impact comet probe for its brief burst of glory.
"You compose a block diagram to represent the system," said Jason Ghidella, a product manager at The MathWorks. Ghidella described NASAs use of The MathWorks Stateflow, a tool that has been favorably mentioned in numerous NASA research papers during the last several years. Stateflows not without quirks: Researcher John Rushby, at SRI International, described Stateflow as having "ghastly semantics" in his presentation to a NASA workshop in June 2003, and a research team at Carnegie Mellon has compiled a Web page to share its "firsthand trial and error" discoveries as to Stateflows specific behaviors. Stateflow nonetheless enjoys a growing reputation for at least elevating trial and error to higher levels of abstraction.
Because of the long delays for radio commands to reach the Deep Impact probe, more than 7 light minutes from Earth, it had to have a fair amount of autonomous fault-handling capability. "Youre able to describe that in a more natural form using finite state machines in a graphical form. You can simulate all the scenarios," said Ghidella during our conversation a week before the July 4 climax of the Deep Impact mission. "They did a lot of testing upfront, they verified the design at the model level [and] then they were able to generate the C code directly without introducing errors by hand."
Handcrafted charm enhances antique furniture, but it has no place in modern systems. Development teams should always be looking for higher-level tools, or the debris of their failure may someday be studied just like the debris of Deep Impacts success.
Technology Editor Peter Coffee can be reached at email@example.com.
To read more Peter Coffee, subscribe to eWEEK magazine.
Check out eWEEK.coms for the latest news, reviews and analysis in programming environments and developer tools. | <urn:uuid:103fe1af-a775-4570-b9d1-bcc8506eb1cc> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Application-Development/HighLevel-System-Models-Reduce-Errors | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00506-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94528 | 940 | 2.75 | 3 |
Cybergeddon – the total shutdown of internet access across the planet – is much more likely to be caused by human error than an act of cyber aggression by a nation state, experts have said.
At a panel debate held at London’s Imperial War Museum, security experts debated the possibility to the total collapse of the internet and whether a nation state would attempt such a move.
Rather anti-climatically they all agreed it was unlikely. That’s primarily because states use the internet to their advantage, so shutting down access across the planet is rather self-defeating.
Professor Fred Piper, Head of Information Security Group at Royal Holloway University of London, used the example of the cyber attacks against Estonia as proof that Cybergeddon is possible, at least on a smaller scale. However he added it is unlikely anything larger would happen.
"If Cybergeddon is the destruction of the whole internet infrastructure I don’t see anybody – and I mean anybody – having any advantage in doing that, because they will damage themselves as much as they will damage their enemy. However the attacks on Estonia could be called a local Cybergeddon," he said.
Hugh Thompson, chief security strategist at Blue Coat Systems, agreed and said launching a Cybergeddon-style attack is unlikely, even as a show of power by one nation state.
"It’s a very difficult calculus to show a display of power because whenever you do you’ve burnt a channel that could be useful for you in the future," he said. "That’s very serious when it comes to cyber; the new nuclear arms online are things like zero-day vulnerabilities and web servers that are everywhere. Once you use one of those it becomes no longer a factor. It’s like the September 11th terrorist attacks – it would be very difficult for someone to pull off the same thing now."
What is much more likely is human error will result in the infrastructure of the internet collapsing, according to Paul Simmonds, co-founder of the Jericho Forum and former CISO of AstraZeneca and ICI.
He used BlackBerry’s service outage in 2011 as an example of how a cascade action can cause an extended disruption.
"I see it being taken out by a glorious cock-up rather than anything state-sponsored. Look at what happened to BlackBerry – it was taken down by a faulty router. Or there is a software upgrade that goes wrong," he said.
"It has a cascade action. Systems are so complex these days that often people don’t understand how they work," Simmonds added. "I think you are more likely to see the DNS root servers taken down by a cascade action by a botched router upgrade."
"With any kind of cascade action it’s the law of unintended consequences. The internet probably has all your water and electricity systems and controls your nuclear reactors. If you overload and takedown that infrastructure you take down the world. You will never be able to confine it to, say, just China," Simmonds concluded.
The panel agreed that Cybergeddon and the cyber wars in general should not be looked at in isolation but instead as part of the theatre of war where it could be used to disrupt communication services, for example. | <urn:uuid:274c10bb-89e3-483c-83ed-3c7ce4014890> | CC-MAIN-2017-04 | http://www.cbronline.com/blogs/cbr-rolling-blog/cybergeddon-dont-bet-on-it-say-experts-301112 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00414-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970431 | 674 | 2.546875 | 3 |
A major vulnerability in the GNU C Library could result in remote code execution, and may affect most Linux machines.
The vulnerability affects all version of the GNU C Library, commonly known as glibc, since version 2.9. According to research by Google’s Staff Security Engineer Fermin J. Serna and Technical Program Manager Kevin Stadmeyer, a full working exploit was enabled and a patch made available.
Serna and Stadmeyer said in a statement: “You should definitely update if you are on an older version though. If the vulnerability is detected, machine owners may wish to take steps to mitigate the risk of an attack.
“The glibc DNS client side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack.”
The vulnerability relies on an oversized (2048+ bytes) UDP or TCP response, which is followed by another response that will overwrite the stack. Remote code execution is possible, but requires bypassing the security mitigations present on the system, such as ASLR.
The bug was reported to glibc maintainers in July 2015, but has been present in glibc 2.9 since May 2008. Carlos O’Donnell, Principal Software Engineer at Red Hat, said in an advisory that the vulnerability has likely not been publicly attacked, but that execution control can be gained without much more effort.
Tod Beardsley, Security Research Manager at Rapid7, said that like the GHOST vulnerability from 2015, this will affect lots of Linux client and server applications, and like GHOST, it's pretty difficult to "scan the internet" for it, since it's a bug in shared library code.
“There are certainly loads and loads of IoT devices out in the world that aren't likely to see a patch any time soon,” he says. “So, for all those devices you can't reasonably patch, your network administrator could take a look at the mitigations published by RedHat, and consider the impact of limiting the actual on-the-wire size of DNS replies in your environment. While it's may be a heavy-handed strategy, it will buy you time to ferret out all those IoT devices that people have squirrelled away on your network.”
Dave Palmer, Director of Technology at Darktrace, said: “It seems that this bug primarily affects the servers that run company applications and internet services, but probably also much of the IoT. However, it is still unclear how easy it is to exploit.
“Uncertainty surrounds not only this bug, but all future threats. It is simply impossible to guess where next vulnerabilities will be discovered. So as companies run around trying to work out if and how this will affect them, they should also fundamentally re-think how they are protecting the entirety of their systems. Without an immune system, which automatically monitors for abnormality, it is extremely difficult to keep up with today’s threat landscape.”
David Flower, MD EMEA at Carbon Black said: “Linux users have long since held the belief that their systems are secure by design and are invulnerable to attack. However, the string of high-profile Linux malware; from last year’s Mumblehard, which had gone undetected for five years, to 2012’s Snakso, which gave hackers remote access to servers, has proven this belief to be false. Google’s discovery of Glibc has delivered another significant blow to this misconception, highlighting that a basic flaw has been present within the code itself.
“Whilst it has yet to be exploited by hackers, those that fail to patch the vulnerability will face a significant threat now that the bad guys have been alerted to its presence.” | <urn:uuid:4c546d2f-93a9-4b21-9ef8-246005cdd93c> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/glibc-flaw-affects-linux-machines/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00230-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961038 | 820 | 2.578125 | 3 |
While there have been numerous studies seeking to find and understand Android malware, this study examined benign Android apps that could be exploited by third parties. The subject of the study is SSL and its successor TLS, the protocols used to secure internet communications. Since communications and the internet lie at the heart of Android use, many apps quite legitimately seek internet permissions; but users have no way of knowing whether the communications are secure. “This paper,” say the researchers, “seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit.”
The researchers examined 13,500 popular free apps from Google Play. To help their analysis they developed their own tool, called MalloDroid, to perform a static analysis of the apps’ code, and found that “1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks.” To confirm these findings, they selected 100 apps for manual investigation, and were able to “successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data.”
The purpose of SSL/TLS is to secure communication between the user and the destination website. If this isn’t done, or is implemented insecurely, attackers can sit between the user and the website (hence ‘man-in-the-middle’) and read the data that passes. During the manual tests, the researchers “were able to capture credentials for American Express, Diners Club, Paypal, bank accounts, Facebook, Twitter, Google, Yahoo, Microsoft Live ID, Box, WordPress, remote control servers, arbitrary email accounts, and IBM Sametime, among others.”
A separate survey of 754 Android users indicated a wide lack of understanding about SSL security. 378 users did not accurately judge the security state of a browser session, while 419 had not seen a certificate warning, and even then considered the risk to be medium or low.
The combination of a poor understanding about SSL, Android’s open approach to app development, and insecure SSL implementations means that many millions of users are exposed to MITM attacks. “The cumulative number of installs of apps with confirmed vulnerabilities against MITM attacks is between 39.5 and 185 million users, according to Google’s Play Market,” say the researchers. They outline several avenues for future research to solve or at least alleviate the problem, but will in the meantime, they say, “provide a MalloDroid Web App and will make it available to Android users.” With this app, users will at least know whether the apps they use are susceptible to MITM. | <urn:uuid:e0f34229-0ab8-48f2-9ed5-7c1d096c961a> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/thousands-of-android-apps-and-millions-of-users/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00138-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93609 | 566 | 2.765625 | 3 |
To counter loopholes in current biometrics, new scanning techniques are becoming available to better safeguard privacy and personal property
A recent recognition technique was developed by the Wright State Research Institute (WSRI): skeletal scanning. Targeting airports, theme parks, sports stadiums and even hotels and other private establishments, this invention capitalizes on X-ray, gamma ray and other scanning elements, and boasts near-perfect accuracy as no two sets of 206 bones in an adult human body are identical. “We also believe that you may not need an entire body scan,” said Phani Kidambi, Research Engineer. “Maybe just part of the body is sufficient.”
Two big challenges remain. First, the sensors require a person to stand within 2 meters; the US federal officials, however, prefer that the equipment extracts a scanning of the object as far as 50 meters away. “If we had that solved, we'd be in the market right now,” said Julie Skipper, Associate Research Professor.
Ryan Fendley, the institute's Director of Operations and Strategic Initiatives, adds that that the other challenge is creating a database of body scans of suspects. “You can have a great tool that collects body scans of the general public, but if you don't have anything to compare them to, you haven't done anything.”
While researchers are still discovering subtle features in particular bones that differentiate individuals, WSRI has a couple of bone density scanners available to combine into a prototype. A working version could be deployed in the field within a year, Kidambi predicted.
Active Sweat Pores
A new technology has been introduced by scientists at the British Brunel University. It automatically locates and extracts active sweat pores in fingerprint scans to detect liveness by using high-pass and correlation-filtering techniques. This helps existing fingertip scanners to instantly determine whether the fingertip scanned is a real, live person or a feigned fingertip stamp made from liquid silicon or gelatine, thus granting or denying access accordingly.
The research is still ongoing, but the final product shall be commercially available by the end of 2011, Dr. Wamadeva Balachandran said in a prepared statement. “If an industry is interested in collaborating with us, we would be happy to explore the possibility of working with them.”
Biometric Data Cards
In a recent release, SmartMetric announced that the company's fingerprint-activated biometric data cards can now store a person's complete medical record for instant access.
The data card, in the size of a standard card, holds significant memory capacity, where gigabytes of medical information such as CT and MRI images can be stored among other records. To recall data, the card owner first needs to provide a fingertip scanning right on the card. Only when the scanned image matches the original preloaded inside the card can the data be accessed.
“Our R&D team has pioneered another exciting breakthrough in our card,” said Colin Hendrick, President and CEO. “Nothing like this exists anywhere in the world today. We're currently in negotiations with several worldwide corporate entities regarding the rollout and commercialization of our products.” | <urn:uuid:60e342b7-2f64-4839-9981-67519f2a605a> | CC-MAIN-2017-04 | https://www.asmag.com/showpost/10733.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00046-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941302 | 660 | 2.78125 | 3 |
The SpaceX Dragon spacecraft safely splashed down in the Pacific Ocean Tuesday, winding up a three-week mission to resupply the International Space Station.
Dragon returned to Earth at 12:34 p.m. EDT a few hundred miles west of Baja California, Mexico, marking a successful end to the second mission contracted by NASA to deliver and return scientific experiments and supplies to and from astronauts on the space station.
SpaceX co-founder Elon Musk today used Twitter to talk about Dragon's journey home. Prior to its landing, he tweeted that the spacecraft had a good deorbit burn and that all thrusters were working, and then said that the recovery ship heard the sonic boom from the spacecraft's reentry.
And then 28 minutes after tweeting that Dragon's parachutes had deployed, Musk tweeted that the recovery ship had secured the spacecraft and all secondary systems were being powered down.
"Cargo looks A ok," he tweeted.
NASA reported that SpaceX had confirmed to engineers there that Dragon splashed down at 12:34 p.m., right on schedule.
Dragon came home loaded with about 2,600 pounds of used hardware, completed experiments and trash. It had ferried about 1,200 pounds of supplies and new experiments to the space station.
The Dragon capsule blasted off atop a SpaceX Falcon 9 rocket from Cape Canaveral Air Force Station in Florida on March 1. After a thrusters glitch delayed its rendezvous by a day, the craft docked at the space station on March 3. It remained there until early this morning.
During the three weeks that Dragon was docked at the space station, the astronauts unloaded and then reloaded the spacecraft.
The next SpaceX resupply mission to the space station is scheduled for late September.
The mission that ended today is the second of 12 SpaceX flights contracted by NASA to resupply the space station. It also was the third trip by a Dragon capsule to the orbiting laboratory.
SpaceX made a demonstration flight in May 2012 and then launched its first official resupply mission last October, delivering 882 pounds of supplies to the space station.
Being able to launch successful commercial missions is critical to NASA since the space agency retired its fleet of space shuttles in 2011.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, or subscribe to Sharon's RSS feed . Her e-mail address is firstname.lastname@example.org.
Read more about government/industries in Computerworld's Government/Industries Topic Center.
This story, "Splashdown!! SpaceX Dragon returns to Earth" was originally published by Computerworld. | <urn:uuid:4a4221c7-5d87-47fc-8ef6-613f5752d761> | CC-MAIN-2017-04 | http://www.itworld.com/article/2713307/it-management/splashdown---spacex-dragon-returns-to-earth.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00440-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95614 | 558 | 2.71875 | 3 |
Using ESP to Prevent Replay Attacks
The tighter your network's security is, the more difficult it is for a hacker to break in. However, hackers tend to be clever and have lots of methods of getting into a network.
Prior to Windows 2000, hackers could use a method called a replay attack to break into even some of the most secure networks. Replay attacks are seldom used because of their complexity--often, less-complicated methods work just as well. The problem is that before Windows 2000, there were lots of ways to protect against the less sophisticated attacks, but few (if any) ways to protect against replay attacks.
In a replay attack, a hacker uses a protocol analyzer to monitor and copy packets as they flow across the network. Once the hacker has captured the necessary packets, he can filter them and extract the packets that contain things like digital signatures and various authentication codes. After these packets have been extracted, they can be put back on the network (or replayed), thus giving the hacker access to the desired access.
Replay attacks have existed for a long time. Years ago, replay attacks were simply aimed at stealing passwords. However, given the encryption strength of passwords these days, it's often easier to steal digital signatures and keys.
Repelling Attacks with IPSec
Windows 2000 provides a way to protect against a replay attack: the IPSec subcomponent called Encapsulating Security Payload (ESP). The IPSec protocol is a security-enabled protocol that's designed to run on IP networks. IPSec runs at the network level and is responsible for establishing secure communications between PCs. The actual method of providing these secure communications depends on the individual network. However, the method often involves a key exchange. ESP is the portion of IPSec that encrypts the data contained within the packet. This encryption is controlled by an ESP subcomponent called the Security Parameters Index (SPI).
In addition to the encryption, ESP can protect against replay attacks by using a mathematically generated sequence number. When a packet is sent to a recipient, the recipient extracts the sequence number and records the sequence number in a table. Now, suppose a hacker captured and replayed a packet. The recipient would extract the sequence number and compare it against the table that it has been recording. But the packet's sequence number will already exist in the table, so the packet is assumed to be fraudulent and is therefore discarded. //
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all. | <urn:uuid:ae868ecf-83d7-405a-b2bc-f92a89c0bf47> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/624871/Using-ESP-to-Prevent-Replay-Attacks.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00560-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956634 | 570 | 2.703125 | 3 |
Faced with an onslaught of malware attacks that leverage vulnerabilities and design weaknesses in Java, Oracle Corp. recently tweaked things so that Java now warns users about the security risks of running Java content. But new research suggests that the integrity and accuracy of these warning messages can be subverted easily in any number of ways, and that Oracle’s new security scheme actually punishes Java application developers who adhere to it.
Running a Java applet now pops up a security dialog box that presents users with information about the name, publisher and source of the application. Oracle says this pop-up is designed to warn users of potential security risks, such as using old versions of Java or running applet code that is not signed from a trusted Certificate Authority.
Security experts differ over whether regular users pay any mind whatsoever to these warnings. But to make matters worse, new research suggests most of the information contained in the pop-ups can be forged by malware writers.
In a series of scathing blog posts, longtime Java developer Jerry Jongerius details the various ways that attackers can subvert the usefulness of these dialog boxes. To illustrate his point, Jongerius uses an applet obtained from Oracle’s own Web site — javadetection.jar — and shows that the information in two out of three of its file descriptors (the “Name” and “Location” fields) can be changed, even if the applet is already cryptographically signed.
“The bottom line in all of this is not the security risk of the errors but that Oracle made such incredibly basic ‘101’ type errors — in allowing ‘unsigned information’ into their security dialogs,” Jongerius wrote in an email exchange. “The magnitude of that ‘fail’ is huge.”
Jongerius presents the following scenario in which an attacker might use the dialog boxes to trick users into running unsafe applets:
“Imagine a hacker taking a real signed Java application for remote desktop control / assistance, and placing it on a gaming site, renaming it ‘Chess’. An unsuspecting end user would get a security popup from Java asking if they want to run ‘Chess’, and because they do, answer yes — but behind the scenes, the end user’s computer is now under the remote control of a hacker (and maybe to throw off suspicion, implemented a basic ‘Chess’ in HTML5 so it looks like that applet worked) — all because Oracle allowed the ‘Name’ in security dialogs to be forged to something innocent and incorrect.”
Oracle has not responded to requests for comment. But Jongerius is hardly the only software expert crying foul about the company’s security prompts. Will Dormann, writing for the Carnegie Mellon University’s Software Engineering Institute, actually warns Java developers against adopting a key tenet of Oracle’s new security guidelines. | <urn:uuid:e377cba9-6020-48de-93c9-fb0b778f9e02> | CC-MAIN-2017-04 | https://krebsonsecurity.com/tag/applet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00376-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.914671 | 612 | 2.828125 | 3 |
In this week’s hand-picked assortment, researchers consider virtualizing HPC as a Service, low latency on global cloud systems as well as accelerators and surveying the HPC cloud environment as a whole.
Hardware-Assisted Virtualization for Deploying HPC-as-a-Service
Virtualization has been the main driver behind the rise of cloud computing, argued researchers out of the A*STAR Institute of High Performance Computing in Singapore. Despite cloud computing’s tremendous benefits to applications (e.g. enterprise, Web, game/ multimedia, life sciences, and data analytics), its success in High Performance Computing (HPC) domain has been limited. The oft-cited reason is latency caused by virtualization.
Meanwhile, according to the researchers, the rising popularity of virtualization has compelled CPU vendors to incorporate virtualization technology (VT) in chips. This hardware VT is believed to accelerate context switching, speed up memory address translation, and enable I/O direct access; which are basically sources of virtualization overheads.
Their paper reports the evaluation on computation and communication performance of different virtualized environments, i.e., Xen and KVM, leveraging hardware VT. Different network fabrics, namely Gigabit Ethernet and InfiniBand, were employed and tested in the virtualized environments and their results were compared against those in the native environments.
A real-world HPC application (an MPI-based hydrodynamic simulation) was also used to assess the performance. Outcomes indicate that hardware-assisted virtualization can bring HPC-as-a-Service into realization.
Low Latency Communications in Global Cloud Computing Systems
A paper out of McMaster University in Hamilton explores technologies to achieve low-latency energy-efficient communications in Global-Scale Cloud Computing systems.
A global-scale cloud computing system linking 100 remote data-centers can interconnect potentially 5M servers, considerably larger, according to the paper, than the size of traditional High-Performance-Computing (HPC) machines. Traditional HPC machines use tightly coupled processors and networks which rarely drop packets.
In contrast, today’s IP Internet is a relatively loosely-coupled Best-Effort network with poor latency and energy-efficiency guarantees, with relatively high packet loss rates. This paper explores the use of a recently-proposed Future-Internet network, which uses a QoS-aware router scheduling algorithm combined with a new IETF resource reservation signalling technology, to achieve improved latency and energy-efficiency in cloud computing systems.
A Maximum-Flow Minimum-Energy routing algorithm is used to route high-capacity “trunks” between data-centers distributed over the continental USA, using a USA IP network topology. The communications between virtual machines in remote data-centers are aggregated and multiplexed onto the trunks, to achieve significantly improved energy-efficiency.
According to theory and simulations, the large and variable queueing delays of traditional Best-Effort Internet links can be eliminated, and the latency over the cloud can be reduced to near-minimal values, i.e., the fiber latency. The maximum fiber latencies over the Sprint USA network are approx. 20 milliseconds, comparable to hard disk drive latencies, and multithreading in virtual machines can be used to hide these latencies.
Furthermore, if existing dark-fiber over the continental network is activated, the bisection bandwidth available in a global-scale cloud computing system can rival that achievable in commercial HPC machines.
Integrating Accelerators Using CometCloud
Application accelerators can include GPUs, cell processors, FPGAs and other custom application specific integrated circuit (ASICs) based devices. According to research out of Cardiff University, a number of challenges arise when these devices must be integrated as part of a single computing environment, relating to both the diversity of devices and the supported programming models.
One key challenge they consider is the selection of the most appropriate device for accelerating a particular application. Their approach makes use of a broker-based matchmaking system, which attempts to compare the capability of a device with one or more application kernels, utilising the CometCloud tuple space-based coordination mechanism to facilitate the matchmaking process.
They described the architecture of our system and how it utilises performance prediction to select devices for particular application kernels. They demonstrated that within a highly dynamic HPC system, their approach can increase the performance of applications by using code porting techniques to the most suitable device found, also; (a) allowing the dynamic addition of new devices to the system, and (b) allowing applications to fall back and utilise the best alternative device available if the preferred device cannot be found or is unavailable.
A Study of High Performance Computing on the Cloud
The popularity of Amazon’s EC2 cloud platform has increased in recent years, according to research out of the University of Arizona and Lawrence Livermore National Laboratory. However, the researchers argue, many high-performance computing (HPC) users consider dedicated high-performance clusters, typically found in large compute centers such as those in national laboratories, to be far superior to EC2 because of significant communication overhead of the latter.
Their view was that this is quite narrow and the proper metrics for comparing high-performance clusters to EC2 is turnaround time and cost. In their paper, they compared the top-of-the-line EC2 cluster to HPC clusters at Lawrence Livermore National Laboratory (LLNL) based on turnaround time and total cost of execution.
When measuring turnaround time, they included expected queue wait time on HPC clusters. Their results show that although as expected, standard HPC clusters are superior in raw performance, EC2 clusters may produce better turnaround times. To estimate cost, they developed a pricing model—relative to EC2’s node-hour prices—to set node-hour prices for (currently free) LLNL clusters. They observed that the cost-effectiveness of running an application on a cluster depends on raw performance and application scalability. | <urn:uuid:f2bca470-c14b-407f-983c-9c93418488f2> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/06/21/research_roundup_virtualization_and_low_latency_for_global_clouds/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00194-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910428 | 1,235 | 2.71875 | 3 |
November 23, 2016
It is inevitable in college that you were forced to write an essay. Writing an essay usually requires the citation of various sources from scholarly journals. As you perused the academic articles, the thought probably crossed your mind: who ever reads this stuff? Smithsonian Magazine tells us who in the article, “Academics Write Papers Arguing Over How Many People Read (And Cite) Their Papers.” In other words, themselves.
Academic articles are read mostly by their authors, journal editors, and the study’s author write, and students forced to cite them for assignments. In perfect scholarly fashion, many academics do not believe that their work has a limited scope. So what do they do? They decided to write about it and have done so for twenty years.
Most academics are not surprised that most written works go unread. The common belief is that it is better to publish something rather than nothing and it could also be a requirement to keep their position. As they are prone to do, academics complain about the numbers and their accuracy:
It seems like this should be an easy question to answer: all you have to do is count the number of citations each paper has. But it’s harder than you might think. There are entire papers themselves dedicated to figuring out how to do this efficiently and accurately. The point of the 2007 paper wasn’t to assert that 50 percent of studies are unread. It was actually about citation analysis and the ways that the internet is letting academics see more accurately who is reading and citing their papers. “Since the turn of the century, dozens of databases such as Scopus and Google Scholar have appeared, which allow the citation patterns of academic papers to be studied with unprecedented speed and ease,” the paper’s authors wrote.
Academics always need something to argue about, no matter how miniscule the topic. This particular article concludes on the note that someone should get the number straight so academics can move onto to another item to argue about. Going back to the original thought a student forced to write an essay with citations also probably thought: the reason this stuff does not get read is because they are so boring.
October 11, 2016
The lawless domain just got murkier. Apart from illegal firearms, passports, drugs and hitmen, you now can procure a verifiable college degree or diploma on Dark Web.
Cyber criminals have created a digital marketplace where unscrupulous students can
purchase or gain information necessary to provide them with unfair and illegal
academic credentials and advantages.
The certificates for these academic credentials are near perfect. But what makes this cybercrime more dangerous is the fact that hackers also manipulate the institution records to make the fake credential genuine.
The article ADDS:
A flourishing market for hackers who would target universities in order to change
grades and remove academic admonishments
This means that under and completely non-performing students undertaking an educational course need not worry about low grades or absenteeism. Just pay the hackers and you have a perfectly legal degree that you can show the world. And the cost of all these? Just $500-$1000.
What makes this particular aspect of Dark Web horrifying interesting is the fact that anyone who procures such illegitimate degree can enter mainstream job market with perfect ease and no student debt.
August 29, 2016
Think Amazon is the only outfit which understands the concept of strategic pricing, bundling, and free services? Google has decided to emulate such notable marketing outfits as ReedElsevier’s LexisNexis and offering colleges a real deal for use of for-fee online services. Who would have thought that Google would emulate LexisNexis’ law school strategy?
I read “Google Offers Free Cloud Access to Colleges, Plays Catch Up to Amazon, Microsoft.” I reported that a mid tier consulting firm anointed Microsoft as the Big Dog in cloud computing. Even in Harrod’s Creek, folks know that Amazon is at least in the cloud computing kennel with the Softies.
According to the write up:
Google in June announced an education grant offering free credits for its cloud platform, with no credit card required, unlimited access to its suite of tools and training resources. Amazon and Microsoft’s cloud services both offer education programs, and now Google Cloud wants a part in shaping future computer scientists — and probably whatever they come up with using the tool.
The write up points out:
Amazon and Microsoft’s cloud services offer an education partnership in free trials or discounted pricing. For the time being, Microsoft Azure’s education program is not taking new applications and “oversubscribed,” the website reads. Amazon Web Services has an online application for its education program for teachers and students to get accounts, and Google is accepting applications from faculty members.
How does one avail oneself of these free services. Sign up for a class and hope that your course “Big Band Music from the 1940’s” qualifies you for free cloud stuff.
Stephen E Arnold, August 29, 2016
August 22, 2016
The article on Inside HPC titled IBM Partners with University of Aberdeen to Drive Cognitive Computing illustrates the circumstances of the first Scottish university partnership with IBM. IBM has been collecting goodwill and potential data analysts from US colleges lately, so it is no surprise that this endeavor has been sent abroad. The article details,
In June 2015, the UK government unveiled plans for a £313 million partnership with IBM to boost big data research in the UK. Following an initial investment of £113 million to expand the Hartree Centre at Daresbury over the next five years, IBM also agreed to provide further support to the project with a package of technology and onsite expertise worth up to £200 million. This included 24 IBM researchers, stationed at the Hartree Centre, to work side-by-side with existing researchers.
The University of Aberdeen will begin by administering the IBM cognitive computing technology in computer science courses in addition to ongoing academic research with Watson. In a sense, the students exposed to Watson in college are being trained to seek jobs in the industry, for IBM. They will have insider experience and goodwill toward the company. It really is one of the largest nets cast for prospective job applicants in industry history.
Chelsea Kerwin, June 22, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph There is a Louisville, Kentucky Hidden /Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/
August 3, 2016
Does your business need a mentor? How about any students or budding entrepreneurs you know? Such a guide can be invaluable, especially to a small business, but Google and Bing may not be the best places to pose that query. Business magazine Inc. has rounded up “Ten Top Platforms for Finding a Mentor in 22016.” Writer John Boitnott introduces the list:
“Many startup founders have learned that by working with a mentor, they enjoy a collaboration through which they can learn and grow. They usually also gain access to a much more experienced entrepreneur’s extensive network, which can help as they seek funding or gather resources. For students, mentors can provide the insight they need as they make decisions about their future. One of the biggest problems entrepreneurs and students have, however, is finding a good mentor when their professional networks are limited. Fortunately, technology has come up with an answer. Here are nine great platforms helping to connect mentors and mentees in 2016.”
Boitnott lists the following mentor-discovery resources: Music platform Envelop offers workshops for performers and listeners. Mogul focuses on helping female entrepreneurs via a 27/7 advice hotline. From within classrooms, iCouldBe connects high-school students to potential mentors. Also for high-school students, iMentor is specifically active in low-income communities. MentorNet works to support STEM students through a community of dedicated mentors, while the free, U.K.-based Horse’s Mouth supports a loosely-organized platform where participants share ideas. Also free, Find a Mentor matches potential protégés with adult mentors. SCORE supplies tools like workshops and document templates for small businesses. Cloud-based MentorCity serves entrepreneurs, students, and nonprofits, and it maintains a free online registry where mentors can match their skill sets to the needs of inquiring minds.
Who knew so much professional guidance was out there, made possible by today’s technology, and much of it for free? For more information on each entry, see the full article.
Cynthia Murrell, August 3, 2016
August 2, 2016
The article on TheNextWeb titled Teenagers Have Built a Summary App that Could Help Students Ace Exams might be difficult to read over the sound of a million teachers weeping into their syllabi. It’s no shock that students hate to read, and there is even some cause for alarm over the sheer amount of reading that some graduate students are expected to complete. But for middle schoolers, high schoolers, and even undergrads in college, there is a growing concern about the average reading comprehension level. This new app can only make matters worse by removing a student’s incentive to absorb the material and decide for themselves what is important. The article describes the app,
“Available for iOS, Summize is an intelligent summary generator that will automatically recap the contents of any textbook page (or news article) you take a photo of with your smartphone. The app also supports concept, keyword and bias analysis, which breaks down the summaries to make them more accessible. With this feature, users can easily isolate concepts and keywords from the rest of the text to focus precisely on the material that matters the most to them.”
There is nothing wrong with any of this if it is really about time management instead of supporting illiteracy and lazy study habits. This app is the result of the efforts of an 18-year-old Rami Ghanem using optical character recognition software. A product of the era of No Child Left Behind, not coincidentally, exposed to years of teaching to the test and forgetting the lesson, of rote memorization in favor of analysis and understanding. Yes, with Summize, little Jimmy might ace the test. But shouldn’t an education be more than talking point mcnuggets?
Chelsea Kerwin, August 2, 2016
July 21, 2016
Is big data good only for the hard sciences, or does it have something to offer the humanities? Writer Marcus A Banks thinks it does, as he states in, “Challenging the Print Paradigm: Web-Powered Scholarship is Set to Advance the Creation and Distribution of Research” at the Impact Blog (a project of the London School of Economics and Political Science). Banks suggests that data analysis can lead to a better understanding of, for example, how the perception of certain historical events have evolved over time. He goes on to explain what the literary community has to gain by moving forward:
“Despite my confidence in data mining I worry that our containers for scholarly works — ‘papers,’ ‘monographs’ — are anachronistic. When scholarship could only be expressed in print, on paper, these vessels made perfect sense. Today we have PDFs, which are surely a more efficient distribution mechanism than mailing print volumes to be placed onto library shelves. Nonetheless, PDFs reinforce the idea that scholarship must be portioned into discrete units, when the truth is that the best scholarship is sprawling, unbounded and mutable. The Web is flexible enough to facilitate this, in a way that print could never do. A print piece is necessarily reductive, while Web-oriented scholarship can be as capacious as required.
“To date, though, we still think in terms of print antecedents. This is not surprising, given that the Web is the merest of infants in historical terms. So we find that most advocacy surrounding open access publishing has been about increasing access to the PDFs of research articles. I am in complete support of this cause, especially when these articles report upon publicly or philanthropically funded research. Nonetheless, this feels narrow, quite modest. Text mining across a large swath of PDFs would yield useful insights, for sure. But this is not ‘data mining’ in the maximal sense of analyzing every aspect of a scholarly endeavor, even those that cannot easily be captured in print.”
Banks does note that a cautious approach to such fundamental change is warranted, citing the development of the data paper in 2011 as an example. He also mentions Scholarly HTML, a project that hopes to evolve into a formal W3C standard, and the Content Mine, a project aiming to glean 100 million facts from published research papers. The sky is the limit, Banks indicates, when it comes to Web-powered scholarship.
Cynthia Murrell, July 21, 2016
There is a Louisville, Kentucky Hidden Web/Dark
Web meet up on July 26, 2016.
Information is at this link: http://bit.ly/29tVKpx.
July 19, 2016
Deep learning is another bit of technical jargon floating around and it is tied to artificial intelligence. We know that artificial intelligence is the process of replicating human thought patterns and actions through computer software. Deep learning is…well, what specifically? To get a primer on what deep learning is as well as it’s many applications check out “Deep Learning: An MIT Press Book” by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
Here is how the Deeping Learning book is described:
“The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is now complete and will remain available online for free. The print version will be available for sale soon.”
This is a fantastic resource to take advantage of. MIT is one of the leading technical schools in the nation, if not the world, and the information that is sponsored by them is more than guaranteed to round out your deep learning foundation. Also it is free, which cannot be beaten. Here is how the book explains the goal of machine learning:
“This book is about a solution to these more intuitive problems. This solution is to allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept de?ned in terms of its relation to simpler concepts. By gathering knowledge from experience, this approach avoids the need for human operators to formally specify all of the knowledge that the computer needs.”
If you have time take a detour and read the book, or if you want to save time there is always Wikipedia.
There is a Louisville, Kentucky Hidden Web/Dark
Web meet up on July 26, 2016.
Information is at this link: http://bit.ly/29tVKpx.
June 15, 2016
The Dark Web and deep web can often get misidentified and confused by readers. To take a step back, Trans Union’s blog offers a brief read called, The Dark Web & Your Data: Facts to Know, that helpfully addresses some basic information on these topics. First, a definition of the Dark Web: sites accessible only when a physical computer’s unique IP address is hidden on multiple levels. Specific software is needed to access the Dark Web because that software is needed to encrypt the machine’s IP address. The article continues,
“Certain software programs allow the IP address to be hidden, which provides anonymity as to where, or by whom, the site is hosted. The anonymous nature of the dark web makes it a haven for online criminals selling illegal products and services, as well as a marketplace for stolen data. The dark web is often confused with the “deep web,” the latter of which makes up about 90 percent of the Internet. The deep web consists of sites not reachable by standard search engines, including encrypted networks or password-protected sites like email accounts. The dark web also exists within this space and accounts for approximately less than 1 percent of web content.”
For those not reading news about the Dark Web every day, this seems like a fine piece to help brush up on cybersecurity concerns relevant at the individual user level. Trans Union is on the pulse in educating their clients as banks are an evergreen target for cybercrime and security breaches. It seems the message from this posting to clients can be interpreted as one of the “good luck” variety.
Megan Feil, June 15, 2016
May 23, 2016
Who exactly are today’s innovators? The Information Technology & Innovation Foundation (ITIF) performed a survey to find out, and shares a summary of their results in, “The Demographics of Innovation in the United States.” The write-up sets the context before getting into the findings:
“Behind every technological innovation is an individual or a team of individuals responsible for the hard scientific or engineering work. And behind each of them is an education and a set of experiences that impart the requisite knowledge, expertise, and opportunity. These scientists and engineers drive technological progress by creating innovative new products and services that raise incomes and improve quality of life for everyone….
“This study surveys people who are responsible for some of the most important innovations in America. These include people who have won national awards for their inventions, people who have filed for international, triadic patents for their innovative ideas in three technology areas (information technology, life sciences, and materials sciences), and innovators who have filed triadic patents for large advanced-technology companies. In total, 6,418 innovators were contacted for this report, and 923 provided viable responses. This diverse, yet focused sampling approach enables a broad, yet nuanced examination of individuals driving innovation in the United States.”
See the summary for results, including a helpful graphic. Here are some highlights: Unsurprisingly to anyone who has been paying attention, women and U.S.-born minorities are woefully underrepresented. Many of those surveyed are immigrants. The majority of survey-takers have at least one advanced degree (many from MIT), and nearly all majored in STEM subject as undergrads. Large companies contribute more than small businesses do while innovations are clustered in California, the Northeast, and close to sources of public research funding. And take heart, anyone over 30, for despite the popular image of 20-somethings reinventing the world, the median age of those surveyed is 47.
The piece concludes with some recommendations: We should encourage both women and minorities to study STEM subjects from elementary school on, especially in disadvantaged neighborhoods. We should also lend more support to talented immigrants who wish to stay in the U.S. after they attend college here. The researchers conclude that, with targeted action from the government on education, funding, technology transfer, and immigration policy, our nation can tap into a much wider pool of innovation.
Cynthia Murrell, May 23, 2016 | <urn:uuid:d09e585f-c1bb-46fa-9696-b01827fae4af> | CC-MAIN-2017-04 | http://arnoldit.com/wordpress/category/education/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00312-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9479 | 3,979 | 2.53125 | 3 |
Taking further stock of your CompTIA Network+ knowledge
In our last two installments, we presented the first two parts of a self-test practice assessment based on the CompTIA Network+ (N10-006) certification exam. We looked first at 22 questions about topics beneath the Network Architecture domain, and then at 20 questions on Network Operations.
This time around, we continue to mirror the actual weighting of the CompTIA exam with 24 questions about Troubleshooting. (The final installment in the series will consist of 34 questions that combine the Network Security and Industry Standards, Practices and Network Theory domains.) Good luck!
1. The output of which command is shown in the display below?
A. arp -a
B. ipconfig /all
C. nbtstat -a
2. The output of which command is shown in the display below?
A. dig gocertify.com
B. ping gocertify.com
C. tracert gocertify.com
D. echo gocertify.com
3. You need to add a static route on the Linux server to the routing table. The route you want to add is to 22.214.171.124 with a subnet mask of /24 and a next hop of 192.168.1.1.
Which of the following commands will accomplish this?
A. route add 126.96.36.199/24 192.168.1.1
B. route add 192.168.1.1 188.8.131.52/24
C. route -a 192.168.1.1 184.108.40.206
D. add route 192.168.1.1 /24 220.127.116.11
4. As the network administrator, you want to see the IPv6 path through a network; vendors and platforms differ in the command used to accomplish this. Which of the following is not a valid option for use with IPv6?
C. traceroute -6
5. Which of the following is a piece of troubleshooting equipment that can be used to check a phone line for a dial tone and connect to a 66 block or 110 block?
A. Butt set
C. Toner probe
D. Cable certifier
6. Which of the following is used to attach an RJ45 connector to the end of a cable?
7. You need to borrow a tool to troubleshoot a potential problem with a fiber optic cable. You want to interconnect a transmit fiber with a receive fiber and verify that all is functional. What type of tool should you use?
B. Loopback plug
D. Looking glass
8. All of the workstations in your office are running Windows 7 Enterprise and you want to display protocol statistics and current TCP/IP connections using NBT (NetBIOS over TCP/IP). When you enter the command nbtstat without parameters, what is displayed?
A. The local machine’s name table
B. The local machine’s sessions table
C. The cache of remote names and their IP addresses
D. The help options
9. The output of which command is shown in the display below?
10. Which of the following commands can be used to resolve an FQDN into an IP address?
11. You are trying to figure out a particularly troublesome short that seems to appear at random and want to ping a remote host repeatedly until you choose to stop it from doing so by pressing Ctrl+C. Which option can be used with the ping utility to tell it to keep working until stopped?
12. Which of the following options tells the pathping utility to not resolve addresses to hostnames?
13. You have been told to baseline everything. To assist with this, you want to access a server on the Internet that will let you view routing information from the perspective of that server. Which of the following types of sites will allow you to accomplish this?
A. Echo wall
B. Glass onion
C. Looking glass
D. Hand mirror
14. You are trying to explain the troubleshooting methodology to a junior administrator but he is questioning the validity of it. Which of the following steps falls beneath the “Establish a theory of probable cause” step?
A. Verify full system functionality
B. Determine if anything has changed
C. Consider multiple approaches
D. Test the theory to determine cause | <urn:uuid:ec4932a6-8e7e-41bc-b9a6-4a5b77900e2f> | CC-MAIN-2017-04 | http://certmag.com/taking-stock-comptia-network-knowledge/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00312-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.849606 | 928 | 2.515625 | 3 |
In the first article in this series I wrote about the various ways in which big data needs to be considered over and above what database you are going to host it on, specifically with respect to governance and management of that data. In the second article I wrote about trust: can you rely on this data for decision-making. A second, and related, issue is context. Big data per se is meaningless: it only becomes useful to your business when that data is understood within the proper context. Of course, this is true of all data but it is easy to get carried away when it comes to big data. There have been far too many "oh there must be some valuable information in there somewhere" claims.
So, what does data in context mean? In general terms it means understanding how this data relates to existing sources of data that are well understood. For example, how do comments on social media sites relate to brand management or CRM? This is fairly intuitive. However, as we move into the realm of the Internet of Things these relationships can become less obvious, especially to business users who may not be completely au fait with technology. For example, if you think loosely about smart meters then you might assume that these were just about billing and capacity planning but, in reality, there are a variety of other ways to use that information, for example for fraud detection and to inform service management.
However, context considerations also go deeper than this. For instance, you will want to know where the data came from: some web sites are self-selecting or carry a particular political, religious or other bias; sensor data may come from the latest model of smart meter or from a previous-generation device; and derived data raises the question of its lineage. You will also want to know what the terms mean (does "cool" in reference to your product mean a good thing or a bad thing?), how up-to-date the data is (in fast-moving markets older data may be irrelevant, and the same may apply to different generations of sensors), and who or what has touched the data and, if it has been changed, how it has been changed (machine-generated data often needs de-duplication: a good thing, but you need to know that it has been done). All of these contextual pieces of information will help to inform you as to how much you should trust this data (see the previous article). But, more than that, this sort of information will help you to decide what information you should use in your analyses and what should be left out.
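To make this concrete at the implementation level, here is a minimal sketch of the kind of provenance record such context metadata might be stored in. The field names and the Python representation are illustrative assumptions, not a reference to any particular metadata standard or product.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProvenanceRecord:
    """Minimal context metadata for one incoming data set (illustrative only)."""
    source: str                                           # where the data came from (site, device model, feed)
    collected_at: datetime                                 # how up to date the data is
    derived_from: list = field(default_factory=list)       # lineage: upstream data sets
    transformations: list = field(default_factory=list)    # who/what touched it (e.g. de-duplication)
    known_bias: str = ""                                   # e.g. self-selecting website, older sensor generation

# Example: readings from a previous-generation smart meter, already de-duplicated
meter_batch = ProvenanceRecord(
    source="smart-meter-v1 field trial",
    collected_at=datetime(2014, 6, 1),
    derived_from=["raw-meter-feed-2014-05"],
    transformations=["de-duplication", "unit normalisation"],
    known_bias="older device generation; sparser sampling",
)

# An analysis step can then decide whether a batch is recent enough to include.
def recent_enough(record: ProvenanceRecord, cutoff: datetime) -> bool:
    return record.collected_at >= cutoff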
Here's a concrete example of what we might be talking about: suppose your data scientists come up with the suggestion that "if we put payphones into convenience stores that will reduce crime". Wouldn't you want to know how they reached this conclusion, based on what evidence, and whether they had considered if this would simply move the crime from one location to another? Laws of unintended consequences can easily apply, not to mention false correlations such as the famous beer and nappies (diapers).
To put all this more technically, you need metadata about the data you are intending to analyse. There will be different sorts of metadata depending on the data you are analysing just as there will be different sorts of data quality processes (discussed in the last article) that need to be applied but the need is there just as it is with conventional data. | <urn:uuid:a7068b52-5b47-4b55-ae73-dd71d05e0640> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/big-data-context/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00248-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962631 | 700 | 2.53125 | 3 |
Recovering the data stored on a failed hard disk drive (HDD) has always been hard…really hard. The nano-scale tolerances and cutting edge technology that make the ultra-high capacity drives being sold today possible are not making the recovery task any easier. However, as much as hard drive technology has changed over the last half a century, the data is still stored in a magnetic format on circular disks commonly referred to as platters. From a recovery standpoint, as long as the platters are healthy, the data can likely be recovered.
But what happens when the hard drive fails and the platters are damaged? Can the data still be recovered?
The short answer is maybe. I know, not exactly a definitive answer. In all honesty, it really depends on the amount and location of the damage, and the capabilities of the lab performing the recovery work. In some instances, the damage to the platters is so severe or located in very critical areas that data recovery is impossible. However, with the right expertise and equipment, many data recovery cases involving platter damage still result in a significant amount of the user’s data being recovered.
What factors determine whether or not Gillware can recover data from an HDD with platter damage?
Whether or not data can be recovered from a failed HDD is largely dependent on the health of the storage media, commonly referred to as a platter. The platters, after all, are the components inside the HDD that store the binary data that comprises the user’s documents, photos, spreadsheets and more. Unfortunately, with the ever increasing capacity and decreasing tolerances of modern HDDs, incidence of platter damage (the delicate magnetic coating is scratched or scored) is on the rise. This is a troubling trend and one that represents huge challenges for the data recovery industry.
Damage to the HDD platters presents two distinct challenges to data recovery labs. One of these challenges is the destruction of the data that lives in the regions of the disk that are physically scratched or damaged. Unfortunately, nothing can be done to remedy this. The data in those areas has been scratched from the platter surface and is not recoverable. But what about all the areas of the platter(s) that are not physically damaged? Surely something can be done to recover the data in the areas of the platter that have not been relegated to a life as a pesky dust particle. The short answer is yes, but doing so means we need to overcome the second challenge that data recovery labs face when attempting to recover data from an HDD with platter damage: dust. Not dust like you find on those annoying blinds you keep forgetting to wipe down during your spring cleaning kick, but rather the ultra-hard microscopic particulates generated inside the HDD when the read/write heads contact the platter surface. These particulates embed themselves in the platter surface and can damage otherwise healthy read/write heads.
Removing or smoothing these particulates is an essential first step when attempting to recover data from an HDD with damaged platters.
How does Gillware prepare damaged HDD platters for data recovery?
Contrary to what a couple of popular threads on the all-knowing Internet claim, cleaning HDD platters is not as simple as picking up a rag soaked in some isopropyl alcohol and giving the platter a good ol’ spit shine. This technique leaves an undesirable residue behind that will further damage the platter during the recovery process. HDDs are highly precise electromechanical devices. Manual caveman techniques not only don’t help the recovery effort, they actually cause additional damage that can make recovery by professional labs difficult or impossible.
Instead Gillware uses a sophisticated burnishing process to remove unwanted particulates from the platter surface. Gillware engineers modeled our burnishing setup after a similar process used during the HDD assembly process at the factory. You might be thinking to yourself, “Why do the HDD manufacturers need to clean the platter surfaces during production?” Although HDD platters are manufactured using state-of-the-art techniques and assembled in dust free environments, imperfections are always possible. As a result, HDD platters are cleaned and burnished prior to being placed into the HDD chassis.
Gillware uses the same equipment employed by HDD manufacturers, specially retrofitted for life as a data recovery tool, to burnish the platter surface(s) and prepare it for recovery operations.
What is platter burnishing and how does the process work?
In the world of mechanics, burnishing is a natural process that occurs when two surfaces slide against one another effectively polishing or smoothing each surface. Put another way, it is component wear and is generally an undesirable phenomenon. However, in the world of data recovery and platters with very hard embedded particulates, it’s exactly what the doctor ordered.
The first step in the burnishing process is to remove the platters from the HDD chassis. The platters are then mounted, one at a time, on a custom fixture that spins the platter at 10,000-15,000 RPM. As the platter is spinning, a robotic arm with a special burnishing head attached is swiped over the platter surface. Think of the burnishing head as a very precise and expensive razor blade. As the burnishing head traverses the surface of the platter, it picks up or shears off the microscopic embedded debris left behind when the head stack crashed and the platter surface was damaged. What is left after burnishing is an ultra-smooth polished platter surface. Depending on the severity of the damage, the burnishing process can take anywhere from a couple minutes to many hours to perform. After burnishing all the surfaces, the platters are remounted in the drive chassis and new read/write heads are installed and calibrated. Then, finally, the drive is ready to move forward with the data recovery process.
Want to see the burnisher in action? Check out the video below to see a demonstration. | <urn:uuid:ea4e6ddf-b33d-4743-9a16-bc50e587df22> | CC-MAIN-2017-04 | https://www.gillware.com/blog/data-recovery/data-recovery-101-burnishing-platters/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00368-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940656 | 1,238 | 2.640625 | 3 |
DWDM, or any WDM, simply enables you to utilize an additional resource: wavelength spectrum. DWDM is used in long-distance, high-capacity transmission. Common DWDM equipment includes the DWDM multiplexer, and there are also DWDM SFP and DWDM SFP+ transceivers. You can have a single high-speed interface if it fulfills the requirement, you can have multiple low-speed interfaces running in parallel in a WDM system, or you can have both the highest-speed interfaces and run them in parallel on the most compact WDM. We have benefited a lot from DWDM technology, but do you know the disadvantages?
-DWDM is very complex and can be very costly. However, if your fiber is limited and you require the bandwidth DWDM provides, it may be more economical than the alternative of OSP overbuilding of your fiber plant. A good characterization study of your existing fiber and paths is highly recommended prior to planning and engineering a DWDM system. Sometimes that information is available from the installation testing; however, over time it may change with new construction, relocations, splices, etc. PMD (polarization mode dispersion), which is a dynamic factor in the fiber plant, also needs to be addressed in DWDM path calculations.
-DWDM requires high quality optics and well cooled and stable lasers.
-DWDM needs to keep all the signals within a specific band, the band that is most usable in optical cable, as different wavelengths behave differently in the fiber. It also depends on amplifiers, as it is not easy to produce an amplifier that amplifies a very large spectrum area without additional problems or noise.
-DWDM is inherently more complex than separate glass and parallel links, as multiple signals share a cable, mixing them together and splitting them apart again. So: more cost, more power, optical losses in filters and other components, and harder-to-diagnose issues.
-In DWDM systems, one connector can carry a whole cable's worth of traffic. If more than one connector is pulled and several are incorrectly reconnected, the **** can truly hit the fan. Replacing, for instance, an amp (with DCM, OSC, and local connections) means everything disconnected has to be put back exactly as it was - or it might simply not work at all. Provisioning can be equally disastrous. And these days, training exposure to these systems is minimal.
-A DWDM network is not a flexible network. For DWDM network flexibility, you need to use ROADMs (reconfigurable OADMs), which are expensive.
-High losses: the attenuation is around 10 dB in some cases (a rough link-budget sketch follows below).
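To illustrate why those per-component losses matter, here is a rough link-budget sketch. Every dB figure in it is an assumed, illustrative value rather than a vendor specification, and real designs would also account for dispersion, amplifier noise and per-channel power differences.

# Rough optical link-budget sketch for a WDM span (all dB figures are assumptions).
def link_budget_db(tx_power_dbm, rx_sensitivity_dbm, fibre_km,
                   fibre_loss_db_per_km=0.25, mux_loss_db=4.0,
                   demux_loss_db=4.0, connector_losses_db=1.0, margin_db=3.0):
    """Return the remaining margin in dB; a negative result means the link will not close."""
    total_loss = (fibre_km * fibre_loss_db_per_km
                  + mux_loss_db + demux_loss_db
                  + connector_losses_db + margin_db)
    available = tx_power_dbm - rx_sensitivity_dbm
    return available - total_loss

# Example: 0 dBm transmitter, -24 dBm receiver sensitivity, 40 km of fibre
print(link_budget_db(tx_power_dbm=0.0, rx_sensitivity_dbm=-24.0, fibre_km=40))  # -> 2.0 dB of margin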
Compared with DWDM, CWDM is more forgiving, simpler to work with and costs less, but you have a lower number of waves and less bandwidth. It takes some good financial and traffic planning to decide which is best for your requirements. A CWDM mux gives you a cheaper color-muxing scheme, but with more constraints on distances and so on.
In less demanding situations, CWDM can be used. As CWDM is not amplified, it utilizes a larger spectrum area and each channel uses a wider spectrum range, meaning it can be produced with cheaper components. But the trade-off is distance and maximum utilization of the spectrum.
In conclusion choosing DWDM or CWDM technology all depends on specific task. | <urn:uuid:995f5fe4-ac36-4f70-997c-ecbdc3a06e78> | CC-MAIN-2017-04 | https://connect.nimblestorage.com/people/adfdfa/blog/2015/07/31/dwdm-disadvantages-and-cwdm-technology | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00486-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950406 | 691 | 2.78125 | 3 |
By the looks of it creativity in the concept car realm is alive and well. The Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) this week announced the winner of its LIghtweighting Technologies Enabling Comprehensive Automotive Redesign (LITECAR) Challenge that featured 250 entries battling it out to develop some very cool fuel-efficient cars.
+More on Network World: What advanced tech will dominate your car by 2025? IBM knows+
ARPA-E teamed with vehicle design firm Local Motors to run the LITECAR Challenge that looked to fast-track ground-breaking auto ideas by using novel materials, structural designs, energy absorbing materials and unique methods of manufacturing like 3D printing to reduce vehicle weight while maintaining current U.S. automotive safety standards.
According to Local Motors the winner was Aerodynamic Water Droplet with Strong Lightweight Bone Structure created by Andres Tovar, a mechanical engineering assistant professor at the School of Engineering and Technology at Indiana University-Purdue University Indianapolis, and a group of his graduate students. He won $60,000 for his efforts.
“Tovar’s proposed winning vehicle design has a water droplet outer shape, described as an envelope, with an embedded ribcage-like structure called a spaceframe. The spaceframe is made out of 3D printed functionally graded aluminum alloy foam. The bone-like structure of the spaceframe provides the mechanical strength and energy absorption capabilities required to protect the occupants in the event of a collision, similar to the protective structures used in NASCAR racecars,” ARPA-E stated.
+More on Network World: NTSB: Distracted driving among Top 10 transportation safety challenges+
“The envelope's material that interfaces with the spaceframe is made of a polymer composite, which has the characteristics of monocoque design. A monocoque design is similar to an egg, where weight is supported through the object’s external shell. The envelope's water droplet shape provides a more streamlined vehicle design resulting in lower aerodynamic drag and improved fuel economy. By optimizing the spaceframe for energy absorption, Tovar and his team utilized lower mass materials to achieve vehicle weight reduction,” ARPA-E stated.
Other winners included:
First runner up ($40,000): Skeletos – The car's developers said they used biomimetics and a novel composite material with the properties of bones using tricalcium phosphate and polypropylene to come up with their design. Bones are two times tougher than granite, have 10 times the compressive strength, 50 times greater resistance to breaking under pressure than concrete and are four to five times lighter than steel. The 3D printed structure varied in thickness as well as composition for strength in critical areas to enable a lightweight yet strong design.
Second runner up ($20,000) : Metal Matrix Metallic Composite (3MC) sought to develop non-traditional composite manufacturing techniques for metal matrix metallic composites in automotive body panels. The team provided a car that shows a 70% potential reduction in body panel weight over conventional materials using co-extruded magnesium and aluminum composite materials.
Innovative design component ($10,000): Manta- focused on four main weight reduction concepts, including a windowless cabin, a rear-facing detachable seat, a 1-2-1 tire layout, and lastly, a multi-material body structure design.
Innovative safety component ($10,000) : Modular Sprung Pod Car met the vehicle safety goal by employing a passenger pod design and redirecting horizontal impact forces into vertical motion. Impact energy was further dissipated by using the suspension and chassis.
Community favorite ($10,000) : Apalis targeted its chassis, or base frame, with a safety cell and composite sub frames, electric powertrain and wheel assemblies, and super capacitors with efficient solar panels for an energy source.
If you want to take a gander at all of the LITECAR Challenge designs, go here.
You may recall that Local Motors helped build one of the world's first fully 3D-printed cars, and in January said that by the end of the year they hope to be producing the vehicles for everyday consumption.
The two-seat car, known as a Strati, was demonstrated at the Detroit Auto Show and is built almost entirely of carbon-reinforced plastic, including the body and chassis, which takes about 44 hours to make. The goal for the next stage of research and development is to speed up the print rate to 24 hours while maintaining quality, the company says.
Check out these other hot stories: | <urn:uuid:78d08cd7-231e-4392-bc03-b22be86209e3> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2913635/internet-of-things/slightly-fast-and-not-furious-lightweight-car-challenge-brings-out-wicked-cool-prototypes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00304-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933093 | 961 | 2.765625 | 3 |
When reports of the Dark Seoul attack on South Korean financial services and media firms emerged in the wake of the attack on March 20, 2013, most of the focus was on the Master Boot Record (MBR) wiping functionality. PCs infected by the attack had all of the data on their hard drives erased. McAfee Labs, however, has discovered that the Dark Seoul attack includes a broad range of technology and tactics beyond the MBR functionality.
The forensic data indicates that Dark Seoul is actually just the latest attack to emerge from a malware development project that has been named Operation Troy. The name Troy comes from repeated citations of the ancient city found in the compile path strings of the malware. The primary suspect group in these attacks is the New Romanic Cyber Army Team, which makes significant use of Roman terms in their code. The McAfee Labs investigation into the Dark Seoul incident uncovered a long-term domestic spying operation, based on the same code base, against South Korean targets.
Software developers (both legitimate and criminal) tend to leave fingerprints and sometimes even footprints in their code. Forensic researchers can use these prints to identify where and when the code was developed. It’s rare that a researcher can trace a product back to individual developers (unless they’re unusually careless).
But frequently these artifacts can be used to determine the original source and development legacy of a new “product.” Sometimes, as in the case of the New Romanic Cyber Army Team or the Poetry Group, the developers insert such fingerprints on purpose to establish “ownership” of a new threat. McAfee Labs uses sophisticated code analysis and forensic techniques to identify the sources of new threats because such analysis frequently sheds light on how to best mitigate an attack or predicts how the threat might evolve in the future.
History of Troy
The history of Operation Troy starts in 2010, with the appearance of the NSTAR Trojan. Since the appearance of NSTAR seven known variants have been identified. (See following diagram.) Despite the rather rapid release cycle, the core functionality of Operation Troy has not evolved much. In fact, the main differences between NSTAR, Chang/Eagle, and HTTP Troy had more to do with programming technique than functionality.
The first real functional improvements appeared in the Concealment Troy release, in early 2013. Concealment Troy changed the control architecture and did a better job of concealing its presence from standard security techniques. The 3RAT client was the first version of Troy to inject itself into Internet Explorer, and Dark Seoul added the disk-wiper functionality that disrupted financial services and media companies in South Korea. Dark Seoul was also the first Troy attack to conduct international espionage; all previous versions were simple domestic cybercrime/cyberespionage weapons.
As interesting as the legacy of Operation Troy is, even more enlightening are the fingerprints and footprints that allow McAfee Labs to trace its legacy. In the “fingerprint” category is what developers term the compile path. This is simply the path through the developer’s computer file directory to the location at which the source code is stored.
An early Troy variant in 2010, related to NSTAR and HTTP Troy via reused components, used this compile path.
A second variant from 2010, compiled May 27, also contained a very similar compile path. We were able to obtain some traffic with the control server.
McAfee Labs has consistently seen the Work directory involved, just as throughout the other post-2010 malware used in this campaign. By analyzing attributes such as compile path, McAfee Labs researchers have been able to establish connections between the Troy variants and document functional and design changes programmed into the variants.
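The compile-path fingerprinting described above can be approximated with a very simple string sweep. The sketch below merely searches a Windows binary for PDB-style compile paths and is an illustration of the general technique, not McAfee's actual tooling; real forensic tools parse the PE debug directory rather than grepping raw bytes.

import re
import sys

# Naive sweep for Visual Studio compile paths (e.g. drive-letter paths ending in ".pdb")
# embedded anywhere in a binary. Matches are restricted to printable ASCII.
PDB_PATH = re.compile(rb'[A-Za-z]:\\[\x20-\x7e]{1,200}?\.pdb', re.IGNORECASE)

def compile_paths(path):
    with open(path, 'rb') as handle:
        data = handle.read()
    # Decode each match leniently; attacker-controlled strings may not be clean ASCII.
    return [m.decode('ascii', errors='replace') for m in PDB_PATH.findall(data)]

if __name__ == '__main__':
    for sample in sys.argv[1:]:
        for found in compile_paths(sample):
            print(f'{sample}: {found}')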
Both the Chang and EagleXP variants are based on the same code that created NSTAR and the later Troy variants. The use of the same code also confirms the attackers have been operating for more than three years against South Korean targets.
In the "footprint" category, McAfee Labs documented the most significant functional change, which occurred in the 2013 release of Concealment Troy. Historically, the Operation Troy control process involved routing operating commands through concealed Internet Relay Chat (IRC) servers. The first three Troy variants were managed through a Korean manufacturing website in which the attackers installed an IRC server.
From the attacker’s perspective there are two problems with this approach. The first is that if the owners of infected servers discover the rogue IRC process, they would remove it and the attacker would lose control of the Troy-infected clients. The second is that the Troy developers actually hardcoded the name of the IRC server into each Troy variant. This means that they had to first find a vulnerable server, install an IRC server, and then recompile the Troy source into a new variant controlled by that specific server. For this reason nearly all Troy variants needed to be controlled by a separate control server.
The Concealment Troy variant was the first to break this dependency on a hardcoded IRC control server. Concealment Troy presumably gets its operating instructions from a more sophisticated (and likely more distributed) botnet that is also under the control of the Troy syndicate.
This investigation into the cyberattacks on March 20, 2013, revealed ongoing covert intelligence-gathering operations. McAfee Labs concludes that the attacks on March 20 were not an isolated event strictly tied to the destruction of systems, but the latest in a series of attacks dating to 2010. These operations remained hidden for years and evaded the technical defenses that the targeted organizations had in place. Much of the malware from a technical standpoint is rather old, with the exception of Concealment Troy, which was released early 2013.
A copy of the full report can be found here. | <urn:uuid:affe6041-ab89-4db0-afa4-a00951831a62> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/07/08/dissecting-operation-troy-cyberespionage-in-south-korea/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00304-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946488 | 1,161 | 2.96875 | 3 |
For patients in underserved areas and difficult-to-reach locations, they may have few ways to gain medical help without drastic steps needing to be taken. However, the ever-growing popularity of telemedicine is helping to prevent these problems, allowing doctors to better serve their audiences in faster, more efficient methods.
Treating patients as far away as Rwanda is just one of the advantages doctors have found from adopting the technology, according to Pain Medicine News. A recent program at the University of Virginia specializing in anesthesiology doesn't just provide telemedicine needs for rural citizens in Virginia and other locations, but it's been said to be the first program helping care improve in Africa.
Not only does the process help patients in far-off locales get help, but it also helps sharpen the improvisation skills of the doctors involved. While Rwandan doctors (the country has only 11 anesthesiologists) gain additional training they may not receive in their standard education, medical residents have opportunities to devise creative remedies for patients' problems without relying on modern medicine.
One example came from a Rwandan woman who had a stab wound in her neck that had punctured her airway. Working together, the Rwandan doctors and American medical students brainstormed methods to save the woman's life. In the end, they determined that the patient needed a tracheostomy despite a lack of materials, and the patient survived.
Benefits to employers as well as doctors
As many benefits as doctors gain from adoption of the technology, employers can gain them too, according to the Dallas Business Journal. Employers no longer have to reimburse doctors for the cost of office visits, nor do they have to wait extended amounts of time for simple treatments. In addition, the service is frequently free, or at least much less expensive than the alternative would be.
Telemedicine gains an additional advantage from its convenience, as it can be even more readily available when people in remote locations, from mining locations to offshore oil rigs, can receive a diagnosis with little trouble. Whereas they would normally need to travel long distances for assistance, they can now receive attention without even leaving their location.
Equipment and healthcare industry piece brought to you by Marlin Equipment Finance, leaders in healthcare equipment financing. Marlin is a nationwide provider of equipment financing solutions supporting equipment suppliers and manufacturers in the security, food services, healthcare, information technology, office technology and telecommunications sectors. | <urn:uuid:22cfad07-afb5-46e2-8d13-28de0d3a0427> | CC-MAIN-2017-04 | http://mediaroom.marlinfinance.com/healthcare-equipment/telemedicine-can-help-patients-from-rwanda-to-rural-america/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00212-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968678 | 497 | 2.640625 | 3 |
The problem: You want to send images into space and you want them to last 5 billion years. The solution: A gold-plated disc.
That's the idea behind The Last Pictures project, which is scheduled to blast off in the next few months.
The project involves attaching a silicon disc encased in gold to the outside of a communications satellite. The disc will include just 100 etched photos, which are meant to be a cultural artifact for aliens to find if mankind is no longer here when they come knocking.
The disc, designed by researchers at MIT and Carleton College, is filled with images chosen by artist Trevor Paglen and a team he put together, who are working with the nonprofit arts organization Creative Time and the media and satellite company EchoStar.
The golden artifact is hitching a ride on the EchoStar XVI satellite, which once in orbit will be fully leased to DISH Direct-to-Home services in the United States. After several technical problems, the satellite is now scheduled to blast off in the November-December timeframe from the Baikonur Cosmodrome, the world's first and largest space launch facility located in Kazakhstan.
Since the conclusion of the U.S. Space Shuttle program, Baikonur has become the sole launch facility for manned missions to the International Space Station.
The gold-encased silicon disc that holds the etched images (Source: Creative Time).
The black and white pictures that Paglen chose were also recently published in a book. They include photos of a meteorite, a dust storm hitting an American Midwest neighborhood, a ship traversing a canal and cherry blossoms.
Looking at the photographs, a theme is somewhat inscrutable -- an observation that Paglen himself admits. For example, a photo of a group of people taken by a predator drone joins several photos of 17,000-year-old cave paintings.
One of the more impalpable photos is of "financial crop circles," or trading activity patterns created by automated high-speed computer algorithms. One photo shows Native American petroglyphs from Canyon de Chelly in Arizona.
The photo of high-speed trading algorithm patterns also known as "financial crop circles" (Source: Trevor Paglen)
"What it depicts is Spanish raiders arriving in the Navajo territory. So really, it's an image of an alien invasion," he said. "So that was a powerful image for us in thinking about the history of empire and war and how the West and in fact much of the world [and civilization was developed]."
While at first blush the photos can seem to have been chosen at random, Paglen carefully picked the zeitgeist over a period of five years. The process included a group of artists from Creative Time, which commissioned the project.
The six artists spent eight months collecting images, everything from medieval alchemical texts to the history of cameras and messages in bottles to cybernetics (the study of mechanical, physical, biological, and cognitive systems).
Paglen also performed extensive interviews with other artists, philosophers, and scientists, who included biologists, physicists, and astronomers. To each, he posed a single question: What photos would you choose to send into outer space?
The scientists from MIT also offered the artist a fresh philosophical perspective with regard to understanding what the images may mean to aliens a billion years from now.
"Cave paintings became the obvious reference," Paglen said. "They tell us about the prehistoric environments of the ecology and perhaps even of the lifestyles of long ago that we know very little about."
Petroglyphs photo (Source: Creative Time)
19,000 miles above Earth
Once launched, EchoStar XVI will function for about 15 years and then go dead, orbiting 19,000 miles above Earth for five billion years. Why five billion? That's when scientists estimate the Sun will run out of hydrogen, become a red giant and expand, consuming the planets around it.
Until then, if humans do follow the dinosaurs and become extinct, there will be plenty of evidence of our existence left behind. Over the past 50 years, more than 800 spacecraft have been launched into geosynchronous orbit.
After a relatively short life, hundreds have become nothing more than floating junk. The debris has formed a ring of technology -- artifacts in their own right -- around Earth.
"These satellites are destined to become one of the longest-lasting artifacts of human civilization, quietly floating through space long after every trace of humanity has disappeared from the planet," Creative Time states in its online introduction to the project. | <urn:uuid:c66f90e7-8dd5-4014-a8ef-2c8ffb0d0d4c> | CC-MAIN-2017-04 | http://www.itworld.com/article/2721929/hardware/project-to-blast-gold-plated-artifact-disc-into-orbit.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954592 | 948 | 2.796875 | 3 |
The Hoover Institution is an American public policy think tank and research institution located at Stanford University in California. Its official name is the Hoover Institution on War, Revolution, and Peace. It began as a library founded in 1919 by Republican Herbert Hoover, before he became President of the United States. The library, known as the Hoover Institution Library and Archives, houses multiple archives related to Hoover, World War I, World War II, and other world history. The Hoover Institution is a unit of Stanford University but has its own board of overseers. It is located on the campus. Its mission statement outlines its basic tenets: representative government, private enterprise, peace, personal freedom, and the safeguards of the American system. The Hoover Institution is an influential voice in American public policy. The Institution has long been a place of scholarship for individuals who previously held high-profile positions in government, such as George Shultz, Condoleezza Rice, Michael Boskin, Edward Lazear, John B. Taylor, John Cogan, Edwin Meese, and Amy Zegart—all Hoover Institution fellows. In 2007, retired U.S. Army General John P. Abizaid, former commander of the U.S. Central Command, was named the Institution's first annual Annenberg Distinguished Visiting Fellow. The Institution is housed in three buildings on the Stanford campus. The most prominent facility is the landmark Hoover Tower, which is a popular visitor attraction. The tower features an observation deck on the top level that provides visitors with a panoramic view of the Stanford campus and surrounding area. Wikipedia.
International Journal of Risk and Safety in Medicine | Year: 2013
Background And Objective: The impetus for this review was recent increased warnings of cardiovascular toxicity, fractures and bladder cancer associated with glitazone use. METHODS: A drug utilization review was performed regarding the use of Actos (pioglitazone) and Avandia (rosiglitazone) at Cooper Green Mercy Hospital (CGMH), an inner city safety net hospital in Birmingham, Alabama. Pharmacy records were reviewed hospital-wide to determine usage patterns of all anti-diabetic medications. Medline and the FDA websites were searched for articles on safety and efficacy of pioglitazone and rosiglitazone. Considerations were relative utilization profile, comparative efficacy, indications, relative cost, and safety profile of the two available medications in this drug class. RESULTS: On the basis of all of these factors, a hospital-wide switch of all rosiglitazone prescriptions to all pioglitazone was implemented, which was estimated to result in savings of 83,000 for the first year. No episodes of worsening of control of diabetes were anticipated, nor were episodes of decreased efficacy or adverse effects as a result of automatically switching patients from rosiglitazone to pioglitazone at the time of prescription filling. CONCLUSIONS: The conclusions can be summarized in a number of key points. • Clinicians should follow the American Diabetes Association guidelines for treatment. • The basis for diabetic control is weight loss, diet and exercise. • Initial medication management for type II Diabetes Mellitus includes metformin and insulin. • There are no circumstances in which use of glitazone medications is preferable to other medication groups, and there are no clinical circumstances in which use of glitazone medications is absolutely necessary, as opposed to other classes of diabetic medication. • There are significant contraindications, warnings and precautions to use of glitazones, which must be taken into consideration before use in every individual patient. • Glitazones in particular should not be used in the following circumstances: congestive heart failure (CHF), concurrent bladder cancer or severe osteoporosis. © 2013 - IOS Press and the authors. Source
De la Croix D., IRES |
Journal of Environmental Economics and Management | Year: 2012
For a given technology, two ways are available to achieve low polluting emissions: reducing production per capita or reducing population size. This paper insists on the tension between the former and the latter. Controlling pollution either through Pigovian taxes or through tradable quotas schemes encourages agents to shift away from production to tax free activities such as procreation and leisure. This natalist bias will deteriorate the environment further, entailing the need to impose ever more stringent pollution rights per person. However, this will in turn gradually impoverish the successive generations: population will tend to increase further and production per capita to decrease as the generations pass. One possible solution consists in capping population too. © 2011 Elsevier Inc. Source
Hoover | Date: 2012-11-15
Pink colored metal alloys have a low gold content. The pink colored metal alloys of the present disclosure display a high level of tarnish resistance during extended use and wear, and have the appearance and properties comparable to 10 karat (or above) gold alloys, which have a significantly higher gold content.
Hoover | Date: 2015-03-10
Hoover | Date: 2011-11-09
A vacuum cleaner having an elongate body portion | <urn:uuid:aa71904e-b98d-457e-b181-5337d879b77b> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/hoover-80138/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00174-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929718 | 1,054 | 2.9375 | 3 |
No Child Left Behind (NCLB), a federal law to improve public education, has been criticized for only measuring student performance at a single point in time, rather than for the duration of a child's academic career. But that problem may be changing. For perhaps the first time in history, technology has the potential to not only enhance student learning, but also create a comprehensive framework for measuring and assessing student performance at the district, state and ultimately national level.
The Brookings Institution, one of NCLB's critics, applauded a 2005 effort by the U.S. Department of Education (ED) to start a pilot program that measures annual growth in student achievement -- what they learn from one year to the next. At the same time, however, the think tank emphasized that this holistic approach is only possible if states have the data resources for this type of measurement model.
As a result, the law's data requirements have spurred state departments of education to implement more sophisticated and complex information systems. Pressure from all government levels to generate public school data has, in turn, heightened expectations of technology offices to provide infrastructure for data management. While some states have responded well to the business and technology side of NCLB compliance, other states are farther behind on the learning curve.
New Era of Accountability
After President Bush signed NCLB into law on Jan. 8, 2002, the acronym became a loaded household name known for its mandates on students, teachers, school districts and state departments of education. The law requires that states make public the names of those schools that aren't making the grade. Districts and states must collect, categorize and analyze attendance data, test scores, teacher qualifications and other school information. Those schools that do not demonstrate adequate yearly progress (AYP) according to various NCLB measures cast a negative shadow on their stakeholders and are at risk of state takeover.
NCLB serves as a measure of school performance and method for improvement. It requires intensive and nuanced data collection and reporting. In the four years since the 670-page law passed, states have been working feverishly to adhere by 2014 to the myriad requirements outlined in the ED's Road Map for State Implementation, focusing particularly on student assessment, disaggregation of data and proficiency.
The pressure to collect and assess student achievement data is high. If states don't implement an annual measurement system, the federal government can remove some of the $66 billion in funds or grant money it spends on K-12 education annually. While this is just a fraction of the more than $450 billion states and local governments spend, it's enough to force states to overhaul data collection and analysis.
Education data, if properly organized, aggregated and presented, can provide federal, state and local entities with a wealth of information for research and evaluation. This scenario is fast becoming the goal for state departments of education. To optimize data management, educators need the help of technology and business experts.
"Data collection and reporting requirements mandated by NCLB, in the past, were not a required component of any IT department's strategy," said Michael Droe, chief technology officer of Hacienda La Puente Unified School District in California. "As a requirement for every district to meet AYP, all subgroups must meet AYP as well. This requires data collection and reporting instruments that would not otherwise be available except through the use of technology."
The volume of information produced from these assessments and the ability to align these results with state assessments and standards to make "just-in-time instructional decisions" would not be possible without technology and rich data management tools, such as data warehouses and longitudinal systems that match each student's achievement records over time from pre-kindergarten through state college or university, Droe said. The ED is encouraging a move toward longitudinal data systems (LDS), which, according to a 2005 Data Quality Campaign (DQC) survey of state data collection issues from the National Center for Educational Accountability (NCEA), do the following:
Determine the value-added of specific schools and programs by following individual students' academic growth;
Identify consistently high-performing schools so that educators and the public can learn from best practices;
Evaluate the impact of teacher preparation and training programs on student achievement; and
Focus school systems on preparing a higher percentage of students to succeed in rigorous high-school courses, college and challenging jobs.
In addition, the DQC recommended that states incorporate these overarching concepts when designing an LDS: student privacy, effective data architecture and warehousing, interoperability, portability, and professional development for users.
"Currently three-fourths of the states have the basic elements needed for a longitudinal data system," said Nancy Smith, deputy director of the DQC. "Most other states are in the process of implementing those features. A few states have the majority of the elements. Education staff and policymakers are beginning to see the power of longitudinal data beyond federal and state reporting and accountability," she said. "Strong efforts are being made to build data systems that inform financial and policy decisions at the state level and in the schools, and most importantly in the classroom."
Other states are trying to increase and enhance automated reporting, create more robust vertical and horizontal interoperability, and implement data warehouses to feed today's sophisticated reporting requirements. But according to the DQC, all states still need to improve how they collect data.
Power of Suggestion
The ED oversees states' progress under the NCLB. CIO Bill Vajda plays a central role in data management within the federal department and indirectly among states, and said he intends to bring tangible and distinct business results under NCLB. "We have to do our job effectively for funding oversight to work," he said. "NCLB is broadening the constituent base and increasing the demand for data and access."
Although states are left to gather and present their own data, the ED is increasingly advising them on how to best design their data management systems to meet NCLB requirements. Ross Santy, deputy assistant secretary of data and information at the Office of Planning, Evaluation and Policy Development, helps school leaders make sense of how to house educational performance data in conjunction with NCLB. "There is no standard guidance on cataloging or presenting because all state systems are so different, in terms of standards and assessments," he explained. "We utilize the information they do show and ensure they are reporting through monitoring visits."
The ED awarded 14 grants to help states launch their data systems, and through the Institute of Education Sciences (IES), the federal government's principal agency conducting research on education, plans to issue 50 grants in all. The department requested a $55 million increase in IES funding for fiscal 2007 -- one of the largest single increases asked for in the presidential budget, Santy said. "We are hoping this will help more states."
Another federal program, the Education Data Exchange Network (EDEN), helps states transfer and share education program data, and is meant to streamline data collection and grant reporting. "We are finalizing a single portal," said Santy. "Right now all state coordinators know where to go. Everything is still primarily flagged and evolving into a reporting tool."
The federal government is using EDEN to provide some information standards to department of education CIOs who are tailoring their own policies, explained Santy, by demonstrating what they are responsible for reporting, and how those larger requirements affect state level data. "We want to get to the point where states can report education data in the most efficient way possible, and then discover other things we can do with that data."
How are states doing? "We get a fair amount of information from the vast majority of states," he said. "Some have done a phenomenal job of mapping all the information and reporting it for the central repository."
All EDEN data, which was initially voluntary for states, date from the 2003-2004 school year. The ED is finalizing regulatory guidance to set into motion a two-year transition for all states to report using EDEN, starting in 2007. If a state can show an inability to provide that data or at the level of quality expected by the ED, Santy's office will work with them over a two-year transition period to make sure they are ready to report.
Leading the Pack, Longitudinally
The key piece of technology in all this is the data warehouse. Florida leads among states with its comprehensive student databases. "We started a little ahead of the curve because Florida has historically been a data-rich state when it comes to education," said Ron Lauver, CIO of the Florida Department of Education.
Also, Florida's top-down policy has long encouraged data collection for school improvement. Gov. Jeb Bush's pre-NCLB education initiatives, such as A++, already emphasized many aims of the federal law, such as annual measures and learning gains.
Florida's "K20 Education Data Warehouse" stores all data on individual students from their pre-kindergarten year through 12th grade, and links it with those students' data from community colleges and state universities. Any data in this central repository can then be used to generate administrative and funding reports, queries and research extracts, and "provide links to what kids have done and are doing," said Jay Pfeiffer, assistant deputy commissioner in Florida's Division of Accountability, Research and Measurement.
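The core trick of such a longitudinal system, following the same student identifier across years, can be sketched in a few lines. The column names, sample values and the pandas approach below are illustrative assumptions rather than a description of Florida's actual warehouse schema.

import pandas as pd

# Toy achievement table: one row per student per school year (columns are invented).
scores = pd.DataFrame({
    "student_id": ["S001", "S001", "S002", "S002", "S003"],
    "school_year": [2004, 2005, 2004, 2005, 2005],
    "school": ["Lincoln ES", "Lincoln ES", "Lincoln ES", "Jefferson MS", "Lincoln ES"],
    "math_scale_score": [310, 335, 298, 301, 322],
})

# Year-over-year growth per student: the "annual measure" that NCLB's critics called for.
scores = scores.sort_values(["student_id", "school_year"])
scores["growth"] = scores.groupby("student_id")["math_scale_score"].diff()

# School-level view: average growth of the students each school actually taught.
print(scores.groupby("school")["growth"].mean())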
In addition to its main data repositories containing standardized and state test scores, Florida generates adjunct data to support education. For example, another warehouse stores data from other state agencies that impact children, such as those that handle foster care, vocational rehabilitation and juvenile justice.
Florida has also built new technology tools to enhance education. A system called Choices links education data to students who later enter the labor market, and another, SunshineConnections.org, is used by students to map out their curricular goals. In partnership with Microsoft, SunshineConnections.org provides state and local data to teachers and administrators statewide. Florida is also designing student performance profiles that will correspond to the state's geographic regions.
Florida is not alone in moving in the direction of longitudinal systems. In 2006, the Utah Legislature passed its Education Information Technology Systems law, requiring coordination between public and higher-education IT systems, including the use of a unique student identifier. Missouri is moving in that direction as well.
Missouri's core data collection system is Web-based with the capacity for users to insert edits to the material used by each of the 524 school districts. Six times a year, districts submit data to a state-run relational database that generates payments and determines school district compliance with statutes and regulations.
Another success story can be found in Georgia, which is running a data warehouse that tracks grades and test results for each of the state's 1.3 million students, from year to year and from school to school.
Is It Working?
Despite some successes, many states struggle with the technology equation. The problem lies primarily in the area of collaboration, or lack thereof, according to Tom Ogle, director of School Core Data at the Missouri Department of Elementary and Secondary Education. "[NCLB compliance] is a collaborative venture between local school officials, districts and the state," he said. "If there is a link missing in the chain, the system won't work."
Even when collaboration is present, Ogle often hears the word "overwhelming" from employees. "There is a lot to do in a short period of time in terms of collecting and evaluating data. Humans can only do so much, and we're definitely putting stress on existing systems."
Santy has observed a leadership and communication challenge among states and within the federal department. "On the state level, it's knowing who -- i.e., the state IT offices -- needs to be reporting to the federal government on NCLB data management. We work with a fair mix of technical and data file builders, and some program people in the accountability offices. In our own office, we try to connect folks who work with the data to the programs. We have the same issues and busting down of silos here just as states are struggling with the same thing."
For some states, NCLB may have been a shot in the arm to coordinate data centers. On the other hand, Ogle, who has been working with the Missouri Department of Education for two decades, has not seen an altogether drastic change in data management due to NCLB -- with the exception of school-level data collection. "We have long had leadership in the state department and districts that emphasized the importance of having quality data collection," he said. "There's a long history, and we have been collecting individual records from teachers for 18 years. However, with the EDEN system and NCLB, the stars were aligned to change the culture of technology. We bought into moving past individual data collection to a centralized system early on, just as we moved from paper to computers 18 years ago."
State education and technology officials are also very active in sharing information and studying ways to improve data collection and presentation. Pfeiffer speaks nationwide on the topic, and Florida has hosted three national conferences in which its education department participated. "Like many things in information technology," said Florida's Lauver, "it would be a shame to reinvent the wheel."
Moving Students Forward
Looking ahead a few years, Ogle and others expect most education data collection to focus on individual students and fit into a longitudinal pattern. "This makes sense now to be linking these data," he said, "because Missouri [and other states] are offering more tests to students -- some twice a year."
The increase in testing frequency and specialization in topics are direct results of NCLB, and provide a more comprehensive snapshot of student performance at each grade level. They also produce more data.
Most states appear to be moving toward, or at least considering, longitudinal data system (LDS) frameworks that allow for almost limitless data compilations.
Though often maligned by the media and school districts, NCLB has mandated data collection and analysis as a way to best educate each child. "This focus on data is meant to help the individual student," Ogle said, "which is the commitment of the NCLB program." | <urn:uuid:719d8146-e0ee-4e65-935a-fab92feaa2b4> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/pcio/Will-IT-Get-a-Passing-Grade.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00478-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956153 | 2,920 | 3.15625 | 3 |
Here is a collection of highlights from this week’s news stream as reported by HPCwire.
Supramap tracks infectious diseases, predicts outbreaks
This week researchers from the Ohio Supercomputer Center and the American Museum of Natural History revealed an online tool, called Supramap, that tracks infectious diseases, such as the avian influenza virus (H5N1) and the H1N1 virus.
Supramap uses parallel programming on high performance computing systems at the Ohio Supercomputer Center (OSC) to track the diseases across time, geography, and host animals, including humans. The genetic data is gathered and displayed in online geographic information systems, such as Google Earth. The application has been described as a “weather map for diseases” and will be a boon to public health professionals in helping predict disease outbreaks.
The research was published in the April 2010 online issue of Cladistics. Daniel A. Janies, an associate professor at Ohio State University, is the first author of the paper.
Supramap does more than put points on a map — it is tracking a pathogen’s evolution. We package the tools in an easy-to-use web-based application so that you don’t need a Ph.D. in evolutionary biology and computer science to understand the trajectory and transmission of a disease.
We have come to the point in computational genomics where we have generated a lot of raw data. Lately, we are seeing more and more applications being developed to help us analyze that data in practical, meaningful ways, putting it to work to benefit humanity. Supramap is just such a tool.
Genetic applications move forward
Continuing the progress on the genetic front, there were several other stories of interest announced this week.
Researchers at the Virginia Bioinformatics Institute have located small genes that were missing from the scientific knowledge base. Using grid computing, and the mpiBLAST computational tool, researchers uncovered the information in only 12 hours, instead of the 90 years that the same task would have taken using personal computing resources.
This is the first large-scale attempt to find missing information in the GenBank DNA sequence repository. The study was reported in the journal BMC Bioinformatics.
An announcement from Ohio State University calls attention to a computing process that helps identify molecular structures that have the most promise to serve as the active component in new medications. Finding these molecules is not easy, but researchers are using computer simulations that connect a part of the molecule, called fragments, with a part of the diseased protein, called “hot spots.”
In order to get the fragment candidates, massive computing power is called into play to narrow down the huge pool that is gathered from thousands of existing drugs already on the market. Two computers are tasked with the simulations: one in Ohio State’s College of Pharmacy and the other in the Ohio Supercomputer Center.
Canadian researchers are using the resources of the High Performance Computing Virtual Laboratory (HPCVL) to scan the entire Homo sapiens proteome. This will be the first time such a task has been completed. From the announcement:
The proteome is the complete set of proteins produced by a species. Protein-protein interactions are an integral part of many biological processes within the body’s cells, including signaling processes to respond to outside stimuli such as the level of oxygen in the environment, transporting nutrients, and responding to threats from viruses such as H1N1 and HIV. Knowing the interactions in cells — at the molecular level — is essential for understanding cell behavior. It is also essential for understanding the impact of pathogens on cells.
The researchers expect to have complete results in a few months.
The last story of the bunch concerns the promising frontier of personalized medicine. A paper was published in Nature this week by over 200 members of the International Cancer Genome Consortium (ICGC), detailing a huge leap forward in cancer research. The group has been cataloging the genetic changes of the 50 most common cancers, with 500 genomes from each cancer type, and will now make that information publicly available online.
This research has the ability to revolutionize cancer treatment by using the genetic makeup of a patient’s specific cancer and pairing that with a treatment that has been proven to work on a similar cancer. I say similar because no two cancers are the same, but they share similarities. This will help patients get the most effective treatment first, instead of having to go through many trials before reaching an effective medicine or procedure. The information will also assist with enacting more effective and more timely clinical trials.
Said Professor Andrew Biankin, member of the Nature paper’s writing team, and part of the Australian project arm of the ICGC:
“The consortium’s Internet-based databanks will help us treat specific cancers with specific treatments. Not only that, the information will help us understand why some treatments work and others do not, and then design better drugs to target faulty elements or mechanisms.”
The Internet databanks will have different tiers of access to regulate who can see what. The general public will be able to view summaries of the data, whereas medical and research professionals will be able to request detailed accounts.
Second annual DICE Awards program shines light on Tesla GPU and Spectra Logic
The winners of DICE’s second annual Data Intensive Impact Awards were announced today. DICE stands for Data Intensive Computing Environment, which is the moniker for Avetec’s High Performance Computing Research Division.
The Future Technology Category award went to NVIDIA’s Tesla 20-series GPU, which the DICE team considers a critical new technology for the HPC space:
When compared to the latest quad-core CPUs, Tesla 20-series GPU computing processors deliver equivalent performance at 1/20 the power consumption and 1/10 the cost. More importantly, Tesla GPUs enable high performance computing users to scale their computing resources to get significant boosts in performance while staying within tight power and monetary budgets.
And the Product Category award went to Spectra Logic for its T-Finity enterprise tape library, which scales to more than 45 petabytes in a single library and to more than 180 petabytes in a single, unified library complex.
Said Al Stutz, Avetec CIO and DICE team leader:
“Our team selected the T-Finity for its versatility in data intensive environments. The product helps with the demanding archiving and backup environments experienced in the enterprise IT, federal, high performance computing (HPC) and media and entertainment space.”
Last year’s awards went to BlueArc and the now-defunct Woven Systems.
Avetec is a not-for-profit research company that aims to advance the competitiveness of American companies. DICE conducts technology testing and validation for new and emerging HPC data management solutions, and the Data Intensive Impact Awards highlight products and technologies that have enabled progress in HPC data management in locality, movement, manipulation and integrity, as well as power and cooling efficiencies.
Here is a collection of highlights from this week’s news stream as reported by HPCwire.
Memristor Technology Holds Intriguing Promise
HP Labs this week announced advances in memristor technology that could fundamentally change the design of computing. Memristors could be the key that enables computers to handle the ongoing information explosion, where data from a slew of devices, both explicit and embedded, threatens to overwhelm our current computing limits.
So what is a memristor? According to the HP Labs announcement, it’s “a resistor with memory that represents the fourth basic circuit element.”
If you’re familiar with electronics, you will recognize the language. The trinity of fundamental components encompasses the resistor, the capacitor, and the inductor. In 1971, a University of California, Berkeley engineer, Leon Chua, predicted that there should be a fourth element: a memory resistor, or memristor. When the memristor was first theorized 40 years ago, however, no one could build a practical device, because the effect only becomes significant at very small scales.
It was not until two years ago, in 2008, that researchers from HP Labs rediscovered Chua’s earlier work. With feature sizes shrinking to the nanoscale, where the memristive effect is pronounced, even more capabilities of the memristor were realized.
What makes the memristor different from other circuit elements is that when the voltage is turned off, it remembers its most recent resistance, and it retains this memory indefinitely until the voltage is turned on again. It would take many more paragraphs for a full explanation, but if you are interested, I suggest this easy-to-understand primer at the IEEE Spectrum Web site.
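For readers who want to see that memory effect rather than just read about it, here is a rough numerical sketch of the linear ion-drift model that HP's researchers published. The parameter values are illustrative guesses, not figures for any real HP device, and the integration is deliberately crude.

```python
# Minimal sketch of the linear ion-drift memristor model. Parameters are illustrative only.
import math

R_ON, R_OFF = 100.0, 16000.0   # resistance when fully doped / undoped (ohms)
D = 10e-9                      # device thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 / (V*s))

def simulate(voltage, t_end=2.0, dt=1e-5, x0=0.5):
    """Integrate the normalized state x = w/D under an applied voltage v(t)."""
    x, t, trace = x0, 0.0, []
    while t < t_end:
        v = voltage(t)
        m = R_ON * x + R_OFF * (1.0 - x)       # instantaneous memristance
        i = v / m
        x += MU_V * R_ON / D**2 * i * dt       # dx/dt = mu_v * R_on / D^2 * i
        x = min(max(x, 0.0), 1.0)              # dopant front stays inside the device
        trace.append((t, v, m))
        t += dt
    return trace

# One sine cycle of drive, then zero volts for the second half of the run.
drive = lambda t: math.sin(2 * math.pi * t) if t < 1.0 else 0.0
trace = simulate(drive)
print("resistance at start: %.0f ohms" % trace[0][2])
print("resistance at end:   %.0f ohms (held with no voltage applied)" % trace[-1][2])
```

The point of the final print is the memory: once the drive voltage is removed, the current is zero, the state variable stops moving, and the device simply holds whatever resistance it last had.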
One of the advantages of memristors is that they require less energy to operate, and are already being considered as a replacement for transistor-based flash memory.
Researchers predict that in five years, such chips, when stacked together, could be used to create handheld devices that offer ten times greater embedded memory than exists today, and could also be used to power supercomputers for digital rendering and genomic research applications at far greater speeds than Moore’s Law suggests is possible.
Memristors work more like human brains. In fact, Leon Chua explained that our “brains are made of memristors,” referring to the function of biological synapses.
And according to R. Stanley Williams, senior fellow and director of Information and Quantum Systems Lab at HP:
Memristive devices could change the standard paradigm of computing by enabling calculations to be performed in the chips where data is stored rather than in a specialized central processing unit. Thus, we anticipate the ability to make more compact and power-efficient computing systems well into the future, even after it is no longer possible to make transistors smaller via the traditional Moore’s Law approach.
The promises this technology offers sound almost too good to be true. If even half of what is promised holds true, then this will go down in history as one of the great breakthroughs in computer technology.
48-Core Intel Processor for Educational Purposes Only
Intel announced plans to ship “limited quantities” of computers with an experimental 48-core processor to researchers by the middle of the year. The 48-core processors will be shipped mainly to academic institutions, an Intel rep said during an event in New York on Wednesday. And while the chip will probably not become commercially available, certain features may make their way into future products.
The 48-core chip operates at about the clock speed of Atom-based chips, said Christopher Anderson, an engineer with Intel Labs. Intel’s latest Atom chips are power-efficient, are targeted at netbooks and small desktops, and run at clock speeds between 1.66GHz and 1.83GHz. The 48-core processor, built on a mesh architecture, could deliver a massive performance boost when all the cores communicate with each other, Anderson said.
The new processor reportedly has a power draw between 25-125 watts, and cores can be powered off to save energy or reduce clock speed. The chip touts better on-die power management capabilities than current multicore chips and comes with power-management software to help lower energy consumption depending on performance requirements.
During the Wednesday event, researchers demonstrated the processor’s advanced power management features. While running a financial application, sets of cores were deactivated and the power consumption went from 74 watts to 25 watts in under a second.
The new 48-core chip is based on the 80-core Teraflop prototype created in 2007 by Intel’s Tera-scale Computing Research Program. That chip was also a precursor to the 48-core “Single-chip Cloud Computer” announced in December 2009, itself a product of the Tera-scale Computing Research Program.
Those processors, however, were only prototypes and were never released into the wild. The 48-core chips announced this week, by contrast, are almost ready to leave the research nest, and will be released, if not into the fierce corporate jungle, at least into the relatively tamer academic habitat.
When you delete a file in Windows it is usually not permanently deleted. Instead, Windows moves the file to a special location called the Recycle Bin. First implemented in Windows 95, the Recycle Bin is a special directory where deleted files are stored in the event that you need to recover them. Sometimes the Recycle Bin is referred to as the trash, trashcan, or garbage. As a computer user, use of the Recycle Bin system is an extremely common task that is important to know about in order to effectively manage files. The purpose of this guide is to explain how to use the Recycle Bin to review, restore, and permanently erase your files. Additionally, this tutorial will cover some special settings that the Recycle Bin has.
The Recycle Bin can be accessed in a couple of different ways. The most straightforward way of accessing the Recycle bin is to click on the Recycle Bin icon on your desktop, which looks like the following image.
The Windows 7 Recycle Bin icon
You can also access the Recycle Bin using Windows Explorer by navigating directly to the folder associated with the Recycle Bin. The folder name of the Recycle Bin, though, is different depending on the version of Windows that you are running. For example, on Windows XP the Recycle Bin is found at C:\RECYCLER, while on Windows Vista and Windows 7 it is found at C:\$Recycle.Bin.
It is important to note that there are some times when files are not placed in the Recycle Bin when you delete them. This occurs in three different situations. First, only files deleted from fixed disks are sent to the Recycle Bin. Files deleted from removable media, such as memory cards, USB/jump/flash drives, external hard drives connected via USB, and floppy disks, are not sent to the Recycle Bin, but are instead permanently deleted. Also, files deleted from within the Windows command prompt are not sent to the Recycle Bin and are instead deleted immediately.
Additionally, the Recycle Bin has a maximum amount of data that it will hold. Once that space is filled, the oldest files will automatically be deleted to make room for new files as they are moved to the Recycle Bin. This maximum size can be customized in the Recycle Bin's properties, which is covered later in this tutorial.
If you wish to retrieve a file from the Recycle Bin you may do so in two different ways. The first method, is to use the restore function built into the Recycle Bin. Select the files you wish to restore and then either click the Restore the selected items button on the top bar of the Recycle Bin window, or right click and select Restore. Alternatively, if you wish to restore every item currently in the Recycle Bin you can click on the Restore all items button at the top of the Recycle Bin window. Note that on Windows XP, these options are located on the left menu bar rather than on the top. It is important to note that when you use the Restore options in the Recycle Bin, the files will be restored to their original locations. For example, if you delete a file from the Desktop and then restore it, it will return to the Desktop.
Using the Recycle Bin's restore function
The second method is to simply open the Recycle Bin, select the files you wish to retrieve, and drag them into another folder on your computer. Please note that if you use this method, you can restore the file to any location you want rather than just the previous location.
It is important to remember that even though these files are deleted, they are still accessible and taking up space on your computer's hard drive. It is possible to permanently delete these files using two methods depending on whether you wish to delete specific files or every file currently in the Recycle Bin. Please note, that on Windows XP, the following options are located on the left menu bar rather than at the top of the window.
Deleting Individual Files
To delete specific files, select the files you wish to delete and then right-click and choose the Delete option.
Emptying the Recycle Bin
To delete every file currently in the Recycle Bin, simply click the Empty the Recycle Bin button at the top of the Recycle Bin window. You can also empty the Recycle Bin by right-clicking on the Recycle Bin icon on your desktop and selecting Empty Recycle Bin.
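For administrators who manage many machines, the same operation can be scripted. The sketch below is one possible approach on Windows, calling SHEmptyRecycleBinW from shell32 (the documented Win32 function for emptying the bin) through Python's ctypes. Treat it as a starting point rather than a supported tool, and remember that it permanently deletes the bin's contents.

```python
# Windows-only sketch: empty the Recycle Bin through the shell32 API.
# This is destructive -- it behaves like clicking "Empty the Recycle Bin".
import ctypes

SHERB_NOCONFIRMATION = 0x00000001   # skip the confirmation dialog
SHERB_NOPROGRESSUI   = 0x00000002   # no progress window
SHERB_NOSOUND        = 0x00000004   # no completion sound

def empty_recycle_bin(drive=None, quiet=True):
    """Empty the bin for one drive (e.g. 'C:\\') or for all drives if None."""
    flags = (SHERB_NOCONFIRMATION | SHERB_NOPROGRESSUI | SHERB_NOSOUND) if quiet else 0
    result = ctypes.windll.shell32.SHEmptyRecycleBinW(None, drive, flags)
    return result == 0   # 0 (S_OK) means the call succeeded

if __name__ == "__main__":
    print("Emptied:", empty_recycle_bin())
```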
It is possible to configure the Recycle Bin according to your personal tastes and needs. These options can be accessed by clicking the Organize tab at the top of the Recycle Bin window, and then selecting Properties. When the Properties window opens, please select the tab marked General.
Recycle Bin Options
In this screen you will see a list of hard drive letters on your computer and how much total space each of them contains. You will then see a setting where you can specify the maximum size that the Recycle Bin will use for each of these drive letters. This setting is different for each drive, so you can select each drive and modify their individual settings as needed. This Maximum size setting is set to 5%-10% of the total size of the hard drive by default. It is noteworthy that on versions of Windows prior to Windows Vista the total size of the Recycle Bin may not exceed 3990 MB. Alternatively, you may turn off the Recycle Bin by selecting the radio button titled Don't move files to the Recycle Bin. Remove files immediately when deleted. It is not recommended to select this option as it will make recovering deleted files impossible without the use of special tools.
Normally, when you move a file to the Recycle Bin via deletion, a confirmation dialog will appear to make sure that you wish to do so.
Delete Confirmation Dialog
Unchecking the Display delete confirmation dialog option will cause this dialog to no longer appear. It is recommended that you do not uncheck this option in order to protect yourself against the accidental deletion of files.
Every Windows user should be familiar with the Recycle Bin. Deleting files is a common occurrence and at times you may need to recover a file you have deleted. Properly configured, the Recycle Bin facilitates the easy recovery of recently deleted files from the hard drives of your computer. Making effective use of the Recycle Bin can potentially save you a great deal of time and money depending on the value of the files on your computer system.
A cookie (also known as an HTTP cookie, web cookie, or browser cookie) is a small piece of data sent from a website and stored in a user's web browser while the user is browsing that website. Every time the user loads the website, the browser sends the cookie back to the server to notify the website of the user's previous activity. Cookies were designed to be a reliable mechanism for websites to remember useful information (such as items in a shopping cart) or to record the user's browsing activity (including clicking particular buttons, logging in or recording which pages were visited by the user as far back as months or years ago).
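As a rough illustration of that exchange (it is not part of genband.com's implementation, and the cookie name and value are made up), the sketch below uses Python's standard http.cookies module to build the Set-Cookie header a server sends and to read back the Cookie header a browser returns on later requests.

```python
# Illustration only: the Set-Cookie / Cookie round trip that lets a site
# recognize a returning visitor. Names and values are invented.
from http.cookies import SimpleCookie

# 1. The server decides to remember the visitor and emits a Set-Cookie header.
outgoing = SimpleCookie()
outgoing["visitor_id"] = "abc123"
outgoing["visitor_id"]["path"] = "/"
outgoing["visitor_id"]["max-age"] = 30 * 24 * 3600   # keep for roughly 30 days
print(outgoing.output())   # e.g. Set-Cookie: visitor_id=abc123; Max-Age=2592000; Path=/

# 2. On every later request the browser sends the stored value back in a
#    Cookie header, which is what lets the site recognize the same visitor.
incoming = SimpleCookie("visitor_id=abc123")
print(incoming["visitor_id"].value)   # -> abc123
```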
A visit to a page on genband.com may generate the following types of cookie:
Anonymous analytics cookies
Every time a user visits the website, software provided by another organisation generates an 'anonymous analytics cookie'. These cookies can tell us whether or not you have visited the site before. Your browser will tell us if you have these cookies and, if you don't, we generate new ones. This allows us to track how many individual users we have and how often they visit the site. Unless you are signed in to genband.com, we cannot use these cookies to identify individuals. We use them to gather statistics: for example, the number of visits to a page. If you are logged in, we will also know the details you gave to us for this, such as your username and email address. (For example, Google Analytics)
When you register with genband.com, we generate cookies that let us know whether you are signed in or not. Our servers use these cookies to work out which account you are signed in with and whether you are allowed access to a particular service.
Other third party cookies
On some pages of our website, other organisations may also set their own anonymous cookies. They do this to track the success of their application, or to customise the application for you. Because of how cookies work, our website cannot access these cookies, nor can the other organisation access the data in cookies we use on our website. For example, when you share an article using a social media sharing button (for example, Facebook) on genband.com, the social network that has created the button will record that you have done this.
How do I turn cookies off?
It is usually possible to stop your browser accepting cookies, or to stop it accepting cookies from a particular website. All modern browsers allow you to change your cookie settings. You can usually find these settings in the 'options' or 'preferences' menu of your browser. To understand these settings, the following links may be helpful, or you can use the 'Help' option in your browser for more details. | <urn:uuid:a8fccdf2-b6e8-49cb-af82-25682d2d3b12> | CC-MAIN-2017-04 | https://www.genband.com/cookie-policy | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00313-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933004 | 558 | 3.234375 | 3 |
Children learn by playing. Smart people have known this for eons. Plato suggested that in order to teach, we should “Not keep children to their studies by compulsion but by play.” So if you want to teach a child to grow up and become a coder, the best way – according to some mighty philosophers – would be to play a game that teaches the concept. Later, when their minds are ready to sit down and work, they can learn the details. According to Ralph Waldo Emerson, “The child amidst his baubles is learning the action of light, motion, gravity, muscular force.” That is the sneaky plan behind a new game, Robot Turtles, from ThinkFun.
Robot Turtles is a board game. No computers. No code. No screen. Just a playing board, pieces, and cards. But by playing it, with an adult, children – as young as preschoolers and up to about age eight -- learn basic principles of coding. From commands to subroutines to the idea that the computer will do only exactly as you tell it.
In fact, that last is the role the parent plays. “This game gives the adult a very specific role,” explains Bill Ritchie, President of ThinkFun. “That of the computer. And it’s very important that the parent not cheat. Because that is the concept you are teaching through play: The computer will do exactly as the child tells it.” If you have ever played a board game with a child, you know that not cheating is not as easy as it sounds. Children want to bend the rules, win when they didn’t, or give you points you didn’t earn. They are charming about it. And they are often persistent. It’s very tempting to let them have their way. It is only a game, after all, right? With this game, though, not cheating is integral to the game and you have to take your job …er play… seriously.
The game started as a Kickstarter campaign and, after raising $630,000, completely sold out – over 25,000 copies -- there. But ThinkFun bought it up and is bringing it out, updated and more affordable ($25), this summer. If you preorder it at ThinkFun you get a free special edition expansion pack.
For Bill Ritchie this game, of all the games ThinkFun has brought to the market, holds a special meaning. Ritchie’s brother was Dennis Ritchie, the creator of the C programming language and co-creator of the UNIX operating system. Bill is bringing the game to market in honor of his brother. “Dennis was considered by many to be the greatest coder of all time,” says Bill. “Robot Turtles would have been his favorite ThinkFun game yet!” | <urn:uuid:a3d2eaa5-0a4c-4cbf-a067-4c396cafa6ea> | CC-MAIN-2017-04 | http://www.itworld.com/article/2700530/careers/teach-a-child-to-code.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00433-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.978854 | 594 | 3.0625 | 3 |
Engagement with customers through social media has become an essential element of most integrated marketing campaigns. Marketing organizations are keenly focused on visualizations of social data and reaching influencers who impact the preferences of others in their networks. As part of brand management, marketers must hear and be heard on multiple social media platforms.
Risk managers should join marketers in keeping their ears to the ground (or perhaps we should say ears to the cloud) to hear what’s being said about their companies on Facebook, Twitter, LinkedIn, and other social networking sites. Monitoring and utilizing social media should be key components of a company’s overall risk-management strategy. The following are specific examples of how risk managers can leverage social media data to gain insight that will help reduce their total cost of risk.
Dynamic Risk Management. The speed with which an incident is identified and addressed can have a significant impact on loss severity. Monitoring social media gives risk managers information in real time so that they can respond and potentially limit the effects and scope of the incident. In 2009, a group of researchers from the University of Iowa collected and stored public tweets related to the H1N1 influenza. Among their findings was that Twitter traffic could be used “to estimate disease activity in real time, i.e., 1-2 weeks faster than current practice allows.” In a 2012 paper, two researchers from the University of Tokyo examined how Twitter, as a real-time means of communication, could be used for earthquake detection. The authors regarded each Twitter user as a sensor and each tweet as sensory information. Given the importance of sensors to the Internet of Things, a key enabler for dynamic risk management, the authors’ vision of Twitter users as “social sensors” is insightful.
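As a simplified illustration of that "social sensor" idea (not any vendor's product; the messages, keyword, and thresholds are invented), the sketch below counts keyword mentions per minute and flags a minute that jumps far above the recent baseline.

```python
# Toy "social sensor": alert when mentions of a keyword spike above baseline.
from collections import deque
from statistics import mean

class KeywordSpikeDetector:
    def __init__(self, keyword, history_minutes=60):
        self.keyword = keyword.lower()
        self.history = deque(maxlen=history_minutes)   # mention counts, one per minute

    def observe_minute(self, messages):
        count = sum(self.keyword in m.lower() for m in messages)
        baseline = mean(self.history) if self.history else 0.0
        # Alert only once some history exists and the count dwarfs the baseline.
        alert = len(self.history) >= 10 and count > max(5, 3 * baseline)
        self.history.append(count)
        return count, alert

detector = KeywordSpikeDetector("earthquake")
quiet_minute = ["traffic is awful today", "lunch was great"]
burst_minute = ["earthquake!! the building is shaking", "did anyone feel that earthquake?"] * 40
for minute in [quiet_minute] * 15 + [burst_minute]:
    count, alert = detector.observe_minute(minute)
print("mentions in the last minute:", count, "alert:", alert)
```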
Identifying Location-Based Risk. When companies identify where incidents are most likely to occur, they can focus resources and allocate costs appropriately. Monitoring social media can inform companies of safety issues or the potential for incidents in certain global locations. Law enforcement agencies are using social media for crime mapping and identifying geographic locations that are at highest risk. Predictive policing is now being used by a number of police departments, including the Los Angeles Police Department. According to a 2013 social media survey by the International Association of Chiefs of Police, 95.9 percent of agencies surveyed used social media in some capacity, including 66.1 percent that used it for intelligence and 45.7 percent that used it for listening/monitoring.
Identifying and Mitigating Reputational Risk. What’s being said about a company on social media, whether by customers, employees, or competitors, can erode the company’s reputation and taint its brand. A notable example is Domino’s Pizza, which saw its reputation severely damaged in 2009 after an unsavory online video posted by two Domino’s employees went viral. Many companies learned from this high-profile incident and instituted social media strategies. Monitoring social media can alert companies to reputational hazards so that they can respond quickly, before the situation spirals out of control. That response should include social media channels in addition to other methods so that the online community recognizes that the company is taking action to remediate the problem.
Determining Risk Exposure. Companies need to know whether one person or hundreds are blogging or tweeting about a product defect. It’s also important to know whether a given blogger is an influencer (influencers are people who others follow or whose opinion influences others). Social media can deliver an early warning for a product design or performance problem (many remember the antenna issues with the Apple iPhone 4, later referred to as Antennagate), and give companies the opportunity to identify safety and performance issues early on. For example, a case study conducted by researchers at Virginia Tech’s Pamplin College of Business showed that social media could help automobile manufacturers uncover and categorize vehicle defects before they appear in National Highway Traffic Safety Administration (NHTSA) communications.
Identifying and Responding to Crises or Disasters. Monitoring social media is an important tool to maximize situational awareness during and after a crisis or disaster. Particularly when other methods of communication are compromised, social media is essential for companies to be alerted when an incident occurs and stay informed of events and their locations in real time. A paper published by the Organization for Economic Cooperation and Development in 2013 describes how social media improved situational awareness for responders after the 2010 earthquake in Haiti. When telephone landlines were unavailable, Haitians turned to social media on their mobile devices. Social media data was aggregated by responders for interactive mapping. Of the organizations participating in the Continuity Insights “Crisis Communications: Social Media & Notification Systems” survey conducted in 2014, nearly 62 percent planned to use social media to gather information during an event or crisis, but only 37 percent used geospatial mapping to visually represent the location of employees, assets, and incidents.
Claims Fraud Mitigation. If an employee posts a selfie on a ski slope shortly after submitting a claim for a leg injury, it could be a sign of fraudulent claims activity. Earlier this year, more than 100 people were charged in a major Social Security disability fraud case, including former New York City police officers and firefighters. A January 8, 2014, Wall Street Journal article about the case features a photo of an ex-police sergeant, who had claimed he could not leave his house, enjoying a fishing trip. Social media provides important clues for investigators as they seek to identify questionable claims. For example, geotagging can provide essential information about the location and date associated with photos posted online. Social media can also be used to detect identity fraud and organized fraud rings among individuals in a claimant’s social graph.
For risk managers, social media introduces a host of new risks that need to be anticipated and managed. At the same time, social media promises to be an important tool for managing risk. The key is incorporating social media strategy in a company’s risk management initiatives. Risk management software that features social media monitoring capabilities will facilitate this integration. In the 2014 Continuity Insights survey, less than 3 percent of the companies surveyed gave control of social media to their business continuity organizations, compared to 41 percent that gave control to corporate communications organizations. To have full visibility of factors impacting their operations, risk managers need to be active participants in their companies’ social media activities, playing both offense and defense.
Mark Pluta is CTO and David Ahrens is CMO at STARS (a division of Marsh).
Network Security School of Fort Knox: Part 2
Part Two: Detection Methods
In Part One of this series, we began our discussion by looking at how perimeter defenses at Fort Knox could inspire network security architecture. As we looked at firewalls and network access control (NAC) solutions, we noted how these necessary components, while effective, require additional steps to cover gaps in security coverage. Over the next several installments, we will discuss how network monitoring technologies shore up protection. We will start our discussion by evaluating the methodologies that can be applied to ferreting out network attacks.
When examining network traffic for nefarious activity, there are two major schools of thought. The first relies on examining the composition of the traffic, and the second relies on examining the behavior of the hosts.
Examining the Composition
The first method of detecting threats relies on examining the characteristics of the agent. In physical security, examples include analyzing patterns in DNA, fingerprints, chemical composition and physical parameters. In network security, it consists largely of examining patterns in data transmissions or computer files.
Fort Knox, like other military installations, has a variety of threat detection methods that rely on examining the characteristics of the object. Bomb sniffing dogs are taught to signal on a specific scent. Fingerprint scans and facial recognition can identify known enemies.
In network security, methods to detect attacks that rely on knowing the specific composition of an attack are collectively known as “signature-based detection.” Signature-based detection is often preferred in both physical and network security because it gives a concrete answer as to what exactly the exploit is. The barking bomb dog confirms the presence of explosives and would give cause for purposeful incident response. In network security, exact patterns for attacks such as the Conficker worm or “Web server vulnerability X” can be created to inform operators of the exact attack type.
However, signatures are limited in their ability to detect emerging and unknown threats because they require examination of the exploit to create the signature (e.g., you can’t match a fingerprint until it is on file). This makes signature-based detection an ineffective approach against early attackers, but it becomes increasingly effective as the exploit vector ages.
Packages coming onto secure facilities are often run through an X-ray machine where the operator looks for suspicious (though not necessarily nefarious) characteristics in the contents. For example, he/she may look for sharp objects (which could just as easily be a pen as a knife).
In network security, looking for threats by identifying potentially (though not necessarily) malicious characteristics is known as “heuristic detection.” This type of detection relies on variable or partial signature matches, but is unable to provide the concrete exploit explanation that signature-based detection can.
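To make the distinction concrete, here is a toy sketch. The byte patterns and weights are invented for illustration; real engines ship thousands of carefully vetted signatures, but the shape of the logic is the same: an exact pattern gives a named verdict, while heuristic traits only accumulate suspicion.

```python
# Toy composition-based scanner: exact signatures vs. weighted heuristics.
SIGNATURES = {
    "example worm payload": b"\x90\x90\xeb\x10",           # invented byte sequence
    "webshell upload":      b"eval(base64_decode",
}

SUSPICIOUS_TRAITS = {                                      # heuristic indicators
    b"powershell -enc": 3,                                 # encoded command line
    b"CreateRemoteThread": 2,                              # common injection call
    b"This program cannot be run in DOS mode": 2,          # embedded executable
}

def scan(payload: bytes) -> str:
    hits = [name for name, pattern in SIGNATURES.items() if pattern in payload]
    score = sum(w for trait, w in SUSPICIOUS_TRAITS.items() if trait in payload)
    if hits:
        return "ALERT (signature): " + ", ".join(hits)     # concrete, named exploit
    if score >= 3:
        return "SUSPICIOUS (heuristic score %d)" % score   # needs analyst follow-up
    return "clean"

print(scan(b"GET /upload.php?cmd=eval(base64_decode(...)"))
print(scan(b"powershell -enc JABzAGUAdAAuAC4ALgA="))
```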
The second school of detecting threats focuses on the behavior of the agent. This is referred to as behavioral detection.
At Fort Knox, if a patrol doesn’t report in at its regular interval, the watch commander will note this as a deviation from normal behavior and begin an investigation. If a vendor arrives in a different style of vehicle than has been noted in the past, it will also be flagged as an anomaly.
Detecting threats by observing activity that is a deviation from normal (baseline) behavior is known as “anomaly detection.” In network security, examples of anomalies include computers using or hosting new network services, exceeding normal bandwidth usage or communicating with new types of external servers. This detection approach provides the earliest opportunity for detecting an emerging or advanced threat, but requires skilled operators and deep data/intelligence visibility to effectively foil the attack.
A second component of behavioral detection is observing suspicious activity. When personnel of Fort Knox, whether insiders or outsiders, begin to examine door locks, copy documents or go into unauthorized regions of the facility, red flags are immediately thrown.
In network security, suspicious activity can include scanning for hosts on the network (often a sign of a network worm), communicating with known “bad” external hosts (such as botnets) or attempting excessive, unnecessary connections to network resources (an indicator of a denial-of-service attack). While a signature may not yet exist, incident response can begin immediately because there is no good reason for the activity (much like observing someone trying to beat in a secure door). This allows for much quicker response times when handling emerging threats. It also decreases the time it takes for network analysts to be aware of a problem on the network.
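Again purely as a toy sketch (sample numbers and thresholds are invented), the fragment below shows the two behavioral ideas side by side: a host whose daily traffic volume deviates sharply from its own baseline, and a host that suddenly talks to an unusually large number of distinct peers, a common scanning indicator.

```python
# Toy behavioral checks: baseline deviation and scan-like fan-out.
from statistics import mean, pstdev

def anomalous_volume(history_mb, today_mb, z_threshold=3.0):
    """Flag today's traffic if it sits several standard deviations above baseline."""
    mu, sigma = mean(history_mb), pstdev(history_mb) or 1.0
    return (today_mb - mu) / sigma > z_threshold

def looks_like_scan(flows, distinct_peer_threshold=200):
    """Flag a host that contacts far more distinct peers than a workstation should."""
    peers = {dst for _, dst in flows}
    return len(peers) > distinct_peer_threshold

baseline = [120, 135, 110, 128, 140, 125, 130]         # daily MB for one workstation
print(anomalous_volume(baseline, today_mb=900))        # True -> investigate possible exfiltration
print(looks_like_scan([("10.0.0.5", "10.1.%d.%d" % (i // 256, i % 256))
                       for i in range(500)]))          # True -> possible worm or scan
```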
In this second installment, we have looked at two types of methods used for detecting threats on the network: signature-based and behavioral. To recap, signature-based detection focuses on the characteristics of the agent, while behavioral detection analyzes the activities of the agent. In Part 3 of this series, we will take a look at the different types of payloads that can be delivered through network attacks. | <urn:uuid:1c7b2191-ac11-4291-b6a6-7b78800d4771> | CC-MAIN-2017-04 | https://www.lancope.com/blog/fort-knox-part-two | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00065-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939086 | 994 | 2.609375 | 3 |
Malware creation has broken all records during this period, with a figure of more than 15 million new samples, and more than 160,000 new samples appearing every day, according to Panda Security.
Trojans are still the most abundant type of new malware, accounting for 71.85% of new samples created during Q1. Similarly, infections by Trojans were once again the most common type of infection over this period, representing 79.90% of all cases.
In the area of mobile devices, there have been increasing attacks on Android environments. Many of these involve subscribing users to premium-rate SMS services without their knowledge, both through Google Play as well as ads on Facebook, using WhatsApp as bait.
Along these lines, social networks are still a favorite stalking ground for cyber-criminals. The Syrian Electronic Army group, for example, compromised accounts on Twitter and Facebook, and tried to gain control of the facebook.com domain in an attack that was foiled in time by MarkMonitor.
During the first three months of the year we have witnessed some of the biggest data thefts since the creation of the Internet, and as expected, Cryptolocker, the malicious file-encrypting ransomware which demands a ransom to unblock files, has continued to claim victims.
“Over these months, levels of cyber-crime have continued to rise. In fact, we have witnessed some of the biggest data thefts since the creation of the Internet, with millions of users affected”, explains Luis Corrons.
So far in 2014, Trojans are still the malware most commonly used by cyber-criminals to infect users. According to data from PandaLabs, four out of five infections around the world were caused by Trojans, that’s 79.90% of the total. Viruses are in second place, accounting for 6.71% of infections, followed by worms, with a ratio of 6.06%.
Trojans also top the ranking of newly created malware, accounting for 71.85% of the total, followed by worms, at 12.25%, and viruses at 10.45%.
The global infection rate during the first three months of 2014 was 32.77%. China is once again the country with most infections, with a rate of 52.36%, followed by Turkey (43.59%) and Peru (42.14%). Although Spain is not in the top ten of this ranking, it is still above the global average with 33.57%.
European countries ranked high among the least infected countries, with the best figures coming from Sweden (21.03%), Norway (21.14%) and Germany (24.18%). Japan, with a ratio of 24.21%, was the only non-European country in the top ten of this list.
While congress continues to wrangle over the future of specific space agency programs - in particular the seemingly doomed Ares rocket - NASA continues to prepare for future operations by bulking up commercial space, heavy lift rocket and outer space plans.
The space agency in the past few weeks has issued requests for information on a new heavy lift rocket, advanced space exploration technologies that move beyond Low Earth Orbit and today, a call for more details on how commercial programs will advance space transportation needs.
NASA said it is currently in a "conceptual phase" of developing Commercial Crew Transportation (CCT) requirements that will define how commercial outfits will be able to transport NASA astronauts and cargo safely to and from LEO and the International Space Station.
NASA said it wants to collect information from the commercial space industry to help the space agency plan the strategy for the development and demonstration of a CCT capability and to receive comments on NASA human-rating technical requirements that have been drafted as part of this initiative, NASA stated.
That draft, called the Commercial Human-Rating Plan (CHRP) defines the allocation of responsibilities, requirements, mandatory standards, and process for achieving NASA human spaceflight certification for commercial crew transportation services.
NASA is now looking for more details from space industry players to determine issues such as: "What is the approximate dollar magnitude of the minimum NASA investment necessary to ensure the success of your company's CCT development and demonstration effort? What is the approximate government fiscal year phasing of this investment from award to completion of a crewed orbital flight demonstration?"
In February NASA awarded $50 million to five companies under the CCT program who could help design and build future spacecraft that could take astronauts to and from the International Space Station. The five companies and their awards were Blue Origin: $3.7 million; Boeing: $18 million; Paragon Space Development Corporation: $1.4 million; Sierra Nevada Corporation: $20 million; United Launch Alliance: $6.7 million.
The money is expected to be used toward the development of crew concepts and technology for future commercial support of human spaceflight and are designed to foster entrepreneurial activity leading to high-tech job growth in engineering, analysis, design and research, and to promote economic growth as capabilities for new markets are created, NASA said.
In another future planning development, NASA last week said it defined six targeted technologies of the future via its Flagship Technology Demonstration effort. Such Flagship technologies could be developed at costs ranging from $400 million to $1 billion.
The key technologies from the NASA request included:
- Advanced Solar Electric Propulsion: This will involve concepts for advanced high-energy, in-space propulsion systems which will serve to demonstrate building blocks to even higher energy systems to support deep-space human exploration and eventually reduce travel time between Earth's orbit and future destinations for human activity.
- In-Orbit Propellant Transfer and Storage: The capability to transfer and store propellant-particularly cryogenic propellants-in orbit can significantly increase the Nation's ability to conduct complex and extended exploration missions beyond Earth's orbit. It could also potentially be used to extend the lifetime of future government and commercial spacecraft in Earth orbit.
- Lightweight/Inflatable Modules: Inflatable modules can be larger, lighter, and potentially less expensive for future use than the rigid modules currently used by the ISS. NASA said it will pursue a demonstration of lightweight/inflatable modules for eventual in-space habitation, transportation, or even surface habitation needs.
- Aerocapture, and/or entry, descent and landing (EDL) technology: This involves the development and demonstration of systems technologies for: precision landing of payloads on "high-g" and "low-g" planetary bodies; returning humans or collected samples to Earth; and enabling orbital insertion in various atmospheric conditions.
- Automated/Autonomous Rendezvous and Docking: The ability of two spacecraft to rendezvous, operating independently from human controllers and without other back-up, requires advances in sensors, software, and real-time on-orbit positioning and flight control, among other challenges. This technology is critical to the ultimate success of capabilities such as in-orbit propellant storage and refueling, and complex operations in assembling mission components for challenging destinations.
- Closed-loop life support system demonstration at the ISS: This would validate the feasibility of human survival beyond Earth based on recycled materials with minimal logistics supply.
The third major planning effort announced by NASA happened earlier this month when NASA began its search for a next-generation rocket capable of taking equipment and humans into space.
NASA said its procurement activities are intended to find affordable options for a heavy-lift vehicle that could be achieved earlier than 2015 - the earliest date that the currently envisioned heavy-lift system could begin work.
In his April speech outlining NASA's future, President Obama said there would be $3.1 billion for the development of a new heavy lift rocket to fly manned and unmanned spaceflights into deep space. Obama said he wanted this technologically advanced rocket to be designed and ready to build by 2015.
With that goal in mind, NASA sent out a Request for Information that will begin what has in the past been a long process to build a "new US developed chemical propulsion engine for a multi-use Heavy Launch Vehicle." NASA said it was looking for a "demonstration of in-space chemical propulsion capabilities; and significant advancement in space launch propulsion technologies. The ultimate objective is to develop chemical propulsion technologies to support a more affordable and robust space transportation industry including human space exploration."
The space agency said it will look for features that will reduce launch systems manufacturing, production, and operating costs.
As part of the RFI announcement, NASA said it will initiate development and flight testing of in-space engines. Areas of focus will include low-cost liquid oxygen/methane and liquid oxygen/liquid hydrogen engines and will perform research in chemical propulsion technologies in areas such as new or largely untested propellants, advanced propulsion materials and manufacturing techniques, combustion processes, and engine health monitoring and safety.
NASA said the new heavy lift system should help the US explore multiple potential destinations, including the Moon, asteroids, Lagrange points, and Mars and its environs in the most cost effective and safe manner. At the same time, NASA desires to develop liquid chemical propulsion technologies to support a more affordable and robust space transportation industry.
NASA said its approach will strengthen America's space industry, and could provide a catalyst for future business ventures to capitalize on affordable access to space, NASA said.
The moves are preceded by the fact that NASA has all but shut down its Constellation development program - the space agency cancelled the Ares V RFP in March -- in the face of budget constraints and the direction the current administration wants it to go.
Definition: Any algorithm that makes random (or pseudorandom) choices.
Generalization (I am a kind of ...)
Specialization (... is a kind of me.)
Monte Carlo algorithm, Las Vegas algorithm, skip list insert, randomized binary search tree, reservoir sampling.
See also adversary, pseudo-random number generator, probabilistic algorithm, deterministic algorithm.
Note: From Algorithms and Theory of Computation Handbook, pages 10-17 and 15-21, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
Some algorithms make random choices initially to avoid any fixed worst case.
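One classic way to realize this is to shuffle the input once up front and then run an otherwise deterministic algorithm, as in the quicksort sketch below: no fixed input can force the worst case, because an adversary cannot predict the shuffle, and the expected running time is O(n log n) on every input.

```python
# Quicksort made randomized by an initial shuffle; the recursion itself is deterministic.
import random

def quicksort_first_pivot(items):
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    less    = [x for x in rest if x < pivot]
    greater = [x for x in rest if x >= pivot]
    return quicksort_first_pivot(less) + [pivot] + quicksort_first_pivot(greater)

def randomized_sort(items):
    shuffled = list(items)
    random.shuffle(shuffled)          # the initial random choice
    return quicksort_first_pivot(shuffled)

print(randomized_sort([5, 3, 9, 1, 5, 2]))   # [1, 2, 3, 5, 5, 9]
```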
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 14 January 2009.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "randomized algorithm", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 14 January 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/randomizedAlgo.html | <urn:uuid:a1151b48-ac46-410a-9bf1-f83561703689> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/randomizedAlgo.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00121-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.763212 | 272 | 2.890625 | 3 |
project-based erp for dummies
what in the world is erp?
Some 88 percent of companies use some type of ERP, an enterprise resource planning system that integrates a range of business information, from financial to customer to HR, and a lot more. But ERPs are not one size fits all. This is especially true among companies that execute projects for external customers.
Project-Based ERP for Dummies introduces the basic concepts of ERP systems that are intended to serve project-based businesses. Check out Article 1, "What in the World is ERP?", of the new Project-Based ERP for Dummies article series and see how your business can benefit from a purpose-built ERP system.
Before we go any further, we need to talk about ERP. We'll begin with what the acronym stands for: enterprise resource planning. ERP systems are designed to integrate an organization's business information, including finance/accounting, customer relationship management, management accounting, procurement, human resources, budgeting, sales order entry, materials, and manufacturing.
It makes sense to integrate all these functions into one ERP system because it allows your company to establish business rules that will enforce critical processes. The whole idea is to seamlessly flow accurate and timely information between the different business functions. That gives leaders visibility into what is happening inside the enterprise, and a much greater control of the components of a business.
What's an ERP like? Here are some key characteristics:
- There are transactional inputs.
- It has one central database for all information.
- Information is current (near real time).
- Its modular in design.
- It features open system architecture.
A project-based ERP has all of the above characteristics, just like any ERP. At its heart, though, it focuses all the processes and all the data on the company's central revenue generator: the project!
Project-based companies have their own distinct needs and requirements, and they must be able to view their business in three dimensions. They need to be able to see the nature of the expense, what resource performed the work, and the project for which the work was accomplished. But take that one step further.
Generic ERP systems depict the business as flat (see Figure 1-1). They can't fundamentally understand the project dimension; they simply try to pancake some level of project information over the top. Projects are truly an afterthought in the system design, and the system requires customizations to make it operate with a project point of view. That costs big bucks upfront and results in a higher total cost of ownership.
Beyond that, it often doesn't work well. Project-based businesses that attempt this pancake maneuver in their generic system find they have poor visibility, manual costing, manual reconciliations, assumed revenue calculations, and manufacturing systems that aren't tied at all to projects for which the work is being done.
Figure 1-1: A generic ERP system with projects pancaked on top.
Generic solutions struggle to provide common deliverables that are tied to the project, and that struggle makes it difficult or impossible to produce an accurate view of project profitability and health. Businesses often respond by using predictive methods to approximate materials costs, attempting reconciliations to connect accounts receivable and accounts payable transactions, and spending weeks summing it all up into useful materials so leaders can make business decisions. Who wants to have no control over their project costs? Certainly not anyone who runs an effective business!
Now, put on your 3D glasses and take a look at what a truly project-based ERP can provide (see Figure 1-2).
With a true project-based ERP system, every single transaction is tied to an account, an organization and, you guessed it, a project! Look a little further at these three important elements that are fundamental to success; a short illustrative sketch follows the list.
Figure 1-2: A project-based system sees the business in 3D.
- Account: The general ledger account that describes the expense. Examples include hotel costs, airfare, labor, or subcontracts. Think of this as a chart of accounts.
- Organization: This may describe a department, a functional group, or a product line. Most important, it describes who is doing the work.
- Project: This is the product or service being delivered to a customer or client. The project is where economic value is created within the firm, and where billings and revenues come from. It's the central activity that makes the business stay in business.
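To make the three dimensions concrete, here is a minimal, hypothetical sketch (in Python, with invented account names, organizations, projects, and amounts, not drawn from any particular ERP product) of how a project-based system might tag each transaction and then roll costs up by project:

```python
from collections import defaultdict

# Every transaction carries all three dimensions: account, organization, project.
transactions = [
    {"account": "Airfare", "organization": "Field Engineering", "project": "Bridge Retrofit", "amount": 1200.00},
    {"account": "Labor", "organization": "Field Engineering", "project": "Bridge Retrofit", "amount": 8400.00},
    {"account": "Subcontracts", "organization": "Design Group", "project": "Airport Survey", "amount": 5300.00},
]

# Because every row is tagged with a project, rolling costs up by project
# (or by account, or by organization) is a simple aggregation rather than
# a manual reconciliation exercise.
cost_by_project = defaultdict(float)
for t in transactions:
    cost_by_project[t["project"]] += t["amount"]

for project, cost in cost_by_project.items():
    print(f"{project}: {cost:,.2f}")
```

The same aggregation works just as easily by account or by organization, which is the point of recording all three dimensions on every transaction.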
Linking these elements makes it possible to produce accurate and timely deliverables that are the lifeblood of any business, including:
- Financial reports
- Project status reports
When your corporate and project financial information are tied together through the system, you have unparalleled visibility and control of your business. That enables you to make decisions based on current, real-time information, instead of managing through a rearview mirror. | <urn:uuid:ec1e64e3-4dfd-48de-a84b-b2280968660a> | CC-MAIN-2017-04 | https://www.deltek.com/en/learn/resources/what-in-the-world-is-erp | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00331-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938855 | 1,024 | 2.609375 | 3 |
Five federal agencies this week teamed to announce they are looking to fund innovative research and development of robotic technologies for everything from home healthcare and bomb detection to biological sensors and agriculture applications.
The idea of most of the new applications is to automate tasks and ultimately enrich human lives, the agencies stated. Robotics technology is reaching what the agencies called a tipping point and is poised for explosive growth because of improvements in core technologies such as microprocessors, sensors, and algorithms, they stated.
Members of the research community and program managers in key science agencies have identified a shared vision and technical agenda for developing "co-robots" - a next generation of robotic systems that can safely co-exist in close proximity to or in physical contact with humans in the pursuit of mundane, dangerous, precise or expensive tasks, the agencies stated.
In this case those agencies include: Defense Advanced Research Projects Agency (DARPA), Department of Homeland Security (DHS), the National Science Foundation (NSF), National Institutes of Health (NIH) and United States Department of Agriculture (USDA).
From the announcement here's a look at some of the robotic technologies these agencies are looking for:
DARPA -- The defense department scientific wizards want revolutionary robot motors and drives known as actuators that meet or exceed the safety and efficacy of human muscle. To be safe, an actuator must have low minimum stiffness and low stored energy, even during fault conditions. To be effective, an actuator must have high force density, high (potentially logarithmic) force resolution, sufficient bandwidth, and be robust against unexpected collision. In addition, DARPA seeks approaches that do not rely on exotic or expensive materials or processes, and approaches exhibiting potential for low-cost manufacturing.
DHS - The agency envisions revolutionary technologies to remotely or robotically access, diagnose, and render safe improvised explosive devices (IEDs). The agency is seeking advanced research in power systems, mobility, frequency allocation, security, satellite systems, wireless communications and tethered versus radio frequency systems. It is also interested in robotics to survey captured tunnels for forensic collection and mitigation activities.
NIH -- NIH wants robotics technologies to support and improve quality of life, well-being, and the ability of older adults or individuals with mobility impairment to live independently and safely at home. Examples include technologies and devices that could help evaluate, monitor and improve mobility; improve health service delivery to elders; provide information to health care providers and family members with which to evaluate the need for intervention; and promote communication and interaction between older people or individuals with disabilities living in the community or in institutional settings and their health care providers, friends and family members.
NSF -- Patient Mobility and Rehabilitation Robotics: Robotics research in rehabilitative/assistive robotics; naturally inspired, biomimetic, neuromechanical robotics; and social robotics. Fundamental research advances must be made in materials, manufacturing, signal processing, MEMS/NEMS devices, neural control, social-assist robots, simulators, robots for training/learning processes and energy harvesting techniques.
The disabled and elderly have severe problems with mobility. For example, lifting wheel chairs are rare, expensive, and not well engineered. The disabled and elderly have difficulty in getting from bed to a chair, to the toilet, or to a shower or bath, and back. Care givers often suffer back injuries from attempting to lift heavy patients in and out of bed, or even from attempting to change sheets while the patient is in the bed.
It is also looking for robotic care-givers for management of chronic heart, lung, or blood diseases. Such robots would be developed for reminding patients to take medication, monitoring compliance and vital signs, and wirelessly communicating with primary caregivers. Robotic care-givers would be of substantial benefit to remotely-located patients and/or patients who do not have a strong family or other support.
NIH is also interested in research on biomechanics which can be applied to a broad range of applications including implants, prosthetics, clinical gait and posture biomechanics, traumatic injury, repair processes, rehabilitation, sports and exercise, as well as technology development in other NIH interest areas applied towards biomechanics.
USDA - The agency wants what it calls High-Throughput Robotic Technologies. Examples include automated systems for harvesting, inspection, sorting, and handling of animal products, fruits, vegetables, and other horticultural crops in production or processing environments. It wants improved robotics for inspection, sorting, and handling of plants and flowers in greenhouses and nurseries, or for processing (e.g., sorting, vaccinating, deworming) large numbers of live animals. Multi-modal and rapid sensing systems for detecting ripeness, physical damage, microbial contamination, size, shape, and other quality attributes of plant or animal products, or for monitoring air or water quality.
The agency would like to see what it called vision-directed robotic arms that can distinguish plant or animal targets within complex natural environments (e.g., tree canopies, vines, beds, pens). Robots that can harvest fruits, vegetables, or plants or that can handle small live animals, with appropriate force and motion for proper extraction and to minimize/avoid damage.
Check out these other hot stories: | <urn:uuid:dcc39828-3765-4386-ae7a-01b2569ec7fe> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2227251/security/five-federal-agencies-want-big-robot-technology-advances.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00111-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92668 | 1,094 | 2.9375 | 3 |
A Google security engineer studying an SSH connection to a host unexpectedly discovered a deeper, darker secret in the GNU C Library (glibc). Google later proved that a bug in this library could be used to cause a stack-buffer overflow and remotely execute code. Though most Linux operating systems are protected from such an attack by address space layout randomization (ASLR), Google security engineers were able to circumvent this mitigation method.
SSH is the secure shell that provides an encrypted remote channel for authentication and a command-line interface. The glibc library provides the system-call wrappers and other basic facilities that C programs on many Linux distributions use to interact with the OS.
Google reported this bug in a Security Blog post yesterday, explaining that a security engineer was able to craft a full working exploit. Google also reported that “exploitation vectors are diverse and widespread,” highlighting how important it is to patch or mitigate. Google won't release the code, preventing it from being copied and used for malicious or criminal purposes. But it did make a non-weaponized proof of concept publicly available.
“The glibc DNS client side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack.”
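To see why the exploitation vectors are so diverse, consider how little code it takes for an ordinary application to reach the vulnerable function. The sketch below is a hedged illustration in Python on Linux, where socket.getaddrinfo() is a thin wrapper over glibc's getaddrinfo(); it performs no attack, it simply resolves a hostname the way countless programs do, and the hostname is a placeholder.

```python
import socket

# Any hostname an application resolves -- from a URL, a mail header, a config
# file -- is handed to the system resolver. On Linux with glibc, this call
# ends up in glibc's getaddrinfo(), the function containing the overflow.
hostname = "example.com"  # placeholder; in practice often attacker-influenced

try:
    for family, _type, _proto, _canonname, sockaddr in socket.getaddrinfo(hostname, 443):
        print(family, sockaddr)
except socket.gaierror as exc:
    print("resolution failed:", exc)
```

In the scenario the advisory describes, a malicious or man-in-the-middle DNS server answering that lookup is all the attacker needs to control, which is why patching the library matters more than auditing any single application.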
When serious exploits like this are discovered, security analysts follow disclosure rules that, in most cases, keep the exploit confidential until a patch is released. Security analysts only make a disclosure public when the maintainers of the software are unresponsive, though motivation for disclosure can sometimes be suspicious around the time of large security conferences, like RSA in just two weeks. In the case of the glibc exploit, however, Google’s announcement meets the standards of responsible disclosure because a patch is available.
In the course of Google's investigation, engineers discovered that glibc maintainers had known about the bug and potential exploit since July. It wasn't clear if the bug had been fixed. While seeking a solution, the company learned that two Red Hat developers were also working independently on a solution to the glibc bug. Google and Red Hat collaborated to create and test a patch that is available now.
The issue affects all glibc libraries after the 2.9 release, but updating older versions is also recommended. For those who can’t immediately apply a patch, Google has found some mitigation methods that may help prevent the exploit. | <urn:uuid:4754a450-abf6-40df-85ce-32cbd954e09b> | CC-MAIN-2017-04 | http://www.networkworld.com/article/3034228/security/google-discloses-serious-linux-stack-buffer-overflow-bug-in-widely-used-c-library.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00048-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938635 | 526 | 2.921875 | 3 |
When you are using a public wireless network, there are two approaches to insuring that data coming and going from your computer is encrypted.
One approach involves securing each individual application. For web browsing, this means only using secure HTTPS pages. For reading email, it means using secure protocols such as POP3 over SSL/TLS (POP3S) or APOP rather than normal, unencrypted POP.
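As a hedged illustration of the per-application approach, the sketch below contrasts plain POP3 with POP3 over SSL/TLS using Python's standard poplib module; the server name and credentials are placeholders, not a real account.

```python
import poplib

# Plain POP3 (port 110): the login and every message cross the hotspot in
# clear text, readable by anyone sniffing the wireless network.
# insecure = poplib.POP3("mail.example.com", 110)

# POP3 over SSL/TLS (port 995): the same conversation, wrapped in encryption.
secure = poplib.POP3_SSL("mail.example.com", 995)  # placeholder server
secure.user("username")                            # placeholder credentials
secure.pass_("app-password")
count, size = secure.stat()
print(f"{count} messages, {size} bytes")
secure.quit()
```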
The downsides to this are both human and technical.
On a technical level, some applications can not be run securely. It may be, for example, that your favorite Instant Messaging program always sends everything in plain text. On a human level, it's a pain to configure applications to run securely and then to always be aware of which applications are secure and which are not.
A better approach is a Virtual Private Network (VPN) which creates a secure, encrypted connection used by all data coming into and out of your computer. Whether the data is a web page, an email message, an IM or an FTP file transfer, the VPN encrypts it - without making any changes to the applications themselves.
I'm writing this because I fear that many people are unaware of the VPN option. Anyone reading Jay Lee's HelpLine column in the Houston Chronicle wouldn't know about it.
In part this may be due to a misconception that VPNs are only for large companies. In fact, a number of companies market VPNs to consumers.
There is, however, a difference between corporate and consumer VPNs.
Corporate VPNs create an encrypted pathway (tunnel being the official term) between a traveling employee and their home office. Consumer VPNs create an encrypted path between the computer or smartphone and servers run by the VPN company.
With a corporate VPN, data is encrypted until the point it hits the home office. With a consumer VPN, data is encrypted until it hits the servers of the VPN company at which point it is decrypted and sent out on the Internet to its eventual destination. The purpose of a consumer VPN is to encrypt everything traveling over the air. The purpose of a corporate VPN is to encrypt data end-to-end.
In the September 3rd edition of the Security Now podcast, Steve Gibson was asked about the free HotspotShield.com VPN service.
He wouldn't recommend the service saying "...they're monitoring the websites you visit, and they are changing in some fashion the content of the pages you download to insert their own ads."
The VPN service that Leo Laporte and Steve Gibson like is HotSpotVPN. HotSpotVPN offers two services: HotSpotVPN-1 is a PPTP VPN, HotSpotVPN-2 is an SSL VPN. The services are sold by the day, week, month or year.
The VPN service that I have used, and feel comfortable recommending is from Witopia. They also offer both an SSL and a PPTP option, both of which are sold on a yearly basis.
If you travel rarely, and can live with just webmail when traveling, then you may not need a VPN.
But be aware that some webmail systems only encrypt the page where you enter the userid/password. The pages where you read and write messages are not encrypted. Yahoo falls into this category.
Yahoo offers free "classic" and "new" webmail systems. Neither says anything about encrypting web pages with HTTPS. Even upgrading to Yahoo's Mail Plus doesn't seem to offer an option to encrypt all pages. No surprise then that the Privacy page at Yahoo Security says nothing about encrypting webmail pages.
Gmail defaults to encrypting only the login page, but offers an option (Settings -> Browser Connection) to encrypt all pages.
Earthlink customers are fortunate: their webmail system serves up all pages using HTTPS.
If you can get through the techie lingo that comes hand-in-hand with the service, having a VPN is great security while traveling. | <urn:uuid:4b85d5b9-2dc0-42fd-b548-d968f6201222> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2467629/endpoint-security/what-your-mother-never-told-you-about-vpns.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00442-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937573 | 798 | 2.875 | 3 |
Kaspersky Lab, a leading developer of secure content and threat management solutions, has published its annual report on the evolution of malware threats in 2010.
The year 2010 was certainly the year for vulnerabilities and online attacks. The average number of new malicious programs detected in one month remained around the same as in 2009, while there was even a decline in the activities of some types of threats. Stabilization in the flow of malicious activity was noted by Kaspersky Lab in last year’s annual report, and the reasons behind this leveling off are the same: a decrease in the activities of a number of Trojans, particularly those targeting online games, and more proactive efforts on the part of law enforcement agencies, antivirus developers and telecom companies in combating criminal services and cybercriminal groups.
At the same time, the number of online attacks skyrocketed. Over the course of the year, Kaspersky Security Network’s distributed monitoring and rapid response system recorded over 580 million web-based attacks against users’ computers — nearly 8 times more than the number of online attacks recorded in 2009. This sharp surge is related to the prevalence of exploits that allow hackers to infect website visitors’ computers without them noticing, using drive-by download technology. A single malicious program can penetrate a user's computer via dozens of vulnerabilities in browsers and other applications used to process web content, which has led to a proportionate increase in the number of online attacks.
In 2010, the total number of online attacks logged by Kaspersky Lab online antivirus products, and local virus incidents logged on user computers, exceeded 1.9 billion. Attacks launched via web browsers represented over 30% of this indicator, that’s over 500 million attacks. Browsers became the primary route for infecting users’ computers with malware and there is no reason to expect that to change in the near future.
According to Kaspersky Lab, P2P networks are the second most commonly used channel for spreading threats. Cybercriminals are also actively using popular social networks such as Facebook, VKontakte, Twitter and others. The rapid spread of malicious code is aided by the numerous vulnerabilities in these sites, which means the number of social network-based attacks will continue to grow.
Although new malicious programs appeared in 2010 at the same rate as in 2009, their complexity and functionality — and thus the threat they pose to users — increased. Some of the most complex threats used new technologies to penetrate the 64-bit platform, and many others propagated using zero-day vulnerabilities. Examples of the most sophisticated threats include the Mariposa, ZeuS, Bredolab, TDSS, Koobface, Sinowal and Black Energy 2.0 botnets, each of which brought together millions of infected computers, and the TDSS backdoor, which infects the MBR and launches destructive activity even before the OS boots up. The Stuxnet worm represents today's technological peak in virus writing. This malicious program simultaneously uses several vulnerabilities in the Microsoft Windows operating system, bypasses system verification using legitimate digital certificates (that have since been revoked), and attempts to control programmable logic controllers and the frequency converters involved in critical engineering processes.
Malicious programs similar to Stuxnet could be used in targeted attacks against specific companies. The increased number of targeted attacks was another trend noted in 2010. Examples include some very narrowly-focused cyber attacks, such as Aurora, which was launched in order to steal user information and source code from software projects of several major companies, including Google and Adobe. It is possible that now, programs like Stuxnet will be more frequently included in the arsenals of some companies and secret services.
In the past year, the first malicious programs targeting the iPhone and Android were detected. Thankfully, no incidents involving such threats to the iPhone took place. However, cybercriminals have developed several proofs of concept that could be used in the future. This signals the high probability of an increase in mobile threats.
In 2010, the Kaspersky Security Network cloud system helped detect 510 different software vulnerabilities on users’ computers. These vulnerabilities were most often found in the products of four major developers: Microsoft, Adobe, Oracle and ACDSee. In 2009, the leading position in terms of the number of vulnerabilities was held by Microsoft, in 2010 the situation changed, with Microsoft and Adobe sharing first place. The development of automatic updates for Microsoft products has led to a situation whereby users have started to update their Microsoft products more often and subsequently, vulnerabilities are patched. This has forced cybercriminals to search out ‘loopholes’ in other programs. Nearly one-half of the Top 20 most common vulnerabilities were identified prior to 2010, which means that vulnerabilities on users’ computers have been left unpatched for a long time, even after their respective patches have been released. Kaspersky Lab expects that in the future, software vulnerabilities will remain the primary means of launching attacks. Furthermore, the variety of the vulnerabilities exploited by malicious users and the speed with which they are starting to use them for destructive purposes are steadily on the rise.
In reviewing the risk of infection associated with any threat, it is noteworthy that users’ computers are the most vulnerable to infection in Iraq, Oman, Russia, Belarus and the US. It is in these countries that Kaspersky Lab programs logged the highest numbers of detections. The safest countries in terms of infection are Germany, Japan, Luxembourg, Austria and Norway.
The detection of threats that have already penetrated users’ systems gives us a picture of the computer infection level of any given country. The dubious honor of leading positions in this category was shared by developing countries in Asia and Africa in 2010, due to the rapid pace at which Internet access is becoming available, combined with low levels of computer literacy among the users in those regions. The countries with the lowest percentage of infected computers in 2010 were Japan, Germany, Luxembourg, Austria and Switzerland. | <urn:uuid:4ed7ca6d-0fa4-4015-95b2-db51e8558eef> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2011/Kaspersky_Lab_Publishes_Threat_Evolution_Report_for_2010 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00442-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957349 | 1,220 | 2.671875 | 3 |
In this video, courtesy of IEEE Spectrum, we see a "Two Ball Juggling with High-speed Hand-Arm and High-Speed Vision" robot, developed by two Japanese researchers at Chiba University. The robot can juggle two balls for about five seconds before it misses, mainly because the ball in the air moves slightly. But still, it's pretty impressive when you consider that there are sensors and cameras that track the ball's movement in the air, and juggling two balls with one hand is still pretty tricky for humans.
You can read more about the project here. A report from the IEEE Spectrum blog has more details, including the point that it can't juggle for a long time because the robot "is restricted to what's essentially a two-dimensional vertical plane of operation, so anytime a ball drifts even a little bit sideways, the robot can't get to it." But it looks like researchers could solve this either by developing a new throwing motion or by figuring out how to create a "shoulder joint" that would allow the arm to move in a third dimension.
The best part? When the other robots take over the world, they can relax by watching this robot juggle for fun!
Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+. | <urn:uuid:2433582b-1e1b-4756-bf06-9bb795d58079> | CC-MAIN-2017-04 | http://www.itworld.com/article/2726749/virtualization/meet-a-one-armed--two-ball-juggling-robot.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00498-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948358 | 302 | 2.71875 | 3 |
Ask yourself this quick question: At the end of the day, do you turn your computer off or simply stand up and leave? If you were to ask this question to 10 people, you would likely not get all of them to agree on which is the best method. This is actually a common question, but one which is not as simple to answer as you might think.
So, let’s take a look into whether you should shut your computer down at night or not. The first thing we should do is look at three myths that surround this topic.
Myth 1 – My computer is safe from power surges if I turn it off
If you live in an area that has an unstable power grid, or is prone to random blackouts, you may be worried about power surges. In truth, if one reaches your computer when it’s off, it will do almost exactly the same amount of damage as if it was on. Therefore, you should ensure that your computer is plugged into a surge protector, even if it’s switched off.
Myth 2 – Leaving a computer on will cause it to overheat
This isn’t quite true. Both laptops and desktops have fans and heat sinks that are designed to cool a computer efficiently while it operates. If your computer has a working fan, leaving it on overnight will not cause it to overheat. On the other hand, if the fan isn’t working properly there is a high chance it could overheat. In other words, if the fan isn’t working, you should get it fixed before damage is done.
Myth 3 – Turning a computer on and off, or leaving it on will cause parts to wear out quicker
In theory, this is actually true. When a computer runs, it gets hot – high-end video cards can run as hot as 180 F – and when it is shut down, the parts cool quickly. Anyone with a basic understanding of science knows that many substances contract when cooled and expand when hot. Therefore, turning your computer off and on will cause wear from expansion and contraction.
Well, in truth, it really makes little difference. Think about other similar electronic devices like your monitor, TV or even phone. You no doubt turn these off and on all the time with no problem. Most computer components are designed for this too. In fact, many are designed to outlast the expected time you will use the computer. This means that the vast majority of people won’t notice a difference.
The truth behind these myths shows that there will be little outright harm to your computer if you turn it off, or leave it on. But the question about which is best to do still remains.
Reasons you should turn your computer off at night
There are four main reasons as to why you should turn your computer off at night:
Reasons you should leave your system on at night
There are three main reasons as to why you would want to leave your system on at night:
So, which is better?
In truth, it really comes down to preference and how you work. If you work with an IT partner who manages your systems, it is a good idea to ask them what they would recommend.
If you just use the computer while you are at work, or are worried about potential security threats, then you can probably shut it down at the end of the day. That being said, if you do shut your system down, it is a good idea to run security scans on a regular basis while your system is on to ensure maximum protection.
At the same time, if you leave your system on, it is a good idea to periodically reboot it so important security and program updates can be installed and your computer can be refreshed.
Still not too sure what you should be doing? Why not give us a call to see how we can help keep your systems running and secure. | <urn:uuid:597b1898-acb4-40d8-bd3b-837ad9d0ed49> | CC-MAIN-2017-04 | https://www.apex.com/ever-turn-computer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00040-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966455 | 796 | 2.828125 | 3 |
NASA and Boeing this week showed off new technology that could go a long way toward reducing the size, weight and drag of future, greener aircraft.
NASA and Boeing said they recently completed tests of technology they call Active Flow Control, which places small, computer controlled devices known as actuators on the surface of a wing that then blow air in a sweeping motion along the span of the aircraft's surface.
As these actuators blow air over the surface, they help to redirect and reattach the air flow that would otherwise separate and cause drag at some of the higher rudder angles. Eliminating such air flow separation on the surface benefits performance but also could enable the design of simpler, smaller and more aerodynamically efficient structures that help reduce aircraft weight, drag, and fuel consumption, NASA stated.
To prove the technology works, NASA and Boeing tested Active Flow Control on a full size Boeing 757 vertical tail in the U.S. Air Force's Arnold Engineering Development Center's 40- by 80-foot wind tunnel at Ames in Moffett Field, Calif.
The wind tunnel tests enabled the Boeing and NASA team to observe "a wide array of flow control configurations across the whole low-speed flight envelope of the vertical tail," said Ed Whalen, Boeing Research & Technology program manager for the testing in a statement. The team will pick the most efficient and effective flow control configuration for future flight testing "to see how it performs in the real flight environment."
The combined wind tunnel and flight tests represent the first full-scale flight demonstration of this active flow control technology, Whalen said. "That will give us insight into how the system works, how effective and efficient it is, things that we're not completely sure of at this point."
"The maturation of technologies such as active flow control, which will benefit aviation by improving fuel efficiency, reducing emissions and noise levels, is what NASA's aeronautics research is all about. The promising results of these wind tunnel tests and the following flight demonstration in 2015 undoubtedly will have an impact on future "green" aircraft designs," added project manager Fay Collier.
The test is part of NASA's ongoing Environmentally Responsible Aviation (ERA) Project, which is looking at "the feasibility, benefits and technical risk of vehicle concepts and enabling technologies to reduce aviation's impact on the environment."
ERA has some lofty goals. For example by 2025 it expects to develop systems that:
- Reduce aircraft drag by 8%
- Reduce aircraft weight by 10%
- Reduce engine specific fuel consumption by 15%
- Reduce oxides of nitrogen emissions of the engine by 75%
- Reduce aircraft noise by 1/8 compared with current standards.
NASA, Boeing and others have been developing greener aircraft for the future. In 2011 NASA awarded $16.5 million to Boeing, Northrop Grumman, MIT, and Cessna to continue developing quieter, cleaner, and more fuel-efficient jets. At the time NASA said the money was awarded after an 18-month study of all manner of advanced technologies, from alloys, ceramic or fiber composites, carbon nanotube and fiber optic cabling to self-healing skin, hybrid electric engines, folding wings, double fuselages and virtual reality windows, to come up with a series of aircraft designs that could end up taking you on a business trip by about 2030.
The projects look like this:
The Boeing Company's Subsonic Ultra Green Aircraft Research, or SUGAR is a twin-engine aircraft with hybrid propulsion technology, a tube-shaped body and a truss-braced wing mounted to the top. Compared to the typical wing used today, the SUGAR Volt wing is longer from tip to tip, shorter from leading edge to trailing edge, and has less sweep. It also may include hinges to fold the wings while parked close together at airport gates. Projected advances in battery technology enable a unique, hybrid turbo-electric propulsion system. The aircraft's engines could use both fuel to burn in the engine's core, and electricity to turn the turbofan when the core is powered down ($8.8 million)
MIT's 180-passenger D8 "double bubble" fuses two aircraft bodies together lengthwise and mounts three turbofan jet engines on the tail. Important components of the MIT concept are the use of composite materials for lower weight and turbofan engines with an ultra high bypass ratio (meaning air flow through the core of the engine is even smaller, while air flow through the duct surrounding the core is substantially larger, than in a conventional engine) for more efficient thrust. In a reversal of current design trends the MIT concept increases the bypass ratio by minimizing expansion of the overall diameter of the engine and shrinking the diameter of the jet exhaust instead ($4.6 million).
Northrop Grumman will test models of the leading edge of a jet's wing. If engineers can design a smooth edge without the current standard slats, airplanes would be quieter and consume less fuel at cruise altitudes because of the smoother flow of air over the wings ($1.2 million).
Cessna will focus on airplane structure, particularly the aircraft outer covering. Engineers are trying to develop what some call a "magic skin" that can protect planes against lightning, electromagnetic interference, extreme temperatures and object impacts. The skin would heal itself if punctured or torn and help insulate the cabin from noise, NASA says ($1.9 million).
Check out these other hot stories: | <urn:uuid:98cea3ab-b924-4348-b61e-6123099bb97e> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2225826/wireless/nasa--boeing-flaunt-high-tech-wing-that-could-alter-future-aircraft-design.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00250-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927294 | 1,135 | 3.75 | 4 |
An interesting study by Rice University recently found that in one of the more voracious social (and increasingly political) battlegrounds, science v. religion, there is more common ground than most folks believe. In fact, according to the study, only 15% of scientists at major US research universities see religion and science as always in conflict.
"When it comes to questions about the meaning of life, ways of understanding reality, origins of Earth and how life developed on it, many have seen religion and science as being at odds and even in irreconcilable conflict," But a majority of scientists interviewed viewed both religion and science as "valid avenues of knowledge" that can bring broader understanding to important questions, said study author Rice sociologist Elaine Howard Ecklund in a statement.
The Rice study interviewed what it said was a scientifically selected sample of 275 participants, pulled from a survey of 2,198 tenured and tenure-track faculty in the natural and social sciences at 21 elite US research universities. Only 15% of those surveyed view religion and science as always in conflict. Another 15% say the two are never in conflict, and 70% believe religion and science are only sometimes in conflict. Approximately half of the original survey population expressed some form of religious identity, whereas the other half did not, the university stated.
Some of the interesting finding from the study:
- Scientists deliberately use the views of influential scientists who they believe have successfully integrated their religious and scientific beliefs.
- Scientists actively engage in discussions about the boundaries between science and religion.
- Scientists as a whole are substantially different from the American public in how they view teaching "intelligent design" in public schools. Nearly all of the scientists - religious and nonreligious alike - have a negative impression of the theory of intelligent design.
- Sixty-eight percent of scientists surveyed consider themselves spiritual to some degree.
- Scientists who view themselves as spiritual/religious are less likely to see religion and science in conflict.
- Overall, under some circumstances even the most religious of scientists were described in very positive terms by their nonreligious peers; this suggests that the integration of religion and science is not so distasteful to all scientists.
"Much of the public believes that as science becomes more prominent, secularization increases and religion decreases," Ecklund said. "Findings like these among elite scientists, who many individuals believe are most likely to be secular in their beliefs, definitely call into question ideas about the relationship between secularization and science. I think it would be helpful for the public to see what scientists are actually saying about these topics, rather than just believe stereotypes."
The study "Scientists Negotiate Boundaries Between Religion and Science," appears in the September issue of the Journal for the Scientific Study of Religion.
Check out these other hot stories: | <urn:uuid:954b3e39-f0d2-4e45-b7ea-bf71690ba6fb> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2220745/security/science-and-religion-can-and-do-mix--mostly.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00094-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962191 | 588 | 2.546875 | 3 |
- A course at Cornell University simply titled "Networks" is taught by professors from both computer science and economics. Its focus is on how the social, technological, and natural worlds are connected, and it covers the technology, economics, and politics of Web information and online communities.
- Indiana University's School of Informatics and Computing offers a program for students to apply their technical knowledge to business, telecommunications, security, or fine arts. In addition to logical reasoning, basic programming, and data visualization, graduates learn human-computer interaction design and other skills that will help them put technology to better use.
- Carnegie Mellon University's Lane Center for Computational Biology pursues computational tools that enable the automated creation of biological process models, which can lead to tools for individualized diagnosis and treatment of cancer and other diseases.
- The University of Texas at Dallas has developed the first comprehensive degree program in Texas designed to foster the convergence of computer science and engineering with creative arts and the humanities. The Arts and Technology (ARTEC) partnership encourages the productive intersection of disparate fields and modes of thinking.
- A joint program between the Purdue University Computer Science Department and its College of Education has created a new computer science program to prepare Purdue education majors to teach computer science in secondary schools. The partnership represents an endorsement that satisfies the educational computing standards set by key accreditation organizations. It incorporates recommendations made in the ACM Model Curriculum for K-12 Computer Science as well as by the Computer Science Teachers Association (CSTA).
- Stanford University redesigned its undergraduate computer science major to include a common set of core courses; a chosen track that concentrates on a selected area for greater depth; additional electives that include multi-disciplinary ties to economics, engineering management, statistics, and psychology, among others; and a senior project with either a software development or research emphasis.
In the face of rapid and frequent changes confronting the global computing field, the Association for Computing Machinery (ACM) and the Association for Information Systems (AIS) issued new curriculum guidelines for undergraduate degree programs in information systems (IS) earlier this year. For the first time, these guidelines include both core and elective courses suited to specific career tracks. The guidelines reflect the ubiquitous use of Web technologies, and the emergence of new architectural elements including Web services, software-as-a-service, and cloud computing. They can be adapted for schools of business, public administration, and information science or informatics, as well as stand-alone schools of information systems.
Which computer languages are taught?
There is no debate about the need for learning computer languages, but there is little agreement over which is most popular or appropriate. The Bureau of Labor Statistics reports that today's computer programming jobs require higher-level skills as lower-skilled coding jobs are being exported to countries with cheaper labor. As a result, there is greater demand for higher levels of programming skills, which enable programmers to write programs that are more or less independent of a particular type of computer, and permit faster development of large programs.
In fact, there are hundreds of active computer languages and many more dead ones where the code they produced may still be running somewhere. For example, many large organizations with long histories in computing still use COBOL to run the world's business data applications, and it is likely to remain a viable language in the years ahead.
Continuous computer science learning
Learning does not stop at the college level, particularly for computer science graduates. Due to its dynamic nature, computer science has developed a reliance on lifelong learning programs to confront the challenges of the information society. Computing professionals in growing numbers are taking advantage of the many diverse educational and instructional tools that offer multifaceted materials and resources for specific technical areas.
ACM's Learning Center, for example, features online books from prominent technical publishers, online computing courses, and extensive resources that combine annotated bibliographies, online books and courses, tutorials, sample code, videos and podcasts, and community websites and blogs from the computing world. These resources are especially attractive for serious professionals who need to stay current in their field or adapt their knowledge and skills to new applications.
Page 2 of 3 | <urn:uuid:d3661816-5c36-494f-8822-9686a3281b19> | CC-MAIN-2017-04 | http://www.cioupdate.com/reports/article.php/11050_3915826_2/Special-Report---The-State-of-Computer-Science-Education.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00030-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932638 | 844 | 2.703125 | 3 |
Agency: Cordis | Branch: H2020 | Program: CSA | Phase: WATER-4b-2015 | Award Amount: 3.00M | Year: 2016
In European countries, the cultivation of fertigated crops experience scarcity of water, and the intensity of cultivation poses significant risks to water quality. The main objective of the FERTINNOWA thematic network is to create a meta-knowledge database on innovative technologies and practices for fertigation of horticultural crops. FERTINNOWA will also build a knowledge exchange platform to evaluate existing and novel technologies (innovation potential, synergies, gaps, barriers) for fertigated crops and ensure wide dissemination to all stakeholders involved of the most promising technologies and best practices. A multi-actor integrated approach will be used through the FERTINNOWA platform which will involve various stakeholders (researchers, growers, policy-makers, industry, environmental groups etc.) at several levels including the socio-economic and regulatory level (national and European) with a special focus on the EU Water Framework Directive and Nitrate Directive. Information will be gathered at national level to feed a European benchmark study that will evaluate and compare existing technologies used at various horticulture sectors, including vegetables, fruit and ornamentals in different climate zones. All tools, databases and other resources generated will be shared within the consortium and the stakeholders group and will be made available to the broader scientific community, policy-makers, the industry and the public at large. FERTINNOWA will help the growers to implement innovative technologies in order to optimize water and nutrient use efficiency thus reducing the environmental impact. | <urn:uuid:4c14c57e-a71c-41d7-979b-4dc6d30c1884> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/centro-of-sperimentazione-e-assistenza-agricola-1970794/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00332-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.884086 | 331 | 3.046875 | 3 |
by Mike Rollings | February 14, 2012
According to Wikipedia, inversion of control (IoC) is an object-oriented programming practice whereby object coupling is bound at run time by an "assembler" object and is typically not knowable at compile time using static analysis. The binding process is achieved through dependency injection. In practice, Inversion of Control is a style of software construction where reusable code controls the execution of problem-specific code. It carries the strong connotation that the reusable code and the problem-specific code are developed independently, even though they often end up in a single integrated application. Inversion of Control as a design guideline serves the following purposes (a minimal dependency-injection sketch follows the list):
1. There is a decoupling of the execution of a certain task from implementation.
2. Every module can focus on what it is designed for.
3. Modules make no assumptions about what other systems do but rely on their contracts.
4. Replacing a module has no side effect on other modules.
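As a minimal, hypothetical sketch of that guideline (the class names are invented for illustration), the Python below uses constructor-based dependency injection: the problem-specific InvoiceService never constructs its own collaborators, and a small assembler binds a concrete implementation at run time.

```python
class SmtpMailer:
    def send(self, to, body):
        print(f"SMTP -> {to}: {body}")

class ConsoleMailer:
    def send(self, to, body):
        print(f"console -> {to}: {body}")

class InvoiceService:
    # Problem-specific code: it declares the contract it relies on (anything
    # with a send() method) but never chooses the concrete module itself.
    def __init__(self, mailer):
        self.mailer = mailer

    def bill(self, customer):
        self.mailer.send(customer, "Your invoice is ready.")

def assemble(environment):
    # The "assembler" performs the binding at run time; swapping the mailer
    # has no side effect on InvoiceService.
    mailer = SmtpMailer() if environment == "production" else ConsoleMailer()
    return InvoiceService(mailer)

assemble("development").bill("customer@example.com")
```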
Technologists might come to think that an organization and the humans within it operates under the IoC system engineering principle. But then they’d be wrong.
They may believe that:
1. What we want executed is decoupled from implementation – that we can precisely and completely specify the dependencies that will guide implementation. We can specify what must be done to the point where all the implementer must do is think solely about what we have disclosed to them to conduct the matter at hand. No other thought required. There are those who tell and those who do.
2. Every person can focus on what it has been designed (certified) for.
3. That specialization within IT has gotten to the point where individual staff members rely on contracts (e.g. specifications, documentation, job practices) to do implementation. They make no assumptions about other staff members actions. This would give the illusion of being able to focus on “their” work, but also mandates that someone tells them precisely what to assume and thereby relieves them of any extraneous thought or musings about the task at hand. A perfectly efficient system.
4. People, due to specialization, are replaceable modules and their replacement has no side effect on other modules – sorry, people – nor does it effect the outcomes that we intend to achieve.
Yes, it would be wrong if we thought the system engineering definition for IoC applied to people and to the organizations to which we contribute.
The first misstep, in thinking that an organization could operate as if IoC were true for people, would be believing that those who ‘tell’ could capture all the assumptions to the point where no thought is required at implementation – by those who ‘do’. This false notion is founded on the premise that control and certainty can be achieved. This arrogant belief ignores the fundamental human condition that we will ignore elements due to our own self-interests and due to our displeasure with cognitive dissonance. We can’t help but to miss (more strongly – ignore) assumptions.
The next misstep would be in seeing the chain of idea, specification, implementation (or strategy, planning, execution) as a three-step process within an organization or within a single human's mind. All you need to do to see the brokenness of this concept is to examine your own idea flow. The imprecise nature of thought is a symphony of brain cells and firing synapses. Our ideas, our specifications (e.g. concepts, mindsets, beliefs, mental models), and our implementation (e.g. experiences, trial/error) have a lively interplay. It is in no way precise. We lie to ourselves if we believe that organizational interplay is less chaotic and unplanned, or that it does not benefit from the multitude of thought.
We are individually and organizationally a symphony of synaptic firing. Organizationally the role of the synapse is social interaction for sharing, making a contribution, and illustrating new perspectives. The interplay co-creates everything.
A third misstep would be believing that replacing one person with another person or contracted activity does not affect the outcome. For this misstep I’d like to focus on the outcome. A caution is that the outcome that you think you want today may not be the outcome you want tomorrow, however you may not be in the position to recognize that change and instead the implementer is. The correctness of what we think we want as the outcome may be negated by the experiences of the implementer. The implementer within the call center may see that treating customers like nameless widgets no longer works, but the correctness of the call center’s efficiency may still work for those directing the call center. IoC as an organizational construct reinforces status quo. Replacing the implementer may obscure an insight about the outcome forever.
With that I call upon us to embrace an alternate definition for inversion of control as it applies to organizations and people. This version of IoC posits that the inverse of control is influence, and that instead of precisely defining execution we instead enable choice and contribution. This inversion of control would recognize the uniqueness of human thought and contribution. It would put humans in a more revered perspective than the technologies that are intended to replace them or the rigid practices which blunt their contribution.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog. | <urn:uuid:75b4eded-6673-49fc-94ad-3de152be1b40> | CC-MAIN-2017-04 | http://blogs.gartner.com/mike-rollings/2012/02/14/inversion-of-control/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00150-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936882 | 1,225 | 3.0625 | 3 |
Internet Protocol (IP) routing protocols have one primary goal: to fill the IP routing table with the current best routes it can find. The goal is simple, but the process and options can be complicated. Routing protocols define various ways that routers chat among themselves to determine the best routes to each destination.
As networks grew more complex over time, routers gained both processing power and Random Access Memory (RAM). As a result, engineers designed newer routing protocols, taking advantage of faster links and faster routers, transforming routing protocols.
Routing protocols help routers learn routes by having each router advertise the routes it knows. Each router begins by knowing only directly connected routes. Then, each router sends messages, defined by the routing protocol, that list the routes. When a router hears a routing update message from another router, the router hearing the update learns about the subnets and adds routes to its routing table. If all the routers participate, all the routers can learn about all subnets in an internetwork.
When learning routes, routing protocols must also prevent loops from occurring. A loop occurs when a packet keeps coming back to the same router due to errors in the routes in the collective routers’ routing tables. These loops can occur with routing protocols, unless the routing protocol makes an effort to avoid the loops.
As you pursue your CCNA studies you will find that different authors mix and match the terms routing protocols, routed protocols, and routable protocols. The concepts behind these terms are not difficult, but because the terms are so similar, they can be a bit confusing. In all of my posts, as well as in Cisco documentation, these terms are generally defined as follows:
- Routing Protocol – A routing protocol is defined as a set of messages, rules, and algorithms used by routers for the overall purpose of learning routes. This process includes the exchange and analysis of routing information. Each router chooses the best route to each subnet in a process known as path selection and finally places those best routes in its IP routing table. Examples of a routing protocol include RIP, EIGRP, OSPF, and BGP.
- Routed Protocol and Routable Protocol – Both of these terms refer to a protocol that defines a packet structure and logical addressing, allowing routers to forward or route the packets. Routers forward or route packets defined by routed and routable protocols. Examples include IP and IPX. IPX is a part of the Novell NetWare protocol model.
Even though routing protocols such as RIP are different from routed protocols such as IP, they do work together very closely. The routing process forwards IP packets, but if a router does not have any routes in its IP routing table that match a packet’s destination address, the router discards the packet. Routers need routing protocols so that the routers can learn all the possible routes and add them to the routing table so that the routing process can forward routable protocols such as IP.
A routing protocol's underlying algorithm determines how the routing protocol does its job. The term routing protocol algorithm simply refers to the logic and processes used by different routing protocols to solve the problem of learning all routes, choosing the best route to each subnet, and converging in reaction to changes in the internetwork. There are three main branches or families of routing protocol algorithms: Distance Vector, Link-State, and Balanced Hybrid.
Historically speaking, distance vector protocols were invented first, primarily in the early 1980s. A distance-vector routing protocol requires that a router informs its neighbors of topology changes periodically and, in some cases, when a change is detected in the topology of a network. Compared to link-state protocols, which require a router to inform all the nodes in a network of topology changes, distance-vector routing protocols require less router overhead and bandwidth utilization.
Distance vector means that routes are advertised as a vector of distance and direction. Direction is simply the next-hop address and exit interface, and distance is a metric such as hop count.
A router using a distance vector protocol does not have knowledge of the entire path from source to destination. Instead, a DV router works with only two pieces of information:
- The direction in which, or the interface out of which, the packet should be forwarded.
- Its distance from the destination.
As the name suggests, the DV protocol is based on calculating the direction and distance to any link in a network. The cost of reaching a destination is calculated using various route metrics. RIP (Routing Information Protocol) uses the hop count of the destination as its metric for optimal route calculation. Updates are performed periodically in a distance-vector protocol where all or part of a router's routing table is sent to all its neighbors that are configured to use the same distance-vector routing protocol.
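To make the distance-vector idea concrete, here is a minimal, hypothetical sketch of how a router might merge a neighbor's advertisement into its own table, keeping only a direction (next hop) and a distance (hop count) per destination. The router names and prefixes are invented, and real protocols such as RIP add timers, split horizon, and other loop-prevention machinery omitted here.

```python
INFINITY = 16  # RIP treats 16 hops as unreachable

def merge_advertisement(table, neighbor, advertised):
    """Fold one neighbor's advertised distances into our routing table."""
    changed = False
    for dest, dist in advertised.items():
        candidate = min(dist + 1, INFINITY)                # one more hop via that neighbor
        current_dist = table.get(dest, (None, INFINITY))[1]
        if candidate < current_dist:
            table[dest] = (neighbor, candidate)            # (direction, distance)
            changed = True
    return changed

# Router A knows only its directly connected subnet, then hears from neighbor B.
table_a = {"10.1.1.0/24": ("direct", 0)}
update_from_b = {"10.2.2.0/24": 1, "10.3.3.0/24": 2}
merge_advertisement(table_a, "B", update_from_b)
print(table_a)
# {'10.1.1.0/24': ('direct', 0), '10.2.2.0/24': ('B', 2), '10.3.3.0/24': ('B', 3)}
```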
By the early 1990s, distance vector protocols' somewhat slow convergence and potential for routing loops drove the development of new alternative routing protocols that used new algorithms. Link-state protocols, in particular OSPF and Integrated IS-IS, solved the main issues with distance vector protocols. However, these new link-state protocols required more planning in medium- to larger-sized networks.
First, each node needs to determine what ports it is connected to over fully working links. It does this using a simple reachability protocol that it runs separately with each of its directly connected neighbors. Next, each node periodically, and in case of connectivity changes, makes up a short message called the Link State Advertisement (LSA), which is flooded throughout the network.
The basic concept of link-state routing is that every node constructs a map of its connectivity to the network, in the form of a topology table showing which nodes are connected to which other nodes. Each node then independently calculates the best next hop from itself to every possible destination in the network. The collection of best next hops then forms the node’s routing table.
Link-state routing protocols use the path cost to the destination as their metric for optimal route calculation and do not rely on periodic route updates to maintain their topology tables. Routing information is exchanged only upon the establishment of new neighbor adjacencies and construction of the adjacency table, after which only changes are sent through event-generated Link State Updates (LSUs).
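The "independent calculation" each link-state node performs is essentially a shortest-path-first (Dijkstra) run over its topology table. The sketch below, with an adjacency matrix and small fixed sizes chosen purely for illustration, shows that computation producing a cost and a first hop for every destination.

```c
#define N    6            /* number of nodes in the topology database */
#define INF  1000000      /* effectively "no link"                    */

/* cost[i][j] is the link cost from node i to node j; it is assumed to
   have been populated from the LSAs flooded by every node (INF where
   there is no link). */
int cost[N][N];
int dist[N];              /* best known cost from the root to each node */
int next_hop[N];          /* first hop from the root toward each node   */

void spf(int root)
{
    int done[N] = {0};

    for (int i = 0; i < N; i++) {
        dist[i] = (i == root) ? 0 : INF;
        next_hop[i] = -1;
    }

    for (int iter = 0; iter < N; iter++) {
        /* pick the closest node not yet finalized */
        int u = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        if (u == -1 || dist[u] == INF)
            break;
        done[u] = 1;

        /* relax every link leaving u */
        for (int v = 0; v < N; v++) {
            if (cost[u][v] == INF || done[v])
                continue;
            if (dist[u] + cost[u][v] < dist[v]) {
                dist[v] = dist[u] + cost[u][v];
                /* the first hop is v itself when u is the root,
                   otherwise it is inherited from u */
                next_hop[v] = (u == root) ? v : next_hop[u];
            }
        }
    }
}
```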
Around the same time as the introduction of OSPF, Cisco created a proprietary routing protocol called Enhanced Interior Gateway Routing Protocol (EIGRP) which used some features of the earlier IGRP protocol. EIGRP solved the same problems as did link-state routing protocols, but less planning is required when implementing the network. As time went on, EIGRP was classified as a unique type of routing protocol. It was considered to be neither distance vector nor link state, so EIGRP is called either a balanced hybrid protocol or an advanced distance vector protocol.
EIGRP was designed to minimize both the routing instability incurred after topology changes, as well as the use of bandwidth and processing power in the router. Most of the routing optimizations are based on the Diffusing Update Algorithm (DUAL) that guarantees loop-free operation and provides a mechanism for fast convergence.
The data EIGRP collects is stored in three tables:
- Neighbor Table – Stores data about the neighboring routers, i.e. those directly accessible through directly connected interfaces.
- Topology Table – Contains the aggregation of the routing tables gathered from all directly connected neighbors.
- Routing Table – Stores the actual routes to all destinations.
EIGRP uses the bandwidth and delay to the destination as its metric for optimal route calculation and does not rely on periodic route updates to maintain its topology table. Routing information is exchanged only upon the establishment of new neighbor adjacencies, after which only changes are sent through triggered updates.
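For reference, the classic EIGRP composite metric with its default K-values reduces to a simple function of the slowest link bandwidth and the cumulative delay along the path. The helper below assumes those defaults (K1 = K3 = 1, K2 = K4 = K5 = 0) and the classic, non-wide metric format; it is only meant to show how bandwidth and delay combine.

```c
/* Classic EIGRP composite metric with default K-values:
 *   metric = 256 * (10,000,000 / min_bandwidth_kbps + total_delay_usec / 10)
 * min_bandwidth_kbps: lowest link bandwidth on the path, in kbit/s
 * total_delay_usec:   sum of interface delays on the path, in microseconds
 */
unsigned long eigrp_metric(unsigned long min_bandwidth_kbps,
                           unsigned long total_delay_usec)
{
    unsigned long bw_term    = 10000000UL / min_bandwidth_kbps;
    unsigned long delay_term = total_delay_usec / 10;

    return 256UL * (bw_term + delay_term);
}

/* Example: a path whose slowest link is 10 Mbps (10,000 kbps) with a total
   delay of 2,000 microseconds yields 256 * (1000 + 200) = 307,200. */
```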
In my next post, I will discuss the three most commonly implemented routing protocols, RIP, OSPF, and EIGRP, in detail.
Author: David Stahl | <urn:uuid:3b82b82e-21b2-4fdb-b087-bd2fab34c738> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/01/15/routing-protocols-overview/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00544-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932148 | 1,636 | 4.25 | 4 |
Well, what ARE the Air Force's cyber weapons?
An anonymous reader thought we omitted some key information in our story about the Air Force designating cyber weapons. The reader wrote: So what are the six cyber tools that are considered weapons? I can't understand how this article, or others reporting similar information, have failed to provide this important detail.
Amber Corrin responds: We did not name the tools because the Air Force has not revealed what they are -- as our story stated. This is a move that is in keeping with many details of the military's cyber capabilities, particularly on the offensive side of things.
For example, it was recently reported – as it has been for close to a year now – that the Pentagon's rules of engagement for cyber operations are close to completion. But we will not necessarily know when they are done, because they will remain classified. It is possible Defense Department officials may divulge that they are in fact being implemented once they are actually finished, but don't expect much more than that in the way of public announcements.
Still, the military in recent months has been more open about DOD in cyberspace than in the past. For example, Air Force officials have noted their struggles to define operations in the domain, something that was reiterated last week along with the cyber-weapons announcement. Gen. Keith Alexander, commander at U.S. Cyber Command and director of the National Security Agency, also has discussed CyberCom's plans to create 13 offensive operations teams as well as other teams focused on cyber threats.
Posted by Amber Corrin on Apr 12, 2013 at 12:10 PM | <urn:uuid:99f73476-564f-41bc-bb87-31c6ae76672a> | CC-MAIN-2017-04 | https://fcw.com/blogs/conversation/2013/04/air-force-cyber-tools.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00480-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.978139 | 329 | 2.53125 | 3 |
For universities and researchers across the country, access to certain National Oceanic and Atmospheric Administration (NOAA) weather data could be prohibitively expensive.
A new partnership between George Mason University in Fairfax, Va., and Ligado Networks is hoping to demonstrate the feasibility of delivering NOAA real-time weather data to users across the country at a lower cost, using a cloud-based network.
“The network we’ve developed will give the university unprecedented access to real-time public weather data, making it possible for the school’s weather research programs to better study our atmosphere and develop useful tools that will benefit the broader American public,” said Doug Smith, Ligado Networks’ president and chief executive officer.
The partnership will help students and scientists with researching, tracking, and predicting the weather.
“Extreme weather events have a huge impact on people, including their families, homes and businesses,” said Deborah Crawford, Mason’s vice president for research. “Faster and more accurate climate modeling and weather prediction will help people and organizations–including emergency responders–better prepare for and respond more quickly to weather-related events such as tornadoes, floods and wildfires, saving lives and livelihoods.”
As part of the partnership, the university and Ligado will compare the delivery of weather data from existing satellite systems with the new cloud-based content delivery network that Ligado has developed. As part of the comparison, George Mason and Ligado will examine speed and reliability of the data delivery to users nationwide. The new partnership also includes reviewing the accuracy of existing weather forecasting models and advance detection of meteorological conditions, with the hope of improving the models. The new information extraction tools will be available, for free, to the public.
“This type of network could also be expanded so schools, libraries, and the general public have access to NOAA data, which will go a long way to advancing science, technology, engineering, and mathematics education,” Smith said. “It’s hard to imagine all that may be possible by opening up access to this data, and together with Mason, we look forward to exploring those possibilities over the coming years.” | <urn:uuid:fd960314-2a9f-4969-86c4-3bf0bf56d720> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/cloud-helps-george-mason-university-deliver-weather-data-in-real-time/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00480-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931237 | 455 | 2.734375 | 3 |
Every large-scale disaster can cause a re-examination of laws, policies and procedures. The catastrophe in Japan is no exception. The United States’ complex system of state and local government makes responding to large-scale disasters problematic. Therefore, it is appropriate that governments at all levels re-examine how they intend to respond to a catastrophic disaster like the one that Japan is experiencing.
The following are technology issues that need to be addressed in the United States.
Earthquake detection and warning systems — like Japan’s that worked effectively during its latest quake — are still just on the drawing boards in California. The Cascadia Subduction Zone that runs from British Columbia to off the coasts of Washington, Oregon and California is a perfect candidate for such a warning system. Because of its extreme length there could be minutes of warning for portions of the quake zone.
The existing tsunami warning system is not currently effective for a quake that is this close to the coast. Having an earthquake detection and warning system tied into the existing tsunami siren warning system would provide a dramatic improvement in warning capabilities and give people a head start on escaping any tsunami generated by a subduction zone quake.
Go to Emergency Management's website to read about additional U.S. policies that should be re-examined. | <urn:uuid:4db2bf6b-bf4a-45c0-925a-3c414694953a> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/Japan-Earthquake-Policies-Procedures-031611.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00379-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947558 | 264 | 2.765625 | 3 |
When Windows, like any other operating system, is created there are bugs introduced into the software that could affect how the operating system runs. These bugs could cause Windows to not run reliably or could cause security vulnerabilities that would make Windows vulnerable to attacks. When these bugs are discovered, Microsoft creates updates to fix these issues and makes them available through the Windows Update utility. There is no set day that updates are released, except when it comes to security updates. Microsoft traditionally releases security updates for Windows on the second Tuesday of each month, which has become known as Patch Tuesday. In some cases, if a particular vulnerability is severe enough, Microsoft may release an update earlier in order to mitigate any issues that may be caused by this vulnerability. When Microsoft releases a security update ahead of schedule, this is called an out-of-band security update.
When these updates are released it is important to install them as soon as possible. Some people and organizations like to hold off on non-security updates to see if they affect anything else on the computer. When it comes to security updates, though, they should be installed the minute they become available in order to protect your computer.
To learn how to update Windows, please select the section below that corresponds to your version of Windows. You can then use those instructions to learn how to install the latest updates that are available.
Windows XP utilizes a feature called Automatic Updates that can be configured to automatically download and install Windows updates as they become available. If Windows is configured this way, after it installs the updates it will display a small shield icon in your Windows taskbar. Windows will also display a balloon alert stating that updates have been installed and that you need to reboot your computer to complete the process. If you see this icon or the alert, just click on them and then click on the Reboot Computer button. When your computer reboots, Windows will have finished being updated.
If Windows is not configured to automatically update for you, it will still alert you when new updates are available. This alert also uses the shield icon in your Windows taskbar and displays a balloon notification.
To install the updates, simply click on the alert or the shield icon and you will be presented with a screen asking if you wish to install the updates.
For most users, you can simply leave Express Install (Recommended) selected and click on the Install button to automatically install all available updates. If you would like to see what updates are available and select the ones to install, you can also select Custom Install (Advanced) and press Install. You will then be shown a screen that displays all the available updates. When you are finished selecting the ones you wish to install, click on the Install button to start the update process.
Windows XP will now start to install the updates on to your computer. This process can take quite a bit of time, so please be patient. When it has completed it will prompt you to reboot your computer, so be sure to save any open documents and then click on the Reboot or Restart Now buttons depending on the prompt you are viewing.
Your computer will now reboot and Windows XP will have been updated to use the latest settings.
For more information on how to configure Automatic Updates in Windows XP, please see this tutorial:
Windows 7 and Vista can be configured to automatically download and install updates when they become available. If Windows is configured to do this, it will install new updates as they are released. Once the updates are installed, it will display the Windows Update icon in the taskbar and also issue an alert stating that new updates have been installed. To finish the update process, save any open documents, click on the icon or the alert, and then reboot your computer when it prompts you. Once your computer is rebooted, Windows will be fully updated.
If you do not automatically install new updates, and new updates are available, Windows 7 will display the Windows Update icon in your Windows taskbar. An alert will also be displayed from this icon, notifying you that new updates are available for Windows.
When you see this, simply click on the alert or the Windows Update taskbar icon and you will be brought to the main Windows Update screen.
At this screen you can simply click on the Install updates button to start the installation process. If you wish to first review the updates that will be installed, you can click on the text that states that important or optional updates are available. This will bring you to a screen where you can select the updates you would like to install.
When the updates are finished installing, you will be presented with a dialog box that states you need to reboot your computer before the update process can be finished. Please allow it to do so and once your computer has been rebooted, Windows 7 will have been fully updated.
For more information on how to configure the various Windows Update settings, please see this tutorial:
| <urn:uuid:aff79815-2375-4fd7-b5f0-de76a4e1347f> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/tutorials/how-to-update-windows/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00287-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918772 | 1,428 | 2.59375 | 3 |
Black holes, so fascinating to star-gazers of the professional and backyard variety, are definitely not empty as their name might imply. Quite to the contrary, they are exceedingly dense. According to NASA, these astronomic objects comprise a great amount of matter packed into a very small zone. It’s like a star ten times more massive than the Sun squeezed into an area the size of New York City. The result is a gravitational field so strong that nothing can escape, not even light.
Scientists can’t observe black holes by direct means, but they can infer the presence of black holes by scrutinizing their effect on the gas and matter that lies just outside their event horizon. They also generate heat and energy that gets radiated, in part, as light.
Knowledge about black holes has grown tremendously in recent years through indirect exploration methods, with researchers employing detailed numerical models and powerful supercomputers to simulate the complex dynamics that take place at the perimeter. To achieve accuracy, simulations of this complex scenario must account for numerous phenomena, including warped spacetime, gas pressure, ionizing radiation, and magnetized plasma.
A team of astrophysicists, led by Scott Noble of the Rochester Institute of Technology, created a new tool that predicts the light that an accreting black hole would produce. It works by modeling how photons hit gas particles in the disk around the black hole (also known as an accretion disk), generating light, specifically light in the X-ray spectrum, and producing signals that can be detected using ultra-powerful telescopes.
The researchers relied on powerful supercomputing resources from The Texas Advanced Computing Center (TACC) at The University of Texas at Austin to generate images of light signals from a black hole simulation. With this method and the computational power of the Ranger system (which was retired in early February), the researchers for the first time were able to explain nearly all the components seen in the X-ray spectra of stellar-mass black holes.
The ability to produce realistic light signals from a black hole simulation marks a new era for astrophysics. Based on the new techniques that were devised for this project, researchers will be able to explain numerous other observations taken with multiple X-ray satellites over the past 40 years.
It’s an exciting time for black hole researchers with each year revealing more details about their significance in shaping the cosmos.
“Nearly every good-sized galaxy has a supermassive black hole at its center,” said Julian Krolik, a professor of physics and astronomy at Johns Hopkins University. Over multi-million year periods, black holes accrete incredible amounts of gas. This equates to energy, a lot of energy. During one of these periods, a black hole can produce as much as 100 times the power output of all the stars in its host galaxy put together.
“Some of that energy can travel out into their surrounding galaxies as ionizing light or fast-moving jets of ionized gas,” Krolik added. “As a result, so much heat can be deposited in the gas orbiting around in those galaxies that it dramatically alters the way they make new stars. It’s widely thought that processes like this are largely responsible for regulating how many stars big galaxies hold.”
Ranger was a Sun Constellation system hosted by the Texas Advanced Computing Center that was operational from 2008 to 2013. In early 2015, TACC expects to welcome Wrangler, a new NSF-supported big-data driven system, which will join Stampede, one of the most powerful supercomputers in the world. | <urn:uuid:e485599a-f2a9-446d-8c68-c674bae2d278> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/03/03/supercomputers-advance-understanding-black-holes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00013-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94084 | 736 | 4.4375 | 4 |
Understanding the ins and outs of cloud computing can be a fairly complicated task. For example, deciphering the difference between local applications utilizing a cloud platform vs. Web-based Software-as-a-Service (SaaS) implementations. Both configurations have the ability to achieve similar results for the end user, but their processes are inherently different from each other. That being said, a couple recent surveys have shown that most people are downright baffled by cloud technologies.
Last week, Citrix announced the results of a cloud survey conducted by Wakefield Research. The study aimed to discover what people actually think about cloud computing. 1,006 Americans were asked some basic questions regarding cloud technologies. With 40 percent of respondents saying that working “in the buff” was the cloud’s greatest advantage, the survey displayed a situation that is both hopeless and humorous.
Of the participants, 54 percent said they hardly use or never use cloud technologies, even though 95 percent claimed to use services like Gmail, Facebook and YouTube. 22 percent admitted they pretended to know how the cloud works, and were bold enough to do so during a job interview.
29 percent believed cloud technologies had something to do with the weather and 51 percent thought that changes in weather could affect cloud computing. That last statistic has some potential to be true, like when Amazon suffered an outage due to severe storms at one of their datacenters.
Given the results, one could potentially argue that the Citrix survey was possibly flawed. Maybe the questions were too vague or the participants might have been improperly selected. But yesterday, EurActive reported on the results of a similar, but larger survey on the other side of the Atlantic. Nonprofit trade association Business Software Alliance (BSA) facilitated the study.
Out of 4,000 European computer users polled, only 24 percent said they access cloud applications. 65 percent were unfamiliar with cloud computing and some admitted they “never heard the name.”
Usage varied quite a bit between countries. On the higher end of the spectrum, respondents from Greece and Romania reported 39 percent adoption of cloud technologies. On the other hand, only 9 percent of those surveyed from Poland said they were familiar with cloud applications.
These surveys reveal that a surprising number of people are unfamiliar with cloud technologies and the benefits they provide. To combat this issue, the European Commission is set to release a cloud computing strategy for the European Union. This includes fixing regulatory concerns and promoting off-site data storage services, like Amazon’s recently announced Glacier service. Similarly, the US office of Management and Budget has issued a cloud first IT strategy. | <urn:uuid:813f6767-4610-4735-9c22-1e412ba3275f> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/09/05/the_cloud_is_over_most_people_s_heads/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00315-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95881 | 530 | 2.515625 | 3 |
NASA astronaut Don Pettit has already done some great videos for science students while working on the International Space Station. As part of his "Science off the Sphere" series, he's shown us the physics of soap bubbles in zero gravity, and what happens when a balloon pops in space.
This time, we get to see some yo-yo tricks in a zero-gravity environment. Watch what happens when a yo-yo can float around without that pesky gravitational pull getting in the way.
Cool, stuff, as always, Mr. Pettit.
Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+.
Watch some more cool videos: James Bond meets My Little Pony: Mashup gold; This 13-foot Japanese robot is packing heat; The Legend of Zelda as a Western; Friday Funnies: Batman rants against the Dark Knight haters; Did this 1993 film predict Google Glasses and iPads? | <urn:uuid:93ea576d-609c-4483-9218-bb4cffc75c51> | CC-MAIN-2017-04 | http://www.itworld.com/article/2720379/mobile/astronaut-does-yo-yo-tricks-in-zero-gravity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00223-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.893766 | 226 | 2.609375 | 3 |
Distributed Denial of Service attacks are nothing new, but they’re becoming more and more common, from politically motivated attacks on financial and government institutions to recent attacks on data centers like Digital Ocean. DDoS attacks are when hackers use hijacked computers to flood servers with incoming requests and essentially shut down services by clogging network traffic or sending mass quantities of junk data. They are increasingly difficult to defend against as they grow in scale, and because they are distributed among various infected machines, it can be difficult to block traffic based on IP address.
Public institutions, financial industries, eCommerce sites, and hosting providers are among the most popular targets, but anyone can be a victim—and if your IT infrastructure is hosted in a data center, you need that facility to provide strong DDoS mitigation to avoid service interruptions of your own.
These days, SYN or HTTP GET flood attacks are very common ways to overload firewalls or IPS systems and make the servers behind them unresponsive. Network switches and servers do not have the resources to respond to every incoming request and therefore begin to drop network packets from any incoming source. The DDoS source traffic can come from either volunteered computers (scoundrels!), a single computer masquerading as many IP addresses, or, as is most common, a botnet of hijacked computers.
A SYN flood attack uses SYN packets, the first packets sent to a server to request a connection. This is part of the standard “handshake,” and the server would normally respond with a SYN-ACK message. With a SYN flood, the connecting client does not respond with ACK, causing the server to wait for a response. SYN floods are a type of Bulk Volumetric attack.
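As a rough illustration of what a SYN flood defense has to track, the sketch below counts half-open connections per source address and flags any source that crosses a threshold. The fixed-size table, the threshold value, and the function names are arbitrary assumptions; production devices use SYN cookies and far more scalable data structures.

```c
#include <stdint.h>

#define MAX_SOURCES   1024
#define SYN_THRESHOLD 100       /* half-open connections before a source is flagged */

struct source_entry {
    uint32_t addr;              /* IPv4 source address            */
    int half_open;              /* SYNs seen without a final ACK  */
};

static struct source_entry sources[MAX_SOURCES];
static int n_sources = 0;

static struct source_entry *lookup(uint32_t addr)
{
    for (int i = 0; i < n_sources; i++)
        if (sources[i].addr == addr)
            return &sources[i];
    if (n_sources < MAX_SOURCES) {
        sources[n_sources].addr = addr;
        sources[n_sources].half_open = 0;
        return &sources[n_sources++];
    }
    return 0;                   /* table full; ignore in this toy version */
}

/* Call when a SYN arrives; returns 1 if the source looks like a flooder. */
int on_syn(uint32_t addr)
{
    struct source_entry *e = lookup(addr);
    if (!e)
        return 0;
    e->half_open++;
    return e->half_open > SYN_THRESHOLD;
}

/* Call when the handshake completes (final ACK received) for that source. */
void on_ack(uint32_t addr)
{
    struct source_entry *e = lookup(addr);
    if (e && e->half_open > 0)
        e->half_open--;
}
```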
Other Bulk Volumetric attacks include ICMP packet floods, which send “PING” commands; TCP/UDP floods, which send traffic to open network ports like TCP 81; fragment floods, which send fragmented packets; anomalous packet floods, which send error scripts within network packets; and DNS amplification, which uses the DNS EDNS0 protocol to amplify the attack. In this last example, the attacker sends DNS lookups to public Domain Name Service servers while pretending to be the target server, so the DNS servers reply to the target.
HTTP GET is an Application Layer attack, which is smaller and more targeted, going after Layer 7 of the OSI model, the top layer of network traffic, rather than the Layer 3 network traffic targeted in Bulk Volumetric attacks. HTTP GET exploits the process of a web browser or other HTTP client making an HTTP request, which is either GET or POST, to an application or server. Attackers must have some knowledge of their target, as they will usually request the most resource-intensive process. These attacks are hard to defend against because they use standard URL requests, rather than broken scripts or huge volumes.
ISPs have DDoS protection at Layer 3 and Layer 4 (network traffic), but that ignores the more targeted Layer 7 attacks, and total coverage is not guaranteed.
DDoS service providers exist. Usually they will reroute your incoming traffic through their own systems and “scrub” it against known attack vectors. They might scan for suspicious traffic from uncommon sources or geolocations, or reroute your legitimate traffic away from botnet sources.
Most modern firewalls and Intrusion Protection Systems (IPS) offer DDoS defense abilities as well. These can take the form of a single device scanning all incoming traffic, or distributed devices or software at the server level. Dedicated DDoS appliances are also available and may offer better protection against Layer 7 attacks.
Network scanning and traffic monitoring with alerts can also help you catch a DDoS attack early and take action to avoid total service loss.
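Even a crude monitor helps here. The fragment below, a deliberately simplified sketch with an arbitrary window and threshold, raises an alert when the packet rate in a fixed window spikes; real monitoring systems baseline normal traffic rather than relying on a hard-coded number.

```c
#include <stdio.h>
#include <time.h>

#define WINDOW_SECONDS  10
#define ALERT_THRESHOLD 500000    /* packets per window considered abnormal */

static long   window_count = 0;
static time_t window_start = 0;

/* Call once per observed packet; prints an alert when the rate spikes. */
void on_packet(time_t now)
{
    if (window_start == 0)
        window_start = now;

    if (difftime(now, window_start) >= WINDOW_SECONDS) {
        if (window_count > ALERT_THRESHOLD)
            printf("ALERT: %ld packets in %d s window\n",
                   window_count, WINDOW_SECONDS);
        window_count = 0;
        window_start = now;
    }
    window_count++;
}
```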
Once you have a DDoS protection system in place, you’ll want to test it before it comes under fire. The first step to take is to identify attack vectors and key applications. What ports are open? What bandwidth do you have available to you? Where are likely network bottlenecks? What critical systems need additional protection?
Note areas of your infrastructure that are vulnerable based on their reliance on other systems—like a central database that could take down functionality for several applications if it is overloaded.
There are a variety of open source software tools you can use to test DDoS mitigation, as well as hardware options that can reach multi-Gigabit traffic levels. However, hardware options are expensive. A professional white hat security firm may be able to offer testing as a service.
DDoS attacks are certainly an annoyance, but with some preparation, you can be ready to intercept or respond to them quickly and avoid service interruptions for your users. | <urn:uuid:36f2133d-dffa-430e-9b14-bb74bd209c5d> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/protect-yourself-as-ddos-attacks-on-data-centers-increase | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00279-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946839 | 974 | 3 | 3 |
The key principle of agile development is to develop robust software with nominal expenditure and investment. Development life cycles are usually 2 – 3 weeks long, during which a particular piece of software is developed, tested, and documented. Figure 1 shows the difference between the effort required to develop documentation in the SDLC and in Agile development.
In the Agile development environment, many companies create their own Agile methods while using some of the original ideas and principles defined in the Agile Manifesto (http://www.agilemanifesto.org/).
Figure 1*: SDLC Vs. Agile development
Terms Used in Agile Development
The basic terms associated with the Agile development environment are:
- Iteration: A period during which the software is programmed, at the end of which the Quality Assurance (QA) testers verify that the software is working as expected.
- Stand-up: Daily meetings in which the progress of the development work is shared among team members.
- Story: The business need that defines the purpose of the software to be developed for a user. Stories are usually sized so that a single iteration is sufficient for development, and they are usually written as “role can do task”, for example, “An Administrator can add a new User”.
- Task: Definitions of all subtasks for a single story. For example, for the story “An Administrator can add a new User,” one of the tasks might be “Connect the new component to an existing security component.”
- Backlog: A repository for stories that are targeted for release during iteration.
Role of a technical writer in Agile based projects
As part of the team working in the Agile development environment, you are expected to inform the team what is to be delivered at the end of an iteration. Working on iterations has a definite effect on the scope, content, and presentation of your deliverables. Daily status should be communicated to the team. Some techniques and certain deliverables are well suited for documenting products that are developed in an Agile environment:
Using a topic-oriented approach: Topic-oriented writing is a defining aspect of information mapping and Darwin Information Typing Architecture (DITA). In information mapping, complex information is broken down into basic components, and then these “chunks” of information are structured in a user-friendly way.
DITA is meant for creating individual topics that you can combine and reuse in different types of documentation and in different delivery formats. This makes sense in an Agile environment, in which “the right documentation, at the right time” is the main goal for all documentation, both end-user and internal.
Translating user stories to task-oriented topics: The Agile environment is linked to a very famous saying – “Don't tell me how it works, tell me how to use it”.
Task-oriented writing is writing in terms of how the user carries out the task. Your users are interested in practical, how-to information rather than details about the internal structure of the software.
Creating just-in-time documentation by applying minimalist principles: The minimalist design approach was originated by John Carroll and his colleagues at IBM. This approach has played a vital role in optimizing user support for learning to use software. According to this approach, avoid wordy overviews that are not task-based. Also, do not document obvious procedures, such as how to cut and paste text or print reports.
Participating as an active team member: Being an active member of the Agile team is crucial to a writer’s success. The lack of internal documentation makes full participation in the team an absolute necessity. Figure 2 shows various communication methods that can be used to gather information for effective documentation.
Figure 2**: Effectiveness of various communication modes
Like programmers, technical writers can also use Agile techniques in their writing assignments to become an integral part of delivering useful software.
* (Copyright 2006 Scott W. Ambler)
** (Copyright 2002-2005 Scott W. Ambler. Original Diagram Copyright 2002 Alistair Cockburn) | <urn:uuid:d52cb514-27dc-47d3-99b8-9b37d6cfb0a3> | CC-MAIN-2017-04 | https://www.hcltech.com/blogs/user-documentation-agile-development-environment | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00215-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923397 | 859 | 2.8125 | 3 |
Recently, the news reported that some of the Bush family members had their e-mail accounts hacked. There’s a lesson in this for all of us that use online e-mail services. What are the two ways hackers gained access to these accounts?
Password guessing: It’s sad, but true, that many people use easy to guess passwords, based on common items. If an attacker knows something about the victim, it’s possible that the attacker may be able to guess the password. A report by CBS news listed some of the top passwords of 2012: qwerty, 123456, welcome, and letmein.
Password resets: Even if a hacker cannot guess your password, he/she may be able to reset it. Many e-mail services make use of cognitive password resets. This allows users to change their passwords if they cannot remember them by answering a few simple questions. These questions might include:
- Where were you born?
- What’s your pet’s name?
- When were you married?
- What high school did you attend?
These questions might also be guessed or discovered by simply knowing something about the person.
So, how can we prevent these problems? One technique is to simply use stronger passwords. Don’t use passwords based on common facts, and avoid using the same password for all your online accounts. Use passwords of nine or more characters that consist of mixed types of characters. One way to create longer, more secure passwords is to use pass phrases, such as San_Fran_is#1_to_me. The combination of upper- and lower-case characters with numeric values makes it much harder for attackers to crack.
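One way to see why length and mixed character types matter is to estimate the brute-force search space, which grows as the character-set size raised to the password length. The sketch below is only a back-of-the-envelope calculation; the symbol count of 33 is an assumption about a typical US keyboard, and real attackers use dictionaries rather than pure brute force.

```c
#include <math.h>
#include <stdio.h>
#include <ctype.h>
#include <string.h>

/* Estimate the brute-force search space of a password as charset^length,
   where the charset size is the sum of the character classes it uses. */
double search_space(const char *password)
{
    int lower = 0, upper = 0, digit = 0, symbol = 0;

    for (size_t i = 0; i < strlen(password); i++) {
        unsigned char c = (unsigned char)password[i];
        if (islower(c))      lower = 26;
        else if (isupper(c)) upper = 26;
        else if (isdigit(c)) digit = 10;
        else                 symbol = 33;   /* rough count of printable symbols */
    }
    int charset = lower + upper + digit + symbol;
    return pow((double)charset, (double)strlen(password));
}

int main(void)
{
    printf("letmein             : %.3g guesses\n", search_space("letmein"));
    printf("San_Fran_is#1_to_me : %.3g guesses\n", search_space("San_Fran_is#1_to_me"));
    return 0;
}
```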
You might think that what you have in your e-mail or online account is not as valuable as what’s in an ex-president’s e-mail account, but the bottom line is that hackers are always looking for ways to exploit victims. One way to avoid being a target is by using strong passwords. | <urn:uuid:ad1d3bb5-ba18-4055-8a27-3b1fbdc13da1> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2013/02/27/what-can-we-learn-from-the-bush-family-e-mail-breach/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00361-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955844 | 418 | 2.890625 | 3 |
Manufacturing Breakthrough Blog
Friday August 21, 2015
In my last several postings we have been discussing the Conflict Resolution Diagram (CRD) which is one of TOC’s Thinking Process tools. As the name implies, the CRD is a tool used to resolve conflicts. In my last posting we developed our CRD as is displayed in the figure below.
I also explained that the CRD’s purpose was to articulate the key elements of a conflict and then suggest ways to resolve it, resulting in a win-win solution. The CRD uses necessity-based logic, which uses the syntax, “In order to have ‘x’ I must have ‘y’ … because ….” The CRD includes the common objective, the necessary requirements that lead to it, and the prerequisites needed to satisfy those requirements. Let’s continue our discussion of the key elements of the CRD.
To refresh your memory, the figure below is the CRD we developed in my last post.
The Conflict Resolution Diagram
One side of our conflict is stated as, “In order to have Throughput Rates High, we must have limited WIP inventory because excess WIP extends cycle times and causes late deliveries. The other side of our conflict is stated as, “In order to have Throughput Rates High, we must increase production starts because more WIP results in higher levels of production. Remember, the “because” statements are the assumptions used to justify each side’s reasoning. Let’s now look at the relationship between each side’s requirement and the corresponding prerequisite.
On one side of our conflict we state that in order to limit our WIP inventory, we must use a constraint based “pull” schedule because new starts don’t begin until something exits the constraints. On the other side of our conflict we state that in order to increase production starts, we must use a “push” system to schedule production because idle time by resources results in lower production. Obviously both a pull and a push system can’t operate simultaneously, so herein lies our conflict. That is, the two prerequisites are calling for two completely different production scheduling methods.
As with all of the Thinking Process logic trees, the presence of an arrow indicates the existence of underlying assumptions about the relationship between the entities of the CRD and it is these assumptions that are the key to solving the conflict. So the real key to solving the conflict is to invalidate the assumptions. It’s important to remember that they may never have been valid in the first place. Doing so usually involves finding a substitute for the entity in question. This substitute is referred to as an injection. In our example, I have only listed one assumption between each of the requirements and prerequisites, but there could be multiple assumptions.
Injections are specific actions to be taken that would invalidate one or more of the assumptions. Bill Dettmer refers to single injections that “kill the conflict” as “silver bullets.” Dettmer also tells us that it is unlikely that a single injection will resolve our conflict and that “thinking outside the box” is necessary. The way I would approach our example conflict is to try to understand the logic behind why one side wants a push system to schedule production. If, for example, the reasoning behind why one side wants a push system is their belief that the performance metric efficiency should be high in order to gain higher throughput, then we might be able to demonstrate why this thinking isn’t sound. So think about how we might convince this side that their thinking is flawed?
What if this side has never been exposed to the Theory of Constraints’ accounting method called Throughput Accounting (TA)? You will recall that TA believes that performance metrics like manpower efficiency and/or equipment utilization are sound metrics as long as they are limited to measuring the performance of the constraint. The key to improving throughput is to maximize the output of the constraint without creating excessive amounts of WIP in front of it, which extends the cycle times of in-process parts and results in a deterioration of throughput and on-time delivery. To quote Dettmer, “In complex conflict situations, injections are likely to be conditions you want to create, rather than actions you expect to perform. Many separate actions may be necessary to achieve these conditions.”
So think about the impact of only measuring efficiency within the constraint. The figure above is our Current Reality Tree (CRT) that we developed earlier. If we only measure efficiency within the constraint operation, operators in non-constraints will stop over-producing, which would lead to minimum levels of WIP, which in turn would lead to shorter cycle times and ultimately to significantly improved throughput. In an earlier post, I explained that we are always after solutions that create only win-win results for both sides of our conflict. By implementing this one simple change, our common objective should be achieved.
Before completing this post, I want everyone to understand that there may be other injections that can be developed to break any conflict, but for brevity sake, I wanted to keep it simple for you.
In my next posting we’ll move on to the next Thinking Process tree, the Future Reality Tree. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond.
Until next time.
The Logical Thinking Process – H. William Dettmer, Quality Press, 2007 | <urn:uuid:99b37c90-5371-4de3-b7c7-2f33281834a6> | CC-MAIN-2017-04 | http://manufacturing.ecisolutions.com/blog/posts/2015/august/the-thinking-processes-part-8.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00481-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946608 | 1,149 | 3.140625 | 3 |
Coastal flooding challenge uses cross-agency data
NASA recently announced a new challenge focusing on coastal flooding to encourage entrepreneurs, technologists, and developers to create visualizations and simulations that will help people understand their exposure to coastal-inundation hazards and other vulnerabilities.
The challenge will be included as part of the third annual International Space Apps Challenge, which will be held from April 11-13. It was developed by NASA and the National Oceanic and Atmospheric Administration, and is based on cross-agency data.
The aim of the Coastal Inundation in Your Community challenge is to create tools and provide information so communities can prepare for coastal catastrophes.
“Solutions developed through this challenge could have many potential impacts,” said NASA Chief Scientist Ellen Stofan. "This includes helping coastal businesses determine whether they are currently at risk from coastal inundation and whether they will be impacted in the future by sea level rise and coastal erosion."
Many federal data sets are now available that illustrate the hazards of coastal inundation. As part of the Climate Data Initiative, the government has gathered data sets related to coastal vulnerability and the impact of future climate changes on flooding. The data sets will be available on climate.data.gov.
The data comes from NOAA, NASA, the Federal Emergency Management Administration, the U.S. Geological Survey, the Environmental Protection Agency, the Army Corps of Engineers, the departments of Commerce and Defense as well as from New York and New Jersey.
The purpose of the larger International Space Apps Challenge is to contribute to space exploration missions and improve life on earth. Participants introduce these solutions by developing mobile apps, software, hardware, data visualization and platform solutions. They will have access to over 200 data sources, including data sets, data services and tools.
The challenge will be hosted at 100 locations over six different continents.
Posted by Mike Cipriano on Apr 10, 2014 at 8:40 AM | <urn:uuid:2951147d-35c9-45cd-8256-acf0225ed4b7> | CC-MAIN-2017-04 | https://gcn.com/blogs/pulse/2014/04/nasa-coastal-flooding-challenge.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00389-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930545 | 395 | 2.78125 | 3 |
Defining integer overflows and how hackers take advantage
An integer overflow is software behavior caused by an arithmetic operation whose numerical result is too large to store within the bit width of the system. Most machines today are either 32-bit or 64-bit. This restricts the number of bits available to store the output of an arithmetic operation to 32 or 64 bits, respectively.
Correspondingly, when an arithmetic operation produces a result that is too large to store within the bit width of the system, the result is truncated at the bit width, leading to an unexpected resultant value. This overflowed value could be used, regardless, for a critical operation such as array indexing, memory allocation or memory dereferencing.
Such behavior can not only cause crashes in the software, but also make the software vulnerable to security exploits that deliberately trigger integer overflows to access or corrupt privileged memory in the system. The sample code below demonstrates a potential overflow in the add operation between two unsigned 32-bit values, if their sum were greater than UINT_MAX (2^32 - 1, or 0xFFFFFFFF).
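The code listing referred to here appears to have been lost in extraction. Based on the description that follows (user-supplied values, with the result printed on line 5), a reconstruction might look like the following; the scanf calls and the 32-bit unsigned int type are assumptions, not the author's original listing.

```c
/* 1 */ unsigned int a, b, x;
/* 2 */ scanf("%u", &a);
/* 3 */ scanf("%u", &b);
/* 4 */ x = a + b;            /* the sum can wrap past UINT_MAX            */
/* 5 */ printf("%u\n", x);    /* line 5: prints the possibly wrapped value */
```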
In the above example, if a and b were both equal to 2^31 + 1, the resulting value of x, 2^32 + 2, would overflow 32 bits, thereby making the value of x = 2, which is (2^32 + 2) truncated to 32 bits! On line 5, this overflowed value of x is printed onto standard output. The seen result is erroneous compared to the programmer's intent of having x contain the sum of a and b. However, this overflow is benign in that it does not make this program vulnerable to attack. This is not always the case. Consider the code fragment below:
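The second listing also appears to be missing. A reconstruction consistent with the references to malloc on line 6 and the p[a] access on line 7 might look like this; again, the input handling is an assumption rather than the original code.

```c
/* 1 */ unsigned int a, b, x;
/* 2 */ char *p;
/* 3 */ scanf("%u", &a);
/* 4 */ scanf("%u", &b);
/* 5 */ x = a + b;            /* may wrap, exactly as before                */
/* 6 */ p = malloc(x);        /* line 6: allocates x bytes, not a + b bytes */
/* 7 */ p[a] = 0;             /* line 7: can write far outside the block    */
```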
In the example above, x can still contain the overflowed value from a + b. If a and b were both 2^31 + 1, then x would be 2. If the overflowed value x were then used as the size argument to malloc, only x bytes (which is NOT equal to a + b bytes) are allocated. This creates a critical mismatch between the programmer's expectation of having allocated a + b bytes (2^32 + 2 in our example) and the system's actions of having allocated x bytes (2 in our example).
Thus, on line 7, the access p[a] (p[2^31 + 1] in our case) can access unallocated and even privileged memory locations. In particular, a malicious user might engineer the values of a and b (which are read from the user) to exploit the integer overflow and the following accesses to read or even corrupt privileged memory locations.
Also, in the example above, if the malicious user determines the address of a 2 byte memory allocation (call it L), and subsequently determines that memory representing a critical security privilege is at an offset of 40 bytes from L, the user can choose the values of a and b to be 40 and 2^32 - 38, respectively. The resultant x overflows and contains the value 2, causing a 2 byte allocation (L) on line 6. On line 7, p[a] overwrites the memory location offset at 40 bytes from L.
Such overwrites of arbitrary memory locations exploiting the integer overflow vulnerability are particularly dangerous in security-critical applications that often run with superuser privileges, due to which security-critical memory locations are within the address space of the application. In a common instance of the integer overflow vulnerability in real-world software, the attacker can overwrite the address to which the code needs to jump with the address of arbitrary code, thereby making the software execute arbitrary code. | <urn:uuid:76423608-956f-4dd7-8765-acbb8dd83fbd> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Application-Development/How-to-Defend-Against-Deadly-Integer-Overflow-Attacks/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00297-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912022 | 746 | 4 | 4 |
It always amazes me how some people create a tool that makes things easier for people and then other BAD people come along and take advantage of that tool to benefit themselves. Browser cookies were invented to make surfing simple HTML Web sites easier. A cookie can store any personal information you’ve given to a site, so you don’t have to enter it again when you click a different page on that site. Nosy webmasters have invented methods to steal your private information using cookies. New cookie types that aren’t easily deleted have emerged. MAXA Cookie Manager Pro 5.0 ($35, direct, for two licenses) identifies and manages all types of cookies including the self-restoring “evercookie”. It protects your privacy and security, though the implementation is a little sloppy.
Cookies can also store preferences and other information that you’ve entered on a site, so you don’t have to enter that data again. The cookie itself is a simple text file that’s stored on your computer and that, in theory, is only accessed when you revisit the corresponding Website. For example, the site can identify what browser you’re using, tell what page you linked from, and even get a rough idea of your physical location. Combining this data with any information you’ve actively shared, a site can find out quite a bit about you.
Given the possibility of inadvertently revealing private information, some users may be tempted to disable cookies entirely. Unfortunately, many perfectly valid Web sites just won’t work without cookies. Even when standard cookie handling is disabled, Web sites can utilize non-standard technology or browser-independent cookies. One researcher has created what he calls the “evercookie,” which stores data in multiple local repositories and uses this redundant storage to rebuild any deleted components. In the modern world, you can’t thoroughly control cookies using browser settings and manual deletion. MAXA supports popular browsers including Internet Explorer, Firefox, Opera, Safari, and Chrome. On installation it ensures that the supported browsers are configured correctly for cookie management. It also checks settings for Flash, Silverlight, and Skype, all of which include cookie-like technologies.
During installation, the product lists several dozen popular Web sites and invites you to check off any that you use regularly. Checked sites are whitelisted automatically, meaning the product never meddles with their cookies. Naturally, you can edit or add to the whitelist at any time. After installation, MAXA scans the computer for cookies of all kinds. When I ran it on the system I use for e-mail and editing, it turned up over 3,600 cookies. Most were ordinary browser cookies, but it found several examples of advanced-technology cookies specific to Internet Explorer and Firefox. It also found a few Silverlight-based cookies and a slew of Flash-based ones. | <urn:uuid:be263455-4995-4fb1-b274-b53112b3799b> | CC-MAIN-2017-04 | http://www.bvainc.com/maxa-cookie-manager-pro-5-0/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00205-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918383 | 593 | 2.625 | 3 |
Data protection authorities are waiting for a crisis before acting to protect personal information exposed in the growing number of mash-up applications appearing on the web.
The privacy implications of new technologies such as mash-ups are too complex and unpredictable for data protection working groups to address, said Moulinos Konstantines, a Greek data protection official seconded to the European Network and Information Security Agency (Enisa).
Petros Belimpasakas, a principal researcher at Nokia's Tampere research centre, said people care about threats to their privacy, despite their willingness to disclose information on social networking sites. "Even small bits of information, such as a birthday, can be revealing because they are often part of a social security number."
He said the number of US firms that had sacked staff because of what had appeared on Facebook had doubled in a year from 4% to 8%.
Belimpasakas quoted a Cambridge University study of social network sites' privacy policies and peoples' use of them. "Most collect more data at sign-up than they need to manage the site, but almost none tell you what they do with the information," he said.
But even talking about privacy policies put off potential users, the study found. As a result, social networking sites limit overt discussion of privacy issues.
This has disturbing implications with advanced applications like mash-ups, which combine data from multiple sources.
One example was where information about a user's friends on Facebook was added to a Google map which then displayed the cheapest flight and nearest hotel to each friend.
Belimpasakas called for all mash-ups to use the OAuth protocol, which protects third-party disclosures. He called for similar protection for information disclosed via geo-location applications.
It is increasingly possible to use a mobile phone's camera to send a picture of a building to a central database which then reveals what facilities it has, such as restaurants, art galleries or even the location of one's friends in the building, he said. This means a vast increase in the amount of data traffic transmitted and revealed to third parties, such as the mobile network operator.
This is a privacy problem in itself, but there are more, Belimpasakas said. Access devices are closely associated with the mobile account holder. If they are lost or stolen, there is a track to the account holder. In addition, all the account holder's data accessible via the phone is vulnerable to theft and misuse.
"We are starting to see a multi-dimensional privacy problem that goes beyond the web and the PC," he said. | <urn:uuid:11cd3827-1341-4673-afbe-21bd083bb6ec> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280090768/Web-mash-ups-infringe-on-data-privacy | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00234-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947927 | 554 | 2.578125 | 3 |
Sick of spending all your money on your cell-phone bill? The Energy Department plans to boost energy savings for laptop and phone chargers, and it's betting the new standards will save you money.
The department will strengthen energy-efficiency standards for external power supplies, products that take the form of cell-phone and laptop chargers as well as power cords for a host of other electronic devices, including video-game consoles, in homes across the country, the administration announced Monday.
DOE estimates that the standards will put close to $4 billion back in the pockets of American consumers, while lowering carbon emissions by 47 million metric tons over the next 30 years. The EPS standards will build on 2007 standards that are designed to boost the efficiency of the devices by nearly a third.
The department is no stranger to energy-efficiency. It recently finalized energy-efficiency standards for metal halide lamps, a type of light fixture commonly used in parking lots and big-box stores. It has also set out energy-savings standards for household and commercial appliances.
DOE emphasized that the external power supply standards fall under the heading of the president's larger climate-change agenda.
"Building on President Obama's State of the Union address, which called for reducing carbon pollution and helping communities move to greater energy efficiency, the Energy Department today announced new efficiency standards for external power supplies," a press release stated.
These latest efficiency standards come as the administration faces heightened scrutiny from environmental advocates looking to judge the president's commitment to acting on climate change in the lead up to a final decision on the Keystone XL oil sands pipeline.
Debate over the pipeline intensified last week, when the State Department released a report concluding that approving the project would not significantly speed oil sands development in Canada, a finding that environmental groups have contested. | <urn:uuid:5db0a684-9dfd-40ed-b8fc-b06ad56b1eb4> | CC-MAIN-2017-04 | http://www.nextgov.com/mobile/2014/02/energy-department-wants-make-your-phone-charger-more-efficient/78080/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00170-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944553 | 367 | 2.703125 | 3 |
Optical WDM networks are networks that deploy optical wdm fiber links where each fiber link carries multiple wavelength channels.
An All Optical Network (AON) is definitely an optical wdm network which supplies end-to-end optical paths by using all optical nodes that allow optical signal in which to stay optical domain without conversion to electrical signal. AONs are often optical circuit-switched networks where circuits are switched by intermediate nodes in the granularity of the wavelength channel. Hence a circuit-switched AON can also be called a wavelength routing network where optical circuits are equal to wavelength channels.
A wavelength routing network includes optical cross-connect (OXC) and optical add-drop multiplexer (OADM) interconnected by WDM fibers. Transmission of information over this optical network is performed using optical circuit-switching connections, referred to as lightpaths. An OXC is definitely an N * N optical switch with N input fibers and N output fibers with every fiber carries wavelengths. The OXC can optically switch all the incoming wavelengths of its input fibers to the outgoing wavelengths of its output fibers. An OADM can terminate the signals on a quantity of wavelengths and inserts new signals in to these wavelengths. The rest of the wavelengths pass through the OADM transparently.
For a user to deliver data to some destination user, a circuit-switching connection is made by using a wavelength on each hop along the connection path. This unidirectional optical path is known as lightpath and also the node in between each hop is either an OXC or an OADM. These units are utilized within the 100G DWDM networks. A separate lightpath has to be established using different fibers to setup transmission within the opposite direction. To fulfill the wavelength continuity constraint, the same wavelength can be used on every hop along the lightpath. If a lightpath is blocked since the required wavelength is unavailable, a converter in an OXC can transform the optical signal transmitted in one wavelength to another wavelength.
Because the bandwidth of a wavelength is usually much larger than that requires by a single client, traffic glooming is used to allow the bandwidth of the lightpath to be shared by many people clients. The bandwidth of the lightpath is split into subrate units; clients can request one or more subrate units to carry traffic streams at lower rates. For instance, information is transmitted over an optical network using SONET (Synchronous Optical Network) framing with a transmission rate of OC-48 (2.488 Gbps). A lightpath is established from OXC1 to OXC3 through OXC2 using wavelength w, the subrate unit available on this lightpath is OC-3 (155 Mbps). A user on OXC1 can request any integer number of OC-3 subrate units up to a total of 16 to transmit data to another user on OXC3. A network operator can use traffic-groomed lightpaths to provide subrate transport services to the users with the addition of an online network towards the fiber optic network.
Information on a lightpath is typically transmitted using SONET framing. In the future, the data transmitted over optical network uses the brand new ITU-T G.709 standard, referred to as digital wrapper. In ITU-T, an optical network is referred to as the optical transport network (OTN). Listed here are some of the options that come with G.709 standard: 1) The conventional permits transmission of various kinds of traffic: IP packets and gigabit Ethernet frames using Generic Framing Procedure (GFP), ATM cells and SONET/SDH synchronous data. 2) It supports three bit rate granularities: 2.488 Gbps, 9.95 Gbps and 39.81 Gbps. 3) It offers capabilities to monitor an association on an end-to-end basis over several carriers, in addition to over a single carrier. 4) G.709 uses Forward Error Correction (FEC) to detect and correct bit errors brought on by physical impairments in the transmission links.
Lightpath may either be static or dynamic. Static lightpaths are in place using network management procedures and may remain up for a long time. Virtual Private Networks (VPN) could be set up using static lightpaths. Dynamic lightpaths are established instantly using signaling protocols, such as GMPLS (Generalized Multi-Protocol Label Switching) and UNI (User Network Interface) proposed by OIF (Optical Internetworking Forum). GMPLS is definitely an extension of MPLS and is built to apply MPLS label switching techniques to Time Division Multiplexing (TDM) networks and wavelength routing networks, in addition to packet switching networks. The OIF UNI specifies signaling procedures for clients to automatically create, delete and query an association over wavelength routing network. The UNI signaling is implemented by extending the label distribution protocols, LDP and RSVP-TE. | <urn:uuid:6341b887-0505-4011-9cfd-17f370628161> | CC-MAIN-2017-04 | http://www.fs.com/blog/optical-wdm-in-fiber-optic-network.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00106-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903621 | 1,023 | 3.4375 | 3 |
The idea behind clouds and public Infrastructure as a Service (IaaS) is certainly not new. In fact, Amazon Elastic Compute Cloud (Amazon EC2) will be six years old this year. What has changed is the focus on IaaS as a means of private cloud computing to satisfy enterprise computing with sensitive data. Private cloud computing applies the IaaS idea to private infrastructure. Although doing so lacks the economic advantages of public clouds (pay-as-you-go services), it exploits the core principles of cloud computing, with a scalable and elastic infrastructure within a corporate data center.
Let's begin with a quick introduction to IaaS and its architectures, and then jump into the leading open source solution: OpenStack.
IaaS and cloud architectures
Cloud computing architectures tend to focus on a common set of resources that are virtualized and exposed to a user on an on-demand basis. These resources include compute resources of varying capability, persistent storage resources, and configurable networking resources to tie them together in addition to conditionally exposing these resources to the Internet.
The architecture of an IaaS implementation (see Figure 1) follows this model, with the addition of other elements such as metering (to account for usage for billing purposes). The physical infrastructure is abstracted from the application and user through a virtualization layer implemented by a variety of technologies, including hypervisors (for platform virtualization), virtual networks, and storage.
Figure 1. High-level view of IaaS
Although OpenStack is the most popular open source cloud solution available today, it certainly wasn't the first. In fact, OpenStack is a combination of two older solutions developed in both the public and private sectors.
An earlier open source IaaS solution, Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems (Eucalyptus), was developed as a research project at the University of California, Santa Barbara. Other solutions include OpenNebula (an open source cloud computing toolkit) and Nimbus (another open source toolkit for IaaS clouds). OpenStack integrated pieces of the U.S. National Aeronautics and Space Administration's (NASA) Nebula platform and the Rackspace Cloud Files project (cloud storage).
Cloud computing's newcomer: OpenStack
OpenStack is a relative newcomer to the IaaS space, its first release having come in late 2010. Despite the solution's presumed lack of maturity and given that it has been around for less than two years, OpenStack is now one of the most widely used cloud stacks. Rather than being a single solution, however, OpenStack is a growing suite of open source solutions (including core and newly incubated projects) that together form a powerful and mature IaaS stack.
As shown in Figure 2, OpenStack is built from a core of technologies (more than what is shown here, but these represent the key aspects). On the left side is the Horizon dashboard, which exposes a user interface for managing OpenStack services for both users and administrators. Nova provides a scalable compute platform, supporting the provisioning and management of large numbers of servers and virtual machines (VMs; in a hypervisor-agnostic manner). Swift implements a massively scalable object storage system with internal redundancy. At the bottom are Quantum and Melange, which implement network connectivity as a service. Finally, the Glance project implements a repository for virtual disk images (image as a service).
Figure 2. Core and additional components of an OpenStack solution
As shown in Figure 2, OpenStack is a collection of projects that as a whole provide a complete IaaS solution. Table 1 illustrates these projects with their contributing aspects.
Table 1. OpenStack projects and components
|Horizon||Dashboard||User and admin dashboard|
|Nova||Compute/block device||Virtual servers and volumes|
|Glance||Image service||VM disk images|
|Swift||Storage as a Service||Object storage|
|Quantum/Melange||Networks||Secure virtual networks|
Other important aspects include Keystone, which implements an identity service that is crucial for enterprise private clouds (to manage access to compute servers, images in Glance, and the Swift object store).
OpenStack is represented by three core open source projects (as shown in Figure 2): Nova (compute), Swift (object storage), and Glance (VM repository). Nova, or OpenStack Compute, provides the management of VM instances across a network of servers. Its application programming interfaces (APIs) provide compute orchestration for an approach that attempts to be agnostic not only of physical hardware but also of hypervisors. Note that Nova provides not only an OpenStack API for management but an Amazon EC2-compatible API for those comfortable with that interface. Nova supports proprietary hypervisors for organizations that use them, but more importantly, it supports hypervisors like Xen and Kernel Virtual Machine (KVM) as well as operating system virtualization such as Linux® Containers. For development purposes, you can also use emulation solutions like QEMU.
Swift, or OpenStack Object Storage, is a project that provides scalable and redundant storage clusters using standard servers with commodity hard disks. Swift does not represent a file system but instead implements a more traditional object storage system for long-term storage of primarily static data (one key usage model is static VM images). Swift has no centralized controller, which improves and overall scalability. It internally manages replication (without redundant array of independent disks) across the cluster to improve reliability.
Glance, or OpenStack Image Service, provides a repository for virtual disk images that Nova can use (with the option of being stored within Swift). Glance provides an API for the registration of disk images in addition to their discovery and delivery through a simple Representational State Transfer (REST) interface. Glance is largely agnostic of the virtual disk image format, supporting a large variety of standards, including VDI (VirtualBox), VHD (Microsoft® Hyper-V®), QCOW2 (QEMU/KVM), VMDK/OVF (VMware), and raw. Glance also provides disk image checksums for integrity, version control (and other metadata), as well as virtual disk verification and audit/debug logs.
The core OpenStack projects (Nova, Swift, and Glance) were developed in Python and are all available under the Apache License.
With a large number of independent projects that must be installed and configured to work in concert with one another, installing OpenStack can be a time-consuming task (see Resources for more information on complete installations). But there are other options that can greatly simplify getting OpenStack up and running for the curious reader.
Anyone who's read some of my prior articles knows that I'm a fan of VM images for simplified use of Linux-based software. VMs allow you to easily create a new instance to try out or demonstrate software. The VM is a self-contained Linux instance (sometimes called a virtual appliance) that you can pre-install with the necessary software and preconfigure for your use. Provisioning software in this way greatly simplifies its use, allowing you to experiment with software that would otherwise be difficult or time-consuming to acquire. Check out Resources for installation options that fit your particular hardware and base operating system needs.
For this demonstration, I decided to go with the latest Ubuntu release (12.04) and OpenStack's Essex release. Essex is available as an ISO using the uksysadmin's installation procedure (see Resources). After a clean installation of OpenStack Essex on Ubuntu Precise, an external web browser should be able to view the OpenStack dashboard. Figure 3 shows the System Panel Images tab with the guest VM image in two container formats.
Figure 3. OpenStack Dashboard view of the available guest images
The image is used to create a demo instance, which, as Figure 4 shows, has been started. This instance is now available for use.
Figure 4. OpenStack Dashboard view of the compute instances
With a compute image now running in OpenStack, I can access it using its IP address (172.16.1.1) through a simple Secure Shell (SSH) session (see Listing 1, user input shown in bold).
Listing 1. Accessing the OpenStack compute instance via SSH
$ ssh -i Downloads/demo.pem email@example.com The authenticity of host '172.16.1.1 (172.16.1.1)' can't be established. RSA key fingerprint is df:0e:d0:32:f8:6d:74:49:ea:60:99:82:f1:07:5d:3b. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '172.16.1.1' (RSA) to the list of known hosts. Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-virtual x86_64) * Documentation: https://help.ubuntu.com/ System information disabled due to load higher than 1.0 0 packages can be updated. 0 updates are security updates. Get cloud support with Ubuntu Advantage Cloud Guest http://www.ubuntu.com/business/services/cloud The programs included with the Ubuntu system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright. Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law. To run a command as administrator (user "root"), use "sudo <command>". See "man sudo_root" for details. ubuntu@demo1:~$ ubuntu@demo1:~$ hostname demo1 ubuntu@demo1:~$ ps PID TTY TIME CMD 835 pts/0 00:00:06 bash 948 pts/0 00:00:00 ps ubuntu@demo1:~$
With all of these layers running, it can be difficult to visualize what's happening. Figure 5 illustrates the entire stack and hopefully helps demystify it. In this demonstration, a Mac running Mac OS X provides the base platform. VirtualBox runs on Mac OS X, providing the platform for the execution of OpenStack (running on Ubuntu Linux). Note that VirtualBox is a type-2 hypervisor. Within the OpenStack Linux layer, QEMU is used as the guest hypervisor, which is ideal from a commodity hardware perspective but lacks the performance needed in true production settings.
Figure 5. OpenStack demonstration stack running on commodity hardware
Without support for nested virtualization (efficiently running a hypervisor on top of another hypervisor), I rely on QEMU for my guest hypervisor running in OpenStack. This allows me to run a guest VM on a guest hypervisor, running on a type-2 hypervisor. Although this setup can be slow, it fully demonstrates an IaaS stack running on a commodity computer system. Note that certain AMD processors provide an efficient way to support nested virtualization today.
Although using QEMU is not ideal from a performance perspective, it is largely compatible with KVM (Linux as a hypervisor), and therefore it is simple to migrate between the two hypervisors (in addition to the VM images being compatible between the two). What makes QEMU ideal in this case is that it can be executed on hardware that provides no virtualization support. Note that my platform in this example is virtualization capable, but because I'm running on VirtualBox (a hypervisor in its own right), the lack of nested virtualization forces me to use a guest hypervisor that has no reliance on virtualization extensions. In either case, I use libvirt to manage the VMs (starting, stopping, monitoring, and so on), so migrating to KVM on virtualization-capable hardware is as simple as a two-line modification in an OpenStack configuration file.
Other ways to use OpenStack
If you lack a cluster of your own, there are other options for enjoying the benefits of OpenStack. Rackspace, one of the creators of OpenStack, is offering what it hopes will be the Linux of the cloud. Rackspace's OpenStack cloud platform provides the benefits of OpenStack with the flexibility and scalability of public cloud infrastructure.
To simplify OpenStack installation for private clouds, numerous companies have focused on making it easy to use OpenStack within your private cluster. Companies like Piston Cloud Computing offer the Piston Enterprise OS, a private cloud operating system based on OpenStack. Mirantis provides professional services to enterprises to build out an OpenStack infrastructure.
What's next for OpenStack?
OpenStack continues to integrate new functionality, raising the bar on the definition of an IaaS solution. Numerous other projects under the OpenStack umbrella are available, still others are in the incubation process. The Keystone project provides an identity service that unifies authentication across OpenStack components while integrating with existing authentication systems. Community projects also exist for load balancing as a service (Atlas-LB); a cloud installation and maintenance system (Crowbar); a cloud-provisionable and scalable relational database (RedDwarf); a REST-based API for cloud orchestration (Heat); and a cloud management tool covering monitoring, billing, and more (Clanavi). Numerous other projects are in development within and outside of the OpenStack project, and this list grows each day as OpenStack builds on its momentum.
OpenStack is not without competition, as older projects continue to evolve and new projects appear. For example, CloudStack (first released in 2009) has several production installations but lacks the level of open source contributor support that can be found with OpenStack.
Similar to the way Linux has evolved into the all-purpose operating system that fits all usage models, OpenStack is driving toward representing the operating system for the cloud. Instead of managing a limited set of cores and local resources, OpenStack manages a massive network of servers containing compute and storage resources along with the virtual network glue that ties them all together.
Since its first release in late 2010 (Austin), the OpenStack project has released four more versions, the last in April 2012 (Essex). With each release, OpenStack continues to drive new and improved functionality, raising the bar on other IaaS solutions. Now under the Apache umbrella, it's no surprise that OpenStack is the standard in cloud stacks.
- The OpenStack official website is the unique source for information on the OpenStack family of projects, news on community projects, documentation, and everything else OpenStack.
- Cloud computing with Linux (M. Tim Jones, developerWorks, February 2009) is an introduction to cloud computing and its various themes (IaaS, Platform as a Service, Software as a Service), with an angle toward Linux-based options.
- Anatomy of an open source cloud (M. Tim Jones, developerWorks, March 2010) introduces cloud computing anatomy from the perspective of open source. This article introduces node architecture, cluster architecture, and the open source technologies that implement these requirements.
- Anatomy of a cloud storage infrastructure (M. Tim Jones, developerWorks, November 2010) explores the internals of a cloud storage infrastructure, including general architecture, manageability, performance, scalability, and availability. The article also explores cloud storage models, from private to public and hybrid.
- OpenStack isn't a single project but an umbrella over a variety of projects that collectively implement a scalable and reliable cloud. Core projects in OpenStack include Nova, Swift, and Glance. Two projects currently in the incubator (soon to be core projects) include Keystone and Horizon. Finally, there are several community projects that extend or add functionality to OpenStack, including Quantum, Melange, Atlas-LB, Crowbar, Heat, and Clanavi.
- Installing OpenStack Walk-through provides a complete introduction to installing OpenStack for production uses.
- Several options exist for using OpenStack in the context of a VM (over the various OpenStack releases, including the bleeding edge). Have a look at DevStack (from Rackspace Cloud Builders), OpenStack's Running OpenStack Compute (Nova) in a Virtual Environment, and the System Administration and Architecture Blog's Screencast Video of an Install of OpenStack Essex on Ubuntu 12.04 under VirtualBox (I used this example for the demonstration).
- If you need professional help with an OpenStack private cloud, several companies can provide this support. Two such companies include Piston Cloud Computing and Mirantis.
- CloudStack is a competitive stack to OpenStack. It has several of production installations.
- In the developerWorks cloud developer resources, discover and share knowledge and experience of application and services developers building their projects for cloud deployment.
- Follow developerWorks on Twitter. You can also follow this author on Twitter at M. Tim Jones.
- Watch developerWorks demos ranging from product installation and setup demos for beginners to advanced functionality for experienced developers.
Get products and technologies
- Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement service-oriented architecture efficiently.
- Get involved in the developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.
Dig deeper into Cloud computing on developerWorks
Exclusive tools to build your next great app. Learn more.
Crazy about Cloud? Sign up for our monthly newsletter and the latest cloud news.
Deploy public cloud instances in as few as 5 minutes. Try the SoftLayer public cloud instance for one month. | <urn:uuid:af7fc3b2-a5a1-4277-9202-3bed69ee3e0d> | CC-MAIN-2017-04 | https://www.ibm.com/developerworks/cloud/library/cl-openstack-cloud/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00106-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.896268 | 3,686 | 2.9375 | 3 |
The VoIP Peering Puzzle�Part 24: Session Border Controller Functions, #2
In our previous tutorial, we looked at the first two Session Border Controller (SBC) functions that the IETF's Session Initiation Proposal Investigation ("sipping") working group documented in its recently issued Internet draft document titled Requirements from SIP (Session Initiation Protocol) Session Border Control Deployments (see http://www.ietf.org/internet-drafts/draft-ietf-sipping-sbc-funcs-01.txt). (Remember that Internet draft documents are considered "work in progress," and are therefore subject to change.) These functions were Topology Hiding, which is designed to maintain the proprietary of the private network topology; and Media Traffic Shaping, which modifies the session description messages, thus changing the characteristics of the network traffic.
In this tutorial we will examine two additional functions: Fixing Capability Mismatches and Maintaining SIP-related NAT Bindings.
Fixing Capability Mismatches
Conspiracy theorists might claim that vendors tweak network standards ever so slightly, and build these minor differences into their products just so those products will not interoperate with those of their competition. It's not clear if product developers are this overt, but nevertheless, product differences do arise, and typically it is the network manager that is left to figure out why a device or system from Vendor A will not communicate with its peer from Vendor B.
In the case of SIP, the base standards have been developed by the IETF and documented in RFC 3261 (see http://www.ietf.org/rfc/rfc3261.txt?number=3261), but there are two other standards organizations that are also embracing this technology. The first is the 3rd Generation Partnership Project (3GPP), which incorporates SIP into the IP Multimedia Subsystem (IMS) specification (see http://www.3gpp.org/ftp/Specs/archive/23_series/23.228/23228-760.zip). The second is represented by the PacketCable standards, developed by Cable Labs, a consortium of cable system operators (see http://www.packetcable.com/).
Other examples of potential incompatibilities would be networks that are running IP version 6 or IPv6 (in addition to the more common IP version 4), which have very different protocol and address formats, or differences in the types of codecs that are supported by each network.
Network operators have a big incentive to fix these capability mismatches so that their communication capabilities can extend past their own networks. To correct these mismatches, the SBC must intercept the SIP INVITE message as it is being passed from the outer network to the inner network. The SBC then examines the session description line contained in that SIP INVITE message, and rewrites the part of that descriptor that is not compatible. Because this process operates on the media being transferred between networks, it is often called media bridging.
Maintaining SIP-related NAT Bindings
In many cases, the solution to one networking challenge turns into a problem for another networking technology, and such is the case with Network Address Translation, or NAT, which is defined in RFC 3022 (see http://www.ietf.org/rfc/rfc3022.txt?number=3022). NAT was originally devised in the mid-1990s as a way to deal with the impending shortage of IPv4 addresses, as the world awaited the deployment of IPv6-based internetworks.
In brief, a NAT device sits at the edge of a network, representing one or a block of IPv4 addresses to the public world (i.e., the Internet) and another block of addresses (typically with the format 10.x.y.z) that identify a private network. As IP traffic transverses the NAT device, the addresses are mapped on the fly from the public to private format, or from the private to public format, depending upon the direction of communication, with that mapping process being transparent to the end users.
A second process, known as Network Address Port Translation (NAPT), is a method in which both the IP addresses and the Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) port addresses are translated as well. Because addresses are frequently considered to be proprietary to the private network, the NAT function is often incorporated into other security devices, such as a firewall.
But herein lies the rubsome upper layer protocols, specifically the File Transfer Protocol (FTP) and the SIP embed IP addresses within their upper layer protocol messages, with the intention of using this address information for part of the message processing. In the case of SIP, those addresses may identify the end point (e.g., a softphone) that is being called. And if the destination endpoint address is translated mid stream between the calling and called parties, how do you complete that call? A second challenge is that of the firewallits objective is to protect the private network from unauthorized access, which typically gleans its marching orders from the source and destination addresses or the type of traffic that is being sent.
To further illustrate this challenge, recall that a SIP call is comprised of two parts: signaling for the call setup/disconnect, and media transfer of the actual call information. If a SIP station is located behind a firewall/NAT device, an inbound SIP INVITE message will not be successful since the destination address is translated by the NAT. In addition, any inbound media streams will be blocked at the firewall, and if they were allowed to get through, the embedded addresses would identify private, not public (or routable) addresses, thus resulting in packet delivery problems.
Several solutions to this problem have been designed, all under the general rubric of NAT Transversal. One solution that is found with VoIP networks is called Simple Transversal of UDP over NATs (STUN), as defined in RFC 3489 (see http://www.ietf.org/rfc/rfc3489.txt?number=3489). Another option is an enhanced Firewall/NAT function, frequently called an application-layer gateway, which modifies the signaling messages to include public routable addresses. Vendor-proprietary solutions are also available, all designed to achieve the same end result.
The issue of maintaining SIP-related NAT bindings occurs when an SBC is performing the NAT transversal function. In this example, assume the SBC is between a SIP User Agent (on the private or access network side) and the SIP Registrar (on the public side). When the Registrar receives a REGISTER REQUEST from the User Agent, the SBC modifies the response so that the registration expires sooner than normal. This triggers the User Agent to refresh the binding at the NAT sooner than normal. This process allows connectivity (i.e., registration) across the SBC, but not for so long a time as to leave the network open to extensive security breaches.
In our next tutorial, we will conclude our examination of the functions that a SBC should perform.
Copyright Acknowledgement: © 2007 DigiNet Corporation ®, All Rights Reserved
Mark A. Miller, P.E. is President of DigiNet Corporation®, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons. | <urn:uuid:54117315-f375-4083-b9a2-38884d4d8458> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/unified_communications/The-VoIP-Peering-Puzzle151Part-24-Session-Border-Controller-Functions-2-3670766.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00500-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923648 | 1,561 | 2.890625 | 3 |
From the early viruses, created as experiments in the eighties, to the latest malicious code, one of the biggest worries for all computer users is the threat of viruses entering their systems.
To prevent viruses from entering a system there are basically just two options. The first of these is to place the computer in a protective ‘bubble’. This in practice means isolating the machine; disconnecting it from the Internet or any other network, not using any floppy disks, CD-ROMs or any other removable disks. This way you can be sure that no virus will get into your computer. You can also be sure that no information will enter the computer, unless it is typed in through the keyboard. So you may have a fantastic computer, the perfect data processing machine…but with no data to process. If you’re happy with that, your computer will be about as much use as a microwave oven.
The second option is to install an antivirus program. These are designed to give you the peace of mind that no malicious code can enter your PC. But how do they do it? How does the program let you install a game, but prevent a virus from copying itself to disk? Well, this is how it works….
An antivirus program is no more than a system for analyzing information and then, if it finds that something is infected, it disinfects it. The information is analyzed (or scanned) in different ways depending on where it comes from. An antivirus will operate differently when monitoring floppy disk operations than when monitoring e-mail traffic or movements over a LAN. The principal is the same but there are subtle differences.
The information is in the ‘Source system’ and must reach the ‘Destination system’. The source system could be a floppy disk and the destination system could be the hard disk of a computer, or the origin an ISP in which a message is stored and the destination, the Windows communication system in the client machine, Winsock.
The information interpretation system varies depending on whether it is implemented in operating systems, in applications or whether special mechanisms are needed.
The interpretation mechanism must be specific to each operating system or component in which the antivirus is going to be implemented. For example, in Windows 9x, a virtual driver VxD is used, which continually monitors disk activity. In this way, every time the information on a disk or floppy disk is accessed, the antivirus will intercept the read and write calls to the disk, and scan the information to be read or saved. This operation is performed through a driver in kernel mode in Windows NT/2000/XP or an NLM which intercepts disk activity in Novell.
Antivirus products that are not specially designed for operating systems, but are implemented over other applications, have a different interpretation mechanism. For example, in an antivirus for CVP Firewalls, it is the firewall that provides the antivirus with information in order to scan it through the CVP protocol and in the antivirus for SendMail, the MilterAPI filter facilitates information interpretation.
Sometimes an interpretation mechanism is not provided by the antivirus (such as a VxD) or the application (such as the CVP). In this case, special mechanisms between the application and the antivirus must be used. In other words, resources that intercept information and pass it to the antivirus, offering complete integration in order to disinfect viruses.
Once the information has been scanned, using either method, if a threat has been detected, two operations are performed:
1. The cleaned information is returned to the interpretation mechanism, which in turn will return it to the system so that it can continue towards its final destination. This means that if an e-mail message was being received, the message will be let through to the mailbox, or if a file way being copied, the copy process will be allowed to finish.
2. A warning is sent to the user interface. This user interface can vary greatly. In an antivirus for workstations, a message can be displayed on screen, but in server solutions the alert could be sent as an e-mail message, an internal network message, an entry in an activity report or as some kind of message to the antivirus management tool.
As you can see, antivirus programs do not perform miracles, nor is it a software tool that you need to be wary of. It is a very simple security ally that offers precision and advanced technology. Consider this; when you copy a few mega bytes to the hard disk of your computer, the antivirus must look for over 65,000 viruses without affecting the normal functioning of the computer and without the user realizing.
Antivirus programs offer a high level of protection and prevent any nasty surprises. It is as simple as putting XXX dollars in a box to get peace of mind. I’m sure that now you don’t have any serious doubts…
Regardless of how the information to be scanned is obtained, the most important function of the antivirus now comes into play: the virus scan engine. This engine scans the information it has intercepted for viruses, and if viruses are detected, it disinfects them.
The information can be scanned in two ways. One method involves comparing the information received with a virus database (known as ‘virus signatures’). If the information matches any of the virus signatures, the antivirus concludes that the file is infected by a virus.
The other way of finding out if the information being scanned is dangerous, without knowing if it actually contains a virus or not, is the method known as ‘heuristic scanning’. This method involves analyzing how the information acts and comparing it with a list of dangerous activity patterns.
For example, if a file that can format a hard disk is detected, the antivirus will warn the user. Although it may be a new formatting system that the user is installing on the computer rather than a virus; the action is dangerous. Once the antivirus has sounded the alarm, it is up to the user whether the danger should be eliminated or not.
Both of these methods have their pros and cons. If only the virus signatures system is used, it is important to update it at least once a day. When you bear in mind that 15 new viruses are discovered everyday, an antivirus that is left for two or three days without being updated is a serious danger.
The heuristic system has the drawback that it can warn you about items that you know are not viruses. If you have to work with a lot of items that may be considered dangerous, you could soon tire of the alerts. Programmers in particular may prefer to disable this option.
Permanent and on demand scans
When describing antivirus programs, it is important to clearly distinguish between the two types of protection on offer. The first is permanent scans, which are more complex and essential. These scans constantly monitor the operations performed on the computer to prevent any kind of intrusion.
The other type of protection available is on demand scans. These use the same scan engine as the permanent protection and check any parts of the system whenever the user wants. These are normally used under special circumstances. For example, a user may want to perform an on demand scan when using a new floppy disk or to check information stored on the computer that hasn’t been used for a while. | <urn:uuid:a89580f7-f73b-4463-a02c-bd45070909d1> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2003/05/07/how-an-antivirus-program-works/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00408-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92742 | 1,511 | 3.59375 | 4 |
It’s no secret that a main roadblock on the trail to exascale computing comes in the form of high power consumption. Throughout the history of computing, performance, and to a great extent, performance per watt, has grown exponentially in accordance to Moore’s law. However, there seems to be a growing divide between the power efficiency of smaller devices and the world’s fastest supercomputing clusters.
Jonathan Koomey of Technology Review wrote an article this week talking about the energy efficiency of computing devices. Specifically, he focused on the ratio of flops per kilowatt-hour (kWH), noting that the power required to perform a computational task is roughly halved every 18 months. Koomey points out that as processors have become increasingly power efficient, it has opened up whole new application areas, especially for battery-dependent mobile computing.
More than ever before, computers can simply do more with less. One of the most telling instances of this trend is seen in mobile and sensor-based devices. Koomey points to sensors that run on as little as 50 microwatts of energy and require no direct power source. Created by Joshua Smith from the University of Washington, these sensors gather energy from radio and television signals and relay weather information to indoor displays.
On the opposite side of the spectrum is Japan’s K Computer, which draws 12.7 megawatts of energy to deliver 10.5 petaflops. In this arena, performance is still the driver, with supercomputing users expecting roughly a 1000-fold increase in flops every decade. That’s about ten times the rate of increase that is naturally delivered by Moore’s Law. As a result, the annual cost of power for these high-end machines – about a million dollars per megawatt per year – is now a major limiting factor for new deployments.
Koomey points out that in theory the K Computer’s computational performance would be matched in the next 20 years by a device using less power than a typical toaster. Unfortunately, scientists and enterprises that need those flops today don’t have the luxury of time to wait.
While consumer-based devices are benefitting from the natural progression of energy efficiency delivered by the semiconductor industry, it’s simply not good enough to meet the needs of HPC users, especially for those at the top end. To reach exascale computing within reasonable power limits by the end of the decade, architectural innovation will be needed on top of what Moore’s Law will be able to deliver | <urn:uuid:e87a1975-2cfa-4b69-8dd2-332a3eb88d42> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/04/10/a_tale_of_two_power_envelopes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00316-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933462 | 527 | 3.40625 | 3 |
Data Loss Prevention (DLP) is typically defined as any solution and/or process that identifies confidential data, tracks that data and prevents unauthorized access to it. Data has multiple forms. There is data in motion (endpoint actions), data at rest (storage) and data in use (network traffic). Each of these makes security and integrity difficult to ensure with constantly changing regulations, large fines and evolving attack methods. In today's growing mobile world, devices and file sharing software are compounding the problem by giving employees multiple ways to put company information at risk for data loss, such as identity theft, data breaches and fraud.
Here are some excellent articles that outline DLP and give a variety of suggestions for creating and maintaining a data loss protection strategy for your company.
Defining a DLP Strategy This CS Magazine article offers guidelines for defining a DLP strategy. It presents several requirements to consider in the context of data at rest, in transit and in use. Jeffrey Brown explains the pros and cons of deploying a gateway versus an endpoint monitoring solution, which is considered a key decision point. It also recommends starting small and keeping control of the project scope since most DLP plans are said to evolve over many years.
Top Ten Tips for Preventing Data Loss
This recent article in The Social Media Monthly provides ten excellent tips to help organizations keep their data secure. The author, Bob Fine, describes how growth in the BYOD movement, data overall and computer hacking make it imperative that businesses create defense mechanisms against data loss. Automation of those mechanisms is emphasized.
The $100 Billion Problem No One Is Talking About
This article discusses the growing concern of data security and data protection cost controls. According to the Forbes author, Eric Savitz, the current trends of globalization, piracy, lack of cyber-security training and increased BYOD all put company data at increased risk. Thoughts on how to buck these trends and other strategic considerations are discussed.
Four Keys to DLP Success
This article from Information Security focuses on the Data Loss Prevention technology as well as ways to make DLP technology a success. Crystal Bedell discusses the importance of 1) companies understanding the technology and its capabilities 2) having broad support from all the data owners across the organization and 3) having the legal department provide clarity and consistency in the actions taken.
3 Steps To Protecting Your Company Against Data Breaches
This Forbes article by Eric Savits and Chris Poulin discusses ways of preparing and protecting against a data breaches. Insurance is a new option that offers coverage for potential costs related to a breach, including legal defense, forensic investigations, and crisis management. Technology that addresses the organization’s DLP needs is suggested and should include the control of a network’s information flow, protection of end systems, and data encryption.
How to Protect Your Company’s Data
In this BizJournals article, Bridget Bothello reviews a few of the statistics and stories regarding data loss and how you can begin to create a data loss protection plan. Suggested data security practices from the US Chamber of Commerce and additional data security tips are included.
Three Questions You Should Ask About Your Cyber-Security
This post published in the Harvard Business Review helps senior management focus on the critical pieces of information needed to create a DLP strategy. James Kaplan and Allen Weinberg discuss the importance of a multi-faceted approach towards data loss protection, including techniques for sharing dating with strategic alliances.
Image Credit : DellPhotos | <urn:uuid:348660bb-8ca5-4146-abdc-585054645431> | CC-MAIN-2017-04 | http://blog.contentraven.com/security/bid/239359/suggested-reading-enterprise-level-data-loss-prevention | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00344-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923539 | 708 | 2.5625 | 3 |
OverviewBy Regina Kwon | Posted 2003-05-01 Email Print
What is it? A way to manage many different data sources as though they were all in one place.: Integrating Information">
Overview: Integrating Information
In the same way that enterprise application integration reduces the number of connections you have to create to get applications
to communicate with one another, a virtual database simplifies how you share data throughout an enterprise.
1. The virtual database has it own data model ...
2. ... which acts as a map to every source of data in the enterprise, such as databases, flat files, documents and so forth.
3. The result: Applications need to deal with only one query and one language to access all the data in the company.
SOURCE: Baseline Research | <urn:uuid:18c29c3f-de43-4f4d-9eb1-b4c780a879d9> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Projects-Networks-and-Storage/Primer-The-Virtual-Database/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00068-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.908705 | 161 | 2.59375 | 3 |
A Controller is a PHP file that takes the HTTP request, and manages the execution path of the application (instance of application) at server-side. An application can have more than one controller files when required. A Controller file can define public URLs or private URLs (i.e., password protected URLs) as the situations demand. However, a Controller file cannot have both public and private URLs, and you need separate Controller files for this.
A Controller file is defined and edited by the OSF IDE, and requires the developer to specify the controller file name. The Controller file name can be prepended by path from root of application folder if it is located inside an inner directory, but never include any slash in the beginning. For example
mycontroller.phpif it is located in application root directory.
test/mycontroller.phpif it is located inside directory
testlocated in application root directory.
We have decided to use the name as mycontroller.php in the above example, but you can name anything. Please note further that you can also choose other file extension for your controller file if you are ready to implement that as a PHP script through your apache configuration or
Please note that you already get one controller when you install the OSF -
index.phpis the default controller of application as anybody will access this file when they go to. In case of public website, this controller can be made public. In case of web application, this controller can be made private, and
sign.phpbe made its Sign In controller.
$_CTRID. This will be the ctrid field value at
*od_controllertable for corresponding controller file.
load.delight.php. This act will instantiate three Delight objects:
$USER(if it is private controller)
$APP->ID), and loads data for like
$APP->VIEWPARTS). If no view part is defined for a particular instance, data arrays as obtained from BL calls, are sent back to UI by the Controller (usual in case of AJAX calls from UI).
Updated on Aug 20, 2016
The techReview is an online magazine by Batoi and publishes articles on current trends in technologies across different industry verticals and areas of research. The objective of the online magazine to provide an insight into cutting-edge technologies in their evolution from labs to market. | <urn:uuid:52f441d0-dbc8-420b-b965-87f6b6aa326a> | CC-MAIN-2017-04 | https://www.batoi.com/support/articles/article/controller-in-the-batoi-osf | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00464-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916887 | 490 | 2.59375 | 3 |
5.1.6 What is Kerberos?
Kerberos [KNT94] is an authentication service developed by the Project Athena team at MIT, based on a 1978 paper by Needham and Schroeder [NS78]. The first general use version was version 4. Version 5, which addressed certain shortfalls in version 4, was released in 1994. Kerberos uses secret-key ciphers (see Question 2.1.2) for encryption and authentication. Version 4 could only use DES (see Section 3.2). Unlike a public-key authentication system, Kerberos does not produce digital signatures (see Question 2.2.2). Instead Kerberos was designed to authenticate requests for network resources rather than to authenticate authorship of documents. Thus, Kerberos does not provide for future third-party verification of documents.
In a Kerberos system, there is a designated site on each network, called the Kerberos server, which performs centralized key management and administrative functions. The server maintains a database containing the secret keys of all users, authenticates the identities of users, and distributes session keys to users and servers who wish to authenticate one another. Kerberos requires trust in a third party (the Kerberos server). If the server is compromised, the integrity of the whole system is lost. Public-key cryptography was designed precisely to avoid the necessity to trust third parties with secrets (see Question 2.2.1). Kerberos is generally considered adequate within an administrative domain; however across domains the more robust functions and properties of public-key systems are often preferred. There has been some developmental work in incorporating public-key cryptography into Kerberos [Gan95]
For detailed information on Kerberos, read ``The Kerberos Network Authentication Service (V5)'' (J. Kohl and C. Neuman, RFC 1510) at ftp://ftp.isi.edu/in-notes/rfc1510.txt. | <urn:uuid:67d045af-f9b5-4afc-b0b9-c7a6d27c3037> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/kerberos.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00280-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900428 | 415 | 3.703125 | 4 |
Using Floodlight to Explain SDN Controllers and OpenFlow
In an SDN (Software Defined Network), the switch's role is one of packet forwarding only. The controller plane is removed from the switch and placed in a (usually) centralized device known as an SDN controller. The controller makes the routing decisions and then communicates these decisions to all the devices on the network using a communication protocol - typically OpenFlow - that allows the controller to modify the behavior of the networking devices. Each switch uses a "flow table" to switch traffic flowing from one place to another. It is these flow tables, or updates to them, that the switches receive from the network controller.
A good analogy is a pilot flying a plane. If the pilot is flying by sight, then as well as being responsible for manipulating the controls to actually fly the plane, the pilot must also make decisions about which direction to go, and what actions to take to avoid other aircraft. This is like a conventional switch with a data plane and a control plane.
An alternative way of flying is for the pilot to maintain control of the aircraft, but to hand over responsibility for the precise path they should take to a centralized air traffic controller. The air traffic controller has an overall view of all the aircraft flying around, and can make decisions about the best routes for individual aircraft to take. The air traffic controller communicates those decisions to the individual aircraft by sending control messages, which are acted upon by the pilots.
Network controllers are available as commercial products from the likes of Nicira (now owned by VMware) and Big Switch Networks, and there are also many open source controllers such as Beacon, developed at Stanford University for academic purposes, and Floodlight, a more production-oriented project that was forked from Beacon. The Floodlight controller is the core controller at the heart of Big Switch's commercial Big Network Controller platform.
Floodlight is just one of a number of libraries that are bundled together to make a working Floodlight network controller. Others include Endpoint Manager -- which tracks physical hosts, VMs, and routers; Topology Manager -- which uses LLDP and multi-cast probes to infer topology; and Path Calculator -- which computes paths between devices.
In a network controlled by a Floodlight controller, when a packet leaves a server and hits an OpenFlow enabled switch -- a physical switch or perhaps a hypervisor based vSwitch if the packet comes from a virtual machine -- the packet header is forwarded over a control VLAN to the Floodlight controller. Floodlight then does a series of database lookups to find information about the source, destination, available physical paths and tunnels between the two, examines ACLs (access control lists) to check whether the packet should be dropped, and decides if any tunnel encapsulation or other rewrites are needed.
Once this has been done the controller sends flow table entries to every switch on the path from the source to the destination so that they know how to handle the packet when it comes in. From that point on, all subsequent matching packets flow at line rate through the switches -- without the need to consult the controller -- unless some change is made to the topology of the network, a port or switch fails, or the physical location of the source or destination servers changes. In that case the network controller automatically updates the flow tables in all effected switches. This automation is the basis of the flexibility found in SDN controllers when compared to traditional switching environments.
This type of packet flow is known as a reactive packet flow, but there's another possibility too - especially in smaller networks - called a proactive packet flow. In this case flow tables are created in advance and sent to switches. In simple terms they might dictate that Application X can talk to Application Y, and Application A can talk to Application B. That means that the first time a packet from Application X is sent to Application Y, the switches already know where to send it. Updates to these flow tables may still be made by the network controller from time to time as changes occur to the infrastructure.
The Floodlight controller is available under the Apache license and supports a range of OpenFlow virtual and physical switches, routers and access points that support OpenFlow.
Instructions for downloading, building and running Floodlight from source, and Floodlight virtual appliances, are available from here. | <urn:uuid:2adaa955-e723-4e92-89e3-c70ba2057296> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/datacenter/using-floodlight-to-explain-sdn-controllers-and-openflow.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00308-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931788 | 879 | 3.328125 | 3 |
(Translated from the original Italian)
Cyber warfare scenario is rapidly changing, governments all around the world are investing to increase their cyber capabilities and designing new tools to adopt in cyberspace to face with opponents in what is considered the fifth domain of warfare.
The warfare scenario is deeply changed, new actors fight in the cyberspace an asymmetric war with rules of engagement completely distorted, the intent is to destroy and interfere with enemy systems in critical infrastructures.
Principal defense companies are developing new solutions to propose to governments and intelligence agencies, recently Boeing firm has successfully tested a new generation of missile which is able to attack the computer systems of a country without causing loss of life.
The project is known with the acronym of CHAMP (Counter-electronics High-powered Microwave Advanced Missile Project) and uses the microwaves to permanently knock out computers in a specific area. The project is born in US military environment, specifically developed by Air Force Research Laboratory, and it explores the possibility to design a directed-energy weapon capable of destroying and interfering with adversary’s electronic systems such as radar systems, telecommunication systems, computer systems and power distribution systems. While the project is started in military and is led by Boeing the technology comes from a small company called Ktech, acquired by Raytheon bought last year, specialized in the providing of microwave generators to generate EMP able to knock out electronics equipments.
The concept behind directed-energy weapon (DEW) is the transmission of energy, typically electromagnetic radiation or electromagnetic radiation, against a target for a desired effect.
Keith Coleman, CHAMP program manager for Boeing Phantom Works enthusiastic said:
"This technology marks a new era in modern-day warfare," "We hit every target we wanted to. Today we made science fiction into science fact."
"In the near future, this technology may be used to render an enemy's electronic and data systems useless even before the first troops or aircraft arrive."
The rocket was successfully tested in October at the Utah Test and Training Range, during the exercitation it discharged a burst of High Power Microwaves against the target site and brought down the compound's entire spectrum of electronic systems without causing any other damage, the missile was launched from a B-52 heavy bomber.
How does it works and how much cost it?
Despite Boing has made public the test and the obtained results it hasn’t provided technical information on the new generation of weapon. Many experts believe the rocket is equipped with an electronic pulse cannon which is able to cause voltage surges in target electronic destroying it.
Military experts are sure that we are facing with a new generation of cyber weapons for which is still much to be studied, similar devices must be analyzed with attention especially for the real effects they could have on the environment, let’s think for example to the effect of evaporation of target material.
Another important question is related high power consumption necessary to generate the beam of energy, existing methods of for conducting and transferring energy against a targets appears still inadequate to produce a convenient hand-held weapon.
Professor Trevor Taylor from the Royal United Services Institute told the Mail on Sunday newspaper:
"The historical record shows that important technologies developed in one country are developed elsewhere within a relatively short period - look what happened with regard to the USSR and nuclear weapons."
I find the warning of Professor extremely timely and enlightening, the scenario of warfare is changing rapidly and soon a multitude of governments will have similar weapons for which is required an appropriate defense strategy. | <urn:uuid:4278bba2-4256-4984-b940-4f5883705b6e> | CC-MAIN-2017-04 | http://infosecisland.com/blogview/22763-New-weapons-for-cyber-warfare-The-CHAMP-project.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00426-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944762 | 720 | 2.546875 | 3 |
At the Science Beyond Fiction conference, researchers said that viruses on mobile phones will reach the mainstream once a common operating system is used by the handsets.
Malware spread by Bluetooth, for example, said researchers, could reach all users of a given mobile phone operating system in days, whilst those spread by multimedia messages could spread in a matter of hours.
But, said Professor Albert-Laszlo Barabasi, Director of the Centre for Complex Network Research at Northeastern University in the US, this can only happen when the operating system has reached a mobile phone penetration rate of around 10 per cent.
Professor Barabasi and his team's research conclusions were predicted in the Check Point-sponsored webinar earlier this week, in which Professor Sommer noted the diversity of operating systems in the mobile phone marketplace, and said that all new trends start small and insignificant before starting to take off.
Professor Sommer told Infosecurity that, although he was unaware of Professor Barabasi's research, his own predictions and the report's conclusions are based on a common sense plus observations on previous evolutions of computing platforms and their allied security technologies.
"Parallels can be drawn between the state of mobile phone security and the situation with Internet security just a decade ago. The difference with mobile phone malware and its security solutions is that the technology is developed a lot faster," he explained.
To view Check Point's webinar with Professor Peter Sommer, click here: | <urn:uuid:0cb71cbb-02b3-47a5-b6c3-e24d47271ff9> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/infosecurity-webinar-predictions-become-reality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00334-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944313 | 298 | 2.671875 | 3 |
Protecting Against An Avalanche of Vulnerabilities
According to a study by the security firm Qualys, desktop applications like iTunes, Firefox, PowerPoint and, surprisingly, antivirus programs account for more than 60% of critical vulnerabilities. And attackers are focusing on new network targets as well.
VoIP servers and phones, IM servers, and even printers and faxes are now consider weak points that may provide access to an otherwise hardened network. Added to that, user-introduced errors and network misconfigurations can undermine even the best security plans.
Faced with these issues, as well as industry regulations like HIPAA (Health Insurance Portability and Accountability Act), Mercy Hospital in Miami, Fla. had to change the way it went about securing its network. But before new security measures could be introduced, Mercy had to get a better understanding of its network.
According to Moses Hernandez, a network engineer for Mercy, in the past the task of studying the network was done infrequently. We used scanning to get a feel for the network and know what was on it, Hernandez said. We simply needed to get an idea of what was out there.
Soft on the Inside
However, after scanning for inventory and topology, Mercy realized it had to keep scanning on an ongoing basis. Networks can change dramatically over time, and new vulnerabilities are discovered in operating systems and applications on an almost daily basis. Vulnerability assessments have become so important that we scan every week or even every day, Hernandez said.
In other words, simply hardening your network against outside intruders is no longer an effective strategy. Increasingly, with guest access and partner applications and distributed networks, its more and more difficult to define what inside-the-network even means.
According to Ross Brown, CEO of vulnerability management vendor eEye Digital Security, in the past the term vulnerability had a specific meaning, referring to flaws in systems or software. These could be fixed via patches. Today, the term vulnerability has a broader meaning, encompassing not just software flaws but also user-introduced vulnerabilities, network misconfigurations, and even interoperability problems. The new generation of vulnerability management tools even discovers instances where users are putting the organization at risk by not following corporate policies.
A recent survey by Computer Security Institute (CSI) and the FBI found that nearly 52% of participants were hit by security breaches, many from outside of the organization. However, 68% said that a significant portion of those breaches came from within the network.Any security expert will tell you that internal attacks are the most troubling sort. They are harder to defend against, and insiders know what to go after. The average loss that companies in the CSI/FBI survey experienced due to financial fraud or theft of proprietary data was over $160,000. However, The CSI/FBI survey numbers fall on the conservative side, since they rely on voluntary reporting.
Other surveys point to much higher losses due to insider exploits.
According to the U.S. Commerce Department, intellectual property theft alone costs U.S. businesses approximately $250 billion each year. IBM reports that cybercrime is now more of a problem for U.S. businesses than traditional physical crimes, while also saying that more than 70% of businesses theyve studied believe that insider attacks are a more significant threat than those from hackers.
Insiders have a better sense of which systems are vulnerable, and they can often intentionally introduce misconfigurations that they can then later exploit. This again points to the necessity for aggressive vulnerability monitoring. However, aggressive monitoring creates its own headaches.
A common enterprise report may find 30,000 vulnerabilities, said Alan Paller, director of research for the SANS Institute. In fact, a number that high is by no means uncommon.
Understanding the too-much information curse, most vulnerability management vendors classify vulnerabilities by risk. Vulnerabilities that have known exploits in the wild are rated much higher than those for which no known attack exists.
The key is to focus on the most significant risks , and vendors like CoreSecurity, eEye, and nCircle, Qualys understand this. After all, the slew of false positives and alarms set of by minor problems bedeviled the intrusion detection space for years, and those in the vulnerability management space learned from those mistakes.
The space has also learned that point products often have short life spans so theyve rolled vulnerability assessment in with other value-added security services. Traditional vulnerability assessment simply tells you where you are vulnerable, said Ross Brown of eEye. Vulnerability management, on the other hand, not only tells you where you are vulnerable, but also what to do about it.
After surveying the market, Mercy Hospital in Miami chose eEyes vulnerability management suite partly because of its remediation abilities. Since eEye ties into BigFixs patch and configuration management platform, Mercy can streamline its remediation process.
Mercy was also drawn to eEyes extensive vulnerability database and their research team, which has uncovered such serious flaws as the Microsoft DCOM RPC Memory Leak and the remote code execution flaw in McAfees ePolicy Orchestrator.
A final consideration for Mercy was the importance of protecting legacy applications. As much as youd like to be running a homogenous network with one operating system and current applications, what happens in a hospital is that you have many homegrown applications that fill niche needs. Hospitals are almost forced to run very obscure applications, Hernandez said.
As a result, the final piece of the vulnerability puzzle is linking with related security offering that protect against no-signature and zero-day attacks, as well as providing protection for legacy products for which no patches exist. After all, what good is a system that points out a flaw but then tells you that there is nothing you can do about it? | <urn:uuid:9d799da0-7466-4c00-b502-b60339dcce0e> | CC-MAIN-2017-04 | http://www.cioupdate.com/print/trends/article.php/11047_3650791_2/Protecting-Against-An-Avalanche-of-Vulnerabilities.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00546-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963829 | 1,182 | 2.515625 | 3 |
When one nation launches a missile at another, it's easy to pinpoint the aggressor. But during a cyber attack, the aggressor may not be so identifiable, and the traditional rules of warfare don't quite fit.
As nations increasingly develop their cyber offenses and defenses, an international think tank in Estonia is researching a range of legal questions and concepts around clashes in cyberspace.
One of those questions is how to label these skirmishes and whether it's appropriate to call them "cyber warfare" or "cyber conflict," said Rain Ottis, a scientist with the Cooperative Cyber Defense Center of Excellence in Tallinn (CCDCOE).
The CCDCOE was launched in May 2008 to help NATO countries deal with ever-growing cyberthreats by focusing on defense tactics, training, protection of critical national infrastructure, and policy and legal issues.
Although several nations have experienced significant cyberattacks, "we don't have a single good instance of real cyber warfare," Ottis said. He believes that warfare occurs between states.
"We are trying to come up with a way to explain this in a more formal way so not everything by default is cyber warfare," Ottis said. "Personally, I don't want to devalue the word 'war.'"
How to define a cyber incident is one of the topics on the agenda for the CCDCOE's 2010 Conference on Cyber Conflict in June, which will include a new legal and policy track.
CCDCOE researchers are also part of a working group studying the laws of armed conflict to see how cyber attacks should be interpreted. The laws of war, encompassed in international treaties -- some of which are more than 100 years old -- deal with issues such as when a nation can go to war and what is considered legal when at war, Ottis said.
It's brand-new legal territory, but one with which nations will soon have to deal. "When the first cyber war kicks off, mostly likely in conjunction with a physical war, all of these questions will come up in a hurry," Ottis said.
The working group will eventually write a manual for how cyber conflict fits into the existing laws of war.
The CCDCOE is also looking into how Cold War-era concepts such as deterrence fit into cyberspace. Deterrence -- which is based on meeting aggression with greater aggression -- doesn't quite apply, said Kenneth Geers, a civilian with the U.S. Navy's Naval Criminal Investigative Services who is assigned to the CCDCOE.
Geers presented a paper last October in Moscow on deterrence in cyberspace. One of the problems with deterrence is attribution, or identifying the enemy.
"It's really easy to hide in cyberspace," Geers said. "You need much more than computer log files to know what happened."
The basic building blocks of deterrence are capability, communication and credibility. There's also the question of whether a physical response such as bombing is appropriate.
"You have to be able to get back at the aggressor, and in cyberspace, there's no guarantee of that," Geers said. "You may not know who is attacking you, and to get back at them, you have to hack back or do a kinetic response."
It is hard to deter an aggressor who can invest a small amount and cause the target 100-fold damage, Geers said.
Geers is also writing a paper exploring how the 1997 Chemical Weapons Convention (CWC) could be used as an arms control model for cyberspace, exploring concepts such as prohibitions and inspections. Again, cyberspace poses vexing questions.
"There's just not a way, given the fact there are gigs of data on something the size of a stick of gum, that you can possibly verify that no malicious code exists anywhere," Geers said. | <urn:uuid:967103f2-ea59-408f-876a-267381493677> | CC-MAIN-2017-04 | http://www.cio.com/article/2419190/legal/think-tank-in-estonia-ponders-war-in-cyberspace.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00206-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963682 | 791 | 2.9375 | 3 |
A few days ago, a critical bug was found in the common OpenSSL library. OpenSSL is the library that implements the common SSL and TLS security protocols. These protocols facilitate the encrypted tunnel feature that secure services -- over the web and otherwise -- utilize to encrypt the traffic between the client (user) and the server.
The discovery of such a security bug is a big deal. Not only that OpenSSL is very common, but the bug that was found is one that can be readily exploited remotely without any privilege on the attacker's side. Also, the outcome of the attack that is made possible is devastating. Exploiting the bug allows an attacker to obtain internal information, in the form of memory contents, from the attacked server or client. This memory space that the attacker can obtain a copy of can contain just about everything. Almost.
There are many essays and posts about the "everything" that could be lost, so I will take the optimistic side and dedicate this post to the "almost". As opposed to with other serious attacks, at least the leak is not complete and can be quantified, and the attack is not persistent.I will focus on the server as the target of the attack.
Say an attacker exploits the newly discovered bug, and starts dumping out contents of memory addresses from your server. The bug allows to exfiltrate 64K at a time, but multiple iterations are possible to exfiltrate as much data as needed. This dump can contain anything that is stored in memory. The memory involved is the process space of the application or web server that happened to call OpenSSL.
What is there as loot for the attacker? In essence, there is all the state information of the application, including that of the web server process, if the application is web-based. The actual state information depends on what the application or web server is doing, but at a minimum it contains:
This is not to be taken lightly. This is a lot. Notwithstanding, let us see what was not put at risk. There are three resources that are clearly left out of scope for the attacker:
First and foremost, install the necessary updates so to close the tap.
Second, comprehend the scope of the leakage that might have occurred. Unfortunately, there is no way to tell if your server was hacked and to what extent, so assume it was and enumerate the data that might have been compromised. This data, which we refer to as "session data", consists of all data that is served, processed or obtained by the web (or other application) process that calls OpenSSL. This includes all its inputs that come over the web (or other bearer), all data that goes out, and all data that may be processed in between by the same process that calls OpenSSL (e.g., the web server).
What is not in the scope of leaked data is all data that may be processed but is not served, or otherwise made available to the process that uses OpenSSL. For example, if the application is a web-application, then data that is neither sent nor received over the web, and which is not processed by the web server, would never find itself in the process memory space of the web server, and is thus safe. Also, other data on the server, such as files in home directories, is safe.
The private key of the web server is also at immediate risk, but in most cases it accounts for a change in quantity, not in quality, of the leaked data. In other words, this key will allow an attacker to decipher more sessions, so the attacker can leak not only session data during the attack, but other session data as well, yet it is still session data by the definition above, which we already considered to be entirely lost.
Passwords may be another issue. In most cases, however, stolen passwords only account for yet more session data that can be accessed by impersonating the user, so it is still in the sense of "more of the same". Obviously, if the passwords that your application uses are also used for granting access to other assets -- those may be at risk as well.
Private keys of users, if used by the application, are not at risk, because they are never made available to the server in the first place.
To summarize, in the usual case, the maximum leakage that could have occurred consists of all data served or processed by the process calling OpenSSL. Data of other applications and back-end data are safe.
Third, return to secure state. We got lucky with the "heartbleed bug" in that it is passive and cannot cause your system to be "owned", or to be contaminated in a way that calls for a complete re-install or serious scrubbing. After installing the patch to OpenSSL, you need to generate a new key-pair for the OpenSSL deployment, get it certified if your previous key was, revoke the previous key, and change application passwords that might have been leaked. Once this is done, aside of the data that might have been leaked forever, you can consider the incident to be behind you.
If you run an application server utilizing OpenSSL which was subject to attack, a lot of data might have been stolen, both in terms of application data and in terms of credentials. However, the only bright side is that, as opposed to with other serious attacks:
“The integrity of the system. This is probably the most important point. The attack is passive in the sense that it gets data out, but cannot change anything in the system.”
What if the exfiltrated data allows a hacker to then log in to the server and install a rootkit, or exploit some other installed software to do this?
In this case, all bets are off…
The sentence that follows the one you quoted reads: “An obvious exception would be if a password that was captured happens to open the door to other attack venues.”
Hello, thank you for this article.
I would like to draw your attention to something else.
The sad fact is, that I understand some of this and still would not know how to change keys. Worse is, that in my neighborhood I am the Nurde .. So I would like to ask if anyone would be willing and able to give a “How to” instruction to all the oblivious users that really do not know how to help themselves with this?
Replacing the SSL keys (yes, you need to replace the keys and the certificates, not just the certificate) is done by repeating the same process of installing SSL in the first place.
The technicality depends only on your web server and OS. Search for “set up ssl apache” or “set up ssl iis” to get hundreds of useful guides.
A good one for SSL on Linux/Apache is at: http://www.htmlgoodies.com/beyond/security/article.php/3774876
I was talking to a colleague last week who said that their IT security co-workers were examining their logs and traffic over the last couple weeks and have not seen any attempts to exploit the Open SSL issue. Is the bug too new and black hats have not yet had the time to write the code to exploit it yet. If what I’ve been told is true, does anyone have suggestions why we’re not hearing about open SSL attacks yet.
The bug is not new. It is there for more than two years. The only question is whether it was known to black-hats before it was “officially” discovered or not. It is not trivial to determine if it was exploited or not on a given system, because the typical HTTP logs Apache keeps do not show heartbeat packets.
Here you can find possible evidence to past exploitation:
Form is loading... | <urn:uuid:2b3fb3fe-e8b6-4cda-b480-b510c6b71521> | CC-MAIN-2017-04 | https://www.hbarel.com/analysis/itsec/openssl-heartbleed-bug | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00473-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961075 | 1,614 | 2.609375 | 3 |
Last time we looked at First-In First-Out queue management, in this blog we’ll explore the next queue mechanism on our list: Weighted Fair Queuing (WFQ).
WFQ is a flow-based queuing algorithm used in Quality of Service (QoS) that does two things simultaneously: It schedules interactive traffic to the front of the queue to reduce response time, and it fairly shares the remaining bandwidth between high bandwidth flows.
A stream of packets within a single session of a single application is known as flow or conversation. WFQ is a flow-based method that sends packets over the network and ensures packet transmission efficiency which is critical to the interactive traffic. This method automatically stabilizes network congestion between individual packet transmission flows. There are three types of WFQ:
- Flow based Weighted Fair Queuing
- VIP distributed Weighted Fair Queuing
- Class based Weighted Fair Queuing
The WFQ method has the advantage of being fast, reliable and easy to implement. WFQ follows these main criteria:
- Dedicated queues for each flow (referred to as conversations), messages are sorted into conversations reducing starvation, delay, and jitter within the queue.
- Allocating bandwidth fairly and accurately among all flows, reducing scheduling delay and guaranteeing service.
- IP Precedence is used as weight when allocating bandwidth.
Although bandwidth is allocated fairly among all flows, unfairness is reinstated by giving proportionately more bandwidth to flows with higher IP precedence or lower weight.
WFQ has to classify individual flows using the following information taken from the IP/TCP/UDP headers. These parameters are used as input for a hash algorithm that produces a fixed length number that is used as the index of the queue.
- Source IP address
- Destination IP address
- Protocol number to identify TCP or UDP
- Type of service field
- Source TCP/UDP port number
- Destination TCP/UDP port number
There is a fixed number of per flow queues, and the hash algorithm translates flow parameters into a queue number. The number of dynamically allocated queues is based on the interface bandwidth configuration and can be configured in a range between 16 and 4096 in multiples of 2. Here is a list of default dynamic queues based on bandwidth.
WFQ Insertion and Drop Policy
WFQ uses two methods to drop packets, Early and Aggressive dropping. Early dropping is when the congestion discard threshold is reached and aggressive dropping is when the hold-queue out limit is reached. WFQ always dropspackets of the most aggressive flows and uses the following two parameters to affect the way WFQ drops packets.
In this example, N is the number of packets in the WFQ system when the N-th packet arrives.
- Congestive discard Threshold (CDT) is used to start dropping packets of the most aggressive flows, even before the hold-queue limit is reached.
- Hold-queue out limit (HQO) defines the maximum number of packets that can be in the WFQ system at any time.
For scheduling purposes, in WFQ the length of the queue is not measured in packets but in the time it would take to transmit all the packets in the queue. WFQ adapts the number of flows and allocates equal amounts of bandwidth to each flow. Small packet flows which are usually interactive flows (voice and video) typically receive better service because they do not need a lot of bandwidth. They do need and receive low delay because smaller packets have a lower finish time.
Finish time is the sum of the current time and the time it takes to transmit the packet. Current time is zero if there are no packets in the queue. The first packet into this queue will have a finish time of current time + transmission time.
Current time = 0ms (no packets in the queue)
First packet = 100ms to transmit this packet
Finish time = 0 + 100ms = 100ms
Packet two comes into the queue
Current time = 100ms
2nd Packet = 20ms to transmit this packet
Finish Time = 100ms + 20ms = 120ms
WFQ will place the packets into the hardware queue based on the finish time lowest to highest. The last piece is to use the finish time and the IP precedence to introduce the weight into the calculation of which queues will be serviced in which order.
The calculation to add weight is finish time divided by IP Precedence plus one (to prevent division by zero). In Cisco routers this calculation is done differently to decrease the load on the routers CPU. The Cisco router uses the packet size instead of the transmission time as they are proportional to each other. In addition, the packet size is not divided by IP Precedence; instead the packet size is multiplied by a fixed value (one value for each IP Precedence value). This is done because division is a CPU intensive operation in comparison to multiplication.
Pros and Cons of WFQ
- Simple Configuration
- Guarantees throughput to all flows
- Drops packets of most aggressive flows
- Supported on most Cisco platforms and all IOS versions
- Multiple flows can end up in one queue
- Does not support the configuration of classifications
- Cannot provide fixed bandwidth guarantees
WFQ is enabled by default on all interfaces that have a default bandwidth of less than 2Mbps. Use the fair-queue command to enable WFQ on interfaces that do not have WFQ enabled.
Router(config-if)# [cdt [dynamic-queues [reservable-queues]]]
Congestive discard threshold (CDT) – Optional
- Number of packets allowed in the WFQ system before the router starts dropping new packets for the longest queue, with a value of 1 to 4096 (default 64)
Dynamic-queues – Optional
- Number of dynamic queues, with values of 16,32,64,128,512,1024,2048,4096
Reservable-queues – Optional
• Number of reservable queues if the RSVP feature is configures on the interface, with values of 0 to 1000
Router(config-if)# hold-queue max-limit out
Hold-queue Max (HQO)
- Maximum number of packets that can be in all output queues on the interface at any time
- Default is 1000
In most cases the hold-queue limit will never be reached because the CDT will start dropping the most aggressive conversations.
WFQ Show Commands
Router#show interface interface
- 0 packets in the WFQ system
- 1000 packets allowed in the WFQ system, set by the HQO
- 64 packets in anyone queue before CDT starts dropping aggressive flows
- 0 packets have been dropped
- 0 active conversations
- 4 maximum number of concurrent conversations
- 256 total number of WFQ queues
Router#show queue interface-name interface-number
Show queue displays the packets inside a queue for a particular interface
- Depth = number of packets in the queue, the above example displays 1 packet
- Weight = depending on the IOS version the weight is calculated by “IP Precedence + 1” or “32,384/(IP Precedence + 1), the above example displays 4096
- Discards = represents drops due to CDT
- Tail Drops = represents drops due to HQO
This blog discussed Weighted Fair Queuing, the next blog in this QoS series will discuss Class Based WFQ and Low Latency queuing (LLQ).
Author: Paul Stryer
- Cisco IOS Quality of Service Solutions Configuration Guide, Release 12.4T
- End-To-End QoS network Design, by Tim Szigeti and Christina Hattingh – ISBN # 1-58705-176-1
- DiffServ – The Scalable End-To-End QoS Model
- Integrated Services Architecture
- Definition of the Differentiated Services Field
- An Architecture for Differentiated Services
- Requirements for IP Version 4 Routers
- An Expedited Forwarding PHB (Per-Hop Behavior) | <urn:uuid:9e273134-69ac-435a-a01b-91716363ba87> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/02/12/quality-of-service-part-10-weighted-fair-queuing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00381-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.888608 | 1,691 | 2.703125 | 3 |
It's been a couple of years and a couple of million dollars. Finally, researchers and graduate students who have spent years developing intelligent water sensors released them into the Sacramento River on Wednesday, about 80 miles east of San Francisco.
That area of the river is a mixture of salt water from the nearby San Francisco Bay area. Altogether, the water within the Delta region supplies two-thirds of California's drinking water.
Researchers hope that their sensors will be able to help track environmental spills and the flow of water, which could also help improve salmon spawning.
"This is the way of the future," said Alexandre Bayen, associate professor at the University of California, Berkeley who is supervising the project, called the Floating Sensor Network. "We're moving from an age when humans were deploying things and baby-sitting them to an age where you just put the robots in the water, they do their job, they come back or they call you if they have a problem."
Watch a video of the sensors entering the water, here.
Researchers from University of California, Berkeley, San Francisco State University, The Center for Information Technology Research in the Interest of Society and the Lawrence Berkeley National Laboratory helped with the project. | <urn:uuid:914a4130-a904-4448-a905-2bca12386b90> | CC-MAIN-2017-04 | http://www.itworld.com/article/2726620/networking/uc-berkeley-tests-floating-robot-sensors-to-track-water-flow--environmental-concerns.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00317-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954316 | 248 | 3.484375 | 3 |
This chapter describes compiler behavior that is defined by the implementation according to the C and/or C++ standards. The standards require that the behavior of each particular implementation be documented.
The C and C++ standards define implementation-defined behavior as behavior, for a correct program construct and correct data, that depends on the characteristics of the implementation. The behavior of the Cray C and C++ compilers for these cases is summarized in this section.
All diagnostic messages issued by the compilers are reported through the UNICOS/mp message system. For information on messages issued by the compilers and for information about the UNICOS/mp message system, see Appendix E.
When argc and argv are used as parameters to the main function, the array members argv through argv[argc-1] contain pointers to strings that are set by the command shell. The shell sets these arguments to the list of words on the command line used to invoke the compiler (the argument list). For further information on how the words in the argument list are formed, refer to the documentation on the shell in which you are running. For information on UNICOS/mp shells, see the sh(1) or csh(1) man page.
A third parameter, char **envp, provides access to environment variables. The value of the parameter is a pointer to the first element of an array of null-terminated strings, that matches the output of the env(1) command. The array of pointers is terminated by a null pointer.
The compiler does not distinguish between interactive devices and other, noninteractive devices. The library, however, may determine that stdin, stdout, and stderr (cin, cout, and cerr in Cray C++) refer to interactive devices and buffer them accordingly.
The Cray C compiler treats the first 255 characters of a name as significant, regardless of whether it is an internal or external name. The case of names, including external names, is significant. In Cray C++, all characters of a name are significant.
Table 12-1 summarizes Cray C and C++ types and the characteristics of each type. Representation is the number of bits used to represent an object of that type. Memory is the number of storage bits that an object of that type occupies.
In the Cray C and C++ compilers, size, in the context of the sizeof operator, refers to the size allocated to store the operand in memory; it does not refer to representation, as specified in Table 12-1. Thus, the sizeof operator will return a size that is equal to the value in the Memory column of Table 12-1 divided by 8 (the number of bits in a byte).
64 (each part is 32 bits)
128 (each part is 64 bits)
long double complex
256 (each part is 128 bits)
The full 8-bit ASCII code set can be used in source files. Characters not in the character set defined in the standard are permitted only within character constants, string literals, and comments. The -h [no]calchars option allows the use of the @ sign and $ sign in identifier names. For more information on the -h [no]calchars option, see Section 2.9.3.
A character consists of 8 bits. Up to 8 characters can be packed into a 64-bit word. A plain char type, one that is declared without a signed or unsigned keyword, is treated as an unsigned type.
Character constants and string literals can contain any characters defined in the 8-bit ASCII code set. The characters are represented in their full 8-bit form. A character constant can contain up to 8 characters. The integer value of a character constant is the value of the characters packed into a word from left to right, with the result right-justified, as shown in the following table:
Wide characters are treated as signed 64-bit integer types. Wide character constants cannot contain more than one multibyte character. Multibyte characters in wide character constants and wide string literals are converted to wide characters in the compiler by calling the mbtowc(3) function. The current locale in effect at the time of compilation determines the method by which mbtowc(3) converts multibyte characters to wide characters, and the shift states required for the encoding of multibyte characters in the source code. If a wide character, as converted from a multibyte character or as specified by an escape sequence, cannot be represented in the extended execution character set, it is truncated.
All integral values are represented in a twos complement format. For representation and memory storage requirements for integral types, see Table 12-1.
When an integer is converted to a shorter signed integer, and the value cannot be represented, the result is the truncated representation treated as a signed quantity. When an unsigned integer is converted to a signed integer of equal length, and the value cannot be represented, the result is the original representation treated as a signed quantity.
The bitwise operators (unary operator ~ and binary operators <<, >>, &, ^, and |) operate on signed integers in the same manner in which they operate on unsigned integers. The result of E1 >> E2, where E1 is a negative-valued signed integral value, is E1 right-shifted E2 bit positions; vacated bits are filled with 1s. This behavior can be modified by using the -h nosignedshifts option (see Section 2.9.4). Bits higher than the sixth bit are not ignored. Values higher than 31 cause the result to be 0 or all 1s for right shifts.
The result of the / operator is the largest integer less than or equal to the algebraic quotient when either operand is negative and the result is a nonnegative value. If the result is a negative value, it is the smallest integer greater than or equal to the algebraic quotient. The / operator behaves the same way in C and C++ as in Fortran.
The sign of the result of the percent (%) operator is the sign of the first operand.
Integer overflow is ignored. Because some integer arithmetic uses the floating-point instructions, floating-point overflow can occur during integer operations. Division by 0 and all floating-point exceptions, if not detected as an error by the compiler, can cause a run time abort.
An unsigned int value can hold the maximum size of an array. The type size_t is defined to be a typedef name for unsigned long in the headers: malloc.h, stddef.h, stdio.h, stdlib.h, string.h, and time.h. If more than one of these headers is included, only the first defines size_t.
A type int can hold the difference between two pointers to elements of the same array. The type ptrdiff_t is defined to be a typedef name for long in the header stddef.h.
If a pointer type's value is cast to a signed or unsigned long int, and then cast back to the original type's value, the two pointer values will compare equal.
A pointer can be explicitly converted to any integral type large enough to hold it. The result will have the same bit pattern as the original pointer. Similarly, any value of integral type can be explicitly converted to a pointer. The resulting pointer will have the same bit pattern as the original integral type.
Use of the register storage class in the declaration of an object has no effect on whether the object is placed in a register. The compiler performs register assignment aggressively; that is, it automatically attempts to place as many variables as possible into registers.
Accessing a member of a union by using a member of a different type results in an attempt to interpret, without conversion, the representation of the value of the member as the representation of a value in the different type.
Members of a class or structure are packed into words from left to right. Padding is appended to a member to correctly align the following member, if necessary. Member alignment is based on the size of the member:
For a member bit field of any size, alignment is any bit position that allows the member to fit entirely within a 64-bit word.
For a member with a size less than 64 bits, alignment is the same as the size. For example, a char has a size and alignment of 8 bits; a float has a size and alignment of 32 bits.
For a member with a size equal to or greater than 64 bits, alignment is 64 bits.
For a member with array type, alignment is equal to the alignment of the element type.
A plain int type bit field is treated as an signed int bit field.
The values of an enumeration type are represented in the type signed int in C; they are a separate type in C++.
When an object that has volatile-qualified type is accessed, it is simply a reference to the value of the object. If the value is not used, the reference need not result in a load of the value from memory.
The value of a single-character constant in a constant expression that controls conditional inclusion matches the value of the same character in the execution character set. No such character constant has a negative value. For each, 'a' has the same value in the two contexts:
#if 'a' == 97 if ('a' == 97)
The -I option and the method for locating included source files is described in Section 2.19.4.
The source file character sequence in a #include directive must be a valid UNICOS/mp file name or path name. A #include directive may specify a file name by means of a macro, provided the macro expands into a source file character sequence delimited by double quotes or < and > delimiters, as follows:
#define myheader "./myheader.h" #include myheader #define STDIO <stdio.h> #include STDIO
The macros __DATE__ and __TIME__ contain the date and time of the beginning of translation. For more information, see the description of the predefined macros in Chapter 9.
The #pragma directives are described in Chapter 3.
We do not recommend using shorts because of performance penalties. | <urn:uuid:6c5941a8-8fc4-49ca-a5a0-2f75f32d09dd> | CC-MAIN-2017-04 | http://docs.cray.com/books/S-2179-50/html-S-2179-50/rvc5mrwh.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00465-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.901639 | 2,147 | 2.84375 | 3 |
Procurement documents or bid packages are designed to elicit particular types of responses. The type of response (from the vendor) desired by the buyer depends on how well-informed they are about their own needs. The following are examples of procurement documents or terms you may encounter.
EOI – Expression of Interest
An EOI may be used very early in the procurement process to aid in planning. It is a request for input from vendors on how best to solve technical or procurement problems. Knowledge gained from the process is then used to develop the final bid documents.
It is common for vendors to assist in the writing of SOW statements. A collaborative approach to the planning process helps to ensure that the SOW can be readily interpreted by the vendor community.
RFI – Request for Information
An RFI is similar to an EOI. It is asking potential vendors to help a buyer decide what to buy. Most vendors are anxious to comply, because it helps them learn exactly what the buyer is after.
IFB -Invitation for Bid
The term “IFB” is used after the procurement documents are fully developed and the buyer knows exactly what he or she wants (scope is clearly defined).
The seller is being asked for a bid to deliver the goods and services as described. IFBs are anticipating a response in the form of a fixed price quote.
RFP – Request for Proposals
The term “RFP” is used when solutions are being requested. An RFP implies a “cost plus” response. Vendors are being asked to explain how they would meet the needs of the buyer, what makes their skill set unique and what the probable cost would be.
RFPs are used when the scope is ill defined or vague, either because of urgency or technical uncertainty.
Requests for Quotes
An RFQ is used to solicit fixed price bids. The name implies that the buyer knows exactly what she/he wants (clearly defined scope) and is looking for a firm, fixed price.
“Tender notice” is the term used by government agencies and some large corporations in place of “procurement notice,” or, “the following are going to be purchased.” A tender notice may then request responses in any of the forms listed above.
Invitation for Negotiation
An invitation for negotiation is actually used after procurement planning and during “conduct procurements.”
Vendors are invited to participate in negotiations only after their bids have been selected as the best.
An invitation does not mean that a contract will be signed. It only implies that the vendor who receives it has been selected as one of the best suppliers and is being asked to discuss the details of their bid.
Next week I will explain the ingredients of a “bid package.” | <urn:uuid:f6bd90c0-7399-45b5-b836-eb68771e9c05> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/06/02/procurement-documents-and-vendor-responses-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00281-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961601 | 589 | 3.03125 | 3 |
The Information Technology world has a definite jargon of its own, which can be confusing to both the end users and (sometimes) to the IT people themselves. One of our biggest problems these days is Malware (mal meaning “bad”) infections on our users’ computers. In the interests of making the problem a little clearer, here is a basic (if not necessarily complete) dictionary of terms, in plain English.
Adware: Advertising-supported software. This is software that automatically plays, downloads or displays advertisements to a computer. A classic example would be a “helper toolbar” that causes advertising pop-ups on your screen.
Backdoor: Some spyware can install a credential and password that make unauthorized and unexpected entry into a computer possible by an outside user, who can then plant more malware and/or harvest available data.
Bot: A piece of software designed to grant an outside user complete control of your computer at will. A computer affected by bots is called a zombie, and “armies” of like-infected machines can be used to launch simultaneous attacks on other systems, or send out spam email messages.
Browser Hijacker: Code that replaces search pages, home pages or error pages with its own, allowing further browsing to be redirected to wherever it wants you to go (as opposed to where you wanted to go).
Rootkit: Code designed to gain root-access to your computer and manipulate it into allowing viruses or spyware to install and operate, while hiding from anti-virus scanners by appearing to be a part of the operating system.
Spyware: Differing from viruses in that they are not out to wreck your system, but to gain from it – controlling functions or accessing data for financial gain. Spyware might include keystroke loggers, backdoors, or browser hijackers, among other things.
Trojan: A disguise for malicious software, which may be brought into your computer as something apparently safe, but which can drop one or more harmful programs once inside. For example, an image file might contain code that operates only when the image is viewed, which installs backdoors, bots or viruses at that time, but which is otherwise inert.
Virus: A self-replicating program, intended to cause damage in computers. Pretty much pure vandalism, there is generally no gain for the perpetrators…
Worm: A program that looks for holes in your computer’s security, to get itself inside your computer where it can drop its payload (viruses or spyware). It is not, itself, either a virus or spyware, but may be thought of as something like a trojan. It scans IP addresses, opportunistically looking for entry points to exploit. | <urn:uuid:516d7424-d731-4c1d-85e0-0a49cd8ed9d8> | CC-MAIN-2017-04 | http://www.bvainc.com/malware-terminology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00189-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930331 | 562 | 3.328125 | 3 |
In their recent paper, Wireless NoC for VFI-enabled multicore chip design: performance evaluation and design trade-offs, researchers from Carnegie Mellon's Department of Electrical and Computer Engineering and Washington State University identify a new approach to energy-efficient multicore systems. Much as a traveler bypasses road congestion on long trips, wireless on-chip communication between individually controllable clusters provides an efficient communication backbone that can be tailored to large-scale multicore systems. The paper presents a platform poised to save significant energy with little or no performance penalty, and it is the featured IEEE Transactions on Computers paper for the month of April.

As the number of cores packed into a single chip rises, scalable power management strategies are needed to keep power under prescribed limits. Voltage frequency islands, or VFIs, have long been used to enable such strategies: the system is partitioned into islands whose voltages and frequencies can be adjusted individually, reducing power within allowable performance penalties. However, while enabling significant power savings, a main challenge of VFI-based designs is the on-chip communication cost, which degrades application performance. Mixed voltage/frequency interfaces must be used for inter-VFI communication, thereby increasing communication delay.

The paper presents two solutions. The first is a hybrid VFI clustering methodology that combines per-VFI utilization and inter-VFI communication, enabling minimal inter-VFI communication without greatly increasing the inter-cluster utilization variation. The second is a small-world wireless network-on-chip (mSWNoC) that enables fast and energy-efficient on-chip communication. The mSWNoC exploits small-world connectivity to reduce communication costs through wireless long-range shortcuts between VFIs, and this connectivity mitigates most of the performance penalties introduced by VFIs. VFI-based multicore systems with mSWNoC communication are shown to be significantly more energy efficient than classic systems using wired on-chip networks: mSWNoC improves energy dissipation by 40% and the energy-delay product by 52% compared to a wireline mesh on common PARSEC and SPLASH-2 benchmarks.
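To make the clustering trade-off concrete, the sketch below shows one way it could be expressed in code: cores are assigned to a fixed number of islands so that heavily communicating cores tend to share an island while per-island utilization stays similar. This is only an illustrative local-search heuristic in Python; the function names, the cost weights alpha and beta, and the random search loop are assumptions made for demonstration and are not the authors' published algorithm.

```python
# Illustrative sketch of hybrid VFI clustering: assign cores to islands so that
# heavily communicating cores share an island (less inter-VFI traffic) while the
# utilization spread inside each island stays small. Not the authors' method.
import itertools
import random

def cluster_cost(assignment, traffic, utilization, alpha=1.0, beta=1.0):
    """Weighted sum of inter-island traffic and per-island utilization spread."""
    n = len(assignment)
    inter_traffic = sum(
        traffic[i][j]
        for i, j in itertools.combinations(range(n), 2)
        if assignment[i] != assignment[j]
    )
    spread = 0.0
    for island in set(assignment):
        utils = [utilization[i] for i in range(n) if assignment[i] == island]
        spread += max(utils) - min(utils)
    return alpha * inter_traffic + beta * spread

def local_search_clustering(traffic, utilization, num_islands, iters=2000, seed=0):
    """Random-restart local search over core-to-island assignments."""
    rng = random.Random(seed)
    n = len(utilization)
    best = [i % num_islands for i in range(n)]
    best_cost = cluster_cost(best, traffic, utilization)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(n)] = rng.randrange(num_islands)   # move one core
        cost = cluster_cost(cand, traffic, utilization)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

if __name__ == "__main__":
    util = [0.9, 0.8, 0.3, 0.2, 0.85, 0.25]                   # per-core utilization
    traffic = [[0] * 6 for _ in range(6)]
    traffic[0][1] = traffic[1][0] = 10                         # cores 0 and 1 talk a lot
    traffic[2][3] = traffic[3][2] = 8
    print(local_search_clustering(traffic, util, num_islands=2))
```

In a sketch like this, the relative weighting of traffic against utilization spread decides whether the result favors fewer mixed-clock crossings or easier per-island voltage scaling.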
News Article | April 21, 2016
Phone calls and text messages reach you wherever you are because your phone has a unique identifying number that sets you apart from everybody else on the network. Researchers at the Georgia Institute of Technology are using a similar principle to track cells being sorted on microfluidic chips.

The technique uses a simple circuit pattern with just three electrodes to assign a unique seven-bit digital identification number to each cell passing through the channels on the microfluidic chip. The new technique also captures information about the sizes of the cells, and how fast they are moving. That identification and information could allow automated counting and analysis of the cells being sorted.

The research, reported in the journal Lab on a Chip, could provide the electronic intelligence that might one day allow inexpensive labs on a chip to conduct sophisticated medical testing outside the confines of hospitals and clinics. The technology can track cells with better than 90 percent accuracy in a four-channel chip.

"We are digitizing information about the sorting done on a microfluidic chip," explained Fatih Sarioglu, an assistant professor in Georgia Tech's School of Electrical and Computer Engineering. "By combining microfluidics, electronics and telecommunications principles, we believe this will help address a significant challenge on the output side of lab-on-a-chip technology."

Microfluidic chips use the unique biophysical or biochemical properties of cells and viruses to separate them. For instance, antigens can be used to select bacteria or cancer cells and route them into separate channels. But to obtain information about the results of the sorting, those cells must now be counted using optical methods.

The new technique, dubbed microfluidic CODES, adds a grid of micron-scale electrical circuitry beneath the microfluidic chip. Current flowing through the circuitry creates an electrical field in the microfluidic channels above the grid. When a cell passes through one of the microfluidic channels, it creates an impedance change in the circuitry that signals the cell's passage and provides information about the cell's location, size and the speed at which it is moving through the channel. This impedance change has been used for many years to detect the presence of cells in a fluid, and is the basis for the Coulter Counter, which allowed blood counts to be done quickly and reliably.

But the microfluidic CODES technique goes beyond counting. The positive and negative charges from the intermingled electrical circuits create a unique identifying digital signal as each cell passes by, and that sequence of ones and zeroes is attached to information about the impedance change. The unique identifying signals from multiple cells can be separated and read by a computer, allowing scientists to track not only the properties of the cells, but also how many cells have passed through each channel.

"By judiciously aligning the grid pattern, we can generate the codes at the locations we choose when the cells pass by," Sarioglu explained. "By measuring the current conduction in the whole system, we can identify when a cell passes by each location."

Because the cells sorted into each channel of a microfluidic chip have certain characteristics in common, the technique would allow the automated detection of cancer cells, bacteria or even viruses in a fluid sample.
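As a rough illustration of how a coded electrode layout turns a cell transit into a digital signature, the Python sketch below models one channel whose electrode fingers follow a 7-bit pattern: the resulting impedance trace repeats that pattern, stretched in time by the cell's speed and scaled by its size. The specific code bits, finger length, sample rate and amplitude model are invented for the example and do not come from the published device.

```python
# Toy model of one cell crossing a coded electrode region: the electrode fingers
# follow a 7-bit pattern, so the impedance trace is that pattern stretched in time
# (slower cell -> longer trace) and scaled in amplitude (bigger cell -> larger signal).
import numpy as np

CODE = np.array([1, 0, 1, 1, 0, 0, 1], dtype=float)   # hypothetical 7-bit channel signature

def transit_trace(cell_diameter_um, speed_mm_s, finger_length_um=10.0,
                  sample_rate_hz=100_000, noise=0.01, rng=None):
    """Simulated impedance trace for a single cell transit over the coded fingers."""
    rng = np.random.default_rng() if rng is None else rng
    dwell_s = (finger_length_um * 1e-3) / speed_mm_s           # time spent over one finger
    samples_per_chip = max(1, int(dwell_s * sample_rate_hz))
    amplitude = (cell_diameter_um / 10.0) ** 2                  # bigger cell, bigger signal
    trace = amplitude * np.repeat(CODE, samples_per_chip)
    return trace + rng.normal(0.0, noise, trace.size)

def estimate_speed(trace, sample_rate_hz=100_000, finger_length_um=10.0):
    """Recover speed from how long the 7-chip signature lasts in the trace."""
    duration_s = trace.size / sample_rate_hz
    return (7 * finger_length_um * 1e-3) / duration_s           # mm/s
```

Read back, a trace like this carries the channel identity (the bit pattern), the speed (how long the pattern lasts) and a size estimate (how large the excursions are).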
Sarioglu and his students have demonstrated that they can track more than a thousand ovarian cancer cells with an accuracy rate of better than 90 percent. The underlying principle for the cell identification is called code division multiple access (CDMA), and it's essential for helping cellular networks separate the signals from each user.

The microfluidic channels are fabricated from a plastic material using soft lithographic techniques. The electrical pattern is fabricated separately on a glass substrate, then aligned with the plastic chip.

"We have created an electronic sensor without any active components," Sarioglu said. "It's just a layer of metal, cleverly patterned. The cells and the metallic layer work together to generate digital signals in the same way that cellular telephone networks keep track of each caller's identity. We are creating the equivalent of a cellphone network on a microfluidic chip."

The next step in the research will be to combine the electronic sensor with a microfluidic chip able to actively sort cells. Beyond cancer cells, bacteria and viruses, such a system could also sort and analyze inorganic particles. The computing requirements of the system would be minimal, requiring no more than the processor power of smartphones that already handle decoding of CDMA signals.

The proof-of-principle device contains just four channels, but Sarioglu believes the design could easily be scaled up to include many more channels. "This is like putting a USB port on a microfluidic chip," he explained. "Our technique could turn all of the microfluidic manipulations that are happening on the chip into quantitative data related to diagnostic measurements."

Ultimately, the researchers hope to create inexpensive chips that could be used for sophisticated diagnostic testing in physician offices or remote locations. Chips might be contained on cartridges that would automate the testing process.

"It will be very exciting to scale this up, and I think that will open up the possibility for many different assays to become accessible electronically," Sarioglu said. "Decentralizing health care is an important trend, and our technology might one day allow many kinds of diagnostic tests to be done beyond hospitals and large medical facilities."
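The CDMA analogy can be made concrete with a toy decoder: if each channel's electrodes are laid out according to a different near-orthogonal code, the single readout from the shared electrodes is the sum of the codes of whichever channels currently have a cell, and correlating that readout against each known code reveals which channels fired. The codes below are cyclic shifts of a length-7 m-sequence, chosen here because shifts of an m-sequence have low cross-correlation; the codes, threshold and noise level are illustrative choices, not the device's actual codes.

```python
# Toy CDMA-style decoder: the shared readout is the sum of the active channels'
# bipolar codes plus noise; correlating against each code recovers the channels.
import numpy as np

CODES = np.array([
    [ 1,  1,  1, -1, -1,  1, -1],   # channel 0 (base m-sequence)
    [-1,  1,  1,  1, -1, -1,  1],   # channel 1 (cyclic shift by 1)
    [ 1, -1,  1,  1,  1, -1, -1],   # channel 2 (cyclic shift by 2)
    [-1,  1, -1,  1,  1,  1, -1],   # channel 3 (cyclic shift by 3)
], dtype=float)

def decode(readout, threshold=3.0):
    """Correlate the readout with every channel code; strong scores mean a cell."""
    scores = CODES @ readout
    hits = [int(i) for i in np.flatnonzero(scores > threshold)]
    return hits, scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    active = [0, 2]                                    # cells currently in channels 0 and 2
    readout = CODES[active].sum(axis=0) + rng.normal(0.0, 0.1, 7)
    hits, scores = decode(readout)
    print(hits, scores.round(2))                       # channels 0 and 2 stand out (~6 vs ~-2)
```

The same correlation step is what lets a base station pick one caller's signal out of many sharing the air, which is why the researchers describe the chip as a cellphone network in miniature.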
The new mass-spectral imaging system is the first of its kind in the world and its applications are just beginning to surface, said Carmen Menoni, a University Distinguished Professor in the Department of Electrical and Computer Engineering. A special issue of Optics and Photonics News last month highlights the CSU research among "the most exciting peer-reviewed optics research to have emerged over the past 12 months." Editors identified the imaging device among global "breakthroughs of interest to the optics community." Menoni's group, in collaboration with an interdisciplinary group of faculty, devised and built the instrument with help from students. She found a partner in CSU's renowned Mycobacteria Research Laboratories, which seek new treatments for the global scourge of tuberculosis. The partners described the system in a paper published earlier this year in Nature Communications. Dean Crick, a professor who researches tuberculosis, collaborated with Menoni to refine the mass spectrometer imaging system. He said the instrument will allow him to examine cells at a level 1,000 times smaller than that of a human hair - about 100 times more detailed than was earlier possible. This will give researchers the ability to observe how well experimental drugs penetrate and are processed by cells as new medications are developed to combat disease. Crick's primary research interest is tuberculosis, an infectious respiratory disease that contributes to an estimated 1.5 million deaths around the world each year. "We've developed a much more refined instrument," Crick said. "It's like going from using a dull knife to using a scalpel. You could soak a cell in a new drug and see how it's absorbed, how quickly, and how it affects the cell's chemistry." The earlier generation of laser-based mass-spectral imaging could identify the chemical composition of a cell and could map its surface in two dimensions at the microscale, but could not chart cellular anatomy at the more detailed nanoscale and in 3-D, Crick said. In addition to observing how cells respond to new drugs, he said, researchers could use the technology to identify the sources of pathogens propagated for bioterrorism. The instrument might also be used to investigate new ways to overcome antibiotic resistance among patients with surgical implants. "You might be able to customize treatments for specific cell types in specific conditions," Crick said. The CSU instrument would cover the average dining room table. Its central features are mass-spectral imaging technology and an extreme ultraviolet laser. Jorge Rocca, also a University Distinguished Professor in the Department of Electrical and Computer Engineering, created the laser attached to the spectrometer. Its beam is invisible to the human eye and is generated by an electrical current 20,000 times stronger than that of regular fluorescent tubes in ceiling lights, resulting in a tiny stream of plasma that is very hot and dense. The plasma acts as a gain medium for generating extreme ultraviolet laser pulses. The laser may be focused to shoot into a cell sample; each time the laser drills a tiny hole, miniscule charged particles, or ions, evaporate from the cell surface. These ions then may be separated and identified, allowing scientists to determine chemical composition. The microscopic shrapnel ejected from each hole allows scientists to chart the anatomy of a cell piece by piece, in three dimensions, at a scale never seen before, the scientists said. 
The project was funded with $1 million from the National Institutes of Health as part of an award to the Rocky Mountain Regional Center of Excellence for Biodefense and Emerging Infectious Disease Research. The optical equipment that focuses the laser beam was created by the Center for X-Ray Optics at the Lawrence Berkeley National Laboratory in Berkeley, Calif. The CSU system recently received support for system engineering design from Siemens. The company gave the CSU team an academic grant for its NX software package, including 30 seat licenses, valued at $37 million. Other CSU faculty involved in the project include Feng Dong and Elliot Bernstein from the Department of Chemistry. The lead author on the paper published in Nature Communications is Ilya Kuznetsov, a CSU doctoral student in Electrical and Computer Engineering. "The whole system was built by students and post-docs," Menoni said. "This is something we pride ourselves on, that the students get an interdisciplinary experience. Having access to design software such as the Siemens NX package is critical for creating these instruments and for training students." Key to the project has been collaboration among scientists who build high-tech devices and those who use them to solve global problems. "It's been very interesting learning how to communicate with engineers," Crick said. "We don't think alike. They understand the biology about as well as I understand the engineering. But over the years we've learned how to talk to each other, which is nice. I can see the need for the instrument, but I have no idea how to build it. They do." At one end of the instrument is a special laser created in an argon gas-filled tube when a pulse of 60 kilovolts is discharged. "It's like a lightning strike in a nanosecond," said Carmen Menoni, University Distinguished Professor in the Department of Electrical and Computer Engineering. The laser is guided through chambers using mirrors and special lenses that focus it down to a diameter of less than 100 nanometers. In a chamber at the far side of the spectrometer, the laser hits a sample cell placed with the aid of a microscope. "When you're trying to hit a single bacterium with a laser, it's tricky. You have to aim well," said Dean Crick, a CSU professor in the Department of Microbiology, Immunology and Pathology. Once the laser drills a miniscule hole in the cell, charged ions emitted after the tiny explosion are drawn into a side tube using electrostatic fields. The larger mass the charged particle has, the slower it moves down the tube; the time it takes an ion to reach a detector gives scientists information about its mass. "It's like you have a sports car and a big truck," said Ilya Kuznetsov, a doctoral student in Electrical and Computer Engineering. "Imagine you put the same motor in both—they will move at different speeds. And the more you allow them to go, the more they separate. That's why our tube is so long, to allow for that differentiation." A set of special pumps creates high vacuum that sucks all air from the tube, to remove any foreign particles the sample might collide with and to ensure equally smooth sailing for all the ions. "If you want to have a car race, you need to remove all traffic from the roads," Kuznetsov explained. By keeping the charge and amount of energy applied to each particle consistent, the mass becomes the key signature that provides researchers with every ion's chemical identity. 
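The "sports car versus truck" analogy maps onto a simple relation: for a fixed accelerating voltage, an ion's flight time down a field-free tube grows with the square root of its mass-to-charge ratio. The tube length, accelerating voltage, and example masses below are placeholder values chosen for illustration, not parameters of the CSU instrument.

    import math

    E_CHARGE = 1.602176634e-19   # coulombs
    AMU = 1.66053906660e-27      # kilograms

    def flight_time(mass_amu, charge_e, voltage_v, tube_length_m):
        # Ideal time of flight for an ion accelerated through voltage_v and then
        # drifting down a field-free tube: t = L * sqrt(m / (2 * q * V)).
        m = mass_amu * AMU
        q = charge_e * E_CHARGE
        return tube_length_m * math.sqrt(m / (2.0 * q * voltage_v))

    # Two singly charged ions with different masses and the same "motor" (energy):
    for mass in (100.0, 400.0):   # atomic mass units, arbitrary example values
        t = flight_time(mass, charge_e=1, voltage_v=5000.0, tube_length_m=1.0)
        print("%6.1f amu -> %6.2f microseconds" % (mass, t * 1e6))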
A computer program developed in-house generates the data in a color spectrum of masses, which is then used to create a kind of topographical cell composition map.
Light and matter merge in quantum coupling: Rice University physicists probe photon-electron interactions in vacuum cavity experiments
Abstract: Where light and matter intersect, the world illuminates. Where light and matter interact so strongly that they become one, they illuminate a world of new physics, according to Rice University scientists. Rice physicists are closing in on a way to create a new condensed matter state in which all the electrons in a material act as one by manipulating them with light and a magnetic field. The effect, made possible by a custom-built, finely tuned cavity for terahertz radiation, shows one of the strongest light-matter coupling phenomena ever observed. The work by Rice physicist Junichiro Kono and his colleagues is described in Nature Physics. It could help advance technologies like quantum computers and communications by revealing new phenomena to those who study cavity quantum electrodynamics and condensed matter physics, Kono said. Condensed matter in the general sense is anything solid or liquid, but condensed matter physicists study forms that are much more esoteric, like Bose-Einstein condensates. A Rice team was one of the first to make a Bose-Einstein condensate in 1995 when it prompted atoms to form a gas at ultracold temperatures in which all the atoms lose their individual identities and behave as a single unit. The Kono team is working toward something similar, but with electrons that are strongly coupled, or "dressed," with light. Qi Zhang, a former graduate student in Kono's group and lead author of the paper, designed and constructed an extremely high-quality cavity to contain an ultrathin layer of gallium arsenide, a material they've used to study superfluorescence. By tuning the material with a magnetic field to resonate with a certain state of light in the cavity, they prompted the formation of polaritons that act in a collective manner. "This is a nonlinear optical study of a two-dimensional electronic material," said Zhang, who based his Ph.D. thesis on the work. "When you use light to probe a material's electronic structure, you're usually looking for light absorption or reflection or scattering to see what's happening in the material. That light is just a weak probe and the process is called linear optics. "Nonlinear optics means light does something to the material," he said. "Light is not a small perturbation anymore; it couples strongly with the material. As you change the coupling strength, things change in the material. What we're doing is the extreme case of nonlinear optics, where the light and matter are coupled so strongly that we don't have light and matter anymore. We have something in between, called a polariton." The researchers employed a parameter known as vacuum Rabi splitting to measure the strength of the light-matter coupling. "In more than 99 percent of previous studies of light-matter coupling in cavities, this value is a negligibly small fraction of the photon energy of the light used," said Xinwei Li, a co-author and graduate student in Kono's group. "In our study, vacuum Rabi splitting is as large as 10 percent of the photon energy. That puts us in the so-called ultrastrong coupling regime. "This is an important regime because, eventually, if the vacuum Rabi splitting becomes larger than the photon energy, the matter goes into a new ground state. That means we can induce a phase transition, which is an important element in condensed matter physics," he said.
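A back-of-the-envelope version of the figure quoted above: the usual yardstick is the ratio of the vacuum Rabi splitting to the photon energy, with roughly 0.1 and above commonly labeled ultrastrong coupling and 1.0 marking the point where a new ground state is expected. The terahertz frequency and splitting used below are round numbers for illustration, not the measured values from the Rice experiment.

    def coupling_ratio(rabi_splitting_hz, photon_freq_hz):
        # Normalized light-matter coupling: vacuum Rabi splitting divided by the
        # photon energy (both written as frequencies, so Planck's constant cancels).
        return rabi_splitting_hz / photon_freq_hz

    def classify(ratio):
        if ratio >= 1.0:
            return "deep-strong coupling"
        if ratio >= 0.1:
            return "ultrastrong coupling"
        return "strong (or weaker) coupling"

    # Illustrative numbers: a 1 THz cavity photon and a 0.1 THz vacuum Rabi splitting.
    ratio = coupling_ratio(rabi_splitting_hz=0.1e12, photon_freq_hz=1.0e12)
    print(ratio, classify(ratio))    # 0.1 -> ultrastrong coupling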
Phase transitions are transitions between states of matter, like ice to water to vapor. The specific transition Kono's team is looking for is the superradiant phase transition in which the polaritons go into an ordered state with macroscopic coherence. Kono said the amount of terahertz light put into the cavity is very weak. "What we depend on is the vacuum fluctuation. Vacuum, in a classical sense, is an empty space. There's nothing. But in a quantum sense, a vacuum is full of fluctuating photons, having so-called zero-point energy. These vacuum photons are actually what we are using to resonantly excite electrons in our cavity. "This general subject is what's known as cavity quantum electrodynamics (QED)," Kono said. "In cavity QED, the cavity enhances the light so that matter in the cavity resonantly interacts with the vacuum field. What is unique about solid-state cavity QED is that the light typically interacts with this huge number of electrons, which behave like a single gigantic atom." He said solid-state cavity QED is also key for applications that involve quantum information processing, like quantum computers. "The light-matter interface is important because that's where so-called light-matter entanglement occurs. That way, the quantum information of matter can be transferred to light and light can be sent somewhere. "For improving the utility of cavity QED in quantum information, the stronger the light-matter coupling, the better, and it has to use a scalable, solid-state system instead of atomic or molecular systems," he said. "That's what we've achieved here." The high-quality gallium arsenide materials used in the study were synthesized via molecular beam epitaxy by John Reno of Sandia National Laboratories and John Watson and Michael Manfra of Purdue University, all co-authors of the paper. Weil Pan of Sandia National Laboratories and Rice graduate student Minhan Lou, who participated in sample preparation and transport and terahertz measurements, are also co-authors. Zhang is now the Alexei Abrikosov Postdoctoral Fellow at Argonne National Laboratory. Kono is a Rice professor of electrical and computer engineering, of physics and astronomy and of materials science and nanoengineering. Li received a "Best First-Year Research Award" from Rice's Department of Electrical and Computer Engineering for his work on the project. ### The research was supported by the National Science Foundation, U.S. Department of Energy, Lockheed Martin Corp. and the W.M. Keck Foundation. About Rice University Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation's top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,910 undergraduates and 2,809 graduate students, Rice's undergraduate student-to-faculty ratio is 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for best quality of life and for lots of race/class interaction by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger's Personal Finance. To read "What they're saying about Rice," go to tinyurl.com/RiceUniversityoverview. Follow Rice News and Media Relations via Twitter @RiceUNews For more information, please click If you have a comment, please us. 
From Data Centers to Wireless Datacenter on Chip (WiDoCs). Credit: Carnegie Mellon University Electrical and Computer Engineering Diana Marculescu and Radu Marculescu have been awarded an NSF grant to develop a new paradigm for Big Data computing. Specifically, this project focuses on a new Datacenter-on-a-Chip (DoC) design consisting of thousands of cores that can run compute- and data-intensive applications more efficiently compared to existing platforms. Currently, data centers (DC) and high performance computing clusters are dominated by power, thermal, and area constraints. They occupy large spaces and necessitate sophisticated cooling mechanisms to sustain the required performance levels. The proposed new DoC design consists of thousands of cores that communicate via a new communication infrastructure, while provisioning the system resources for the necessary power, performance, and thermal trade-offs (Fig.1). From an intellectual perspective, this approach lies squarely at the intersection of two major trends in integrated systems design, namely low power and communication centric design. "There are three goals in this project," explains Radu Marculescu. "We want to design small-world wireless architecture as a communication backbone for many core-enabled Wireless Datacenter on Chip (WiDoC), while establishing physical layer design methods for highly-integrated 3-D WiDoC suitable for low latency data communication. We hope to evaluate latency-power-thermal trade-offs for the proposed WiDoC platform by considering relevant big data applications." The unique proposed research brings together highly novel and interdisciplinary concepts from network-on-chip (NoC), wireless and complex networks, communication circuits, and optimization techniques aimed at single chip solutions for achieving data center-scale performance. At the same time, this work will help to establish an interdisciplinary research-based curriculum for high performance many-core system design meant to increase the number of students attracted to this area of engineering. "Our research will impact numerous areas," says Diana Marculescu. "Big data applications like social computing, life sciences, networking, and entertainment will benefit immensely from this new design paradigm that aims at achieving server-scale performance from hand-held devices." This is a joint project between Carnegie Mellon University and Washington State University. Preliminary results based on this work will be presented at the 2016 edition of Embedded Systems Week. | <urn:uuid:7e630eb1-f7a2-4fdf-8e90-004c8b53a897> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/electrical-and-computer-engineering-17497/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00005-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939462 | 4,967 | 2.703125 | 3 |
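One way to see why a small-world wireless backbone helps: adding a few long-range shortcuts to a mesh of cores sharply reduces the average hop count between nodes, which is the role the on-chip wireless links are meant to play. The sketch below builds a toy Watts-Strogatz network with the networkx library; the core count, neighbor count, and rewiring probabilities are arbitrary illustration values, not parameters of the CMU/WSU design.

    import networkx as nx

    def average_hops(n_cores=64, neighbors=4, rewire_p=0.0, seed=1):
        # Average shortest-path length in a Watts-Strogatz network: rewire_p = 0 is
        # a plain ring mesh, and a small rewire_p adds a few long-range shortcuts,
        # standing in here for the wireless links.
        g = nx.connected_watts_strogatz_graph(n_cores, neighbors, rewire_p, seed=seed)
        return nx.average_shortest_path_length(g)

    for p in (0.0, 0.05, 0.2):
        print("rewiring probability %.2f -> average hops %.2f" % (p, average_hops(rewire_p=p)))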
Google Warns Searchers Of Windows Malware Infection
Google has started alerting users running Windows about a specific form of local malware it can detect through network traffic flows.
Hundreds of thousands of people using Google Search have seen this message atop a search results page recently: "Your computer appears to be infected." While finding malware on one's computer can be disconcerting, it's also disconcerting to consider that Google appears to know what's on your computer.
In fact, Google doesn't know about your applications, apart from those you use to access Google services on the Internet. If the company has identified malware on your computer, it's because your computer is probably infected with malware that hijacks Google search results and redirects search traffic to websites for payment.
For years, Google has presented alerts about websites in its search index that it believes may have been compromised to serve malware. It has also provided open-source Web security research tools such as skipfish, ratproxy, and DOM Snitch. This is the first time Google has applied its knowledge of Internet network traffic to identify malware on its users' local computers.
Google security engineer Damian Menscher said the company's security team discovered unusual search traffic while performing routine maintenance on one of its data centers. "After collaborating with security engineers at several companies that were sending this modified traffic, we determined that the computers exhibiting this behavior were infected with a particular strain of malicious software, or 'malware,'" he explained in a blog post.
The malware prompts infected Windows computers to send traffic to Google through proxy servers. Google is detecting traffic that comes from these servers and notifying users sending the traffic that their computers appear to be infected.
Google says that several million PCs appear to be affected, that it has warned several hundred thousand people, and that the source of the infection appears to be one of roughly a hundred variants of fake antivirus software. The company says it is not aware of a specific name for the fake antivirus software responsible for the infection.
Google advises that users utilize current antivirus software to scan for an infection and to be wary of inadvertently installing fake antivirus software in an attempt to correct the problem. If legitimate antivirus software fails to fix the issue and Google searches still bring a warning message, Google provides instructions for manually cleaning one's Windows hosts file, through which the malware redirects Web requests.
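For readers wondering what "manually cleaning one's Windows hosts file" involves: the hosts file simply maps hostnames to IP addresses, and this class of malware adds entries that send search domains to an attacker-controlled proxy. A small script can at least flag suspicious entries; the watched domains below are a guess chosen for illustration, and the file path is the standard Windows default rather than anything specified by Google.

    from pathlib import Path

    HOSTS_PATH = Path(r"C:\Windows\System32\drivers\etc\hosts")    # standard Windows location
    WATCHED = ("google.", "bing.", "yahoo.")         # illustrative list of redirected domains
    HARMLESS_TARGETS = {"127.0.0.1", "0.0.0.0", "::1"}    # loopback entries are normal

    def suspicious_lines(text):
        # Yield hosts-file lines that point a watched domain at a non-loopback address.
        for raw in text.splitlines():
            line = raw.split("#", 1)[0].strip()      # drop comments and blank lines
            if not line:
                continue
            parts = line.split()
            target, names = parts[0], parts[1:]
            if target in HARMLESS_TARGETS:
                continue
            if any(watched in name for name in names for watched in WATCHED):
                yield raw

    if HOSTS_PATH.exists():
        for entry in suspicious_lines(HOSTS_PATH.read_text(errors="ignore")):
            print("suspicious entry:", entry)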
Black Hat USA 2011 presents a unique opportunity for members of the security industry to gather and discuss the latest in cutting-edge research. It happens July 30-Aug. 4 in Las Vegas. Find out more and register. | <urn:uuid:6b01ce5c-f505-471f-852e-c621c74849eb> | CC-MAIN-2017-04 | http://www.darkreading.com/vulnerabilities-and-threats/google-warns-searchers-of-windows-malware-infection/d/d-id/1099044 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00217-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93589 | 544 | 2.515625 | 3 |
The history of data centers shows years of innovation
Thursday, Mar 6th 2014
Data centers were not always the sleek powerhouses they are today. Just as computers advanced from rudimentary behemoths to the aesthetic devices they are now, so too did data centers. But where computers started out bigger and got smaller, the exact opposite is true for data centers. SiliconAngle recently put together an interesting piece on the history of the data center, and we thought we would discuss some highlights:
1977: The introduction of ARCnet
What we now call the Internet - a vast and interconnected realm of data - finds its roots in local area networks. The first of these emerged in 1977 and was put to use at Chase Manhattan Bank. A network that enabled inter-computer communication within a small area, ARCnet (short for Attached Resource Computer Network) had the capacity to link up to 255 computing devices. From its humble beginnings operating at a single New York bank, the system fundamentally shaped the course of information sharing between devices.
Mid-1990s: The birth of the .com age
The early 1990s was a time of transformation for both computing and data centers. This period saw a surge in Internet use. Suddenly, the Web became a common point of access for users across the world, and all that web computing began generating data. With the influx of data came the need to identify storage solutions. Businesses started to build data centers to handle their emerging information storage needs. Yet these centers lacked the sophistication they have today. In fact, Google's first data "center" - launched in 1998 - could not be called much more than a cage, measuring merely 7 feet by 4 feet with a capacity for 30 PCs on its shelves, PC Magazine reported. Considering the closet-like nature of the center, it is a safe bet that the data room cooling system the company used wasn't nearly as innovative as its current methods, which include using seawater as a means of maintaining optimal data center temperature, according to Google.
2006: A data center to address an increasingly mobile world
In the computing sphere, 2006 seems like the distant past. And yet that year represented an important advancement with regard to data mobility. It was the year that Sun Microsystems launched their "data center in a box" - a highly compact data storage system that was perfect for data-driven projects that needed to come together quickly. Because its design was centered around a 20-foot shipping container, these data boxes were made to be transferable, The New York Times reported. This proved an expeditious solution for operations such as oil rigs, which could now house their data at the site of the business. In order to maintain a workable server room temperature, the system relied on a spot cooling method that applied chilled air to hot areas.
The present: constant developments in data center efficiency
Hearing about a data center that keeps cool with water from a Gulf or calls its home the inside of a mountain might have seemed like science fiction to a 1990s audience. But such advancements are simply par for the course is an age when data centers are growing alongside the information they store. Google has emerged as a front-runner in such innovations. It oversees a vast worldwide network of data centers, each with its own distinguishing features. Its facility in Berkeley County, S.C., for example, relies on data room cooling from a 240,000 gallon storage tank. Meanwhile, its center in Douglas County, Ga., is equipped with highly pressurized water pipes specially designed to unleash their contents in case of a fire. Across the world there are many other data centers that work to make the process of data maintenance more efficient. | <urn:uuid:b4be8b53-8e28-4250-aea8-0c743f1d3f14> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/data-center/the-history-of-data-centers-shows-years-of-innovation-591461 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00427-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958532 | 746 | 3.09375 | 3 |
The new chip could make the previously sci-fi quantum computing revolution a reality in just 10 years.
Scientists from the University of Bristol’s Centre for Quantum Photonics have announced that they have developed a working silicon chip that will allow the mass manufacture of microscopic quantum chips in the near future.
Quantum computing would allow for the computation of complex tasks at speeds beyond even the capabilities of the latest supercomputers, such as stock market analysis, simulation of scientific lab conditions (or space) and potentially even artificial intelligence.
"It had previously been thought that a large-scale quantum computer will not become a reality for at least another 25 years," said Jeremy O’Brien, director of the Centre for Quantum Photonics.
"However, we believe that, using our new technology, a such a device, in less than 10 years, be performing important calculations that are outside the capabilities of conventional computers."
Unlike conventional bits or transistors, which can exist in one of only two states at any one time (1 or 0), a quantum computing qubit can be in several states at the same time. This means it can process a much larger amount of information at a much greater rate, and on a microscopic scale.
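The "several states at the same time" idea has a concrete bookkeeping consequence: describing n qubits classically takes 2**n complex amplitudes, which is why simulating even modest qubit counts quickly becomes expensive. The toy below tracks that state vector for three qubits; it is a plain classical simulation meant only to illustrate the scaling, not a model of how the Bristol photonic chip works.

    import numpy as np

    def zero_state(n_qubits):
        # State vector |00...0> stored as 2**n complex amplitudes.
        state = np.zeros(2 ** n_qubits, dtype=complex)
        state[0] = 1.0
        return state

    def apply_hadamard(state, target, n_qubits):
        # Put one qubit into an equal superposition of 0 and 1.
        h = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        op = np.array([[1]], dtype=complex)
        for q in range(n_qubits):
            op = np.kron(op, h if q == target else np.eye(2))
        return op @ state

    n = 3
    state = zero_state(n)
    for q in range(n):
        state = apply_hadamard(state, q, n)
    print(len(state), "amplitudes needed for", n, "qubits")   # 8 amplitudes
    print(np.round(np.abs(state) ** 2, 3))                    # uniform probabilities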
Quantum processing chip – full chip size is 2mm x 4mm. The processing part in the centre is just 100um by 400um and contains over 200 components.
Scientists have been working on quantum computing for decades as the next phase of evolution in computing. Until now, it had been considered a sci-fi dream that wouldn’t see the light of day for decades – and mainstream usage decades after that.
The University of Bristol team claims that it has solved many of these problems by moving from glass-based circuits to silicon-based circuits.
This is the same material routinely used en masse to build the tiny electrical processors in all computers and smart phones. However, unlike conventional silicon chips that work by controlling electrical current, these circuits manipulate single particles of light (photons) to perform calculations.
The circuits exploit strange quantum mechanical effects such as superposition (the ability for a particle to be in two places at once) and entanglement (strong correlations between particles that would be nonsensical in our everyday world).
This means the new chips use the same manufacturing techniques as conventional microelectronics, and can be scaled for mass-manufacture.
"Using silicon to manipulate light, we have made circuits over 1000 times smaller and more complex than current glass-based technologies. For the first time, we can mass-produce this kind of chip, and the much smaller size means it can be incorporated in to technology and devices that would not previously have been compatible with glass chips" says Mark Thompson, deputy director of the Centre for Quantum Photonics.
"This is very much the start of a new field of quantum-engineering, where state-of-the-art micro-chip manufacturing techniques are used to develop new quantum technologies and will eventually realise quantum computers that will help us understand the most complex scientific problems."
In the short term, the team plans to implement quantum-secure communications chips that could work in mobile phones and laptop computers – increasing the security of online banking and internet shopping. The aforementioned nature of quantum mechanics makes encryption more complex, and the Bristol team claims this would make smartphones essentially 'un-hackable'.
In the long term the researchers believe that their device represents a new route to the long dreamed of quantum computer. These devices will have unprecedented computational power for tasks including search engines and the design of new materials, pharmaceuticals and clean energy devices.
"Our approach will ultimately allow us to achieve component densities millions of times greater than current technologies, enabling miniature quantum circuits that could potentially fit inside a mobile phone, for example to enable quantum-secure communications for internet banking", said Thompson.
The team’s research will be unveiled at the British Science Festival this week. | <urn:uuid:96d83969-3ea0-4f57-b2de-787d150a4bc1> | CC-MAIN-2017-04 | http://www.cbronline.com/news/uk-scientists-crack-quantum-computing-030812 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00335-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936924 | 805 | 3.859375 | 4 |
Data tampering is used to bypass controls that are executed on the client side. In this lesson, you will learn how to use the Interceptor to tamper with a request in order to divert the sending of a mail.
Our example is based on the WebGoat lesson "Bypass a Path Based Access Control Scheme". In this example, there is a list of files you can view by clicking on the "View File" button:
The list of files is taken from this directory: /usr/local/www/webgoat/tomcat/webapps/webgoat/lesson_plans/English/
We would like to access this file: /usr/local/www/webgoat/tomcat/webapps/webgoat/main.jsp
From the current directory, the relative path to our target is: ../../main.jsp. This is what we call a Directory Traversal attack.
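On the server side, the standard defense against this kind of directory traversal is to resolve the user-supplied filename against the permitted base directory and reject anything that escapes it. The sketch below shows that check; the base directory mirrors the WebGoat path quoted in this lesson, while the allowed example filename is made up.

    from pathlib import Path

    BASE_DIR = Path("/usr/local/www/webgoat/tomcat/webapps/webgoat/lesson_plans/English")

    def safe_resolve(user_supplied_name):
        # Resolve a client-provided filename and refuse anything that escapes
        # BASE_DIR, such as "../../main.jsp".
        base = BASE_DIR.resolve()
        candidate = (BASE_DIR / user_supplied_name).resolve()
        if candidate != base and base not in candidate.parents:
            raise ValueError("path traversal attempt: %r" % user_supplied_name)
        return candidate

    for name in ("LessonPlan.html", "../../main.jsp"):   # the first name is made up
        try:
            print(name, "->", safe_resolve(name))
        except ValueError as err:
            print(name, "-> rejected:", err)

The WebGoat lesson is vulnerable precisely because no check of this kind runs on the server, which is why the tampered request in the steps below succeeds.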
To do it with Watobo, open the interceptor mode (Tools > Interceptor) and check the "Requests" checkbox.
Point your browser to http://localhost:8080/webgoat/attack?Screen=99&menu=200, select a file from the list and click on the "View file" button. Go back to Watobo and analyze the request:
We notice that the filename is transmitted as a parameter. Transform the request by replacing the initial filename with "../../main.jsp" and click on the "Accept" button. You have bypassed the control executed on the client side! | <urn:uuid:798276ac-2088-4047-9b7c-a8bd259ef805> | CC-MAIN-2017-04 | https://www.aldeid.com/wiki/Watobo/Usage/Interceptor | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00153-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.809445 | 325 | 2.71875 | 3 |
Data locality plays a critical role in energy-efficiency and performance in parallel programs. For data-parallel algorithms where locality is abundant, it is a relatively straightforward task to map and optimize for architectures with user-programmable local caches. However, for irregular algorithms such as Breadth First Search (BFS), exploiting locality is a non-trivial task.
Guang Gao, a professor in the Department of Electrical and Computer Engineering at the University of Delaware, works on mapping and exploiting locality in irregular algorithms such as BFS. Gao notes, "there are only a few studies of the energy efficiency issue of the BFS problem, … and more work is needed to analyze energy efficiency of BFS on architectures with local storage."
In BFS, data locality is exploited in one of two ways: intra- and inter-loop locality. Intra-loop locality refers to locality within a single loop body. Inter-loop locality refers to locality between loop iterations in different loops. Exploiting both intra- and inter-loop locality is relatively simple assuming the programmer leverages a model that supports fine-grain parallelism.
Typical approaches to irregular algorithms do not perform well under traditional coarse-grain execution models like OpenMP. Using BFS as their motivating example, Gao’s team exploits data locality using Codelet, a fine-grain data-flow execution model.
In the Codelet model, units of computation are called codelets. Each codelet is a sequential piece of code that can be executed without interruption (e.g., no synchronization is required). Data dependence is specified between the codelets through a directed graph called the codelet graph. At execution time, the runtime schedules the codelets accordingly based on the dependencies.
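In code, the codelet idea reduces to small run-to-completion tasks plus an explicit dependence count: a codelet becomes runnable only when every codelet it depends on has fired. The toy scheduler below illustrates that firing rule on a three-codelet graph; the codelet names are invented, and a real runtime adds work distribution, core heterogeneity, and the actual BFS kernels.

    from collections import deque

    class Codelet:
        def __init__(self, name, work, deps=()):
            self.name = name
            self.work = work              # non-blocking unit of computation
            self.deps = set(deps)         # codelets that must fire first
            self.pending = len(self.deps)
            self.successors = []

    def run_graph(codelets):
        # Fire codelets whose dependence count has dropped to zero.
        for c in codelets:
            for dep in c.deps:
                dep.successors.append(c)
        ready = deque(c for c in codelets if c.pending == 0)
        while ready:
            current = ready.popleft()
            current.work()
            for succ in current.successors:
                succ.pending -= 1
                if succ.pending == 0:
                    ready.append(succ)

    # Invented names standing in for pieces of one BFS level: expand, filter, swap.
    a = Codelet("expand_frontier", lambda: print("expand frontier"))
    b = Codelet("filter_visited", lambda: print("filter visited"), deps=(a,))
    c = Codelet("swap_frontiers", lambda: print("swap frontiers"), deps=(a, b))
    run_graph([a, b, c])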
The Codelet model executes in the context of an abstract parallel machine model. The machine consists of many computing nodes stitched together via an interconnection network. Each node contains a many-core chip organized into two types of cores: CUs and SUs. This heterogeneity provides differing performance and energy profiles. Codelets that can benefit from a weaker core can be scheduled into one type of core to save energy. Conversely, a codelet that requires heavy-duty computation can be scheduled into a stronger core.
By leveraging fine-grain data-flow execution models such as Codelet, Gao and his team are able to improve dynamic energy for memory accesses by up to 7% compared to the traditional coarse-grain OpenMP model. | <urn:uuid:3c0b0130-4c23-4d17-861a-cb796fe5cced> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/02/18/data-locality-cure-irregular-applications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00547-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927418 | 508 | 2.859375 | 3 |
Since the dawn of Information Society and E-Government programs, the objective of increasing transparency and citizen participation in policy-making has been high on the agenda of most countries, states and cities. Terms like e-participation, e-consultation, e-petition have been used to indicate different ways in which IT could ease the engagement in political decision-making.
The advent of Web 2.0 has increased the appetite for even greater and more effective engagement, also in view of the shifting attitude in Internet use, with more people creating content through blogs, wikis and social networks of all sorts.
Open government initiatives have provided the platform for more systematic engagement, by promoting the provision of more information, by pushing departments and agencies toward innovative ways to involve citizens in discussions about city planning, budget formulation, trash management, environmental monitoring and so forth. Mechanisms such as idea contests, unconferences, jam sessions, policy blogs and fora are proving very helpful. However, most of these are applied relatively late in the policy-making process.
Policy-making includes the following phases:
- Conception: policies are usually initiated by parliamentary or government committees. These may be consulting with targeted constituencies (such as consumer or professional associations, unions, political parties).
- Drafting: the original idea is developed into a draft text, which usually undergoes a number of inter-departmental consultations; the outcome is a draft that is ready for public consultation
- Public consultation: the draft is exposed to the public for a general consultation.
- Finalization: the input received through public consultation is processed, together with further internal debate. The outcome is a final draft that goes through parliamentary or government discussion or both for approval.
The focus of electronic participation and, more recently, of open government has been primarily the public consultation. The main goal is to provide additional, easier and more compelling channels for citizens to be enticed to participate. With open government, there have also been modest attempts at addressing the drafting phase, by using policy wikis, and even the conception phase through idea collection initiatives.
What is still missing in most cases, though, is the use of technology much more upstream in the policy-making process. The increasing wealth of data that people put online every day provides an invaluable source of information to explore existing issues, to uncover trends, desires, sentiments that can inspire the conception of new policies. Of course creating a web site or a Facebook page or a discussion forum where citizens can propose ideas is a step in the right direction, but somewhat self selects the audience: in fact only people who have a vested interest or a passion for a particular issue will participate. But what about all the conversations where people share problems, suggestions, even solutions, which do not happen on an e-participation web site, but pop out from online communities where people socialize for reasons that have nothing to do with politics?
This is a classical example of what I call the asymmetry of open government. Rather than just creating avenues for people to participate, governments should listen to what people say in their own communities, and distill stimuli to conceive new policies.
Of course I am not advocating eavesdropping, but being attentive to where people debate in the open, and engage – once again at an individual level (civil servants and political staffers alike) – on the citizen’s own turf.
There are a few reasons why this is not happening, some good and some less good. The risk of being perceived as intruding or controlling citizen free speech is clear and present, and this is why it is up to individual employees and not to the government organizations they work for to engage.
This can be a time-consuming activity, and its ROI on the efficiency and effectiveness of the policy-making process may be difficult to demonstrate. This is why government employees’ engagement in external social networks (in the context of their job role and responsibilities) is so important, as they become the eyes and ears that are needed to advise their hierarchy and ultimately senior political leaders on the conception of new or amended policies.
I honestly believe that the problems above can be overcome. But what is a much thornier issue is the potential risk that this approach poses to those institutional counterparts to government, such as formal associations and political parties, that would see the disintermediation of the policy-making process as a threat to their own existence.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog. | <urn:uuid:195ea2fb-fe23-4bc8-b7d1-49f4085a356a> | CC-MAIN-2017-04 | http://blogs.gartner.com/andrea_dimaio/2011/05/22/where-technology-should-be-used-to-improve-policy-making-and-is-not/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00115-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941366 | 1,019 | 3.109375 | 3 |
Mobile malware is getting lots of attention these days, but you can't forget about your PC's security--after all, you probably still use it to pay bills, shop online, and store sensitive documents. You should fully protect yourself to lessen the chance of cybercriminals infiltrating your computer and your online accounts, capturing your personal information, invading your privacy, and stealing your money and identity.
You need to guard against viruses, of course, but not all antivirus programs catch all threats, and some do better than others. You have to watch for many other types of threats, too: Malware invasions, hacking attacks, and cases of identity theft can originate from email, search engine results, websites, and social networks such as Facebook. They can also come in the form of links or advertisements for phishing and scam sites. But with some education on the topic, and the right tools, you can identify such scams and avoid falling victim.
If your children use your computer, you must also protect against inappropriate content such as violent games and adult sites, and you should monitor communication on social networks. Although the best approach is to keep a close eye on your kids while they use the computer, you can employ tools and services to filter content and monitor their Web usage when you're not around.
Protecting your data from computer thieves and from people who tap in to your Wi-Fi signal is also important. Encrypting your computer is the only way to ensure that a thief cannot recover your files, passwords, and other data. And unless you password-protect and encrypt your wireless network, anyone nearby can connect to it, monitor your Internet usage, and possibly access your computers and files.
Here are the security threats you should watch for, and the tools you can use to protect against them.
Viruses and other malware
Viruses, spyware, and other types of malware are still prevalent, and cybercriminals are constantly finding new ways to infect computers. Although adult sites and illegal file-sharing sites have a reputation for harboring malware, you don't have to browse the shady parts of the Web to become infected.
Installing a good antivirus or Internet security program should be your first step. However, not all are created equal. While no single antivirus product can protect against all of the millions of malware variants, some packages detect (and successfully remove) more threats than others do. For strong PC security, choose one of the top performers from our 2012 antivirus product-line reviews, such as Bitdefender Internet Security, Norton Internet Security, or G Data Internet Security. And in the future, be sure to check back for our more up-to-date reviews.
Although an antivirus package is your primary weapon for fighting malware, you might wish to add other tools to your arsenal for extra security.
OpenDNS provides content filtering that blocks many malware-infested sites and phishing scams. You can enable this online service on select computers, or on your router to protect all connected devices. The free OpenDNS FamilyShield automatically blocks malware, phishing sites, adult content, and proxy sites that try to bypass the filtering, and it requires only a simple setting change on your PCs or router. The OpenDNS Home and Premium DNS offerings filter malware and phishing sites, and let you make a free or paid account to customize the filtering and other features.
The freeware utility Sandboxie lets you run your Web browser (or any other application) in a safe mode of sorts to protect against damage from downloaded viruses or suspicious programs that turn out to be malware. It does so by running the browser or selected program in a virtual environment (also known as a sandbox) that isolates the program from the rest of your system. Some antivirus or Internet security packages come with a sandbox feature, but if yours doesn't (or if it doesn't allow you to run programs in the sandbox manually), consider using Sandboxie when you're browsing risky sites or downloading suspicious files.
Intended to complement the defenses you already have, Malwarebytes works alongside most regular antivirus programs. It may catch malware that your regular antivirus utility misses, or remove threats that your standard package can't. The free version does on-demand scans (you manually open the program and run a scan), whereas the paid version has real-time monitoring just as regular antivirus software does.
In addition to installing antimalware utilities, you can take other steps to help prevent attacks.
Enable automatic Windows Updates: This action ensures that Windows and other Microsoft products regularly receive the latest security patches. You can adjust Windows Update settings via the Control Panel. For best protection, choose to have Windows download and install updates automatically.
Keep non-Microsoft software up-to-date: Don't forget to update your other software, too. Some popular programs and components (such as Web browsers, PDF readers, Adobe Flash, Java, and QuickTime) are bigger targets than others, and you should be especially mindful to keep them up-to-date. You can open the programs or their settings to check for updates, but most will automatically notify you when an update is available, and when you receive such notifications, don't ignore or disable them.
Hacking and intrusions
Malware-caused PC problems aren't the only thing you have to worry about. A determined cybercriminal can get inside your PC by directly hacking into it, and some malware can steal your data and passwords, sending the information back to home base.
This is where a firewall comes in handy: It serves as a gatekeeper, permitting safe traffic (such as your Web browsing) and blocking bad traffic (hacking attempts, malware data transfers, and the like).
Windows includes a firewall, named (appropriately enough) Windows Firewall. It's set by default to block malicious traffic from coming into your computer, but it isn't set to watch the data that's going out, so it will likely not detect any malware attempts to transmit your data to cyberattackers. Although you can enable the firewall's outgoing protection (in Windows Vista and later versions), that isn't easy for the average user to set up or configure.
For the ultimate in PC security, you should use a firewall that protects your machine from both incoming and outgoing malicious traffic by default. First, find out whether your antivirus utility or Internet security package has a firewall component, and whether it offers full protection. If it doesn't, consider a third-party firewall such as ZoneAlarm Firewall or Comodo Firewall Free.
Phishing and scam sites
One method that cybercriminals use to steal your passwords, money, or identity is commonly called phishing (a play on the word fishing). Attackers try to get you (the fish) to hand over your information or money. They do so by hooking you with an email message, IM, or some other form of communication (the bait) that looks as if it came from a legitimate source such as a bank or an online shopping site.
Phishing isn't a new tactic, but people still fall for it. Here are some precautions that you can take to keep phishing scams from reeling you in.
Don't click links in email: Scammers often put links to fake login pages in email messages that look very convincing in an attempt to steal your personal information. With that in mind, if an email ever asks you to click a link to log in to a site and enter your username and password, don't do it. Instead, type the company's real website URL directly into your browser, or search Google for the site.
Check for SSL encryption: Before entering sensitive information online, make sure that the website is using encryption to secure the information while it's moving over the Internet. The site address should begin with https instead of http, and your browser should show some kind of indicator near the address bar. If a site isn't using encryption for a screen in which it asks you to enter sensitive data, it's most likely a phishing site or scam site. SSL encryption isn't a guarantee of safety, but you ought to make a habit of looking for that lock icon.
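The same habit can be automated. The snippet below refuses to talk to any URL that is not HTTPS and relies on Python's default certificate and hostname verification to reject sites whose certificates do not check out; the login URL shown is only a placeholder.

    import socket
    import ssl
    from urllib.parse import urlparse

    def check_tls(url):
        # Return the validated certificate subject for an https URL, or raise if
        # the scheme is not https or certificate/hostname verification fails.
        parsed = urlparse(url)
        if parsed.scheme != "https":
            raise ValueError("refusing to send credentials over a non-https URL")
        context = ssl.create_default_context()   # verifies the certificate chain and hostname
        with socket.create_connection((parsed.hostname, parsed.port or 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=parsed.hostname) as tls:
                return tls.getpeercert().get("subject")

    print(check_tls("https://example.com/login"))   # placeholder URL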
Use a Web browser add-on: Many Web browser add-ons out there can help you identify phishing scams and other dangerous sites. Typically these plug-ins use badges or some other indicator to show whether a site is safe, unsafe, or questionable. Most antivirus programs offer these types of browser add-ons, but if yours doesn't or you don't like it, consider using Web of Trust, an independent site-reputation tracking service.
Social network safety
Facebook, Twitter, and other popular social networking sites have given cybercriminals additional avenues to try grabbing your personal data. For example, scammers might create a malicious Facebook app that attempts to harvest your information for their financial gain, spreads tainted links, or hijacks other people's profiles. Below are a few measures that you can implement to protect yourself on social networks.
Tighten your security and privacy settings: Although security and privacy features vary across social networks, they can help to protect you and your account data. You must set them up, however, for them to work effectively. For instance, both Facebook and Twitter allow you to encrypt your connections so that other people can't hijack your account when you're connecting from public Wi-Fi hotspots. And Facebook offers a feature to monitor and track the computers and devices that log in to your account, to help identify unauthorized logins.
Be careful who you friend or follow: Before you add someone as a Facebook friend, or follow them on Twitter or Google+, ask yourself whether you really know the person. Cybercriminals often set up fake profiles just to spread spam and malicious links.
Watch for phishing attempts, scams, and hoaxes: If something sounds fishy or too good to be true, it probably is. Two widespread Facebook scams, for instance, promote links or apps that claim to tell you who has viewed your profile, or that promise to change your Facebook profile layout or theme, even though neither capability exists. Think before you click on these types of links or apps, as they could steal your information, hijack your account, send spam to your friends, or cause other damage. To learn more about social network security and to discover scams as they develop, follow sites such as Facecrooks or PCWorld's own security topic page.
Check app permissions: If you're thinking of giving a Facebook app permission to access your profile information, first check out the types of information it wants. If you think a particular app should not be able to access certain details, don't allow it. Also, periodically check the apps you've authorized to see if any of them look suspicious.
Twitter lets apps access account information, too. Be sure to review which apps and services can access your profile. If you no longer want to use a particular app or service, you can disable it from this page.
Use apps to help detect malicious activity: A number of apps can tell you if your social network accounts are vulnerable to attack, or if you're sharing too much personal data. For starters, they can filter and moderate your feeds and comments for malicious or inappropriate content, and detect fake profiles set up to flood your feeds with spam.
Two good antiscam apps are Bitdefender Safego for Facebook or Twitter and MyPageKeeper for Facebook, both of which monitor your profile's feeds and comments and alert you and other users to any malicious links they encounter. For more details on how each utility works, see go.pcworld.com/socialmediasecurity. And if you operate your own Facebook Fan Page or blog, consider using a service such as Websense Defensio, which filters comments for spam messages, malicious content, and profanity.
If children use your computer, you should look at ways to block inappropriate content and online predators. Even if children aren't searching for unsuitable content, they could still stumble across it in searches, find it via links or advertisements, or even access it directly by mistyping a site address.
Enable Parental Controls in Windows: With the parental controls in Windows Vista and later versions (accessible through the Control Panel), you can determine when your kids can use the computer, which games and applications they can run, and the types of websites they can visit. The feature also provides activity reporting, so you can keep an eye on their computer usage.
Activate OpenDNS for Web filtering: As I mentioned earlier, OpenDNS is an online service that offers content filtering. But in addition to stopping malware and phishing sites, OpenDNS can block adult-oriented sites and other online material that may be inappropriate for children. | <urn:uuid:ca0498cb-9e02-4762-8c01-dc8769eb7056> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2161235/lan-wan/pc-security--your-essential-software-toolbox.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00512-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912418 | 2,684 | 2.71875 | 3 |
The damning research comes from the National Research Council (NRC) in the US, which notes that no single trait has been identified that is stable and distinctive across all the various categories of biometrics – including fingerprints, palmprints, voice recognition and facial recognition.
According to the NRC, in order to strengthen the science of biometrics and improve system effectiveness, additional research is needed at virtually all levels of design and operation.
"For nearly 50 years, the promise of biometrics has outpaced the application of the technology", said Joseph Pato, chair of the committee that wrote the report and a technologist with HP Labs in Palo Alto.
Pato added that, whilst some biometric systems can be effective for specific tasks, they are not nearly as infallible as their depiction in popular culture might suggest.
"Bolstering the science is essential to gain a complete understanding of the strengths and limitations of these systems", he explained.
The in-depth report notes that biometric systems provide "probabilistic results", meaning that any confidence in the results must be tempered by an understanding of the inherent uncertainty in any given system.
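"Probabilistic results" can be made concrete with the two error rates every biometric system trades off: the false accept rate and the false reject rate, both of which shift as the match-score threshold moves. The score distributions below are fabricated for illustration; real systems estimate these curves from large enrollment datasets.

    import numpy as np

    rng = np.random.default_rng(42)
    # Fabricated similarity scores: genuine comparisons score higher on average than
    # impostor comparisons, but the two distributions overlap, so errors are unavoidable.
    genuine = rng.normal(loc=0.75, scale=0.10, size=10_000)
    impostor = rng.normal(loc=0.45, scale=0.10, size=10_000)

    def error_rates(threshold):
        far = float(np.mean(impostor >= threshold))   # impostors wrongly accepted
        frr = float(np.mean(genuine < threshold))     # genuine users wrongly rejected
        return far, frr

    for t in (0.50, 0.60, 0.70):
        far, frr = error_rates(t)
        print("threshold %.2f: FAR %.3f%%  FRR %.3f%%" % (t, far * 100, frr * 100))

Moving the threshold trades false rejections for false accepts, which is one reason the report argues that confidence in any match must account for the system's inherent uncertainty.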
The study also identifies numerous sources of uncertainty in the systems that need to be considered in system design and operation.
For example, says the report, biometric characteristics may vary over an individual's lifetime due to age, stress, disease, or other factors.
Furthermore, it adds, technical issues regarding calibration of sensors, degradation of data, and security breaches also contribute to variability in these systems.
The study was far from lightweight or sensationalist, Infosecurity notes, and was funded by the Defense Advanced Research Projects Agency, the CIA and the Department of Homeland Security. | <urn:uuid:d5402a19-2929-4f71-b6af-a41dbffa5960> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/biometric-id-technologies-are-inherently-fallible/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00446-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952316 | 354 | 2.859375 | 3 |