Most people–especially in the West–know very little about the Middle East and the people that live there. This lack of knowledge hurts our ability to understand, and engage in intelligent discussion about, current events.
For example, frighteningly few know the difference between Sunni and Shia Muslims, and most think the words “Arab” and “Muslim” are pretty much interchangeable. They aren’t. So here’s a very brief primer aimed at raising the level of knowledge about the region to an absolute minimum.
Arabs are part of an ethnic group, not a religion. Arabs were around long before Islam, and there have been (and still are) Arab Christians and Arab Jews. In general, you’re an Arab if you 1) are of Arab descent (blood), or 2) speak the main Arab language (Arabic).
Not all Arabs are Muslim. There are significant populations of Arab Christians throughout the world, including in Lebanon, Syria, Jordan, Northern Africa and Palestine/Israel.
Islam is a religion. A Muslim (roughly pronounced MOOSE-lihm) is someone who follows the religion. So you wouldn’t say someone follows Muslim or is an Islam, just as you wouldn’t say someone follows Christian or is a Christianity.
Shia Muslims are similar to Roman Catholics in Christianity. They have a strong clerical presence via Imams and promote the idea of going through them to practice the religion correctly. Sunni Muslims are more like Protestant Christians. They don’t really focus on Imams and believe in maintaining a more direct line to God than the Shia.
Arabs are Semites. We’ve all heard the term anti-Semitism being used — often to describe Arabs. While anti-Semitism does specifically indicate hatred of Jews, the word “Semite” comes from the Bible and originally referred to anyone who spoke one of the Semitic languages.
According to the Bible, Jews and Arabs are related [Genesis 25]. Jews descended from Abraham‘s son Isaac, and Arabs descended from Abraham’s son Ishmael. So not only are both groups Semitic, but they’re also family.
Sunni Muslims make up most of the Muslim world (roughly 90%). 1
The country with the world’s largest Muslim population is Indonesia. 2
The rift between the Shia and Sunni started right after Muhammad’s death, and it was originally a power struggle over which group would hold authority for continuing the faith.
The Shia believed Muhammad’s cousin and son-in-law Ali should have taken over (the family/cleric model). The Sunni believed that the best person for the job should be chosen by the followers (the merit model), and that’s how the first Caliph, Abu Bakr, was appointed.
Although the conflict began as a political struggle, it is now mostly considered a religious and class conflict, with political conflict emanating from those rifts.
Sunni vs. Shia | Arab vs. Non-Arab
Here’s how the various Middle Eastern countries break down in terms of Sunni vs. Shia and whether or not they are predominantly Arab. Keep in mind that these are generalizations; significant diversity exists in many of the countries listed.
Iraq Mostly Shia (roughly 60%), but under Saddam the Shia were oppressed and the Sunni were in power despite being only 20% of the population. Arab.
Iran Shia. NOT Arab.
Palestine Sunni. Arab.
Egypt Sunni. Arab.
Saudi Arabia Sunni. Arab.
Syria Sunni. Arab.
Jordan Sunni. Arab.
Gulf States Sunni. Arab.
Data Mining and Predictive Analytics Software For Microsoft Excel: Page 2
Data Mining and Predictive Analytics in Four Steps
The tool allows you to mine your own data to find patterns using only four steps — prepare, analyze, predict and report. It is also designed to automate the process of algorithm selection, parameter tuning and reporting. Each step is easily accessed using a separate tab in your 11Ants Model Builder Excel Ribbon.
Prepare: By selecting data columns in your spreadsheet, you choose one column as the target column and others as the input. For example, for sales data you might use season, date and volume as the input and the revenues column as the target. The 11Ants Model Builder will analyze the relationships between the input and the target. The data is then partitioned — or split — into two sets: training and test. Users familiar with Excel will have little trouble getting the data prepared.
Options in Model Builder allow you to change the target column and adjust the weight between test and train size. Once you have selected options, you select "Prepare Sheets" and you will find your Excel worksheet is now three worksheets: the original plus one for training data and test data.
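The train/test partition described above is a standard technique that can be sketched in plain Python (a simplified illustration of the general idea, not 11Ants' actual code; the toy sales rows and the 70/30 split ratio are assumptions made for the example):

```python
import random

def partition(rows, test_fraction=0.3, seed=42):
    """Shuffle rows and split them into training and test sets."""
    rows = rows[:]                        # copy so the caller's list is untouched
    random.Random(seed).shuffle(rows)
    n_test = int(len(rows) * test_fraction)
    return rows[n_test:], rows[:n_test]   # (train, test)

# Toy sales rows: (season, date, volume, revenue)
data = [("winter", f"2014-01-{d:02d}", d * 10, d * 99.0) for d in range(1, 11)]
train, test = partition(data, test_fraction=0.3)
print(len(train), len(test))  # 7 3
```

The model is fit against the training rows only, so the held-out test rows give an honest measure of how well the learned patterns generalize.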
Analyze: As this process runs, 11Ants Model Builder analyzes the data for relationships and continuously generates models, looking for the best one. The quality score changes based on the amount and quality of data being analyzed.
You can view quick info about the project, including estimated Input influences, Top 10 and improvement curve. As the process runs, you can watch the quality score — the higher the percentage, the more patterns found in the data. This information is also available through the Manage Tab in the Excel Ribbon.
Predict: When ready, you can build your predictive model using the test spreadsheet. By choosing "Predict" from the ribbon, you can choose your prediction settings and a new sheet will be generated to see how the model works on the test data. Using your test data worksheet, you can choose your model, confirm your input data type, choose a column to output the results, assign a confidence (hi/med/low) to each prediction, and also decide the type of prediction to be reported and compare predictions against known values.
When you click "Predict Now," a new worksheet for the prediction statistics is generated.
December 1940 – March 2015
Co-founder of SAP
A trained physicist and amateur astronomer, Tschira played a pioneering role in business software.
Tschira started his professional career at IBM in the mid-1960s, working as a systems analyst and in the emerging field of enterprise software. That led Tschira and several other colleagues to start their own company, called SAP (an acronym for Systems, Applications and Products in Data Processing), in 1972.
He and his colleagues built the German company into a multinational corporation that remains one of the dominant players in the enterprise software market. Tschira, who stepped down from the SAP board in 2007, was also active in various charitable activities.
He was 74.
This guest blog post is a part of our cyberSAFE blog series focusing on back-to-school security, privacy and identity topics. It comes to us from Sue Scheff, author and family internet safety advocate. She is the founder and president of Parents’ Universal Resource Experts Inc. (P.U.R.E.) and has been helping to educate parents on cyberbullying awareness and safe online practices for teens since 2001.
Making smart cyber choices today is as important as your GPA.
As children are online now more than ever, it’s important to realize that your child’s digital image is their future. Your child’s online reputation determines what college they get into and where they’ll work in the future. Today, 98 percent of employers run an Internet search on applicants and if they find a negative online presence, 77 percent of those employers will not invite the applicant in for an interview.
College recruiters are reporting nearly the same statistics. They are putting your child’s name through an Internet wash-cycle, and how it spins out will determine if your child secures a spot at a college of their choice. As we start the new school year, we have to remember that every keystroke and photo posted in cyberspace is public and permanent – there is no rewind online.
Becoming a Cyber-Smart Citizen
Digital citizenship restarts every day as you power-up your smartphone or connected device. To help your teen better navigate the rough waters of social media, here’s a look at some of the golden rules of cyber-smart citizens:
- Over-sharing is a common mistake that many people of all ages make on social media. Be selective and smart about what you share.
- Prior to posting a comment, photo or video – you need to consider the following: is what you’re posting helpful, kind or necessary? Or is it something you may regret later?
- Check your privacy settings on all social media sites. Make this a weekly habit.
- Who is in the comments/photos/videos? If you are posting a picture of other people, did you get their permission?
- Tag and share with care. Treat others as you want to be treated online.
- Social media is not a scrapbook. Don’t use it as a diary.
Friending and Unfriending Guidelines
In addition to these golden rules, it’s important for teens to evaluate who they are connecting with online. You are judged by who you hang with, online and offline. Here are some steadfast rules when it comes to “friending” and “un-friending” online:
- If you have a friend that is posting questionable comments or pictures on your social media sites, don’t be afraid to unfriend them.
- Just because someone is friends of friends of someone you know, it doesn’t mean you have to be friends with them virtually. Cyber criminals can use this tactic to steal your identity.
- Keep this in mind: quality beats quantity on social media.
Cyberbullying and Online Harassment
There are lines that should never be crossed on social media. Empower your teen to know how to report digital abuse. Here’s how:
- Do learn how to report abuse on each social media platform.
- Do tell a parent or an adult if you are a victim of online abuse.
- Don’t engage with a cyberbully.
- Don’t stay in chat rooms or on websites that make you feel uncomfortable.
Your child’s digital trail is the path to their future. It is our job as parents to help them protect and maintain their good name. A great reminder to all students is a New York Times article that ran last year: They Loved Your GPA Until They Saw Your Tweets. One of the most important things about social media that teens should never forget is that social media is not a diary, scrapbook or venting machine. If you are having a bad day, stay off of technology.
In addition to securing your teen’s online reputation by encouraging positive, smart actions, you can also inform your teen of the cyber security issues at stake. They can secure their identity by never giving out their account password or smartphone passcode to anyone. A best friend today could easily become a frenemy tomorrow. Only parents should have passwords.
Keep in mind: you never get a second chance to make a first impression – especially online.
For more information and tips on raising digital citizens, visit NCSA’s website.
Russia has long been a whispered frontrunner among capable nations for performing sophisticated network operations. This perception is due in part to the Russian government’s alleged involvement in the cyber attacks accompanying its invasion of Georgia in 2008, as well as the rampant speculation that Moscow was behind a major U.S. Department of Defense network compromise, also in 2008. These rumored activities, combined with a dearth of hard evidence, have made Russia into something of a phantom in cyberspace.
In this report, learn about how this group operates.
1 Markoff, John. “Before the Gunfire, Cyberattacks”. The New York Times 12 August 2008. Web. http://www.nytimes.com/2008/08/13/technology/13cyber.html
2 Knowlton, Brian. “Military Computer Attack Confirmed”. The New York Times. 25 August 2010. Web. http://www.nytimes.com/2010/08/26/technology/26cyber.html
Download the Report
Robert Sloan, a professor at the University of Illinois at Chicago, found an AI system to be as smart as a four-year-old. Scientists often talk about creating artificial intelligence, but how "intelligent" are these systems really?
Researchers at the University of Illinois at Chicago came up with an answer after giving one of the top artificial intelligence (AI) systems an IQ test.
The MIT-built system, dubbed ConceptNet 4, is as smart as the average four-year-old.
"We're still very far from programs with common sense and AI that can answer comprehension questions with the skill of a child of eight," said Robert Sloan, head of computer science at the university.
His goal is research that can help focus attention on the "hard spots" or challenges in AI research.
The university reported Monday that researchers put the AI system through the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence test, a standard IQ assessment for young children. While the system has the average IQ of a young child, its scores, unlike most humans', were uneven across different parts of the test.
For instance, Sloan noted that ConceptNet 4 did very well on a vocabulary test, as well as on its ability to recognize similarities. However, the system did "dramatically worse" than average in its comprehension abilities, which are about answering "why" questions.
According to Sloan, one of the hardest problems in artificial intelligence research is building a computer program that can make good judgment calls based on any situation that might arise. Basically, it's difficult to program common sense because scientists haven't yet figured out how to give systems knowledge about things that humans find obvious, like the fact that ice feels cold.
"All of us know a huge number of things," said Sloan. "As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don't appreciate having their tails pulled."
This article, Top Artificial Intelligence system is as smart as 4-year-old, was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed.
Computer users with rudimentary skills will be able to program via screen shots rather than lines of code with a new graphical scripting language called Sikuli that was devised at the Massachusetts Institute of Technology.
With a basic understanding of Python, people can write programs that incorporate screen shots of graphical user interface (GUI) elements to automate computer work.
One example given by the authors of a paper about Sikuli is a script that notifies a person when his bus is rounding the corner so he can leave in time to catch it.
The script would pull together visual elements of a GPS-driven bus-tracking application that a transit authority might make available online. First, the user would box and capture a map image of the corner that the bus will turn to trigger the notification. That image is pasted into a line of code in Sikuli Script Editor that would look like this:
1: street_corner=find( ).
The image of the street corner to be found would be pasted inside the set of parentheses.
Then the script would command looking for the pointer that indicates a bus's location. That line of code would look like this:
2: while not street_corner.inside().find( ).similar(0.7):
A captured image of the pointer icon would be pasted in the second set of parentheses. The script seeks out the image of the bus pointer as pasted, and the pointer image that was boxed and cut necessarily includes some background. But as the pointer moves around the map, the background changes, so there will be no exact match for the image as pasted. To account for the differences, the "similar(0.7)" command indicates that the script should find images that are 70% similar to the icon pasted in the line of code.
The script tries to find the bus icon within the target area every 60 seconds, and that is written "sleep(60)". When the bus icon enters the target area, it triggers this response, scripted as: "popup("The bus is arriving!")".
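Assembled, the watcher's control flow looks like the sketch below: plain Python with stand-in functions for Sikuli's image matching (the `Region` class, the `inside_find_similar` stub and the canned match sequence are invented for illustration; a real Sikuli script would pass captured screenshots to `find()` and friends):

```python
# Toy re-creation of the bus-watcher loop. In Sikuli, find() takes a
# captured screenshot; here a stub simulates the bus icon drifting
# into the target street corner over successive polls.

class Region:
    def __init__(self, name, contains_bus_after):
        self.name = name
        self._polls = 0
        self._contains_bus_after = contains_bus_after

    def inside_find_similar(self, icon, similarity=0.7):
        """Stand-in for street_corner.inside().find(icon).similar(0.7)."""
        self._polls += 1
        return self._polls >= self._contains_bus_after

def watch(street_corner, bus_icon, poll_seconds=60, max_polls=10):
    polls = 0
    # Keep polling until the bus icon appears inside the corner region.
    while not street_corner.inside_find_similar(bus_icon, 0.7):
        polls += 1
        if polls >= max_polls:
            return None            # gave up; bus never appeared
        # In a real script: sleep(poll_seconds)
    return "The bus is arriving!"

corner = Region("street_corner.png", contains_bus_after=4)
print(watch(corner, "bus_icon.png"))  # The bus is arriving!
```

The essential pattern is the same as the article's fragments: a polling loop over fuzzy image matches that fires a notification once the match succeeds.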
Sikuli -- which means God's eye in the language of the Huichol Indians in Mexico -- also has a search function. Users paste in an icon from a program they are working with and the search engine will find sites that tell more about its function.
Specifying the visual search is actually faster than specifying a search based on keywords, say the researchers, Tom Yeh, Tsung-Hsiang Chang and Robert C. Miller.
In an upcoming paper, the researchers describe a way that programmers could use Sikuli to accelerate quality assurance testing for applications they are writing. They would write scripts to check whether applications in development continue to function as they should after each set of revisions.
Rather than having humans click on the applications' GUIs to see if they give the expected response, testers could script the clicking of the buttons and what visual feedback to expect if the button works properly. The script would flag those interactions that fail to provide the expected feedback.
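That regression-style check can be sketched as follows (plain Python with a stubbed click-and-capture layer; the button names, screenshot filenames and the planted regression are all invented for the example, and in Sikuli the stub would be real `click()` calls plus screenshot comparisons):

```python
# Expected visual feedback per button, and a stub simulating the
# application under test, with one regression deliberately planted.
EXPECTED = {"save": "toolbar_saved.png", "undo": "toolbar_undone.png"}
ACTUAL   = {"save": "toolbar_saved.png", "undo": "error_dialog.png"}

def click_and_capture(button):
    return ACTUAL[button]            # stand-in for click() + screenshot

def run_gui_tests(expected):
    failures = []
    for button, feedback in sorted(expected.items()):
        if click_and_capture(button) != feedback:
            failures.append(button)  # flag the broken interaction
    return failures

print(run_gui_tests(EXPECTED))  # ['undo']
```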
As Google pushes for greater security across the Internet through HTTPS encryption, unprepared businesses are ironically left MORE vulnerable to cyber attacks through encrypted traffic.
In case you haven’t heard, Google is attempting to enhance security across the World Wide Web by pushing for all websites on the Internet to be encrypted. They started internally by making sure that users of Google services like Search, Gmail and Drive gain a secure connection to Google by default. In order to coerce the masses into following suit, Google has announced that encryption will now be a factor in ranking sites within its search algorithms. As a result, those webmasters concerned with Search Engine Optimization (in other words, all of them) will be looking to encrypt their sites in droves. The particular vehicle for encryption that Google has keyed in on is HTTPS.
Hyper Text Transfer Protocol Secure (HTTPS)
Hyper Text Transfer Protocol (HTTP) is the basic set of rules that define how information is transmitted over the web; HTTPS results from adding a Secure Sockets Layer (SSL), a cryptographic protocol (since succeeded by Transport Layer Security, or TLS) that keeps data sent over the Internet private. That way, if a data transmission is somehow hijacked, the pilfered information is encrypted and unreadable.
Depending on which browser you’re using, you can tell when your connection to any website is secure by simply looking up at the address bar. You’ll notice that the URL begins with “https” and you should see some sort of icon resembling a lock. The address bar itself—or a portion of it—may also be green. You can actually click on the lock icon to check out the details of the website’s security. You should notice this when making any type of payment over the web, as credit card data should only be entered over an encrypted connection.
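The same verification your browser performs is visible in code. Python's standard library, for example, shows what a well-behaved HTTPS client demands by default: certificates must chain to a trusted root and the certificate's hostname must match the site's, or the connection is refused (a minimal illustration that inspects only the default settings and contacts no server):

```python
import ssl

# Build the TLS context a standard HTTPS client would use.
ctx = ssl.create_default_context()

# By default, the peer must present a certificate that verifies
# against trusted roots, and its hostname must match the URL's host.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```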
A Security Nightmare?
A World Wide Web full of secure websites should be a good thing, right? I mean, data protection is a major concern for businesses worldwide—particularly for organizations that must remain in compliance with government regulations in this regard. Not to mention, SSL certification also serves as a form of identity verification, in that it authenticates the validity of a particular website. In other words, they are who they say they are.
But there’s a dark side to this discussion that few have talked about. Google’s push for greater security through encryption across the Internet ironically leaves a great number of businesses more vulnerable to potentially damaging threats.
Let me explain. A company’s first (and sometimes only) line of defense against security threats—such as viruses, Trojans, rootkits, exploits and botnets to name a few—is a firewall. A traditional firewall inspects the packets of electronic data that travel in or out of a network or workstation, and applies a specified set of rules that it was given during configuration. If the data passes inspection, it freely moves along towards its intended recipient; if not, it gets blocked by the firewall. You can think of a firewall as a sort of data “bouncer” that decides whether or not to let a data packet into the club (the network), based on the perceived threat that the packet may cause trouble.
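The rule-matching a basic packet filter performs can be sketched in a few lines (a toy model, not a real firewall; the rules, addresses and ports are made up, and real firewalls match on far richer packet state):

```python
# Each rule pairs a predicate with an action. The first matching rule
# wins, and the final catch-all rule is a default-deny, as in most
# firewall configurations.
RULES = [
    (lambda p: p["dst_port"] == 443 and p["proto"] == "tcp", "allow"),
    (lambda p: p["dst_port"] == 80 and p["proto"] == "tcp", "allow"),
    (lambda p: p["src"].startswith("10.0."), "allow"),
    (lambda p: True, "deny"),
]

def filter_packet(packet):
    for predicate, action in RULES:
        if predicate(packet):
            return action

print(filter_packet({"src": "203.0.113.9", "dst_port": 443, "proto": "tcp"}))  # allow
print(filter_packet({"src": "203.0.113.9", "dst_port": 23, "proto": "tcp"}))   # deny
```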
When traffic is unencrypted, a firewall is very effective in protecting the network from a threat embedded in the data. The firewall opens up the data packet, identifies the threat, and blocks the data from passing, end of story. But here’s where the problem lies:
Firewalls are not effective against encrypted data.
Sit back and take that in for a second, because it’s something of a dirty little industry secret. If a threat is hidden inside of an encrypted data packet, a traditional firewall can’t look inside that packet to see if danger lurks within.
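The problem is easy to demonstrate: a signature scan that catches a malicious byte pattern in cleartext finds nothing once the same bytes are encrypted. The sketch below uses a throwaway XOR keystream as a stand-in for real TLS encryption, and the `EVIL_PAYLOAD` marker is an invented signature:

```python
import hashlib
from itertools import cycle

SIGNATURE = b"EVIL_PAYLOAD"

def scan(data):
    """A naive 'firewall' check: look for a known bad byte pattern."""
    return SIGNATURE in data

def toy_encrypt(data, key=b"secret"):
    # XOR with a hash-derived keystream; a toy stand-in for encryption.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

packet = b"GET /download HTTP/1.1 " + SIGNATURE
print(scan(packet))               # True  -> blocked in cleartext
print(scan(toy_encrypt(packet)))  # False -> sails past the filter
```

The scanner is not broken; it simply cannot see through the encryption, which is exactly the blind spot DPI-SSL (discussed below) is designed to remove.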
80% Of Companies in Danger
A recent Gartner survey backed this up with the following statistic: “Less than 20% of organizations with a firewall, an Intrusion Prevention System (IPS) or a Unified Threat Management (UTM) appliance decrypt inbound or outbound SSL traffic.” In other words, over 80% of organizations with these safeguards in place are left vulnerable to attack through encryption.
With Google’s announcement, the amount of encrypted traffic is going to steadily increase; as will the potential for disaster. Webmasters aren’t the only ones altering their methods to conform with Google’s push for encryption across the Internet. So are the bad guys. As more applications and websites move towards encryption, hackers see more opportunity to mask their shenanigans from company firewalls and Unified Threat Management tools.
A Viable Solution
But amidst the darkness, there IS a practical solution: It’s called DPI-SSL, and it’s a feature available on Dell SonicWALL Next-Generation Firewalls, offered through Data-Tech.
Let’s break down the terminology. DPI stands for Deep Packet Inspection. It’s an advanced form of packet inspection that probes far deeper into a data packet than conventional inspection. This allows the firewall to better examine the packets for threats before permitting them access to the network. Next-Generation firewalls incorporate DPI along with additional filtering functionalities to provide far more intensive inspection and greater overall security than a traditional firewall. DPI does not, however, apply to encrypted data.
Deep Packet Inspection of Secure Socket Layer (DPI-SSL) on the other hand, takes Dell SonicWALL’s DPI technology to the next level by allowing the firewall to open and inspect encrypted traffic. With DPI-SSL activated, the traffic is decrypted, scanned, security inspection is applied, content filtering and data leakage policies are enforced, and the encrypted applications are controlled. This all happens without introducing any latency (i.e. delays) into the network.
Incorporating DPI-SSL is a relatively simple fix to combat a steadily increasing threat of disaster.
A New Level of Security and Productivity
In addition to the obvious security advantages for organizations that employ DPI-SSL technology, further benefits exist in the way of content filtering. Loopholes occur within many content filtering systems when web traffic becomes encrypted, allowing employees to access social media or view content or images that should be prohibited by company policy. If the firewall can’t see what the traffic is, it can’t discern whether or not it should be blocked.
DPI-SSL allows a company to close these loopholes and provide a safe and compliant web browsing experience for users on the network. Not only does this minimize the opportunity for employees to invite threats into the system, it keeps them from “cyberloafing” on company time.
With the announcement that technology giant Google will now use encryption as a factor in organic search rankings, websites that adopt https are quickly multiplying. The irony is, by taking this step to make the Internet a more secure medium as a whole, Google’s leaving both businesses and individuals more vulnerable to potentially crippling threats from unscrupulous individuals looking for new ways to attack corporate networks.
“It’s a challenging time for those in charge of protecting a company’s network and its data,” says Social Media and Community Professional Jason Cobb of Dell. “This has been made even more challenging by the rapid adoption of encrypted web communications. Without the ability to inspect encrypted traffic, the security solutions you deploy are only effective against the traffic that is not encrypted. When you consider the average network has 40% of its traffic encrypted, with that rate dramatically rising every year, you can see how many companies are ill prepared to deal with this emerging threat.”
By employing Dell SonicWALL’s DPI-SSL technology, offered through Data-Tech’s Firewall as a Service solution, organizations can effectively protect their networks against threats masked behind both secure and non-secure avenues.
Data-Tech is dedicated to providing superior solutions precisely tailored to your technological needs. We employ highly skilled professionals in the industry to implement our solutions in order to ensure you receive the service level you deserve. Data-Tech offers a broad selection of computer services that are available as pre-defined solutions or à la carte. Whatever you need—Computer IT Services, Managed IT Services, Cloud Services, Telephony & Cabling, VoIP, Data Hosting, Healthcare IT Management, Disaster Recovery and Backup and Business Continuity needs—Data-Tech will accept nothing less than your total 100% satisfaction.
© Copyright 2015 Data-Tech. All Rights Reserved. www.datatechitp.com
This week on Security Levity... spam laws around the world.
Many of us know about the U.S. federal law regulating spam, known as the CAN-SPAM Act (or at least we think we do). But what about the laws internationally?
First: a disclaimer: IANAL (I am not a lawyer). If you use this blog post as a substitute for legal advice, you're probably not thinking straight!
That said, here are a few notable international spam laws.
Australia: Australians are famously a plain-speaking culture, so it's no surprise that their spam act, enacted in 2003, is called the Spam Act 2003. With few exceptions, the act outlaws unsolicited commercial email. Commercial mail must include information about the sender, and must allow unsubscribing. Address-harvesting software is outlawed, as are lists of email addresses created by harvesting software. So if you're spidering the Web for email addresses, stop it and delete any lists you might already have.
Canada: The proposed Electronic Commerce Protection Act (ECPA or Bill C-27) has just exited the committee stage and is expected to come into force next year. It sets out to define "consent" more clearly than does CAN-SPAM, referring to the question of implicit or implied consent. It also applies a time limit to consent, which it seems will force marketers to ask recipients for consent again after 18 months. The Act is expected to define a maximum penalty of CA$1 million for individuals or CA$10 million for businesses. It also prohibits false or misleading commercial email. Enforcement will be via three separate agencies: the Canadian Radio-television and Telecommunications Commission, the Competition Bureau, and the Office of the Privacy Commissioner. Both civil and criminal actions will be possible.
China: On March 30th, 2006, The People's Republic enacted the Regulations on Internet E-Mail Services via the Ministry of Information Industry. Again, this is an opt-in regime, and all commercial advertisements should be prefaced with the abbreviation "AD" in the message's subject. Harvesting is also outlawed. The law also puts the onus on service providers to do their part in fighting spam, including logging user complaints and retaining evidence of spammer activity on their networks.
Europe: The European Union (EU) is different from a federal system such as in the U.S., in that the EU doesn't make laws, as such. The various countries that make up the EU -- the Member States -- continue to maintain their sovereignty. However, the EU does pass "directives", which instruct the member states to pass laws, by a certain deadline, which meet at least the minimum standard laid down in the directive. In the case of spam, we're talking about Directive 2002/58/EC on Privacy and Electronic Communications (AKA the E-Privacy Directive). It's not just about spam, but also covers cookies and other data confidentiality issues. Article 13 deals with spam, and says that email recipients must have given informed consent to receive email -- consent is implicit in the case of an existing customer relationship, but only relating to a similar product or service.
Israel: Here in my home, our law is quite similar to the EU directive. It's officially called Amendment 40 to the Communications (Bezeq and Broadcasting) Act, and provides for criminal fines of up to ₪202,000 (about $54,000). Interestingly, an Israeli private individual can sue a spammer in small claims court for ₪1,000 ($280) per message, without needing to prove that the spam caused any "damage". However, the law does allow political or charity messages without an opt-in. The Israeli Chapter of the Internet Society helped the Knesset formulate the legislation.
In a future blog post, I'll talk more about the CAN-SPAM Act and explode a few myths and misconceptions.
I want to make this an interactive place: where I can answer questions and cover topics that you suggest. Feel free to add comments and ask Amir!
Some of the world’s biggest IT companies and their suppliers are contaminating rivers and underground wells in developing countries with a wide range of hazardous chemicals, according to Greenpeace.
The environmental campaigning group has released a report called ‘Cutting Edge Contamination: A study of environmental pollution during the manufacture of electronic products’.
Analysis of samples taken from industrial estates in China, Mexico, the Philippines and Thailand reveals the release of hazardous chemicals in each of the three sectors investigated: printed wiring board (PWB) manufacture, semiconductor chip manufacture and component assembly.
Most noteworthy, said Greenpeace, was the discovery at most of the investigated sites of polybrominated diphenyl ethers (PBDEs), a group of brominated chemicals used as flame retardants, and of phthalates, chemicals used in a wide range of processes and materials, though they are most commonly used as plasticisers (softeners) in some plastics.
“Over recent years we have seen an increasing concern over the use of hazardous chemicals in electronic products but attention has focused on the contamination released during disposal or ‘recycling of electronic waste’,” said Dr Kevin Brigden from the Greenpeace Research Laboratories.
“Our findings of contamination arising during the manufacturing stage make it clear that only when we factor in the complete lifecycle will the full environmental costs of electronic devices begin to emerge,” he said.
Zeina Al-Hajj, toxics campaigner for Greenpeace International, said, “There is shockingly little information on precisely which major brand companies are supplied by which manufacturing facilities.
“Responsibility for the contamination lies as much with those brands as with the facilities themselves.
“There has to be full transparency regarding the supply chain within the electronics industry, so that brand owners are forced to take responsibility for the environmental impacts of producing their goods.”
The study also documents the contamination of groundwater aquifers at a number of sites, particularly around semiconductor manufacturers, with toxic chlorinated volatile organic chemicals (VOCs) and toxic metals including nickel.
Contamination of groundwater is of particular concern, said Greenpeace, since local communities in many places use groundwater for drinking water.
At one site, the Cavite Export Processing Zone (CEPZA) in the Philippines, three samples contained chlorinated VOCs above World Health Organisation (WHO) limits for drinking water.
One sample contained tetrachloroethene at nine times the WHO guidance value for exposure limits and 70 times the US Environmental Protection Agency maximum contaminant level for drinking water.
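As a back-of-the-envelope check of the figures above: using the published limits for tetrachloroethene in drinking water, a WHO guideline value of 40 µg/L and a US EPA maximum contaminant level of 5 µg/L (both values worth verifying against the current editions of those documents), the two multiples quoted should point at roughly the same measured concentration.

```python
# Published limits for tetrachloroethene in drinking water (ug/L);
# verify against current WHO and EPA documents before relying on them.
WHO_GUIDELINE_UG_L = 40   # WHO drinking-water guideline value
EPA_MCL_UG_L = 5          # US EPA maximum contaminant level

implied_from_who = 9 * WHO_GUIDELINE_UG_L    # "nine times the WHO value"
implied_from_epa = 70 * EPA_MCL_UG_L         # "70 times the EPA MCL"

# The two implied concentrations agree to within rounding of the multiples.
print(implied_from_who, implied_from_epa)  # 360 350
```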
Elevated levels of metals, particularly copper, nickel and zinc, were also found in groundwater samples in some sites.
The use of such toxic chemicals in manufacturing processes also poses potential risks to workers through workplace exposure.
Wastewater discharged from an IBM site in Guadalajara, Mexico, contained hazardous compounds, including some (such as the potent hormone disruptor nonylphenol) that were not found at other sites.
IBM’s Supplier Conduct Principles Guidelines state that suppliers should operate in a manner that is protective of the environment. “IBM should act upon our findings and investigate activities at the site in order to prevent any releases of persistent organic compounds from the Guadalajara site,” Al-Hajj said.
IBM has so far not responded to the Greenpeace report.
Why maps matter
- By Frank Konkel
- Mar 17, 2014
People used to use maps so they wouldn't get lost. But in recent years, access to the Global Positioning System and the proliferation of mobile technology have made paper-based maps almost irrelevant. Unless you're in uncharted territory, it's hard to get lost anymore. Basic geography is as easy as inputting an address and letting your mobile phone tell you how to get there.
And as mapping technology advances, it allows for far more than foolproof directions. Federal agencies now use geospatial data, geo-analytics and multi-layered maps for myriad purposes, including gathering intelligence, predicting disease outbreaks and sharing data pools with the public.
The allure of mapping lies in its intuitiveness. Even simple "dots on a map can be a powerful way to see trends in data," said Josh Campbell, geographic information system architect for the Humanitarian Information Unit at the State Department. "Maps are a compressed mechanism for storytelling."
Last year, Campbell's office created a series of maps to track the mass migration of Syrians displaced by the country's ongoing violence. The HIU team combined data from thousands of media and internal reports with commercial satellite imagery. Each map provided a geographical snapshot of a place. Together, they showed trends over time and revealed the areas with the most intense conflict.
"That visualization can simplify complex data relationships among variables," Campbell said. "It's one thing to read the information, but I think visualization is a powerful way to consume information that scales beyond reading."
That is perhaps the most important aspect of maps: They make for better decision-making.
The Federal Communications Commission used to convey policy changes through 1,000-page Microsoft Word documents, said Mike Byrne, geographic information officer at the FCC. Now the agency uses cartography to explain complicated policy subjects such as spectrum allocation.
Officials rely on a mixture of open-source and proprietary tools to do that, but the focus is on creating a product that users can easily understand, whether those users are federal decision-makers or members of the general public, Byrne said.
"The platform for us is the Internet," he added. "At FCC, we take really complicated things and display them so that anyone can understand what the high-level landscape view looks like."
'So much easier than words'
The ease with which maps can be created, shared, accessed and understood is why they are reaching the highest levels of decision-making in government. As the mapping technology improves, even Congress is getting into the act.
Legislative committees and even individual lawmakers are hiring GIS experts to make maps that inform and educate policy-makers or enhance decision-making regarding prospective legislation.
Cathy Cahill, a professor in the Department of Chemistry and Biochemistry at the University of Alaska Fairbanks, started a stint as a congressional fellow with the Senate Energy and Natural Resources Committee in January. Within days, Cahill had produced her first map. It detailed the locations of various types of power plants across the country using open data from the Energy Information Administration.
Because Cahill knows Alaska well and because the committee includes Sen. Lisa Murkowski (R-Alaska), much of Cahill's work centers on her home state. One map she produced highlighted the costs remote Alaskan communities sometimes face due to the long distances petroleum must travel from refineries.
"Working with the Senate, we have incredible data from a bunch of agencies and beautiful maps and databases that I can pull from," she said.
However, she isn't staring into a desktop screen of Esri's ArcGIS on her own. She's working with and training committee staffers and sharing her GIS knowledge so that the mapping can continue after her 12-month fellowship is over -- something the legislators increasingly demand.
It's not uncommon to see Murkowski or Sen. Ron Wyden (D-Ore.) using maps on their mobile devices to explain policy or problems to peers or the citizens they serve. Wyden has gone as far as embedding maps in press releases dealing with Medicare reform.
"It's a very integrative process," Cahill said. "I'll show what data is available, and they'll say, 'We want it presented this way.' To present [it visually] is so much easier than words. You're setting up problems, putting topics they are interested in into a map where they can see it spatially and think about why those things occur and where."
The changing technology landscape
Geographers, GIS experts, coders and cartographers are sought-after professionals in the private sector and government alike. Producing high-quality, informative maps requires a complex skill set, but evolving tools, technologies and policies are simplifying certain aspects of map-making.
Esri's mapping software has long been dominant in federal agencies -- integrating with other large data systems and offering a full suite of analytical tools for those with sufficient training. But Tableau Software and Google Maps offer increasingly powerful visualization options for those new to GIS, while Esri is making its systems more accessible as well. Mapbox's mobile-first approach and open-data focus are winning that Washington, D.C.-based company numerous federal customers, and OpenStreetMap offers a non-proprietary framework and data library that appeal to agencies pushing to open their own geodata.
"Up until recently in geo for government, there really hadn't been a choice," said MapBox CEO Eric Gundersen.
This growing competition in map-making software is good for the government -- for reasons that go beyond cost and learning curves. Esri, for instance, recently made headlines by enabling federal agencies that use the company's proprietary tools to open geospatial data to developers and the public. Individual agencies can decide whether to release their geospatial data in this way, but the Environmental Protection Agency wasted little time in doing so and other agencies are sure to follow.
Officials at the National Park Service are already moving in that direction. "We support lots of proprietary systems, but we started building core parts of our stack around open source, so we've become more nimble and agile in regard to our development," said Nate Irwin, EGIS and web mapping coordinator at NPS. "That doesn't mean we've lost our connection to the traditional GIS world."
Irwin leads NPS' renowned map-building team, which has created road-closure maps for the Blue Ridge Parkway and air-quality visualizations for each national park, among others. In the near future, Irwin wants NPS maps to track park infrastructures in real time at a level so detailed that a ranger could report a grizzly bear sighting in Yellowstone and have traffic routed around the animal in a matter of seconds.
"Technology has changed dramatically, and things that were impossible to do five years ago are almost easy to do now," Irwin said. "We can really focus on the details."
That ability to drill into the details has a lot to do with improvements in computer hardware. Previously, servers and enterprise systems groaned and chugged through storing and computing resource-intensive geodata. Now in all but the most extreme cases, cloud computing has eliminated the hardware problem. Datasets hundreds of terabytes in size or larger -- such as climate change models organized by the National Oceanic and Atmospheric Administration -- can be processed in short order because of on-demand horsepower available via the cloud.
Storage and computing capacity are not the issues anymore, said Jeff Peters, director of national government sales at Esri. "Complex cloud systems have sprung up, and the elasticity and scalability of the cloud is really what's driving this," he added. "You have the complex tools that leverage the horsepower and the analytical tools that sit back in data centers. The challenge becomes how best do we enable these tools and ask the sophisticated questions."
Disaster prevention and response
Some agencies are using maps to address increasingly complex problems. Disaster response and emergency planning teams descended on New Jersey in preparation for the Super Bowl in February. Local fire department personnel inspected critical facilities near the stadium, and officials digitally scanned documents related to water hookups, the locations of hazardous materials and building blueprints to create a complex, multi-layered mapping system that authorized users could access on mobile devices.
In addition, maps often include real-time weather feeds and camera imaging, said Russ Johnson, Esri's public safety director. The company often partners with the Federal Emergency Management Agency during disasters.
"Not only do you have the single view of what is happening in time, but if you have the appropriate credentials, you and other stakeholders can log on and be connected to the same view of data for shared situational awareness," he added.
Disaster response has also spurred geodata-based applications that any user with access to GitHub can download and run in the cloud for a few dollars. Mitre, a nonprofit organization that operates research and development centers sponsored by the government, developed GeoQ for the National Geospatial-Intelligence Agency as Hurricane Sandy bore down on the East Coast in late 2012. The app allows analysts to compare real-time images after a disaster with existing satellite imagery to conduct damage assessments and provide other valuable information. It also provides a Rosetta Stone of sorts for geodata, allowing users to import, convert and combine virtually all the commonly used formats.
The app has thousands of federal users, 2,000 of whom used it during the Boulder, Colo., flooding in September 2013, said Jay Crossler, senior principal software engineer at Mitre. Yet anyone can download the code and run it on a virtualized machine in the cloud for about $14 a year.
"We've made it all available so that anyone can set it up," Crossler said.
The importance of context
Not surprisingly, the intelligence community is at the forefront of geospatial technology and mapping, led by NGA. The agency is in the early stages of building its Map of the World, an internal platform for all the geo-intelligence and multisource content the agency collects for the intelligence community. At a geospatial conference hosted by Esri in February, NGA Director Letitia Long said the agency's goal is to have analysts fully immersed in an information environment that is enhanced with the latest visual, auditory and tactical tools by 2020.
The Navy maps energy consumption at Naval Station Norfolk and other installations, monitoring trends and visualizing opportunities to economize. (Esri image)
NGA's budget, which has doubled to $5 billion in the past 10 years, highlights the increased importance geospatial data plays in national security. And the agency collects petabytes of data via various platforms. To use and share it effectively, officials have to give it context, which can be a challenge if the data lacks geospatial attributes.
"How do you put context to documents that don't have geospatial latitude and longitude in them? How do you disambiguate?" asked Michael Walsh, senior director of Virginia Intelligence Programs at Intelligent Decisions, which has IT contracts with several intelligence and defense agencies. "That's a big piece to making sense of information."
"Maps aren't pieces of paper anymore," he added. "They are layers on top of each other. Geography, weather patterns, crops, migrations of people -- those are all layers."
Whether they show dots on a map, trends over time, real-time situational awareness or possible policy implications, maps have a growing importance in government. In the past decade, technology has allowed for the production of better maps while simultaneously improving the consumption of the information they provide, down to nearly every mobile and Web-connected device. People used to use maps so they didn't get lost. Now we use them for almost everything.
Refueling an aircraft while it is flying can be a tricky-enough proposition, but refueling an unmanned jet from another unmanned jet sounds like a scene from a James Bond movie.
Such a tricky procedure is apparently within reach, however: the Defense Advanced Research Projects Agency (DARPA) said this week that it successfully tested the technology needed to fly two drones close enough together in mid-air, at speed, that one, acting as a tanker aircraft, could refuel the other.
From DARPA: "During its final test flight, two modified Global Hawk aircraft flew in close formation, 100 feet or less between refueling probe and receiver drogue, for the majority of a 2.5-hour engagement at 44,800 feet. This demonstrated for the first time that High Altitude Long Endurance (HALE) class aircraft can safely and autonomously operate under in-flight refueling conditions. The flight was the ninth test and the first time the aircraft flew close enough to measure the full aerodynamic and control interactions. Flight data was analyzed over the past few months and fed back into simulations to verify system safety and performance through contact and fuel transfer -- including the effects of turns and gusts up to 20 knots."
DARPA said that because HALE aircraft are designed for endurance rather than control authority, it expected only one of six attempts would actually achieve positive contact (17%). However, "the final analysis indicated that 60% of the attempts would achieve contact. Multiple autonomous breakaway contingencies were successfully triggered well in advance of potentially hazardous conditions."
The demonstration could open a world of longer duration drone flights as today's UAVs aren't designed to be refueled in flight. In 2007, DARPA teamed up with NASA to show that high-performance aircraft can easily perform automated refueling from conventional tankers, yet many unmanned aircraft can't match the speed, altitude and performance of the current tanker fleet. The 2007 demonstration also required a pilot on board to set conditions and monitor safety during autonomous refueling operations.
Under a $33 million deal in 2010 with DARPA, Northrop agreed to demonstrate refueling with a pair of Global Hawks. Although air-to-air refueling was not originally part of the design for drones like the Global Hawk, Northrop then stated such technology offers a number of benefits.
A Global Hawk with a particularly heavy payload, for example, would be able to take off with less fuel, and be subsequently refueled in the air. In addition, a Global Hawk with a unique sensor package would be able to stay on station longer if equipped to receive fuel from another platform.
This handy cheat-sheet helps keep you straight on evolving storage terminology.
Direct-attached storage (DAS): Storage connected directly to a server.
Fibre Channel: An expensive short-distance networking technology used for building SANs.
ATA (AT Attachment) or IDE (Integrated Drive Electronics): Traditional desktop and low-end server storage technology that includes the controlling circuitry for mass-storage devices as part of the devices. This technology is one of the standard ways of connecting hard drives, CD-ROM drives and tape drives to a system.
iSCSI: A low-cost way to create SANs over IP networks.
IP SAN: A SAN built around the iSCSI protocol.
Network-attached storage (NAS): A storage appliance that connects to the Ethernet network and provides file-level storage access.
RAID (Redundant Array of Inexpensive Disks): The name for a number of different fault-tolerance schemes that use drive arrays.
SCSI (Small Computer System Interface): The de facto standard for midrange and high-end server direct-attached storage.
Serial ATA (SATA): A new low-cost storage standard with faster transfer speeds than IDE/ATA.
Serial-attached SCSI: A new high-performance SCSI standard. Products will appear late this year.
Storage area network (SAN): Typically a Fibre Channel subnetwork of storage devices that can be shared by several servers.
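The RAID entry above covers several schemes with very different capacity trade-offs. Here is a hedged sketch of usable capacity for a few common RAID levels; it is simplified for illustration and ignores hot spares, controller overhead and vendor-specific variants.

```python
def usable_capacity(level: int, drives: int, size_tb: float) -> float:
    """Approximate usable capacity (TB) for common RAID levels.

    Simplified illustration only: assumes identical drives and ignores
    hot spares, metadata overhead and nested levels like RAID 10.
    """
    if level == 0:                 # striping, no redundancy
        return drives * size_tb
    if level == 1:                 # mirroring: half the raw capacity
        return drives * size_tb / 2
    if level == 5:                 # striping with one drive's worth of parity
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * size_tb
    if level == 6:                 # striping with two drives' worth of parity
        if drives < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (drives - 2) * size_tb
    raise ValueError("unsupported RAID level in this sketch")

print(usable_capacity(5, 4, 1.0))  # four 1 TB drives in RAID 5 -> 3.0 TB usable
```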
Simplicity is: logging in without a username or password
“I really like what I can do in the web interface, but having to enter my username and password to login each time is extra work.”
We’ve seen the above comment many times. Identity verification, as everyone who has not been lost on a desert island for 10 years knows, is really, really important these days. But like many aspects of security, it can be rather annoying.
On the bright side, there are a number of ways to get around this step and make the login process simpler without necessarily making your account less secure. Here is how we have helped many customers simplify their Internet life.
1. Your browser can save your password
Most modern web browsers can tell when you are on a web site’s “login” page and will ask you if you want to “save your password” to that site. If you choose this option, then the password will be saved on your computer somewhere so that, the next time you visit the login page, your login credentials can be pre-filled for you. All you have to do then is click “login” and you are in. Super quick.
This method will work with most web sites. If you are not being prompted to save passwords, it’s possible this feature simply isn’t enabled in your browser. Here is how to turn it on:
Google Chrome:
- Settings > click on "Show advanced settings…"
- Enable the option to offer to save passwords
Mozilla Firefox:
- Preferences > Security tab
- Enable "Remember passwords for sites"
Safari:
- Preferences > Passwords tab
- Enable "Autofill user names and passwords"
Internet Explorer (v11):
- Internet Options > Content Tab
- Press the "Settings" button under "AutoComplete"
- Enable "User names and passwords on forms"
- Press "OK"
Warning: What you must know about this method is that your username and password are being saved on your local computer. As such, someone with access to your computer (either access to your login or an administrator) could possibly get at that information, and that can be a significant security risk. Additionally, if you step away from your computer without logging out, anyone sitting down can then login as you to any sites where your login credentials are being saved. So, you should never save your passwords on public computers (e.g. library, coffee shop) or computers that are not accessed exclusively by you and/or people you trust.
If you use Mozilla Firefox, there is a useful feature that allows you to set a "master password" for all your other passwords. With this option enabled, your saved login passwords will be encrypted on disk, making them inaccessible without the master password. This protects your passwords from someone sitting down at your computer and opening a new Firefox session, but of course hinges on you remembering to close Firefox before you step away. To enable the master password option:
- Go to “Preferences” and choose the “Security” tab
- Enable “Use a master password”.
If your organization has security requirements (e.g. HIPAA), please check with your compliance officer or IT staff to see if saving passwords in this way is permitted before you start doing it.
2. Quick Logins
LuxSci has a cool feature called “Quick Logins” that drastically improves on the browser-based “Saved Password” option discussed above for logins to the LuxSci.com member’s web site:
- It works with any browser, even on tablets and mobile phones
- Your password is never saved on your computer or device.
- You can set up Quick Logins for multiple accounts so you can get a list of account choices on the login page and just press one button.
- You can see a list of all browsers that have Quick Logins enabled, and you can selectively invalidate any of them at any time even if you no longer have access to that computer or browser.
- Users can enable Quick Logins for themselves, or administrators can require Quick Logins for their users on a case-by-case basis.
To learn more about Quick Logins and how to set them up, see: Want to Login to LuxSci from your Mobile Phone with a Single Touch?
Quick Logins work great with all web browsers, but they are especially designed for mobile devices where it is much more painstaking to type passwords manually.
What about high security HIPAA accounts?
For high security accounts, such as those with HIPAA compliance requirements, Quick Logins are limited:
- Account and Domain Administrators are not permitted to use Quick Logins for themselves at all.
- Users are never permitted to self-provision Quick Logins — an administrator must enable a Quick Login for an approved user and communicate an authorization code to that user.
Even in lower security accounts, administrators are only allowed to access the "mobile site" via Quick Login, for security reasons.
Wan Z. (Nanjing Forestry University; Huangshan University), Li Y. and Chen Y. (Nanjing Forestry University), and 3 more authors. Phytoparasitica, 2013:
Epidemic outbreaks of rust disease have been observed in most of the black cottonwood (Populus trichocarpa) plantations in the south of China in recent years. However, the exact pathogens that cause rust disease in this area remain largely unknown. In this study we collected rust fungi from black cottonwood plantations at four different places in the south of China. Examination of these fungi by scanning electron microscope (SEM) and light microscopy (LM) revealed that the morphological characteristics of urediniospores for all the collected fungi were very close to that of Melampsora larici-populina. Using species-specific primers, these pathogens were confirmed to be M. larici-populina. In a survey on the resistance to rust disease for 88 genotypes of black cottonwood, nine potential candidate genotypes were detected that may be free from infection of rust pathogen. The results of this study provide essential information for breeding new rust-resistant black cottonwood cultivars. © 2013 Springer Science+Business Media Dordrecht.
Feng X.-H., Cheng R.-M., Xiao W.-F. and Wang R.-L. (Chinese Academy of Forestry), and 3 more authors. Chinese Journal of Ecology, 2011:
Longer growth seasons have been confirmed due to elevated temperatures in recent decades. Though changes in the duration of the growth season could affect tree productivity, it is unclear how growth seasons defined by different initiating temperatures affect the radial growth of trees. In order to investigate the effects of growth-season temperature variability on the radial growth of Masson pine (Pinus massoniana) and to search for the temperatures to which growth is sensitive, old Masson pine stands in Hanzhong, on the northwest margin of the north subtropical region, were chosen as test objects, with their tree-ring width index chronology from 1945 to 2009 measured by dendrochronological methods. The air temperatures on the first day, last day, and over the whole growth season, as well as the active accumulative temperature during the growth season, were determined based on the daily mean temperature of Hanzhong, and the relationships between the temperatures and the chronology were analyzed. The results showed that growth seasons with initiating temperatures of 6.0 °C-7.5 °C had negative effects on the tree-ring width index chronology, with 6.0 °C being most significant. A temperature of 10.5 °C on the last day had significant positive effects on tree-ring growth. Temperatures of 10.0 °C and 10.5 °C in the growth season were significantly positively correlated with tree-ring growth, and the active accumulative temperature during the growth season was also significantly positively correlated with growth. These sensitive temperatures corresponded respectively to the onset of photosynthesis, needle emergence in spring, and the shutting down of cambium activity in autumn. Our study suggested that elevated temperature led to changes in phenophase and thereby affected the radial growth of P. massoniana in Hanzhong.
Wang R., Cheng R., Xiao W. and Feng X. (Chinese Academy of Forestry), and 3 more authors. Shengtai Xuebao / Acta Ecologica Sinica, 2011:
The North Subtropical Area of China, located in the transition from the warm temperate zone to the subtropical, is more sensitive to environmental changes. Therefore, studying the relationship between Masson pine (Pinus massoniana) tree-ring width data and NDVI in the north subtropical region, where Masson pine growth is more sensitive to changes in climatic factors, is of much importance for revealing how terrestrial ecosystems respond to global climate change. As the northern boundary of the natural distribution of Masson pine, Nanzheng County of Shaanxi Province and Jigongshan National Nature Reserve of Henan Province were selected. Using Masson pine tree-ring width indices, monthly normalized difference vegetation index (NDVI) and climatic data from 1982 to 2006, the relationships between tree-ring width indices, NDVI, and climatic data, including monthly mean temperature, precipitation and the Palmer Drought Severity Index (PDSI), were analyzed first. Then, the relationship between tree-ring width indices and forest NDVI was explored. The results showed that the vegetation index NDVI in the North Subtropical Area was influenced by hydrothermal conditions, and monthly NDVI was mainly positively correlated with monthly mean temperature and negatively correlated with monthly mean precipitation and PDSI. In addition, the correlation coefficient between NDVI and temperature was larger than for the other factors. Masson pine radial growth was positively correlated with temperature of the previous growing season, and negatively correlated with precipitation and PDSI. Temperature and precipitation during the growing season of the same year promoted pine radial growth, while the influences of PDSI on Nanzheng County and Jigongshan were opposite. In the northern subtropical region, the relationship between Masson pine tree-ring width and forest NDVI was not significant (P > 0.05).
However, NDVI of Nanzheng County in March, August and December were significantly associated with two chronologies, NDVI of Jigongshan region in September associated with the RES chronology with the biggest correlation coefficient. Through analyzing synthetically, we figure out that the tree growth of Nanzheng county was mainly affected by temperature, that of Jigongshan was influenced by the interaction of temperature and precipitation. In conclusion, we imply that long time series of ring width data does not reflect well the long-term vegetation changes in the northern subtropical region, and it is unreasonable to model and reconstruct the long-term vegetation changes and productivity using tree radial growth. Therefore, the further study is still required to reconstruct regional NDVI using tree-ring width chronologies in the North Subtropical Region. Source
Induri B.R.,West Virginia University |
Ellis D.R.,West Virginia University |
Slavov G.T.,West Virginia University |
Slavov G.T.,Aberystwyth University |
And 5 more authors.
Tree Physiology | Year: 2012
Understanding genetic variation in the response of Populus to heavy metals like cadmium (Cd) is an important step in elucidating the underlying mechanisms of tolerance. In this study, a pseudo-backcross pedigree of Populus trichocarpa Torr. & Gray and Populus deltoides Bart. was characterized for growth and performance traits after Cd exposure. A total of 16 quantitative trait loci (QTL) with a logarithm of odds (LOD) ratio ≥2.5 were detected for total dry weight, its components and root volume. Major QTL for Cd response mapped to two different linkage groups, and the relative allelic effects ran in opposing directions on the two chromosomes, suggesting differential mechanisms at these two loci. The phenotypic variance explained by individual Cd QTL ranged from 5.9% to 11.6% and averaged 8.2% across all QTL. A whole-genome microarray study led to the identification of nine Cd-responsive genes within these QTL. Promising candidates for Cd tolerance include an NHL-repeat membrane-spanning protein, a metal transporter and a putative transcription factor; additional candidates in the QTL intervals include a putative homolog of a glutamate cysteine ligase and a glutathione S-transferase. Functional characterization of these candidate genes should enhance our understanding of Cd metabolism and transport and of the phytoremediation capabilities of Populus. © The Author 2012. Published by Oxford University Press. All rights reserved.
Li S.,Nanjing Forestry University |
Zhang X.,Hubei Forestry Academy |
Yin T.,Nanjing Forestry University
Journal of Microbiology and Biotechnology | Year: 2010
In this paper, we analyzed the microsatellites in the transcript sequences of the whole Laccaria bicolor genome. Our results revealed that, apart from triplet repeats, the length diversification and richness of the detected microsatellites correlated positively with their repeat motif lengths, a trend distinct from that observed for transcriptional microsatellites in the genomes of higher plants. We also compared the microsatellites detected in the genic and nongenic regions of the L. bicolor genome. SSR primers were then designed for the transcriptional microsatellites in the L. bicolor genome. These primers provide a valuable genetic resource for the ectomycorrhizal research community, and this study offers deeper insight into the characteristics of the microsatellite sequences in the L. bicolor genome.
Banking Trojans have traditionally used configuration files stored on the computer under attack. These configuration files contain the addresses of the targeted websites and the code, called the webinject, that the Trojan inserts into those sites. This injected code is then responsible for stealing access credentials and personal information, for example.
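In outline, a webinject configuration pairs a URL pattern with markup to insert. The sketch below is purely illustrative; the field names are assumptions made for explanation, not any real Trojan family's format:

```python
# Illustrative shape of a webinject configuration entry. Field names are
# assumptions made for explanation; real Trojan families use their own formats.
webinject_config = {
    "targets": [
        {
            "url_mask": "https://*.examplebank.test/login*",   # pages to modify
            "anchor": "</form>",                               # where to inject
            "payload": "<!-- extra input fields would go here -->",
        }
    ]
}

for target in webinject_config["targets"]:
    print(target["url_mask"], "->", target["anchor"])
```

Moving entries like these off the disk and into the cloud, as described below, deprives analysts of a static file to inspect.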
Stealth Cloud technology
With this new functionality, individual parts of the malware configuration are moved to the cloud. The malware authors intend this step to hinder analysis by antivirus vendors and banks.
Graph 1: Classical Man in the Browser attack
Graph 2: Information Stealer with Cloud technology
For detailed technical information, visit the G Data SecurityBlog: http://blog.gdatasoftware.com/blog/article/banking-trojans-disguise-attack-targets-in-the-cloud.html. | <urn:uuid:1a56648b-dc78-4347-b63d-ac070a2726a2> | CC-MAIN-2017-09 | https://www.gdata-software.com/news/3338-spying-via-the-cloud-for-cash/page/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00006-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.851946 | 187 | 2.703125 | 3 |
In a post-mobile world, we will be connected via devices with billions of other objects, from refrigerators to cars.
In a post-mobile world, the lines will blur. An interconnected network of devices and objects will offer better connectivity and functionality.
In the early 2000s, mobile devices like the Palm and BlackBerry were revolutionary but could not match the computing power of a desktop computer. As mobile technology improved, the smartphone (along with the laptop and tablet) became ubiquitous, giving users devices that could perform as well as any desktop whenever a strong wireless signal was available.
And now, with the growth of the Internet of Things, we are entering another era of computing, one in which we are connected to billions of devices and objects that are designed to interact with each other and with us. By all accounts, it's clear we are entering the post-mobile (phone) age.
Living in a Post-Mobile World
In a post-mobile world, our notion of a network will be expansive. Consider the Internet of Things (IoT), the network of billions of devices outfitted with sensors and software. With inexpensive wireless technology, these devices can collect information, share data, and detect problems with ease. Already, there are an estimated 15 billion devices within the IoT, with estimates predicting that, by 2020, that number could grow to 120 billion.
In a post-mobile world, consumers will be able to connect and interact via a dizzying array of endpoints, including mobile and desktop devices, wearables, and IoT-connected devices such as smart thermostats and refrigerators. Smart cars, smart homes, and interconnected apps will allow us to access applications and information from not just any device, but any location and time interchangeably.
Just as the introduction of smartphones a decade ago revolutionized our notion of mobile connectivity, the IoT and other technologies are resetting the way in which we use devices. Consider the seemingly sudden rise in the use of devices like the Amazon Alexa or Google Home, with verbal commands letting us control home functions, place online orders, or seek information.
Wearables are similarly mainstream, making health and fitness data points trackable. Consumers and medical providers can monitor, in real time, how patients are managing chronic and acute illnesses and maintaining healthy lifestyles.
What it Looks Like
It is perhaps too early to predict where the post-mobile world will take us, just as a decade earlier it was nearly impossible to predict what the post-PC world would look like in the age of the iPhone. However, current technology news suggests a few possibilities:
- Virtual Reality. The use of VR in gaming and entertainment is here already, but the technology will begin to appear in other applications as well.
- Aggregation. Take cord-cutting as an example. To get all the TV channels you want requires purchasing multiple online services from competitors that do not offer the same channel options. That means supplementing with provider-specific services (e.g., HBO Go or CBS All Access). Aggregating these services and apps lends simplicity.
- The Internet of Apps. The IoT will likely soon provide brands with the ability to personalize our experiences like never before. Check on movie times on your phone and you may see trailers and ads on your Facebook or Twitter feed, for example.
Wherever the technology ultimately leads, the blending of technologies and increasing reliance on those technologies will continue to grow. | <urn:uuid:cc4f0c00-2124-445c-b7ee-b444c866bba2> | CC-MAIN-2017-09 | https://www.broadsoft.com/work-it/entering-the-post-mobile-phone-age | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00182-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.935677 | 703 | 3.15625 | 3 |
The inventors of public key cryptography have won the 2015 Turing Award, just as a contentious debate kicks off in Washington over how much protection encryption should really provide.
The Association for Computing Machinery announced Tuesday that Whitfield Diffie and Martin Hellman received the ACM Turing Award for their contributions to cryptography.
The two are credited with the invention of public key cryptography, which is widely used to scramble data so it can be sent securely between users and websites, and to protect information on devices like smartphones and computer hard drives.
“The ability for two parties to communicate privately over a secure channel is fundamental for billions of people around the world,” ACM said in a statement.
By coincidence or design, the award was announced at almost the exact moment that a hearing on encryption got under way in Washington, D.C., before the House Judiciary Committee.
Lawmakers are hearing testimony on how they should balance the right to privacy with the needs of law enforcement to access encrypted data for national security reasons and to solve crimes.
Representatives from Apple and the FBI, who are battling in court over access to an iPhone used by one of the San Bernardino mass shooters, are testifying at the hearing.
Diffie was chief security officer at the former Sun Microsystems and Hellman is professor emeritus of electrical engineering at Stanford University. Their paper from 1976, “New Directions in Cryptography,” introduced the ideas of public-key cryptography and digital signatures, "the foundation for most regularly-used security protocols on the Internet today," the ACM noted.
In the system they invented, the public key is used to encrypt data, while the private key, which never leaves the receiving device, is used to decrypt it. The system is designed so that anyone who knows the public key can't calculate the private key, even though the two are linked.
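The paragraph describes public-key encryption; the companion idea from the same 1976 paper, agreeing on a secret over a public channel, can be sketched with toy numbers (the parameters here are deliberately tiny for readability, not secure):

```python
# Toy Diffie-Hellman key exchange. The parameters are deliberately tiny for
# illustration; deployed systems use 2048-bit-plus moduli or elliptic curves.
p = 23   # public prime modulus
g = 5    # public generator

a = 6    # Alice's private key (never transmitted)
b = 15   # Bob's private key (never transmitted)

A = pow(g, a, p)   # Alice's public value, sent in the clear
B = pow(g, b, p)   # Bob's public value, sent in the clear

# Each side combines its own private key with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

assert shared_alice == shared_bob   # both sides derive the same secret
print(shared_alice)                 # → 2
```

An eavesdropper who sees p, g, A and B still cannot feasibly compute the shared secret for realistically sized parameters, which is the property the Turing Award citation highlights.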
The Turing Award is named for Alan Turing, the British mathematician who helped crack the Enigma coding machine used by Germany in World War II, depicted in the film "The Imitation Game."
The award comes with a $1 million prize. In a blog post Tuesday, Hellman said he would use his half of the money to further a project to curtail nuclear proliferation and conflict.
ACM didn't immediately reply to a question about the timing of the announcement. It also coincided with a panel at the RSA security show in San Francisco where Diffie and Hellman were speaking. | <urn:uuid:8d0411f0-821c-4bfe-a088-b8314640c126> | CC-MAIN-2017-09 | http://www.itnews.com/article/3039909/as-encryption-debate-rages-inventors-of-public-key-encryption-win-prestigious-turing-award.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00534-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964225 | 498 | 3.25 | 3 |
To prevent denial-of-service (DoS) attacks on EXOS switches, DoS Protection should be used.
DoS Protection is designed to help prevent this degraded performance by attempting to characterize the problem and filter out the offending traffic so that other functions can continue. When a flood of CPU-bound packets reaches the switch, DoS Protection counts them. When the packet count nears the alert threshold, packet headers are saved. If the threshold is reached, these headers are analyzed and a hardware access control list (ACL) is created to limit the flow of these packets to the CPU. This ACL remains in place to provide relief to the CPU. Periodically, the ACL expires, and if the attack is still occurring, it is re-enabled. With the ACL in place, the CPU has the cycles to process legitimate traffic and continue other services.
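The count-then-filter behavior can be sketched roughly as follows; the thresholds, packet fields and ACL shape here are illustrative assumptions, not EXOS internals:

```python
# Rough sketch of threshold-based DoS protection as described above. The
# thresholds, packet fields, and ACL shape are illustrative assumptions only.
from collections import Counter

ALERT_THRESHOLD = 3500    # packets/interval: start saving headers
NOTIFY_THRESHOLD = 4000   # packets/interval: install a hardware ACL

def check_interval(cpu_bound_packets):
    """Analyze one interval of CPU-bound packets; return an ACL rule or None."""
    if len(cpu_bound_packets) < ALERT_THRESHOLD:
        return None                        # normal load: nothing saved
    if len(cpu_bound_packets) < NOTIFY_THRESHOLD:
        return None                        # headers saved, but no ACL yet
    # Characterize the dominant flow from the saved headers.
    flows = Counter((p["src"], p["dst_port"]) for p in cpu_bound_packets)
    (src, dst_port), _count = flows.most_common(1)[0]
    return {"action": "rate-limit-to-cpu", "src": src, "dst_port": dst_port}

flood = [{"src": "10.0.0.9", "dst_port": 80} for _ in range(5000)]
print(check_interval(flood))
```

The real mechanism does this in hardware, so legitimate traffic keeps flowing while the offending flow is throttled.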
Despite all the recent fanfare about the latest CPU wonderchips from Intel, AMD and IBM, not everyone has hopped aboard the multicore train. In a recent column in Forbes, NVIDIA chief scientist Bill Dally argues that the traditional multicore implementation of Moore's Law is a dead end. He sums it up thusly:
To continue scaling computer performance, it is essential that we build parallel machines using cores optimized for energy efficiency, not serial performance. Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work.
The fact that Bill Dally is saying this should come as no surprise. He works for a GPU maker after all, so his view of the computing landscape is from a rather particular vantage point. In his commentary, he only mentions GPUs once, but the subtext of GPUs as the savior of Moore’s Law is palpable enough.
In fact, his main point is valid, and one that has been recognized for years: CPU power scaling, which enabled performance increases at a constant level of wattage, is over. The workaround is multiple cores, but since CPU cores are optimized for serial work, there is a built-in inefficiency when trying to mold highly parallel codes around this architecture.
The reasoning is a little bit more subtle than that. Multicore CPUs are generally fine for traditional task parallelism, where each thread more or less can operate independently. CPUs, however, are less adept at data parallelism, and that’s where GPUs really shine. The other side to this is that task parallelism usually doesn’t scale well (or easily) as the size of the problem grows. Data parallelism, on the other hand, is relatively easy to scale.
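The scaling contrast can be made concrete with a small example of my own (not from the column): a data-parallel operation applies the same function independently to every element, so the identical code works whether the data set and worker count are small or large:

```python
# Data parallelism in miniature: one operation applied independently to every
# element, so the same code scales as data and worker counts grow.
# (Illustrative example; GPUs run this pattern across thousands of threads.)
from concurrent.futures import ThreadPoolExecutor

def normalize(x):
    return x / 100.0

data = list(range(1000))
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(normalize, data))

print(result[:3])   # → [0.0, 0.01, 0.02]
```

Task parallelism, by contrast, assigns distinct jobs to a handful of threads, which is why it tends to top out at the number of independent tasks rather than growing with the problem size.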
To keep Moore’s Law-type scaling viable for applications, Dally says that we need to build throughput computers made up of many simple cores. That just so happens to coincide with the GPU model, but other manycore processors from companies such as Tilera and Tensilica also fit this architectural style. The Larrabee architecture was Intel’s first attempt to build a true throughput computer, with x86 as the starting point. That didn’t quite work out as they planned, although you can bet the chipmaker will take another run at this.
Beyond the construction of throughput computers, Dally believes the real challenge will be converting the huge bulk of existing serial apps to run in parallel. Here's my take on this: don't bother. Most serial programs are serial for a reason. For example, the text editor I'm using to compose this article is about as fast as I need it to be. Outside our particular HPC community, there are plenty of apps in this category.
Most of the killer apps for throughput processors have yet to be designed, much less implemented. A next-generation word processor that converts my English to German on the fly and simultaneously suggests Web references to what I’m writing about will be able to take advantage of throughput processors. And that’s a fairly trivial example. Companies like Intel and NVIDIA are betting the “3D Web” will be one of the big playgrounds for these highly parallel applications.
Meanwhile, back in Fermiville…
Whether intentional or not, Dally’s Forbes commentary last week served as an interesting precursor to NVIDIA’s slow-motion rollout of the company’s new Fermi Tesla 20-series hardware. NVIDIA quietly posted the specs for the new products on its Web site on Tuesday, even though volume production of the processors is not expected until late May. The GPU maker’s fab partner, TSMC, is having problems with yields for the new 40nm chips — not too surprising considering Fermi sports around 3 billion transistors for the high-end parts.
In fact, NVIDIA has scaled back the core count on the first batch of Tesla GPUs. Back in September the company was talking about 512-core Fermis, but the first Tesla silicon will come in with just 448 cores (not quite twice the 240 cores of the previous 10-series). They’ve also throttled the clock frequency a bit to keep the heat manageable. Even at that, the new Tesla chips suck plenty of power — 225 watts TDP, to be precise.
But for that wattage, you get 515 gigaflops double precision and over a teraflop of single precision. EM Photonics benchmarked the new Fermi GPUs using DGETRF (a double precision LAPACK routine) and demonstrated a three-fold performance increase over the previous generation GPUs. In a real-world application, Artemis Capital Asset Management demonstrated a performance boost for certain financial analytics codes with the new Fermi GPUs. “The new cache structure in combination with the huge number of processor cores provides excellent resources for high-frequency trading,” said Tobias Preis, managing director of Artemis Capital Asset Management.
Despite the late production start for the Fermi Tesla parts, Appro, AMAX, Supermicro and Tyan all announced new Fermi-based server gear this week. Tyan revealed two new platforms that stuff as many as 8 Tesla M2050 GPUs in a 4U chassis. Supermicro launched three Fermi-based offerings: a 1U server with two GPUs, a 4U with four GPUs, and a 2U with two hot-plug GPU nodes. AMAX unveiled a GPU cluster using NVIDIA S2050/S2070 Tesla servers as well as a 4U server with 2 CPUs and up to 8 GPUs per chassis. Appro launched a couple of new Fermi-based products, which we covered in greater depth here.
The Fermi deluge is just beginning. Most of the major and minor HPC OEMs will come out with products using the new GPUs between now and ISC’10, and even beyond that. If all goes according to plan, I expect to see a smattering of Fermi-accelerated supers on the TOP500 list in November. | <urn:uuid:501143ea-9ca9-48c7-9299-e20174f368c2> | CC-MAIN-2017-09 | https://www.hpcwire.com/2010/05/06/dally_disses_multicore/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00174-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.931498 | 1,268 | 2.90625 | 3 |
The Impact of Insider Threats – The South Korea Episode.
In Layman’s Terms, What Happened?
At the center of the story is an employee who was working as a software engineer for three credit card companies. Over the course of a year and a half, this employee copied data from corporate servers to his personal drive. What makes this story particularly interesting is that the software engineer was writing anti-fraud software for the firms that he worked for during the same time that he was stealing data.
Business Impact? You Bet!
According to Bloomberg, 27 executives resigned following this incident, including bank CEOs and other senior management. Over half a million credit card users have already asked for new credit cards with many more to come. Perhaps the most significant impact is on the brand of the affected companies. Some companies never recover from the brand damage caused by such a massive security breach.
There are opportunities to prevent this sort of breach. Auditing and a properly deployed behavior-alerting system could and should have flagged abnormal behavior from a user with privileged access. In this case, a software engineer who needed access to perform his job was copying massive amounts of data over time. From a security standpoint, a simple "rule" that alerts IT when a user accesses massive amounts of sensitive data over time would have stopped him in his tracks.
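Such a rule reduces to a cumulative counter per user. Here is a minimal sketch; the threshold, user names and field names are my assumptions, not details from the actual breach:

```python
# Illustrative sketch of the simple rule described above. The threshold and
# field names are assumptions, not details from the actual breach.
from collections import defaultdict

THRESHOLD_RECORDS = 100_000   # cumulative sensitive records per review window

def flag_users(access_log):
    """access_log: iterable of (user, records_accessed) events in the window."""
    totals = defaultdict(int)
    for user, records in access_log:
        totals[user] += records
    return sorted(u for u, total in totals.items() if total > THRESHOLD_RECORDS)

# A privileged engineer copying ~5,000 records a day for 30 days trips the rule;
# a typical analyst does not.
events = [("engineer42", 5_000) for _ in range(30)]
events += [("analyst7", 200) for _ in range(30)]
print(flag_users(events))   # → ['engineer42']
```

The point is not the specific threshold but that slow, steady exfiltration becomes visible once access is totaled over time rather than inspected event by event.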
WAN – WAN is the abbreviation for "wide area network," a network that links computers and related resources over a broad geographic area. Cisco has introduced many devices for WANs, such as modems and routers, as well as protocols and technologies such as ATM and Cisco Frame Relay, to support better wide area network environments.
What is a network? In its simplest form, a computer network can consist of just two computers connected to share available resources, such as hardware and files, and to communicate with each other. In a broader sense, a network can encompass thousands of computers: running a big business the traditional way would be difficult without one, and a network gives employees an easy way to stay in touch and cooperate. Computer networking, in other words, refers to the assembly of various hardware components and the interconnection of computers via communications media. The key purpose of networking is to allow the sharing of an organization's resources and information.
May 8, 2016 was a day that garnered much discussion. On that day, Germany’s solar and wind power peaked at 11 am local time, enabling renewables to supply 54.8 GW at a time when demand—according to provisional data from Agora Energiewende, a research institute in Berlin—was running at 57.8 GW. In plain terms, Germany, albeit briefly, got almost all of its power from renewable sources—and roughly 40 months earlier than industry experts expected.
Meanwhile, just as striking but less widely noticed, Portugal has recently run 107 consecutive hours with 100 percent renewable generation. Alongside these landmarks, we cannot neglect to mention the world record for wind penetration held by Denmark, which managed to fulfill 42 percent of its energy consumption with wind in 2015. Nevertheless, while some transmission system operators (TSOs), distribution system operators (DSOs) and power producers are doing an impressive job of inverting the fossil-versus-renewable energy mix, there is still room for improvement. Indeed, it is questionable whether utilities across Europe are ready to reach the EU guidelines for 20 percent of energy to come from renewable sources by 2020.
Wind and sunshine—depending on the region in Europe—are potentially abundant. However, the big challenge is managing their intermittency. Given this, one of the keys to increasing renewables penetration will be improving the accuracy of forecasting and controlling volatility. A more accurate forecast contributes to day-to-day operational effectiveness, through advances such as more effective day-ahead planning (including calculation of required reserves, congestion management and so on), and improved asset management. Greater precision around wind and solar forecasts also underpins business success, as it can help renewables operators make better decisions around energy production and how much they can trade. Going back to the German example, due to market-wide oversupply that day in May 2016, power prices turned negative during several 15-minute periods, dropping as low as minus €130.07 per megawatt-hour (according to data from Agora Energiewende).
In the area of wind power forecasting, there are multiple tools to choose from: Some are proprietary to the utility, some developed by academic organizations, and some are recognized products from technology vendors. These tools are all largely powered by algorithms carefully cultivated on the basis of historical data. And since each one seemingly has its specific strengths, most utilities will have more than one. The question executives are asking themselves is this: Which forecasting tool could help them most significantly improve their estimates, and thus further optimize wind and solar power generation?
As utilities look to answer this question, the good news is that they may not need to rely on a single tool to master the weather. Digital technology techniques such as data analytics and machine learning can be applied to operational data, potentially enabling utilities to develop a more intelligent forecasting approach. For example, by combining the results from multiple tools, data scientists can assess each forecast’s accuracy over short- or long-term horizons, and/or according to different scenarios, such as high-wind conditions. As such, analytics approaches can enable utilities to develop a smarter combination. And over time, by continuing to apply analytics techniques, the outcome would be tweaked to deliver increasing accuracy.
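One concrete form of this smart combination is an accuracy-weighted ensemble, in which each tool's forecast is weighted by the inverse of its recent error. The sketch below is minimal, and the tool names and all numbers are invented:

```python
# Minimal sketch of combining several wind-power forecasts into one, weighting
# each tool by the inverse of its recent mean absolute error (MAE).
# Tool names and all numbers are invented for illustration.
def combine(forecasts, recent_mae):
    """forecasts: {tool: predicted MW}; recent_mae: {tool: recent error in MW}."""
    weights = {tool: 1.0 / recent_mae[tool] for tool in forecasts}
    total = sum(weights.values())
    return sum(mw * weights[tool] / total for tool, mw in forecasts.items())

forecasts = {"tool_a": 480.0, "tool_b": 520.0, "tool_c": 505.0}
recent_mae = {"tool_a": 25.0, "tool_b": 50.0, "tool_c": 40.0}
print(round(combine(forecasts, recent_mae), 1))   # → 496.8
```

Recomputing the weights on a rolling basis, or conditioning them on the weather regime (for example, high-wind days), is where the machine-learning refinement described above would come in.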
Nevertheless, if it were as easy as applying an analytics application, it seems likely that some utilities would have already tested this option. But the smart combination is not just about combining the forecasts; it is also about bringing together the appropriate people. More effective wind and solar power forecasts require experts at the table from operations, IT and data science backgrounds. The results can be powerful, but it takes time for this collaboration to become effective. Through the journey of analyzing, interpreting and comparing the data, the team needs to pool its collective expertise, methods and insights, and bridge differences in language and experience.
By integrating information and operational technologies and data, as well as having practitioners aligned to those domains, utilities can generate a more accurate forecasting approach. Like the code on a safe, achieving the smart combination requires lots of numbers and a sharp focus on unlocking the guarded prize. For those that succeed, the results will be worth the effort.
Guest author, Stéphanie Lakkis, MSc
An engineer in the Grid Operations team, Stéphanie leads Wind Power Forecast Optimization at OMNETRIC Group. Working largely with European transmission system operators and distribution system operators, she collaborates with OMNETRIC Group's data scientist team to identify and develop analytics use cases for improved forecasting. | <urn:uuid:26ecdfc8-24e3-4f86-9820-2d805903c733> | CC-MAIN-2017-09 | https://www.accenture.com/bd-en/insight-highlights-utilities-key-unlocking-potential-europe | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00398-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946082 | 927 | 2.671875 | 3 |
Instruct by asking questions and not lecturing. Answer questions with a question. At first this sounds like a simple corporate learning strategy. However, as you try to implement the Socratic style, you may find it quite daunting. Following the ABCs will help master the technique that will aid in monumental development of those learners you instruct.
First let’s understand why the Socratic method is effective, especially in corporate training and development. Many thoughts swirl around in the mind, each with a different origin and pathway. Some thoughts are programmed like memorizing an organization’s mission statement or core values. Other thoughts originate from observational learning such as shadowing another employee.
Most employee learning comes from experience and from repeating a workflow, but habits formed this way can be difficult for trainers to change later in an employee's development. In the digital era, corporate trainers should prompt and encourage critical thinking every step of the way to minimize risk and maximize excellence.
Our thoughts result from complex neurological growth and development across our personal life experiences. The most powerful lessons, the ones that stick, are those in which learners connect thoughts using their own neurological fuel.
The Socratic teaching style facilitates deep learning and understanding. Your job is to facilitate learning by asking key questions that allow learners to arrive at the discovery point on their own.
To illustrate, imagine a connect-the-dots drawing activity. At first glance, there is no recognition of the picture that the collective dots will represent. Once you start connecting the dots, an image begins to take shape. Halfway through the activity there may be some guesses as to what the dots are forming. Once all the dots are connected, the image is clear. It’s the “a-ha" or "eureka" moment.
Develop learners and help them gain critical thinking tools by facilitating learning with questioning. Here are the ABCs for beginning a Socratic teaching style.
Assure deep understanding of the concept that you present. Studying and reviewing an already known topic drives greater understanding. It eases the next steps like primer paint on a wall.
Be in the mind of the learner. What is it that you want them to figure out on their own? Write it down and be clear on what thoughts you want them to derive on their own without lecture.
Create a question or a series of questions to ask. The question(s) should prompt ongoing thought. The brain scans billions of bits of information in search of answers. Once the right bits fuse together, learners will come to the discovery phase on their own.
The Socratic technique is effective, but does not come naturally to many. Practice once a week by taking one concept from your curriculum and applying the ABCs. Depending on commitment to practice, the method becomes more fluid. | <urn:uuid:b0038448-1d39-4242-ae19-049b518d0b37> | CC-MAIN-2017-09 | http://blog.contentraven.com/learning/maximize-learning-with-the-abcs-of-the-socratic-method | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00098-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947365 | 564 | 3.265625 | 3 |
Nine years late, the Air Force is finally ready to launch a new missile-spotting satellite that it says will usher in "a new era in persistent infrared surveillance."
Barring further delays, the first of four Space-Based Infrared System satellites will blast off May 6 and climb to an altitude of about 22,200 miles, where it will park in a geosynchronous orbit and stare at Earth, watching for missile launches and searching for new military targets.
Air Force Brig. Gen. Roger Teague, chief of the Air Force Infrared Space Systems Directorate, emphasized the new satellite's expected capability during a telephone press conference Tuesday.
Infrared sensors on the spacecraft are "so much more sensitive" than those in use on current missile-detecting satellites, he said. "They can see much more much earlier" and they "can see much dimmer targets."
Teague said he could not elaborate on what more the sensors can see or what dimmer targets might be without disclosing classified information. Dimmer targets are expected to include smaller, shorter-range missiles.
While extolling the new satellite, Teague also acknowledged that the SBIRS program "has faced and overcome a number of challenges in the past."
Those include major delays and exorbitant costs. Begun in 1995, SBIRS was supposed to be a $4.5 billion program that put new missile launch detecting satellites in orbit starting in 2002. Nearly a decade behind schedule, the program has consumed $15.9 billion, and according to the Government Accountability Office, costs are still going up.
Teague said the last of four geosynchronous satellites now planned won't be launched until 2016 if the current schedule holds.
The SBIRS satellite constellation also includes four sensor payloads that are hosted on non-Air Force satellites in highly elliptical orbits, he said. Two of those already have been launched.
As they are launched one by one, the SBIRS satellites will begin augmenting the existing Defense Support Program system of early warning satellites that watch for hostile missile launches, Teague said. They will become "the gold standard for missile warning," he said.
In addition to missile launch warnings, the new satellites are intended to contribute to missile defense, to battle space awareness and to gather "technical intelligence," the Air Force says.
Their contribution to missile defense is to gather intelligence and send it to the ground to be processed and distributed fast enough to provide theater commanders with actionable intelligence for planning defenses, Teague said.
Gathering technical intelligence involves spotting new targets on the ground and gathering data "to figure out the profiles of the new targets," said Jeff Smith, a Lockheed Martin vice president for SBIRS.
Smith, too, noted the "many challenges" that SBIRS has faced, but said Lockheed is confident that the satellites "will meet or exceed customer expectations" to deliver "unprecedented global persistent and taskable infrared surveillance."
But even now, costs continue to escalate and there is danger of further delays, GAO told Congress in March. The Defense Contract Management Agency "projects nearly $600 million in cost overruns at contract completion, more than twice the amount reported last year," GAO reported.
The SBIRS program office "is working to rebaseline" SBIRS cost and schedule estimates "for the sixth time," GAO said, referring to the process of re-estimating costs and schedules after they have been exceeded.
Recent delays were caused by faulty flight software designed to monitor the health of the satellite, GAO said. | <urn:uuid:8ec59398-a423-4a4b-9bcd-0aa15a1b8e15> | CC-MAIN-2017-09 | http://www.nextgov.com/health/2011/04/nearly-a-decade-behind-schedule-new-satellite-is-to-provide-earlier-missile-launch-warning/48961/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00098-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944991 | 733 | 2.640625 | 3 |
Whether it happens with that key memo left unfinished, the last scene of a movie unwatched or an epic gaming battle interrupted, it's likely that, at one time or another, you've been left with a dead notebook battery at the worst possible moment. What can you do about it?
"Notebooks are not as efficient as they could be," says Robert Meyers, data center product manager for the Energy Star Program at the Environmental Protection Agency, "and they waste a lot of energy."
The payoff for being aware of how much power your system uses and how to control it can be huge -- because every watt saved can run the notebook that much longer. "The natural incentive is that greater efficiency translates directly into longer battery life," Meyer says.
In this article, I'll go through 11 ways you can cut down on your laptop's power usage. Some may be appropriate for your style of work and/or play, some not; but even if you follow one or two, it could give you those crucial extra operating minutes.
But first, it might be useful to look at which components are the most power thirsty in your device -- and how they are being improved.
What uses battery power?
While there's a lot of variation between an 11-in. Chromebook with an Intel Celeron processor and a 17-in. gaming laptop with an Intel Core i7 Extreme chip, each has a similar array of components that turn electricity into an interactive computing experience.
There are six components that are the major power users in a computing device. They are listed here roughly in order of power use, although that can vary based on the notebook itself. They have each been redesigned over the past decade for greater efficiency, but there's still work to be done.
The processor is a power hog, often using as much as half of the total power in a system. Smaller is better; as the size of the microscopic wires and electronic architecture within the chip shrinks with each generation, its power use declines.
A decade ago, the best Intel processors used the company's 90-nanometer (nm) production process, codenamed Dothan. Today, the company's Haswell chips have 22nm architecture -- less than one-quarter the size and roughly 100,000 times smaller than the width of a pencil point. Chips made with 14nm microarchitecture, a.k.a Broadwell, have been promised for later this year or in early 2015.
Meanwhile, current AMD processors are made using a 28nm process, including the new Karavi laptop CPUs, but the company's Project SkyBridge promises a series of new chips for mobile devices using a 20nm manufacturing process.
2. Graphics processors
Graphics processors are often integrated into a notebook's system, but can significantly drain a battery as well. For example, Intel Graphics 4000 and 5000 integrated video chips typically range in power use from about 15 watts for the HD 4200 at the entry level to upwards of 50 watts for the Iris Pro 5200.
AMD's Radeon graphics engines also vary in how much power they pull. For instance, the mid-range HD 6290 graphics chip consumes about 18 watts at peak use, while the more sophisticated HD 8650G chip uses upwards of 35 watts.
Plus, many high-end engineering and gaming notebooks also have discrete graphics chips with dedicated memory from Nvidia or AMD that can consume a lot of power when they're being used.
Displays have improved -- no doubt about it. The move in the late 2000s from CCFL backlighting to LED backlighting reduced a typical LCD's power drain by about 25%.
More recently, Panel Self-Refresh (PSR) technology can lower power use even further by stopping screen refresh if what's being displayed doesn't change. This can add as much as 20 minutes to a battery's run time, according to Ajay Gupta, director of commercial notebook products at HP. PSR is currently used on a limited number of devices, including the HP EliteBook Folio 1040 and the LG G2 smartphone.
In the long term, display power use could decrease by another 40% by using Organic Light Emitting Diode (OLED) screens that produce their own light and don't require backlighting. These screens are currently being used in phones like the Nokia Lumia Icon.
Traditional hard drives that use rotating magnetic discs are giving way to SSDs that store data on solid-state chips. Solid state storage still costs four to five times what a hard drive goes for, but uses a lot less power.
For instance, the 500GB Seagate Momentus Thin 2.5-in. mobile hard drive (starting at $50) uses 1.20 watts, while a 480GB Crucial SSD (about $236) consumes 0.28 watts, less than a quarter as much. And more lower-cost laptops -- including such lightweight models as the HP Chromebook 11 -- are shipping with SSDs.
According to Gupta, the next step is to stop making SSDs that mimic 2.5-inch hard drives in size and shape, and move to M.2 circuit board technology that puts all the components on a small circuit board, such as the one included in HP's EliteBook 840. This can reduce power use further, he says.
Every watt used inside a computer system turns into heat -- and so the system has to be cooled in order to keep running. The less power used, the less cooling is needed. As a result, current systems that use power more efficiently also use smaller fans that don't need to run as often (and so conserve power themselves).
6. AC adapter
The technology that turns a wall outlet's alternating current into the direct current that a notebook needs has made great strides: From being roughly 50% efficient 20 years ago to between 80% and 90% efficient today. Still, a lot of power is wasted, because for most computers the adapter still draws phantom current after the system's battery is fully charged.
Today, some adapters -- like that of the Lenovo ThinkPad X1 Carbon Touch -- are smart enough to shut themselves off when the battery is full. Hopefully, more are on their way.
According to HP's Gupta, a high-efficiency adapter could be made for a single voltage, like the 110 volts we use in the U.S., rather than switchable between 110- and 220 voltage for global use. Theoretically, it could hit 94% efficiency, he says.
What you can do now
Whether you have a Windows-based system or a Mac laptop, there's a lot you can do right now to make its energy use more efficient and get more life out of its battery. The tips and tricks that follow may not work for every system, but even if you choose one or two, you can make your notebook more efficient.
1. Slow down your CPU
The processor is a great place to save a few watts.
If you're using an older Windows-based system, start with your Control Panel Power Options page, go to the Change advanced power settings section and click on Processor to adjust its maximum processor state. I aim for a balance between performance and power use, and typically set the processor's maximum power use to 95%.
If your machine is recent enough to have a Haswell processor -- and therefore has Intel's Turbo Boost overclocking -- anything less than 100% prevents the CPU from raising its clock speed (and power use) when the computing load increases. In other words, if you want to keep your battery use down, lowering the maximum processor state will add even more power efficiency, even if it takes a moment or two longer to complete some tasks.
Unfortunately, at the moment, there's no easy way to easily disable or control Turbo Boost in a MacBook. Your best bet is an open-source XCode-based command-line tool called Turbo Boost Disabler for Mac OS X.
While you can't easily control Turbo Boost in a MacBook, the Intel Power Gadget can keep you informed.
If you're just interested in how much power your processor is using (including its clock speed and core temperature), you can use the Intel Power Gadget.
2. Add more memory
Regardless of whether you use a PC or Mac, when it comes to performance, more RAM equals better performance and lower total power use. RAM chips use so little power that adding 4GB or 8GB has a marginal impact on its total power use -- more RAM can, however, save power by reducing the system's use of virtual memory.
How? Virtual memory is actually hard drive space that is used to store items from memory when the system runs out of unused physical memory. Because the hard drive uses a lot more power than RAM chips, using virtual memory eats into efficiency and battery time. So adding RAM can not only make your system more efficient, but save battery power as well.
3. Make storage more efficient
Compared to a conventional hard drive, an SSD not only speeds things up but also uses less power -- so you might want to consider upgrading your storage. However, if you can't afford a new drive (or just don't want to bother), a traditional hard drive's hunger for electrons can be tamed by adjusting its power management settings.
For Macs, you can control when the drive goes to sleep in the System Preferences Energy Saver pane. In the Battery tab, start by checking the box that says Put the hard disk(s) to sleep when possible. Apple sets 10 minutes as the default period of inactivity before the drive nods off, but you can tap into the system's pmset utility to adjust it. Here's what you do:
Go to Terminal (which you'll find in the Utilities folder, or you can just search for Terminal). Type sudo pmset disksleep X, where X is the length of time in minutes that you want the system to wait before putting the drive to sleep. (Warning: You'll need the administrator's password to do this.)
The pmset utility lets you set when your Mac's hard drive goes to sleep.
With a Windows system, you can use the Change Advanced Power Settings page in the Power Options portion of the Control Panel.
I generally set my system's hard drive to turn off after 10 or 15 minutes of inactivity. It'll take a second or two for the device to spool up when you need it, but the extra minutes of battery life make it worth the wait.
4. Lessen your display time
Fewer pixels put less of a power load on the graphics chip, video memory and display panel. So although I'm wowed by the latest high-resolution notebook screens, I don't really do much more than view the occasional YouTube video. As a result, when I shop for a notebook, I get the lowest resolution screen that is acceptable for my purposes. These days, that's generally a 1280 x 800 display.
But no matter what the resolution is, a major way to save on battery life is not to have the display running when you don't need it.
For a MacBook, open up the Energy Saver window and adjust the position of the slider control at the bottom marked Turn Display Off After. You can vary the time before the screen shuts down from "never" to as little as one minute.
Apple's Energy Saver window lets you adjust when to turn your display off.
I also dim the screen a bit when on battery power by hitting the F1 button several times until I get to a brightness that is comfortable but not too bright. (If you dialed down too far, the F2 button makes the screen brighter.)
Windows lets you set your power plan for turning off the display and putting the computer to sleep.
With a Windows system, go to the Power Options page and edit the power plan to suit how you work and play. When I'm running the system on battery, I generally set the screen brightness to roughly 80% and have the screen turn off after 15 minutes of inactivity.
Many current Windows laptops also use function keys to make it easier to dim or brighten the screen.
5. Put it to sleep
While you're tweaking your power plan settings, go ahead and set a period of inactivity after which your computer will go to sleep.
How long you wait before putting your system to sleep can affect battery life profoundly. The best approach is to use trial and error to find a balance between battery life and convenience -- for example, my own settings put the computer to sleep after 45 minutes of inactivity. Your mileage may vary.
In a Windows system, go to the Control Panel, click on the Power section and select Change plan setting. Here, you can adjust how long a system will wait before it goes to sleep.
For even more efficiency, EnviProt's Auto Shutdown Manager, a $15 Windows utility, models how you use your computer and can intelligently put the system to sleep and wake it up. It even tabulates how much power has been saved and the amount of carbon dioxide you've kept out of the atmosphere. You can try it for free for 45 days.
And you don't have to wait for the automatic triggers to kick in. Go ahead and manually put the system to sleep if the computer is sitting idle with nothing going on -- by doing this, you can save as much as 15 watts. (Most Windows systems let you press a function key to put the computer to sleep; which one you use depends on your specific system.)
Among other things, Auto Shutdown Manager shows which components shut down when.
You can put any MacBook instantly to sleep by opening the Apple Menu in the upper left corner of the screen and clicking on Sleep. Or you can just close the laptop's lid.
If you want to adjust when your system goes to sleep automatically, go back to the Energy Saver/Battery page and use the Computer sleep: slider control; you can have it sleep anywhere between 1 min. and Never. | <urn:uuid:cb39beaa-99be-43d4-849a-6d647fd16523> | CC-MAIN-2017-09 | http://www.cio.com/article/2375712/laptop-computers/boost-that-battery--tips-and-tricks-for-laptops.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00395-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.936856 | 2,870 | 2.53125 | 3 |
Technology surrounding the Internet is constantly evolving. Many programs that helped allow the Internet expand and become what it is today are still in use. They stay relevant by issuing updates that often bring more functionality while meeting the evolving needs of Web developers and users. One program, however, has had a number of security issues in the past year that have prompted experts and government departments to recommend that users disable it.
That program is Java – a programming language and application that allows developers to create web applications, and users to view much of the visual content and animations on the Internet. The problem isn’t with the programming language per se, but with the application developed by Oracle Systems.
Oracle released an update to Java – Java 7, Update 10 – in December, but it was found to have some serious security flaws. These issues were quickly spotted by hacker groups who released exploit kits – software making it easy to exploit Java 7’s security weaknesses – giving them full security privileges. This exposed any computer running Java 7 to potential malware and attack. Because Java runs at the browser level, every OS could be targeted. To make matters worse, 30 security flaws were patched back in September, after nearly 1 billion computers were found to be at risk.
It’s this string of security red flags that had the US Department of Homeland Security issue a warning that users should disable Java on their browsers. In response to this, Oracle updated Java again, to Java 7, Update 11 on January 12, and noted that the security flaw had been fixed. Many experts, including those at the Department of Homeland Security, aren’t convinced though, and are still suggesting that users disable Java because new vulnerabilities will likely be discovered.
How do I disable Java?
Internet Explorer users
There is no way for you to disable Java in the browser, you will instead have to completely disable Java from your computer. This can be done by following the steps on the Java website.
If you do disable Java, some websites will no longer work. This can be a bit of an annoyance, but in all honesty, security of your systems is more important, not to mention the potential costs of dealing with a massive malware infection. Besides that, many websites no longer use Java, so you can probably get by without it. At the very least, we recommend you go download the latest update from the Java website and apply it to all computers.
If you would like to learn more about this update, you can visit an excellent FAQ here. Before you do update, or disable Java, we recommend you contact us. We can help advise you on what steps to take next if you use Java. | <urn:uuid:09b9b0ac-c38f-4af8-9a02-a370e9c2239e> | CC-MAIN-2017-09 | https://www.apex.com/time-cut-back-java/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00623-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958718 | 539 | 2.734375 | 3 |
GCN at 25: NASA's expanding storage universe
DATA PAYLOAD: The Hubble telescope has collected 120T of data.
Electronic data storage has been a concern for government agencies for as long as there has been electronic data. And perhaps no agency has had to handle a bigger data load than NASA.
The Oct. 29, 1990, issue of GCN reported that the agency was concerned about satellite data straining its capacity. NASA already had 1.2 million magnetic tapes containing roughly 1,714 terabits of data, and it expected to add 63 terabits that year. (Terabits refer to data in transit. A terabyte, data at rest, is equal to 8 terabits.) And the agency wasn't kidding itself about the exponential growth of data, expecting the yearly load to reach 4,300 terabits by the end of the decade.
By then, of course, storage was being talked about in petabytes. And outer space is still the limit. | <urn:uuid:3a990885-8c96-4909-8995-e2678ad5d271> | CC-MAIN-2017-09 | https://gcn.com/Articles/2007/05/04/GCN-at-25-NASAs-expanding-storage-universe.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00391-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.969813 | 203 | 2.84375 | 3 |
NASA's Super-Tiger Balloon Breaks Records
/ February 5, 2013
After 55 days, 1 hour and 34 minutes spent at 127,000 feet in the air, the Super-TIGER balloon (its official name is the Super Trans-Iron Galactic Element Recorder balloon) broke the record for longest flight by a balloon of its size, which was 46 days, according to NASA.
The team also broke another record -- the longest flight of any heavy-lift scientific balloon, including NASA's Long Duration Balloons. The previous record was of 54 days, 1 hour and 29 minutes was set in 2009 by NASA's Super Pressure Balloon test flight.
A new instrument aboard the Super-TIGER measured rare elements heavier than iron among the flux of high-energy cosmic rays bombarding Earth from elsewhere in our galaxy. The information retrieved will be used to understand where these energetic atomic nuclei are produced and how they achieve their very high energies -- and so much data was gathered, that it will take scientists about two years to analyze it fully.
Photo courtesy of NASA | <urn:uuid:5bf6e674-397c-4069-8303-7ce8662e2343> | CC-MAIN-2017-09 | http://www.govtech.com/photos/Photo-of-the-Week-NASAs-Super-Tiger-Balloon-Breaks-Records.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00567-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.939493 | 219 | 3.015625 | 3 |
Texas Instruments networks its calculators
Anyone who has taken a calculus class in the last 20 years is sure to also have a great deal of experience plugging figures into a TI-8X graphic calculator, and I'm sure I'm not alone in feeling a certain pang of geeky nostalgia for the TI-85, a standard-issue tool for high school kids in the 1990's.
Technology has come a long way since the 6 MHz Zilog Z80 processor, but Texas Instruments isn't retiring the popular calculators just yet. Instead, it has moved a significant number of those old devices into the wireless age.
Today, the company announced its TI-Nspire Navigator system, which links a classroom full of graphic calculators together for collaborative teaching, polling, testing, and grading.
Supporting devices from the TI-83 Plus (1999) all the way up to the current TI-Nspire (which offers wireless connectivity of its own), the TI-Navigator system lets students hook their graphic calculators into wireless hubs that communicate with the teacher's PC for lessons and testing.
The program has been piloted by about 3,000 students nationwide and the hardware can be bought through independent instructional dealers.
Since there's a wealth of higher math freeware, TI-Nspire Navigator it may not be the most efficient or versatile way to network a classroom, but you have to admit, it certainly is stretching the life of TI's hardware to an amazing length. | <urn:uuid:a09a9e49-67c6-4fe4-911a-4950201d2207> | CC-MAIN-2017-09 | https://betanews.com/2010/01/14/texas-instruments-networks-its-calculators/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00567-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951439 | 306 | 2.515625 | 3 |
Not since World War II has encryption received so much attention. Germany had some of the most able scientists and cryptographers of the era, but the Allies cracked its submarine codes and gleaned valuable information about its strategic plans.
Today, encryption has become an important key in securing computer data from prying eyes. Users can encrypt files that contain sensitive data and protect them from theft or access by unauthorized co-workers or network hackers. Information traveling between computers goes through numerous routes, systems and servers. A hacker can intercept message packets in transit and attempt to reconstruct your message before it reaches its destination.
Computer data security concerns are similar to those of any confidential communication. The reality is that the Internet is no more insecure than any other medium of commerce, such as bank, postal or telephone credit card transactions. But Internet security concerns cannot be dismissed either, because hackers employ computerized tools such as network "sniffers" to sort, filter and intercept sensitive information traveling over a network.
Many of the newer versions of popular applications -- Microsoft Word, Excel, Corel WordPerfect and others -- already provide encryption. Many experts predict that encryption will soon become an integral part of any application.
While these applications feature relatively weak algorithms, their encryption is sufficient for most needs. Inexpensive but very effective software programs such as Symantec's Norton DiskLock, Pretty Good Privacy and Netscape Communicator 4.0 provide an excellent way for users to test encryption.
Asymmetric or Symmetric Keys
While the technical details of cryptography are very complicated, the concept is rather simple. Basically, encryption is the scrambling and altering of data until it is no longer readable by anyone who does not have the proper decryption key.
Cryptographers have developed various methods to perform this task. The asymmetric method, also called public-key cryptography, requires two keys -- one to encrypt, and the other to decrypt a message. The user's public key is freely distributable to anyone through several key servers on the Internet. These servers act as public-key white pages.
For example, say Joe wants to send Mary some secure files or messages. To do so, he must request and receive Mary's public key via e-mail or look for it in a public-key server and use that key to encrypt the files. When Mary receives the message, she uses her private key to decrypt the message, which was encrypted with her public key. The security of this system resides in the combination of the two keys; if the keys don't match, the file or message can't be viewed.
Similarly, Mary uses Joe's public key to encrypt her reply before sending it. To assure Joe that she sent the answer and that it was not forged, Mary signs this message with her private key, which generates a digital signature block that Joe can verify using Mary's public key.
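The Joe-and-Mary exchange can be sketched with toy RSA arithmetic. The primes below are far too small to be secure and are purely illustrative -- real keys are hundreds of digits long -- but the relationship between the public and private key is the same.

```python
# Toy RSA demonstration of the Joe/Mary exchange described above.
# The numbers are tiny and the scheme is insecure -- this is only a
# sketch of how a public/private key pair relates, not a real system.

# Mary's key pair: modulus n = p*q, public exponent e, private exponent d
p, q = 61, 53
n = p * q                            # 3233, the shared modulus
e = 17                               # public key: (e, n)
d = pow(e, -1, (p - 1) * (q - 1))    # private key: (d, n); here d = 2753

message = 42                         # a message encoded as a number < n

# Joe encrypts with Mary's PUBLIC key ...
ciphertext = pow(message, e, n)
# ... and only Mary's PRIVATE key can recover it.
assert pow(ciphertext, d, n) == message

# Signing reverses the roles: Mary signs with her PRIVATE key ...
signature = pow(message, d, n)
# ... and anyone holding her PUBLIC key can verify the signature.
assert pow(signature, e, n) == message
print("ciphertext:", ciphertext, "signature:", signature)
```

Note that if the keys don't match -- say, Joe tries to decrypt with his own private key instead of Mary's -- the recovered number is garbage, which is exactly the "combination" property the article describes.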
Digital certificate authorities issue digital signatures and verify the user's identity much the same way a DMV verifies an identity and issues a driver's license.
Symmetric cryptography uses a single key to encrypt and decrypt messages. Its weakness is that, to transmit an encoded message, users must also share the secret key, which means a secure distribution channel is needed.
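The single-key principle can be shown with a few lines of code. The XOR cipher below is a toy -- real symmetric systems of the era used DES -- but it demonstrates that the same key both scrambles and restores the data, which is why the key itself must travel by a secure route.

```python
# Minimal sketch of symmetric encryption: the SAME key both scrambles
# and restores the data. This XOR toy is not secure; it only shows the
# single-key principle behind ciphers such as DES.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key. Applying the same
    function twice with the same key returns the original bytes."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret"
plaintext = b"Meet at noon"
ciphertext = xor_cipher(plaintext, key)

assert ciphertext != plaintext                    # scrambled in transit
assert xor_cipher(ciphertext, key) == plaintext   # same key decrypts
```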
Key Bit Rate
No matter how securely the doors are locked, a persistent intruder can find a way through. While no encryption program is 100 percent uncrackable, most intruders lack the time or skill to bypass or dismantle such security tools.
One primary indicator of encryption strength is the key's bit rate -- the number of bits in the key. A bit is a single binary digit, either 0 or 1. The time required to decode depends on the length of the decryption key: a longer key means a hacker must try more combinations to decode the data. Consider a combination lock with a single digit on its tumbler -- it is simple to open by trying each number in turn. With two or more digits, the difficulty rises considerably: the cracker must set the first dial on one number, try every number on the second dial, advance the first dial and repeat. The more numbers to try, the harder the cracker's job.
Just as with the combination lock example, the higher the bit rate, the harder it is to break the encryption scheme. A 40-bit key, for example -- the U.S. government restricts export of key lengths greater than 40 bits -- requires the cracker to attempt more than a trillion combinations. While this may seem like an extremely large number of keys, an Intel Pentium-based PC -- attempting various combinations in what is called "brute force" -- could crack the key in a matter of hours.
A 56-bit key requires trying more than 72,000 trillion possible combinations. A conventional PC might take about 1,000,000,000,000,000,000,000 years to crack a 128-bit key. In the United States, domestic versions of 128-bit keys are used and are virtually impossible to crack by brute force methods using current computing technologies.
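The arithmetic behind these figures is simple to reproduce. The keys-per-second rate below is an assumed figure for a single mid-1990s desktop PC, chosen only to show how the numbers scale with key length -- it is not a benchmark from the article.

```python
# Back-of-the-envelope brute-force arithmetic for the key sizes above.
# keys_per_second is an ASSUMED rate for one desktop PC of the era,
# used only to illustrate how exponentially the search time grows.
SECONDS_PER_YEAR = 365 * 24 * 3600
keys_per_second = 500_000

for bits in (40, 56, 128):
    keyspace = 2 ** bits                 # number of possible keys
    years = keyspace / keys_per_second / SECONDS_PER_YEAR
    print(f"{bits}-bit key: {keyspace:.3e} keys, ~{years:.3e} years")
```

Running this confirms the article's orders of magnitude: a 40-bit keyspace is just over a trillion keys, a 56-bit keyspace is about 72,000 trillion, and a 128-bit search outlasts any realistic computing effort by many orders of magnitude.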
The easiest way to crack a message is to obtain a copy of the sender's private key, or in case of symmetric encryption, to intercept the message and the key en route to its destination.
When DES encryption was devised in the 1970s, the 56-bit key was considered very safe; with today's computers, a DES-encrypted message is still fairly secure, but a 56-bit key was recently cracked.
One of the shortcomings of public key technology is the extra time it takes to encrypt and decrypt data. The longer the key, the more time required to encrypt or decrypt a message.
To increase the speed of encryption, nCipher's nFast line of cryptographic hardware can be used to accelerate these operations by off-loading the cryptographic burden from the CPU. Each nFast accelerator improves performance by up to 100 times and can handle up to 300 public-key signings per second with 1,024-bit keys.
Previous versions of Symantec's DiskLock focused on locking the hard disk and preventing access to specific files. With the spread of the Internet, other networks and e-mail, DiskLock has shifted its focus to encrypting files, thereby rendering them useless to an unauthorized user.
The program comes with a group of encryption and decryption tools that provide protection at the file and folder level. Encrypted files and folders cannot be moved, copied or deleted by unauthorized users; if they are opened, the encryption renders them unreadable.
After the encryption and screenlock components are installed on the system, users must enter their user name and password to activate the program each time the machine is turned on. Once the application is activated, users can access the encryption and decryption options.
DiskLock uses an asymmetric encryption scheme that requires two different keys to encrypt and decrypt files. It works with a public and private key, allowing public keys to be exchanged between users wishing to access each other's work. Without a user's private key, however, the public key remains useless, so security is not compromised.
Additionally, DiskLock provides a timeframe access during which someone can access information from a hard drive. It also features an audit log that tracks system activity, revealing what was done to the system and when it occurred.
For additional information, contact Symantec, 10201 Torre Ave., Cupertino, CA 95014. Call 800/441-7234. Internet: .
Netscape Communicator 4.0
Netscape Communicator 4.0 gives users powerful and flexible data security. For secure communication across the Internet, Netscape developed the Secure Sockets Layer (SSL) protocol, which utilizes encryption.
Web browsers, for example, routinely encrypt credit card numbers and other sensitive information when helping perform online purchases. The encrypted data goes to an online merchant, who decrypts the message and processes the order.
SSL makes sure traffic between the two hosts is not modified in transit, using a technique called "hashing" to guarantee message integrity.
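The integrity idea can be sketched with a cryptographic hash. Real SSL uses a keyed MAC over each record rather than a bare hash, and SHA-256 postdates the protocol version discussed here -- the snippet below is only a simplified illustration that any in-transit change alters the digest.

```python
# Sketch of hash-based integrity checking. Real SSL uses a keyed MAC;
# plain SHA-256 is shown here only to illustrate that any change to a
# message in transit produces a completely different digest.
import hashlib

def digest(message: bytes) -> str:
    return hashlib.sha256(message).hexdigest()

message = b"Transfer $100 to account 42"
sent_digest = digest(message)            # sender computes and attaches

# Receiver recomputes; a match means the message arrived unmodified.
assert digest(message) == sent_digest

# A single altered character yields an entirely different digest.
tampered = b"Transfer $900 to account 42"
assert digest(tampered) != sent_digest
```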
Mutual authentication is provided by SSL digital certificates, which the communicating machines exchange when they initiate a connection.
SSL offers potentially broader security, since it works at the network-transport level. Any program conversing over the network can use SSL, which sets up a safe passageway or tunnel between a client and server. Once established, everything traveling within the tunnel is secure from outsiders.
For additional information, contact Netscape Communications, 501 East Middlefield Road, Mountain View, CA 94043. Call 415/937-3777. Internet: .
Pretty Good Privacy
PGP (Pretty Good Privacy) for Personal Privacy, originally written by Philip Zimmermann in 1991, allows users to encrypt and decrypt files on demand. PGP combines multiple encryption algorithms, most notably RSA Data Security's public-key algorithm. According to the company, PGP automatically integrates with popular e-mail clients, such as Eudora (Pro or Light versions) and Microsoft's Exchange.
In September, PGP Inc. released its Business Security Suite -- a trial version of security applications available over the Net for DOS, Windows, OS/2, UNIX and Mac systems.
For additional information, contact PGP Inc. Internet: .
A growing number of organizations are seeking another innovative and economical alternative -- the virtual private network (VPN). VPNs involve a vendor that controls the Internet connection at both ends, including protocol and secured encryption keys. VPNs use TCP/IP "tunneling" to let users dial in to their offices via the Internet.
RedCreek's Ravlin 10 encryption hardware and software let users create a secure VPN. It is interoperable with firewalls and routers and provides data encryption without slowing the network. According to the company, this lets users create secure virtual private networks without forcing them to make radical changes.
Ravlin 10 allows the establishment of secure VPNs over both private and public networks, and it uses standard DES encryption, authentication and access control using digital signature standards and X.509 digital certificates.
For additional information, contact RedCreek, 3900 Newpark Mall Road, Newark, CA 94560. Call 510/745-3900. Internet: .
The Government Agenda
Longer keys and more complex algorithms are clearly required for meaningful security, but proposals for government access to data are having the opposite effect.
Some government and law enforcement agencies want to keep strong encryption out of the hands of terrorists and other criminals. As a result, a mandatory key escrow has been proposed, whereby government agencies would keep a sort of "skeleton key" to all encrypted data. The FBI wants "realtime" access to all encrypted communications.
Privacy advocates understandably worry that as voice and data networks increasingly carry a larger share of the nation's communications traffic, government agencies will be able to access private networks without safeguards.
Encryption is also grabbing headlines elsewhere. Other countries are contemplating similar moves. The European Union is launching a pilot project called EuroTrust, which could be the first step in creating a single authority to manage the copies of private keys necessary for back-door access to all computer data.
In the most extreme example, France has outlawed the use of encryption of any kind. So in the final analysis, encryption is not just a matter of technology and bit length, but a political, social and policy issue that will become more prominent as global electronic commerce increases and as computer networks reach into more and more homes, businesses and government agencies. *
2FA (Two-Factor Authentication)
Any technique used to strengthen a typical username/password login session (i.e. single-factor authentication) by adding a second security challenge.
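One widespread second challenge is a one-time code computed from a secret shared between a hardware token or app and the server, as in the HOTP scheme of RFC 4226. A minimal standard-library sketch, assuming both sides keep an event counter in sync:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): derive a short one-time code from a shared secret."""
    msg = struct.pack(">Q", counter)                  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation offset
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"                      # RFC 4226 test secret
print(hotp(secret, 0))   # "755224", the first RFC 4226 test vector
```

Because both sides hold the same secret and counter, they derive the same six digits, and an eavesdropper who captures one code cannot reuse it once the counter advances.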
3FF (3rd Form Factor)
A very small SIM card, also known as a micro-SIM, for use in small mobile devices.
3G (Third Generation)
The broadband telecommunications systems that combine high speed voice, data and multimedia.
3GPP (3rd Generation Partnership Project)
An industry group that aims to produce specifications for a 3G system based on GSM networks.
4G (Fourth Generation)
A comprehensive, secure, all-IP mobile broadband solution for smartphones, tablets, laptop computers, wireless modems and other mobile devices.
Automatic Border Control
The use of an automated gate in lieu of a one-to-one meeting between the traveller and an immigration officer.
The objective of deploying automatic border control is to automate the process for a large percentage of the traveller flow and to allow immigration officers to perform face-to-face control on identified targets.
Access Control
Techniques and solutions to grant or deny a given user access to a given digital service.
Consumers are very familiar with Username/Password as a basic access control technique for popular web services such as web mails or eMerchants web sites.
Security-sensitive services such as payment or eGovernment often deploy more robust access control techniques, usually relying on Secure Elements, smart cards being one example.
Big Data
A collection of data sets so large and complex that they are difficult to process with traditional applications.
The term "big data" is commonly used to present new analytical applications leveraging on the power of very large amounts of data sets.
A typical example is CRM (Customer Relationship Management), whereby the analysis of large amounts of past data can provide tools to improve sales forecasts, stock management, marketing trends and customer behavior.
Data Analysis is foreseen as an opportunity to monetize such "big data" by improving business intelligence.
Biometrics
Human attributes that are unique to a given individual and can be digitized and then compared with a stored reference.
Biometric data such as fingerprints can be used for security services such as access control, data encryption or digital signatures. The challenge of biometrics is to enroll, then securely store, the reference data for each individual.
Smart Card solutions offer match-on-card applications, removing the need for an online verification via a central database.
Bluetooth
A short-range wireless technology that simplifies communication and synchronization between the Internet, devices and other computers.
Bluetooth is commonly used in consumer electronics devices, for example headsets for cell phones or MP3 players.
Bluetooth first requires the user to establish a pairing between two devices. Once this pairing is established, fast wireless data exchange between the two devices can take place.
Everything about the standard at:
Bot (Internet bot)
A type of computer program designed to do automated tasks.
Border Control
The act of controlling travellers' identities and visas when entering a given country (airports, seaports or roads).
CAC (Common Access Card)
A US Department of Defense smart card issued as standard physical and network identification for military and other personnel.
Learn more about the DoD Common Access Card
CDMA (Code Division Multiple Access)
A wireless communications technology that uses spread-spectrum communication to provide increased bandwidth.
Cloud Computing
Computing using servers, storage and applications that are accessed via the Internet.
Cloud Computing is the architecture of choice for popular applications such as Web Mail, Social Networks, collaborative applications such as Microsoft office 365 or Google Docs.
The promises of cloud computing are no data loss, no backups needed and no software license updates needed. Applications are executed from a web browser or an app; the application itself and the user data are hosted in a data center.
Cloud computing is often seen as the alternative to client software, where a license for a given piece of software is installed and executed on the user's device.
Contactless Card
A card that communicates by means of a radio frequency signal, eliminating the need for physical contact with a reader.
Contactless communications include several technologies aimed at performing short-range data transfer between two communicating devices. Operational ranges can vary from 2 cm to 10-15 meters.
Contactless cards used for payment or transport use very short-range technology. Such cards' silicon chips are powered by the proximity of the reader to establish the contactless communication in a secure manner.
CRM (Customer Relationship Management)
A set of tools and techniques using data to enhance sales forecasting, supply strategy, pricing strategy and all aspects of product and service strategy.
CRM is foreseen as a key application of big data, where large amounts of past data can really enhance current and future business steering and decision making.
DDA (Dynamic Data Authentication)
Authentication technology that allows banks to approve transactions at the terminal in a highly secure way.
DI (Dual Interface)
A device that is both contact and contactless.
Dual-interface cards, combining contact and contactless transactions, are often used for EMV payment. There are also more and more combined payment and transport cards, where a payment card is also used to access a mass transit network.
DIAGMONMO (Diagnostics and Monitoring Management Object)
The Diagnostics and Monitoring (DiagMon) functions perform various diagnostics and monitoring activities on mobile phones.
DIAGMONMO also defines a way to perform network monitoring (GSM, UMTS or LTE) by automatically getting network status from the handset.
Digital Identity
Humans can own one or several digital identities (also called avatars) to be used to access various digital services.
For secure services, digital identities must be issued by a Certificate Authority (CA) capable of establishing a link between the actual user and his or her digital identities.
There is no limit to how many Digital Identities any given user may have.
Digital Signature
An electronic signature created using a public-key algorithm that can be used by the recipient to authenticate the identity of the sender.
DM (Device Management)
Management of mobile phone configuration, updates and other managed objects of mobile devices over the entire life-cycle, as defined by OMA DM. DM is also used generically to describe all methods and activities associated with mobile device management.
Device Management Solutions
DNS Cache poisoning
A technique that tricks a Domain Name Server (DNS server) into believing it has received authentic information when in reality it has not.
Dongle
Any small piece of hardware that plugs into a computer.
The most popular form factors are USB keys and smart cards that can be inserted into card readers.
Innovative devices using optical readers have also been launched onto the market.
DOVID (Diffractive Optical Variable Image Device)
A hologram, kinegram or other image used in the secure printing of cards, documents, etc.
DVB-H (Digital Video Broadcasting-Handheld)
A technical specification for bringing broadcast services to handheld receivers.
EAC (Extended Access Control)
A mechanism enhancing the security of ePassports whereby only authorized inspection systems can read biometric data.
eBanking
Accessing banking services via the Internet.
eCommerce
Buying and selling goods via the Internet.
EDGE (Enhanced Data rates for GSM Evolution)
A pre-3G digital mobile phone technology allowing improved data transmission rates.
eGovernment
The use of digital technologies (often via the Internet) to provide government services. Second-generation eGov 2.0 programs aim to increase efficiency and lower costs.
eID
Personal identification using a variety of devices secured by microprocessors, biometrics and other means.
The industry standard for international debit/credit cards established by Europay, MasterCard and Visa.
Find out more about EMV
An "electronic" passport with high security printing, an inlay including an antenna and a microprocessor, and other security features.
More info on ePassport
ePurse
A small portable device that contains "electronic money" and is generally used for low-value transactions.
Ethernet
A diverse family of computer networking technologies for local area networks (LANs).
eTicketing
Electronic systems for issuing, checking and paying for tickets, predominantly in public transport.
More info on Transport
ETSI (European Telecommunications Standards Institute)
The EU organization in charge of defining European telecommunications standards.
FIPS 201 (Federal Information Processing Standard)
A US federal government standard that specifies Personal Identity Verification requirements for employees and contractors.
FOMA (Freedom of Mobile Multimedia Access)
The brand name for world's first W-CDMA 3G services offered by NTT DoCoMo, the Japanese operator.
FOTA (Firmware Over The Air)
Please refer to FUMO.
FUMO (Firmware Update Management Object)
An Open Mobile Alliance specification for updating the firmware of mobile devices over the air.
FUMO allows mobile operators to update mobile devices across network infrastructure without requiring consumers or network engineers to initiate upgrades through direct contact.
It enables operators and device manufacturers to perform over-the-air updates ranging from the simple (e.g. a security patch) to the most complex (e.g. important parts of the operating system).
GSM (Global System for Mobile Communications)
A European standard for digital cellular phones that has now been widely adopted throughout the world.
GSMA (GSM Association)
The global association for mobile phone operators. Find out more about GSMA
HIPAA (Health Insurance Portability and Accountability Act)
The US act that protects health insurance coverage for workers and their families when they change or lose their jobs.
HSPD-12 (Homeland Security Presidential Directive 12)
Orders all US federal agencies to issue secure and reliable forms of identification to employees and contractors, with a recommendation in favor of smart card technology.
IAM (Identity and Access Management)
ICAO (International Civil Aviation Organization)
The United Nations agency which standardizes machine-readable and biometric passports worldwide.
Instant Messaging
Using text on a mobile handset to communicate in real time.
IP (Internet Protocol)
A protocol for communicating data across a network; hence an IP address is a unique computer address using the IP standard.
ISO (International Organization for Standardization)
An international body that produces the worldwide industrial and commercial "ISO" standards.
Java
A network-oriented programming language invented by Sun Microsystems and specifically designed so that programs can be safely downloaded to remote devices.
Key (keystroke) logging
A means of capturing a user’s keystrokes on a computer keyboard, sometimes for malicious purposes.
L6S (Lean Six Sigma)
A methodology for eliminating defects and improving processes.
LAWMO (Lock And Wipe Management Object)
An Open Mobile Alliance specification for locking handsets that have been lost or stolen, or for wiping a handset's memory. The wipe removes all personal data stored in the handset memory or on an inserted memory card, leaving the handset totally blank, with no chance of retrieving the data.
LTE (Long Term Evolution)
The standard in advanced mobile network technology, often referred to as 4G.
M2M (Machine-to-Machine)
Technology enabling communication between machines for applications such as smart meters, mobile health solutions, etc.
Malware
Malicious software designed to infiltrate or damage a computer system without the owner's consent.
Man-in-the-Middle Attack
An attack in which an outsider is able to read, insert and modify messages between two parties without either of them knowing.
mCommerce
Buying and selling goods and services using a mobile device connected to the Internet.
MFS (Mobile Financial Services)
Banking services such as money transfer and payment, available via a mobile device.
Microprocessor (smart) card
A 'smart" card comprising a module embedded with a chip, a computer with its own processor, memory, operating system and application software.
MicroSD Card
A removable memory card that can also be modified, by adding a microprocessor, to become a Secure Element, using the SDIO protocol to communicate with the device.
Complementary information about MicroSD Card
MIM (Machine Identification Module)
The equivalent of a SIM, with specific features such that it can be used in machines to enable authentication.
MMS (Multimedia Messaging Service)
A standard way of sending messages that include multimedia content (e.g. photographs) to and from mobile phones.
MNO (Mobile Network Operator)
A company that provides services for mobile device subscribers.
Mobile Money
Banking and payment services for unbanked users.
Module
The unit formed of a chip and a contact plate.
mPayment
Using a mobile handset to pay for goods and services.
NFC (Near-Field Communication)
A wireless technology that enables communication over short distances (e.g. 4cm), typically between a mobile device and a reader.
OATH (The Initiative for Open Authentication)
An industry coalition comprising Gemalto, Citrix, IBM, Verisign and others, that is creating open standards for strong authentication.
OMA (Open Mobile Alliance)
A body that develops open standards for the mobile phone industry.
Find out more about Open Mobile Alliance
OMA-CP (Open Mobile Alliance Client Provisioning)
A standardized protocol for configuring basic settings on a mobile phone, using an SMS bearer.
OMA-DM (Open Mobile Alliance Device Management)
A standardized protocol for configuring advanced services on mobile phones, using an IP bearer.
OS (Operating System)
Software that runs on computers and other smart devices and that manages the way they function.
OTA (Over The Air)
A method of wirelessly distributing applications and software updates to devices already in use.
OTP (One Time Password)
A password that is valid for only one login session or transaction.
Password Cracking
The process of recovering secret passwords from data in a computer system.
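One common form is a dictionary attack against an unsalted hash. A minimal, illustrative sketch (the hash, word list and names below are invented for the example); it also shows why real systems should use salted, deliberately slow hashes such as bcrypt, scrypt or PBKDF2:

```python
import hashlib

# The attacker holds a stolen password hash and a list of common words.
stolen_hash = hashlib.sha256(b"sunshine").hexdigest()
wordlist = ["password", "letmein", "sunshine", "qwerty"]

def crack(target_hash, candidates):
    """Hash each candidate and compare against the stolen hash."""
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(crack(stolen_hash, wordlist))   # -> sunshine
```

Salting makes precomputed tables useless, and slow hash functions multiply the cost of every guess, which is what defeats this kind of brute-force search in practice.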
PDA (Personal Digital Assistant)
A mobile device that functions as a personal information manager, often with the ability to connect to the internet.
PDC (Personal Digital Cellular)
A 2G mobile phone standard used in Japan and South Korea.
Phishing
Sending fraudulent emails requesting someone's personal and financial details.
PIN (Personal Identification Number)
A secret code required to confirm a user's identity.
PKI (Public Key Infrastructure)
The software and/or hardware components necessary to enable the effective use of public-key encryption technology. Public-key cryptography is a system that uses two different keys (public and private) for encrypting and signing data.
RFID (Radio Frequency Identification)
Short- to mid-range wireless communication technology typically used for low-end services with no security needs (tags).
RUIM (Removable User Identity Module)
An identity module for standards other than GSM.
SCOMO (Software Component Management Object)
An Open Mobile Alliance specification that allows a management authority to perform software management on a remote device, including installation, uninstallation, activation and deactivation of software components.
SE (Secure Element)
A secure and personalised physical component added to a system to manage users' rights and to host secure apps.
An SE typically consists of a silicon chip, a secure operating system, application software and a secure protocol for communicating with the device.
An SE can be a removable device (such as a UICC or microSD for mobile devices, or a MIM for M2M-connected machines), or it can be a component embedded inside the system.
SIM (Subscriber Identity Module)
A smart card for GSM systems.
SMS (Short Message Service)
A GSM service that sends and receives text messages to and from a mobile phone.
Strong Authentication
Any authentication protocol that requires multiple factors to establish identity and privileges.
This contrasts with traditional password authentication which requires only one authentication factor such as knowledge of a password.
Common implementations of strong authentication use "something you know" (a password) as one of the factors, and "something you have" (a physical device) and/or "something you are" (a biometric such as a fingerprint) as the other factors.
TEE (Trusted Execution Environment)
A dedicated software and hardware environment embedded within the core device microprocessor to host and execute secure applications.
The TEE consists of dedicated logic (hardware) within the device microprocessor, with its own secure operating system (software) and a secure API for communicating with the device's rich operating system.
The TEE acts like a vault within the microprocessor, ensuring secure provisioning and execution of security-sensitive applications such as payment.
A TSM service is used to install software applications within the TEE environment, as well as performing activation and deactivation of services.
Thin Client
A computer (client) that depends primarily on a central server for processing activities. By contrast, a fat client does as much local processing as possible.
Trojan (Horse)
A program that contains or installs a malicious program.
TSM (Trusted Services Manager)
A third party enabling Mobile Operators, Mass Transit Operators, Banks and businesses to offer combined services seamlessly and securely.
UICC (Universal Integrated Circuit Card)
A high capacity smart card used in mobile terminals for GSM, UMTS/3G and now 4G/LTE networks.
UMTS (Universal Mobile Telecommunications System)
One of the 3G mobile telecommunications technologies which is also being developed into a 4G technology.
USB (Universal Serial Bus)
A standard input/output bus that supports very high transmission rates.
USIM (Universal Subscriber Identity Module)
A SIM with advanced software that ensures continuity when migrating to 3G services.
VPN (Virtual Private Network)
A private network often used within a company or group of companies to communicate confidentially over a public network.
W-CDMA (Wideband Code Division Multiple Access)
A 3G technology for wireless systems based on CDMA technology.
by Phillip Tracy, RCR Wireless News
Complex event processing is an emerging network technology commonly used in the “internet of things” that uses distributed message-based systems, databases and applications to derive conclusions from data in real time or near-real time. It is a kind of computing in which incoming data about events is turned into more useful, higher level “complex” event data designed to provide insight into what is happening.
CEP is event-driven because the computation is triggered by the receipt of event data. CEP is used for demanding, continuous-intelligence applications said to enhance situation awareness and support real-time decisions. CEP combines data from multiple sources to infer events or patterns to suggest more complicated circumstances. It can provide companies with the ability to define, manage and predict events, situations, conditions, opportunities and threats.
The events being analyzed can be happening across different parts of an organization as sales leads, orders or customer service calls, according to David Luckham, research professor of electrical engineering at Stanford. These data types can include news items, text messages, social media posts, stock market feeds, traffic reports, weather reports or other kinds of data. An event may also be defined as a “change of state,” when a measurement exceeds a predefined threshold of time, temperature or other value – that is really where IoT comes in.
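That "change of state" idea is the kernel of CEP: low-level readings become higher-level complex events when a predefined condition is met. A minimal, illustrative sketch (the threshold value and sensor stream are invented for the example, and real CEP engines process unbounded streams rather than lists):

```python
# Turn raw sensor readings (low-level events) into higher-level
# "complex" events on each change of state across a threshold.
THRESHOLD = 30.0

def detect_events(readings):
    """readings: iterable of (timestamp, value); returns complex events."""
    events, above = [], False
    for t, value in readings:
        if value > THRESHOLD and not above:
            events.append((t, "OVERHEAT_START", value))  # state change: up
            above = True
        elif value <= THRESHOLD and above:
            events.append((t, "OVERHEAT_END", value))    # state change: down
            above = False
    return events

stream = [(0, 22.5), (1, 29.9), (2, 31.2), (3, 34.0), (4, 28.1)]
print(detect_events(stream))
# -> [(2, 'OVERHEAT_START', 31.2), (4, 'OVERHEAT_END', 28.1)]
```

Note that only the two threshold crossings are reported, not every reading; this reduction from raw data to meaningful events is what makes real-time decision support tractable.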
IoT and CEP
The challenge of real-time analysis continues to grow as the forthcoming tens of billions of sensors and smart devices continue to collect more data. Being able to react quickly in a mission-critical situation can save companies millions of dollars and is one of the pillars of IoT functionality. This is why CEP is becoming a more mainstream solution for IoT deployments.
One specific IoT-based use case was outlined in a LinkedIn post written by Eric Bruno, lead real-time engineer and member of the technical leadership team at Perrone Robotics. Bruno argues IoT, when combined with complex event processing, can have transformative effects on the health care industry.
“To ensure patient safety in m-health IoT systems, more than careful programming and testing is required,” Bruno wrote. “An entirely different product development approach and paradigm must be used. This is where complex event processing can help. Engineering health care solutions through event processing – using commercial or open-source CEP systems that have been tested across a wide range of use cases – may arguably deliver a higher level of safety. This helps to both mitigate risk and increase the level of patient care.” Link to Article.
DCL: Nice to see a simple introduction to CEP for those who see "CEP" mentioned everywhere but don't know what it means or why it is now part of everyone's technology for real-time analytics. For those who want to know more, try "The Power of Events" or "Event Processing for Business".
Measuring Our World with Wireless Sensor Networks
I recently interviewed Seapahn Megerian, a professor in the Electrical and Computer Engineering Department at the University of Wisconsin, about wireless sensor networks for an upcoming feature for CertMag’s Systems & Networks community. He talked about some very intriguing possibilities for this new technology, including space exploration, as well as some potential drawbacks.
For those of you who don’t know what wireless sensor networks are and want a relatively simple explanation, think back to the movie “Twister.” Remember those metallic balls that Helen Hunt and Bill Paxton’s characters sent up into the tornado to measure dimensions like size and wind speed? Imagine those balls being shrunk to the size of marbles or even smaller, and you have a pretty good sense of how a wireless sensor network might look.
In fact, their diminutive size is one of their most appealing qualities, Megerian said. "Miniaturization is one of the stronger motivators for the advent of wireless sensor networks. Smaller, faster and lower power, which essentially mean cheaper, can have tremendous impacts on virtually any branch of computer engineering. Nanotechnology not only opens new doors in terms of new sensor technologies, but also in terms of tiny actuators that, when combined with sensors and computers, can go a step beyond just observing and learning. With actuators, we can actually do stuff!"
However, we have to be careful when using these technologies, because their presence may wind up changing the environment they’re intended to measure. “We must also be conscious of the environmental effects that placing such sensor nodes can have, especially in large quantities,” he said. “Given the current battery technologies, it is clear that we do not want them sprinkled everywhere and left as garbage when they exhaust their useful lifetimes. When you are sending hundreds of satellites into orbit, and leave them there as space dust, it doesn’t really matter. But throwing 100,000 sensor nodes from an airplane to monitor a habitat here on Earth can have very significant environmental repercussions down the line.”
“We must be careful not to become too entangled in the complex technologies around us. With a typical user in mind, I think it is crucial to make sure the wireless sensor networks we design integrate into their surroundings as seamlessly as possible. In other words, if I have 100 sensor nodes in my house, I don’t want 100 blinking clocks that are always stuck at 12:00 a.m. (a pun on the old VCR days)!”
According to the British Library, the average life expectancy of a Web site is between 44 and 75 days and every six months, 10% of .uk Web pages vanish or are replaced by new material.
"With so much material now published online, and considering the growing influence of the Internet on British culture and society, the Web is now a key part of the nation's memory," said Margaret Hodge, the U.K.'s Minister of Culture and Tourism, in a statement. "A failure to record and preserve the UK domain would not just be detrimental to future research but leave a significant gap in our digital heritage."
The .uk Internet domain currently consists of about 8 million Web pages and is expected to reach 11 million by 2011. The British Library currently has 10 people manually archiving the 5 terabytes of U.K. Web page data.
IBM's contribution to the archiving project, BigSheets, is built atop the Apache Hadoop framework, a system for distributed data processing inspired by Google's MapReduce and Google File System, and developed in recent years by Yahoo and others.
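The MapReduce model behind Hadoop can be sketched in a few lines. This single-process Python toy (with made-up sample pages, and not Hadoop's actual API) shows the three phases: a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates each group; Hadoop simply runs the same phases across many machines.

```python
from collections import defaultdict

# Toy, single-process word count in the MapReduce style.
pages = ["the british library archives the uk web",
         "the uk web keeps growing"]

def map_phase(docs):
    for doc in docs:
        for word in doc.split():
            yield (word, 1)                 # emit one (key, value) per word

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)           # group all values by key
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(pages)))
print(counts["the"], counts["uk"], counts["web"])   # -> 3 2 2
```

Because each map call and each reduce call is independent, the work parallelizes naturally over terabytes of archived pages, which is what makes the approach suit a 5-terabyte web archive.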
"We think of these as big worksheets," said Rod Smith, VP of emerging Internet technologies at IBM, who stresses that the project goes beyond archiving. "You'd like to be more valuable to people than just an archive. In the British Library's case, you'd like to be known as the accurate holder of historical information."
BigSheets will allow British Library researchers, and eventually library patrons, to access Web archive data, conduct queries and visualize the results in forms like a tag cloud or pie chart, for example.
It's about ways to explore and sift data, says Smith.
Smith says it's still too early in the project's evolution to determine whether BigSheets will be adopted by other archiving organizations, like the Internet Archive.
When Education Gets Too Virtual
Students can use technology to undermine the integrity of education.
The visions of how technology can help students learn are promising. The reality of how students can use technology to undermine the integrity of education is already here.
The cover story of the new issue of InformationWeek Education begins with a recent news item about two students at Ohio's Miami University who used keylogger devices to capture professor passwords and gain access to an online grade book. They were arrested and expelled after admitting to changing grades for themselves and others.
In a similar case at California's Palos Verdes High School in January 2012, three students were charged with first breaking into the janitor's office to steal a classroom master key. They reportedly planted keylogging devices on multiple computers, mined passwords, and used them to alter scores on tests and homework just enough to bump grades up a bracket. The three students set up a commercial operation, charging $300 to boost a grade from a B to an A, according to the Los Angeles Times. They were charged with burglary and conspiracy to commit burglary.
My 12-year-old son has been known to do a little shoulder surfing to capture the "learning coach" password his mom and I use on the online educational website K12.com. He and his sister are in a virtual school, so getting the password let him grade some of his own schoolwork. The good news is that he isn't as clever as he thinks he is and routinely gets stopped when he tries a tactic like this one. My hope is that as he matures, he'll learn the lesson that it's more rewarding to actually do the work.
The Palos Verdes High School students were apparently smart kids, taking honors and AP classes. It's unclear whether they needed to inflate their own grades. None of the news stories I've read reports how they were caught, but it seems likely that news of their "enterprise" got back to school officials. At Miami University, a professor noticed that the grades in the online system didn't match her paper notes. To make such exploits easier to detect, the university's technology team is modifying its grade book software to send an email notification to instructors whenever grades are changed so they can confirm the legitimacy of those changes.
Academic cheating is nothing new. Like many of the ills associated with unauthorized use of computer systems, digitization just provides new techniques and temptations.
Do online education tools make cheating easier? Maybe, but in all of the examples cited above, cheating was thwarted by people who care about education and were paying attention. Should my son's grades get an inexplicable boost, or his latest essay show better spelling, grammar and vocabulary than he has produced before, his mom will know and have a talk with him. The Miami University students apparently tried to cover their tracks by changing grades for other students in addition to themselves. However, once investigators started looking at the pattern of grade changes across multiple courses, it wasn't hard to see a couple of students turning up as the common denominator.
As the digitization of education continues, "auditing a course" may take on a whole new meaning, as educators seek better ways to verify that grades reflect actual learning. | <urn:uuid:97827369-b106-4590-9b7d-329cf3d8bb60> | CC-MAIN-2017-09 | http://www.darkreading.com/security/when-education-gets-too-virtual/d/d-id/1109683?cid=sbx_iwk_related_news_security_smb&itc=sbx_iwk_related_news_security_smb | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00011-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.978892 | 657 | 2.828125 | 3 |
Cloud computing brings great promise, but also confusion to the IT industry. Key questions are answered here.
Everyone in the IT industry is talking about cloud computing, but there is still confusion about what the cloud is, how it should be used and what problems and challenges it might introduce. This FAQ will answer some of the key questions enterprises are asking about cloud computing.
What is cloud computing?
Gartner defines cloud computing as "a style of computing in which massively scalable IT-related capabilities are provided 'as a service' using Internet technologies to multiple external customers." Beyond the Gartner definition, clouds are marked by self-service interfaces that let customers acquire resources at any time and get rid of them the instant they are no longer needed.
The cloud is not really a technology by itself. Rather, it is an approach to building IT services that harnesses the rapidly increasing horsepower of servers as well as virtualization technologies that combine many servers into large computing pools and divide single servers into multiple virtual machines that can be spun up and powered down at will.
How is cloud computing different from utility, on-demand and grid computing?
Cloud by its nature is "on-demand" and includes attributes previously associated with utility and grid models. Grid computing is the ability to harness large collections of independent compute resources to perform large tasks, and utility is metered consumption of IT services, says Kristof Kloeckner, the cloud computing software chief at IBM. The coming together of these attributes is making the cloud today's most "exciting IT delivery paradigm," he says.
Fundamentally, the phrase cloud computing is interchangeable with utility computing, says Nicholas Carr, author of "The Big Switch" and "Does IT Matter?" The word "cloud" doesn't really communicate what cloud computing is, while the word "utility" at least offers a real-world analogy, he says. "However you want to deal with the semantics, I think grid computing, utility computing and cloud computing are all part of the same trend," Carr says.
Carr is not alone in thinking cloud is not the best word to describe today's transition to Web-based IT delivery models. For the enterprise, cloud computing might best be viewed as a series of "online business services," says IDC analyst Frank Gens.
What is a public cloud?
Naturally, a public cloud is a service that anyone can tap into with a network connection and a credit card. "Public clouds are shared infrastructures with pay-as-you-go economics," explains Forrester analyst James Staten in an April report. "Public clouds are easily accessible, multitenant virtualized infrastructures that are managed via a self-service portal."
What is a private cloud?
A private cloud attempts to mimic the delivery models of public cloud vendors but does so entirely within the firewall for the benefit of an enterprise's users. A private cloud would be highly virtualized, stringing together mass quantities of IT infrastructure into one or a few easily managed logical resource pools.
Like public clouds, delivery of private cloud services would typically be done through a Web interface with self-service and chargeback attributes. "Private clouds give you many of the benefits of cloud computing, but it's privately owned and managed, the access may be limited to your own enterprise or a section of your value chain," Kloeckner says. "It does drive efficiency, it does force standardization and best practices."
The largest enterprises are interested in private clouds because public clouds are not yet scalable and reliable enough to justify transferring all of their IT resources to cloud vendors, Carr says.
"A lot of this is a scale game," Carr says. "If you're General Electric, you've got an enormous amount of IT scale within your own company. And at this stage the smart thing for you to do is probably to rebuild your own internal IT around a cloud architecture because the public cloud isn't of a scale at this point and of a reliability and everything where GE could say 'we're closing down all our data centers and moving to the cloud.'"
Is cloud computing the same as software-as-a-service?
You might say software-as-a-service kicked off the whole push toward cloud computing by demonstrating that IT services could be easily made available over the Web. While SaaS vendors originally did not use the word cloud to describe their offerings, analysts now consider SaaS to be one of several subsets of the cloud computing market.
What types of services are available via the cloud computing model?
Public cloud services are breaking down into three broad categories: software-as-a-service, infrastructure-as-a-service, and platform-as-a-service. SaaS is well known and consists of software applications delivered over the Web. Infrastructure-as-a-service refers to remotely accessible server and storage capacity, while platform-as-a-service is a compute-and-software platform that lets developers build and deploy Web applications on a hosted infrastructure.
How do vendors charge for these services?
SaaS vendors have long boasted of selling software on a pay-as-you-go, as-needed basis, preventing the kind of lock-in inherent in long-term licensing deals for on-premises software. Cloud infrastructure providers like Amazon are doing the same. For example, Amazon's Elastic Compute Cloud charges for per-hour usage of virtualized server capacity. A small Linux server costs 10 cents an hour, while the largest Windows server costs $1.20 an hour.
Storage clouds are priced similarly. Nirvanix's cloud storage platform has prices starting at 25 cents per gigabyte of storage each month, with additional charges for each upload and download.
What types of applications can run in the cloud?
Technically, you can put any application in the cloud. But that doesn't mean it's a good idea. For example, there's little reason to run a desktop disk defragmentation or systems analysis tool in the cloud, because you want the application sitting on the desktop, dedicated to the system with little to no latency, says Pund-IT analyst Charles King.
More importantly, regulatory and compliance concerns prevent enterprises from putting certain applications in the cloud, particularly those involving sensitive customer data.
IDC surveys show the top uses of the cloud as being IT management, collaboration, personal and business applications, application development and deployment, and server and storage capacity.
Can applications move from one cloud to another?
Yes, but that doesn't mean it will be easy. Services have popped up to move applications from one cloud platform to another (such as from Amazon to GoGrid) and from internal data centers to the cloud. But going forward, cloud vendors will have to adopt standards-based technologies in order to ensure true interoperability, according to several industry groups. The recently released "Open Cloud Manifesto" supports interoperability of data and applications, while the Open Cloud Consortium is promoting open frameworks that will let clouds operated by different entities work seamlessly together. The goal is to move applications from one cloud to another without having to rewrite them.
How does traditional software licensing apply in the cloud world?
Vendors and customers alike are struggling with the question of how software licensing policies should be adapted to the cloud. Packaged software vendors require up-front payments, and make customers pay for 100% of the software's capabilities even if they use only 25% or 50%, Gens says. This model does not take advantage of the flexibility of cloud services.
Oracle and IBM have devised equivalency tables that explain how their software is licensed for the Amazon cloud, but most observers seem to agree that software vendors haven't done enough to adapt their licensing to the cloud.
The financial services company ING, which is examining many cloud services, has cited licensing as its biggest concern. "I haven't seen any vendor with flexibility in software licensing to match the flexibility of cloud providers," says ING's Alan Boehme, the company's senior vice president and head of IT strategy and enterprise architecture. "This is a tough one because it's a business model change. … It could take quite some time."
What types of service-level agreements are cloud vendors providing?
Cloud vendors typically guarantee at least 99% uptime, but the ways in which that is calculated and enforced differ significantly. Amazon EC2 promises to make "commercially reasonable efforts" to ensure 99.95% uptime. But uptime is calculated on a yearly basis, so if Amazon falls below that percentage for just a week or a month, there's no penalty or service credit.
GoGrid promises 100% uptime in its SLA. But as any lawyer points out, you have to pay attention to the legalese. GoGrid's SLA includes this difficult-to-interpret phrase: "Individual servers will deliver 100% uptime as monitored within the GoGrid network by GoGrid monitoring systems. Only failures due to known GoGrid problems in the hardware and hypervisor layers delivering individual servers constitute failures and so are not covered by this SLA."
Attorney David Snead, who recently spoke about legal issues in cloud computing at Sys-Con's Cloud Computing Conference & Expo in New York City, says Amazon has significant downtime but makes it difficult for customers to obtain service credits.
"Amazon won't stand behind its product," Snead said. "The reality is, they're not making any guarantees."
How can I make sure my data is safe?
Data safety in the cloud is not a trivial concern. Online storage vendors such as The Linkup and Carbonite have lost data, and were unable to recover it for customers. Secondly, there is the danger that sensitive data could fall into the wrong hands. Before signing up with any cloud vendor, customers should demand information about data security practices, scrutinize SLAs, and make sure they have the ability to encrypt data both in transit and at rest.
How can I make sure that my applications run with the same level of performance if I go with a cloud vendor?
Before choosing a cloud vendor, do your due diligence by examining the SLA to understand what it guarantees and what it doesn't, and scour through any publicly accessible availability data. Amazon, for example, maintains a "Service Health Dashboard" that shows current and historical uptime status of its various services.
There will always be some network latency with a cloud service, possibly making it slower than an application that runs in your local data center. But a new crop of third-party vendors are building services on top of the cloud to make sure applications can scale and perform well, such as RightScale.
By and large, the performance hit related to latency "is pretty negligible these days," RightScale CTO Thorsten von Eicken. The largest enterprises are distributed throughout the country or world, he notes, so many users will experience a latency-caused performance hit whether an application is running in the cloud or in the corporate data center. | <urn:uuid:ee0810a2-8dd4-4343-8bd3-4917f7d69297> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2268449/virtualization/faq--cloud-computing--demystified.html?page=1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00187-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949917 | 2,258 | 2.578125 | 3 |
Connected cars are poised to become one of the biggest changes in the driving experience since the invention of the automobile, and they could be on the road within a decade. They present many exciting opportunities, but also new threats to our privacy and security. In this post, we'll dive into exactly what lies behind the connected car concept and what it means for privacy.
Let's start with a primer on connected cars. The connected car is the ultimate goal of several different trends in the automobile industry, especially networking and self-driving cars. The connected car is a car that is increasingly connected to the Internet and, potentially, to the cars around it. The final form of the connected car is a fleet of automated cars that drive themselves and communicate wirelessly with each other to determine traffic patterns, removing the need for defensive driving courses or any human intervention at all. At an intermediate scale, connected cars are just cars like the ones we have today that make much greater use of the Internet. There is significant potential in better-streamed entertainment, improved connection to navigation software, and similar features.
The primary downside is the very thing that makes connected cars attractive: the Internet connection. Anyone who has paid attention to the headlines over the past few years has seen dozens of examples of companies of all sizes and all industries that have been hacked, causing the release of personal information and financial records. These hacks have generally resulted in decreased trust in the companies involved, which range from Target to Yahoo. The sheer number of companies that have announced hacks has made it hard to put much trust in data security, at least for companies that consumers interact with on a daily basis.
By definition connected cars are Internet-facing. A significant amount of web traffic will flow to and from connected cars, and that traffic has to be meaningful for it to be valuable. It is likely to include personal information as well as location data and possibly financial information. That alone will make it interesting to hackers. The potential benefits of hacking connected cars will be just as high as hacking a laptop, or even higher. For example, it would be possible to track a car's movements to identify when the owners tend to be away from home, so that the house is unguarded and an easier target for theft. That is an extreme example, but it is within the realm of possibility. Consider something as simple as renting a movie to stream: credit card information would have to flow over the connection. At least for now, early prototypes of connected cars have not included extensive data security. It is possible that the fact that the base is a moving car and the demands of creating a good streaming connection to that moving target will make it harder to encrypt and protect the data in the stream. If so, security will be a problem for years to come.
The most insidious and dramatic example of hacking a connected car is the threat that a hacker could actually gain control over the car's function. This is not entire impossible: researchers have already demonstrated the ability to break into a car's system remotely and issue it some commands. While this is unlikely to result in kidnappings and other sensational outcomes, it does open up the possibility that hackers could proactively dive into the car's onboard memory and search for valuable data instead of just waiting for something useful to pass through the stream. Even basic identifying information can be useful for identify theft, and it is hard to imagine that connected cars won't need to keep some of that data on hand.
The upside for connected cars is entrancing for many reasons. However, that does not mean that the road will be smooth. There are a lot of problems to work out along the way, and privacy is one of the more important ones. It has the potential to expose even more Americans to damaging hacks, expanding the scope of what is already a worsening problem. The auto industry needs to commit to a serious investment in information security.
Latest posts by Jeremy Sutter (see all)
- Does AI Make Self-Driving Cars Less Safe? - February 13, 2017
- Why Consumers Don’t Trust Self-Driving Cars - December 6, 2016
- Brexit: How it Will Influence the Global Auto Industry - October 31, 2016 | <urn:uuid:62c7452a-946b-4ceb-a87a-4aed5e987ad7> | CC-MAIN-2017-09 | https://ctovision.com/connected-cars-cost-privacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00187-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9703 | 851 | 2.671875 | 3 |
As the World Wide Web celebrates its 25th year Wednesday, top techies are looking ahead to the next 25 years when they say the Web will be woven more deeply, and seamlessly, into our lives.
A Pew Internet Research Project report finds that techies believe that by 2039, Internet accessibility will be like flipping a switch for electricity today.
"Devices will more and more have their own patterns of communication, their own social networks, which they use to share and aggregate information, and undertake automatic control and activation," David Clark, a senior research scientist at MIT, told Pew researchers, according to the report. "More and more, humans will be in a world in which decisions are being made by an active set of cooperating devices."
Connected devices, he added, will be more pervasive and less visible, working behind the scenes.
Pew based its conclusions on an survey of 2,558 tech experts between last November and this past January.
According to Pew, the survey found that the tech's expert class expects an expanded Internet of Things that allows devices and products like smartphones, smart shirts and smart refrigerators to tap into artificial intelligence-enhanced cloud-based information storage and sharing.
Dan Lynch, founder of Interop and former director of computing facilities at SRI International, told Pew that "The most useful impact is the ability to connect people. From that, everything flows."
To make that happen, more smart sensors will spread to automobiles, home appliances, clothing and, of course, electronic gadgets.
Jeff Jaffe, CEO of the World Wide Web Consortium, said in an interview with Computerworld this week that advances in the Web will dramatically speed up as the first online generation is hits the workforce.
"The first generation that grew up on the Web is hitting maturity," said Jaffe. "Everything that's happened until recently was with people who weren't Web natives by birth using technology and using it to improve life. I can only imagine when you have digital natives hitting maturity the level of innovation will be even greater."
Another Pew study, released late last month, found that that 87% of U.S. adults use the Internet today. That count compares to 1995 -- six years after Tim Berners-Lee, a British computer scientist, unveiled the World Wide Web -- when 42% of U.S. adults had never heard of the Internet.
Now, Pew found that 90% of U.S. adults say the Internet has been good for them, and 76% say it has been good for society.
"Television let us see the Global Village, but the Internet let us be actual villagers," Paul Jones, a professor at the University of North Carolina and founder of ibiblio.org, told Pew researchers.
Survey respondents did note worries that the Internet is boosting surveillance and cybercrime and called for stricter security and privacy rules.
"The good news is that the technology that promises to turn our world on its head is also the technology with which we can build our new world," Robert Cannon, senior counsel for Internet Law in the FCC's Office of Strategic Planning and Policy Analysis, told Pew researchers. "It offers an unbridled ability to collaborate, share, and interact. The best way to predict the future is to invent it. It is a very good time to start inventing the future."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org.
Read more about internet in Computerworld's Internet Topic Center.
This story, "Techies See Future Where Web Flows Like Electricity" was originally published by Computerworld. | <urn:uuid:e95b0aa5-b852-4766-82b7-59018a100f5e> | CC-MAIN-2017-09 | http://www.cio.com/article/2377967/internet/techies-see-future-where-web-flows-like-electricity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00307-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947445 | 781 | 2.53125 | 3 |
In a disturbing echo of Google's mapping of home Wi-Fi networks as part of its Streetview project, an ethical hacker has found nearly half of home Wi-Fi networks can be hacked in less than five seconds, according to a study.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
Ethical hacker Jason Hart travelled the main arterial routes of six UK cities using basic freely-available "wardriving" equipment. The aim was to identify networks that broadcast wireless signals excessively into public places.
The results showed nearly 40,000 networks as high risk, opening up the personal data of thousands of individuals to hackers and identity thieves.
The study, commissioned by life assistance company CPP, comes ahead of this week's National Identity Fraud Prevention Week.
Nearly a quarter of private wireless networks had no password protection, making them immediately accessible to criminals. But more than 80% of Brits think their network is secure. Further, hackers can break a typical password in seconds, CPP said.
Only one in 20 knew for certain if their network was used without their permission, indicating that the vast majority remain ignorant of the risk, CPP said.
The study also showed the dangers of accessing the internet over publicly available networks. While nearly one in five wireless users (16%) said they regularly use public networks, hackers were able to harvest more than 350 usernames and passwords an hour by sitting in town-centre coffee shops and restaurants.
The experiment also showed that more than 200 people unsuspectingly logged onto a fake Wi-Fi network in just one hour, putting themselves at risk from fraudsters who could harvest their personal and financial information.
Hart said, "When people think of hackers they tend to think of highly organised criminal gangs using sophisticated techniques to crack networks. However, as this experiment show, all a hacker requires is a laptop computer and widely available software." | <urn:uuid:8e25fa88-7b93-44c3-ad79-8323e7c78d1f> | CC-MAIN-2017-09 | http://www.computerweekly.com/news/1280094062/40000-Wi-Fi-UK-hotspots-open-to-hackers | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00307-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958193 | 400 | 2.609375 | 3 |
Fundamentals of iOS Application Development
This course starts off with reviewing the basics of Xcode 6 and the iOS 8 SDK with the creation of a simple application. From there you will learn to integrate the iOS 8 interface elements incorporating Apple's new Swift programming language. You will learn how to use buttons, switchers, pickers, toolbars, and sliders as well as design patterns using a variety of views. Each step will present a new and unique project built from start to finish.
Anyone who wants to build applications for iPhone, iPad, or iPod touch | <urn:uuid:0a2d010a-edab-4d09-b87b-d2417ac4f328> | CC-MAIN-2017-09 | https://www.globalknowledge.com/ca-en/course/120585/fundamentals-of-ios-application-development/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00307-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.875712 | 115 | 2.703125 | 3 |
In 2014, IOActive disclosed a series of attacks that affect multiple SATCOM devices, some of which are commonly deployed on vessels. Although there is no doubt that maritime assets are valuable targets, we cannot limit the attack surface to those communication devices that vessels, or even large cruise ships, are usually equipped with. In response to this situation, IOActive provides services to evaluate the security posture of the systems and devices that make up the modern integrated bridges and engine rooms found on cargo vessels and cruise ships.
There are multiple facilities, devices, and systems located on ports and vessels and in the maritime domain in general, which are crucial to maintaining safe and secure operations across multiple sectors and nations.
Port security refers to protecting all of these assets from acts of piracy, terrorism, and other unlawful activities, such as smuggling. Recent activity appears to demonstrate that cyberattacks against this sector may have been underestimated. As threats evolve, procedures and policies must improve to take these new attack scenarios into account. See, for example, this Federal Register notice on maritime cybersecurity standards: https://www.federalregister.gov/articles/2014/12/18/2014-29658/guidance-on-maritime-cybersecurity-standards
This blog post describes IOActive’s research related to one type of equipment usually present in vessels, Voyage Data Recorders (VDRs). In order to understand a little bit more about these devices, I’ll detail some of the internals and vulnerabilities found in one of these devices, the Furuno VR-3000.
What is a Voyage Data Recorder?
A VDR is equivalent to an aircraft's 'black box' (http://www.imo.org/en/OurWork/Safety/Navigation/Pages/VDR.aspx). These devices record crucial data, such as radar images, position, speed, audio in the bridge, etc. This data can be used to understand the root cause of an accident.
Several years ago, piracy acts were on the rise. Multiple cases were reported almost every day. As a result, nation-states along with fishing and shipping companies decided to protect their fleet, either by sending in the military or hiring private physical security companies.
On February 15, 2012, two Indian fishermen were shot by Italian marines onboard the merchant vessel Enrica Lexie, who supposedly opened fire thinking they were being attacked by pirates. This incident caused a serious diplomatic conflict between Italy and India, which continues to the present. https://en.wikipedia.org/wiki/Enrica_Lexie_case
'Mysteriously', the data collected from the sensors and the voice recordings stored in the VDR during the hours of the incident were corrupted, making them totally unusable for authorities to use during their investigation. As this story from the Indian Times mentions, the VDR could have provided authorities with crucial clues to figure out what really happened.
Curiously, Furuno was the manufacturer of the VDR that was corrupted in this incident. This Kerala High Court’s document covers this fact: http://indiankanoon.org/doc/187144571/
Just a few weeks later, on March 1, 2012, the Singapore-flagged cargo ship MV Prabhu Daya was involved in a hit-and-run incident off the Kerala Coast. As a result, three fishermen were killed and one more disappeared and was eventually rescued by a fishing vessel in the area. Indian authorities initiated an investigation of the accident that led to the arrest of the MV Prabhu Daya's captain.
During that process, an interesting detail was reported in several Indian newspapers.
So, What’s Going on Here?
From a security perspective, it seems clear VDRs pose a really interesting target. If you either want to spy on a vessel’s activities or destroy sensitive data that may put your crew in a difficult position, VDRs are the key.
Understanding a VDR's internals can provide authorities, or third-parties, with valuable information when performing forensics investigations. However, the ability to precisely alter data can also enable anti-forensics attacks, as described in the real incident previously mentioned.
As usual, I didn’t have access to the hardware; but fortunately, I played some tricks and found both firmware and software for the target VDR. The details presented below are exclusively based on static analysis and user-mode QEMU emulation (already explained in a previous blog post).
Figure: Typical architecture of a VR-3000
Basically, inside the Data Collecting Unit (DCU) is a Linux machine with multiple communication interfaces, such as USB, IEEE1394, and LAN. Also inside the DCU is a backup HDD that partially replicates the data stored on the Data Recording Unit (DRU). The DRU is protected against aggressions in order to survive in the case of an accident. It also contains a Flash disk to store data for a 12-hour period. This unit stores all essential navigation and status data, such as bridge conversations, VHF communications, and radar images.
The International Maritime Organization (IMO) recommends that all VDR and S-VDR systems installed on or after 1 July 2006 be supplied with an accessible means for extracting the stored data from the VDR or S-VDR to a laptop computer. Manufacturers are required to provide software for extracting data, instructions for extracting data, and cables for connecting between a recording device and computer.
The following documents provide more detailed information:
After spending some hours reversing the different binaries, it was clear that security is not one of this equipment's main strengths. Multiple services are prone to buffer overflows and command injection vulnerabilities. The mechanism to update firmware is flawed. Encryption is weak. Basically, almost the entire design should be considered insecure.
Take this function, extracted from the Playback software, as an example of how not to perform authentication. For those who are wondering what 'Encryptor' is, just a word: Scytale.
Digging further into the binary services we can find a vulnerability that allows unauthenticated attackers with remote access to the VR-3000 to execute arbitrary commands with root privileges. This can be used to fully compromise the device. As a result, remote attackers are able to access, modify, or erase data stored on the VDR, including voice conversations, radar images, and navigation data.
VR-3000’s firmware can be updated with the help of Windows software known as 'VDR Maintenance Viewer' (client-side), which is proprietary Furuno software.
The VR-3000 firmware (server-side) contains a binary that implements part of the firmware update logic: ‘moduleserv’.
This service listens on 10110/TCP.
Internally, both server (DCU) and client-side (VDR Maintenance Viewer, LivePlayer, etc.) use a proprietary session-oriented, binary protocol. Basically, each packet may contain a chain of 'data units', which, according to their type, will contain different kinds of data.
Figure: Some of the supported commands
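To make the 'data unit' chaining concrete, here is a rough Python sketch of how such a type-length-value chain could be walked. The 16-bit big-endian layout is an assumption for illustration, not Furuno's actual wire format:

```python
import struct

def parse_data_units(payload: bytes) -> list:
    # Walk the chain of data units; each unit here is [type:u16][length:u16][value].
    units = []
    offset = 0
    while offset + 4 <= len(payload):
        unit_type, length = struct.unpack_from(">HH", payload, offset)
        value = payload[offset + 4: offset + 4 + length]
        units.append((unit_type, value))
        offset += 4 + length
    return units
```

A real dissector would also validate the declared lengths against the packet size, which is exactly where buffer overflow bugs tend to hide in implementations like this one.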
'moduleserv' implements several control messages that drive the firmware upgrade process. Let's analyze how it handles a 'SOFTWARE_BACKUP_START' request:
An attacker-controlled string is used to build a command that will be executed without being properly sanitized. Therefore, this vulnerability allows remote unauthenticated attackers to execute arbitrary commands with root privileges.
Figure: ‘Moduleserv’ v2.54 packet processing
Figure: ‘Moduleserv’ v2.54 unsanitized system call
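The vulnerable pattern is easy to illustrate. The sketch below is not Furuno's code; it is a generic Python example, with hypothetical paths and field names, of the difference between pasting an attacker-controlled field into a shell command and validating it first:

```python
def build_backup_command_unsafe(module_name: str) -> str:
    # Vulnerable pattern: the attacker-controlled string is pasted into the
    # command line verbatim, so a value like "fw.bin; reboot" injects a
    # second command when the string reaches a shell.
    return "cp /usr/local/%s /mnt/backup/" % module_name

def build_backup_command_safe(module_name: str) -> list:
    # Safer pattern: whitelist the allowed characters, then pass argv as a
    # list so no shell ever interprets metacharacters.
    if not all(c.isalnum() or c in "._-" for c in module_name):
        raise ValueError("rejected module name: %r" % module_name)
    return ["cp", "/usr/local/" + module_name, "/mnt/backup/"]
```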
At this point, attackers could modify arbitrary data stored on the DCU in order to, for example, delete certain conversations from the bridge, delete radar images, or alter speed or position readings. Malicious actors could also use the VDR to spy on a vessel’s crew as VDRs are directly connected to microphones located, at a minimum, in the bridge.
However, compromising the DCU is not enough to cover an attacker’s tracks, as it only contains a backup HDD, which is not designed to survive extreme conditions. The key device in this anti-forensics scenario would be the DRU. The privileged position gained by compromising the DCU would allow attackers to modify/delete data in the DRU too, as this unit is directly connected through an IEEE1394 interface. The image below shows the structure of the DRU.
Figure: Internal structure of the DRU
Before IMO resolution MSC.233(90), VDRs did not have to comply with security standards designed to prevent data tampering. Given that we have demonstrated these devices can be successfully attacked, any data collected from them should be carefully evaluated and verified for signs of potential tampering.
IOActive, following our responsible disclosure policy, notified the CERT/CC about this vulnerability in October 2014. The CERT/CC, working alongside the JPCERT/CC, was in contact with Furuno and was able to reproduce and verify the vulnerability. Furuno committed to providing a patch for its customers "sometime in the year of 2015." IOActive does not have further details on whether a patch has been made available.
Robotics Competition Inspires Girls’ Interest in Science and Technology
On April 16-18, six Girl Scout teams from around the country will travel to the Georgia Dome to participate in the 2009 FIRST (For Inspiration and Recognition of Science and Technology) Robotics World Championship. The girls will compete against more than 10,000 middle school and high school students in a robotics contest that teaches young people to address engineering and design problems in a creative and collaborative way.
Last year, Girl Scouts of the USA (GSUSA) announced a partnership with FIRST as part of the organization’s commitment to inspiring more girls and young women to pursue STEM (science, technology, engineering, and math) careers. The partnership was designed to foster opportunities for girls to explore STEM by providing hands-on experience in designing, building, and programming robots while applying the concept of 'gracious professionalism' during competition. The partnership is made possible through support from the Motorola Foundation.
“In an ever-changing economy, there is a growing demand for critical thinkers,” remarked GSUSA Chief of Staff Jaclyn Libowitz. “Through programs like FIRST and our other STEM initiatives, we’re showing girls that not only can they be successful in math and science, but they can also be leaders in those fields.”
In addition to the FIRST partnership, the Girl Scouts help girls navigate technology through other opportunities like LMK (text speak for “Let Me Know”), an online safety campaign created in partnership with Microsoft Windows.
“We’re excited to be able to continue our partnership with the Girl Scouts,” said Paul Gudonis, FIRST President. “Through their innovation, teamwork and leadership, the Girl Scout teams that have advanced to the championship are showing other young people that science can be rewarding and fun.”
A security researcher has developed a tool that can automatically detect sensitive access keys that have been hard-coded inside software projects.
The Truffle Hog tool was created by U.S.-based researcher Dylan Ayrey and is written in Python. It searches for hard-coded access keys by scanning deep inside git repositories for strings that are 20 or more characters long and have high entropy. A high Shannon entropy, named after American mathematician Claude E. Shannon, suggests a level of randomness that makes a string a candidate for a cryptographic secret, like an access token.
Hard-coding access tokens for various services in software projects is considered a security risk because those tokens can be extracted without much effort by hackers. Unfortunately this practice is very common.
In 2014 a researcher found almost 10,000 access keys for Amazon Web Services and Elastic Compute Cloud left by developers inside publicly accessible code on GitHub. Amazon has since started scanning GitHub for such keys itself and revoking them.
Last year researchers from Detectify found 1,500 Slack tokens hard-coded by developers into GitHub projects, many of them providing access to chats, files, private messages, and other sensitive data shared inside Slack teams.
In 2015, a study by researchers from Technical University and the Fraunhofer Institute for Secure Information Technology in Darmstadt, Germany, uncovered over 1,000 access credentials for Backend-as-a-Service (BaaS) frameworks stored inside Android and iOS applications. Those credentials unlocked access to more than 18.5 million records containing 56 million data items stored on BaaS providers like Facebook-owned Parse, CloudMine or Amazon Web Services.
Truffle Hog digs deep into a project's commit history and branches. It evaluates the Shannon entropy over both the base64 and hexadecimal character sets for every blob of text greater than 20 characters, Ayrey said in the project's description.
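The entropy heuristic is simple to reproduce. Below is a rough Python sketch of the same idea; the exact threshold values are illustrative rather than guaranteed to match Truffle Hog's:

```python
import math

BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="
HEX_CHARS = "1234567890abcdefABCDEF"

def shannon_entropy(data: str, charset: str) -> float:
    # H = -sum(p * log2(p)) over the characters of `charset` found in `data`.
    if not data:
        return 0.0
    entropy = 0.0
    for ch in charset:
        p = data.count(ch) / len(data)
        if p > 0:
            entropy -= p * math.log2(p)
    return entropy

def looks_like_secret(token: str) -> bool:
    # Illustrative thresholds: long, high-entropy base64 or hex strings
    # are flagged as candidate secrets.
    if len(token) < 20:
        return False
    return (shannon_entropy(token, BASE64_CHARS) > 4.5
            or shannon_entropy(token, HEX_CHARS) > 3.0)
```

In a real scan, a function like this would run over every string extracted from every commit diff in the repository's history.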
The tool is available on GitHub and requires the GitPython library to run. Companies and independent developers can use it to scan their own software projects before hackers do so. | <urn:uuid:f2371684-75e6-4f6d-9f51-58b32d4fe4af> | CC-MAIN-2017-09 | http://www.csoonline.com/article/3155421/security/this-tool-can-help-weed-out-hard-coded-keys-from-software-projects.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00535-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.910616 | 424 | 2.515625 | 3 |
Many technologies originally intended for the enterprise end up trickling down into the consumer market at some point. Some of these technologies (ethernet or virtualization, for instance) are more practical than others; but if businesses find a use for a specific piece of technology, then chances are good that consumers can benefit from it as well. Such is the case with iSCSI.
iSCSI stands for Internet Small Computer System Interface. SCSI (sans i) has long served to connect a variety of peripherals to computer systems, but most commonly it appears in storage devices, such as hard drives or tape-backup drives. iSCSI builds upon the base technology by allowing users to connect to a remote storage volume over a network, as if said storage volume were a locally attached disk. Simply put, iSCSI transmits SCSI commands over IP (Internet Protocol) networks. iSCSI is like a virtual SATA (or SCSI) cable, in that it uses a network connection to link a system and a storage volume.
Judging from that description, you may be wondering how iSCSI differs from any other network share with a mapped drive letter. On many levels, the end results are similar. With iSCSI, though, the attached volume appears to the operating system as a locally attached, block storage device that you can format with the file system of your choice. In addition, fewer layers of abstraction separate an iSCSI volume and your PC, which can result in increased performance.
Ready to get your hands dirty with some hardware? If you wish to use iSCSI, there are two main requirements: a network-attached storage device or server with a volume that can be configured as an iSCSI target, and an iSCSI initiator, which allows a system to connect to the target.
If you own a NAS drive attached to a Windows PC (or if you have managed to make your own NAS), you probably have everything you need; virtually all NAS (network-attached storage) servers offer the ability to configure iSCSI targets, and Microsoft has included an iSCSI initiator tool with every version of Windows since Vista. You can download and install Microsoft's iSCSI initiator on all previous versions of Windows from 2000 on up, too.
To show you how to use iSCSI, we're using a two-drive Thecus N2200XXX NAS server, which runs a custom version of Linux with iSCSI support, and a desktop system running Windows 7 Ultimate. Any system running Windows will do when paired with a NAS that supports iSCSI (such as the excellent Iomega StorCenter PX6-300d).
Pros and Cons
I've already touched on some of the benefits of using iSCSI. As mentioned above, an iSCSI network target appears to a system as a local drive; therefore, not only can you format the iSCSI target with the host OS's file system, but you can also run applications that require local storage from the iSCSI volume instead. This flexibility is great for small businesses because many programs cannot run over shared networks, even if you're using mapped drive letters; iSCSI works around that issue.
For some workloads, iSCSI may also offer better performance. Although iSCSI improves PC performance in the enterprise by allowing large storage arrays to connect to client systems without the need for custom hardware or cabling (which can result in a huge cost savings), I'm going to focus on average consumers and desktop systems here. To prove that iSCSI can enhance your PC's performance, we ran some benchmarks on a testing unit; I'll show you the results on the next page.
Note, however, that using iSCSI has some drawbacks. While setup is not terribly difficult, configuring an iSCSI target and initiator is more involved than simply browsing to a shared network resource. Also, only one initiator should be connected to the iSCSI target at a time, to prevent possible data loss or corruption. In addition, assuming that you use a fast server and drives, performance may be limited by your network connection speed. A gigabit network connection (or better) is the optimal choice; with slower network connections, the potential benefits of iSCSI may be nullified.
Following are the steps necessary to set up a Thecus N2200XXX NAS server for use with iSCSI. The steps should be similar for other devices and servers as well. To see how everything works, click on each screenshot for a larger version.
Step 1: Log in to the NAS server's configuration menu, configure the RAID mode, and reserve some storage space for the eventual iSCSI volume. We used RAID 1 for redundancy with two 2TB drives, and split our setup right down the middle--dedicating half of the usable capacity to an EXT4 data share while leaving the other half unused. We would later configure the unused space for iSCSI purposes.
Step 2: After you allocate space to the RAID, you must format it before continuing. When the formatting process is complete (depending on your drive setup, it could take hours), you can then configure the unused space as an iSCSI target. Note that if you reserved all of the available storage space for iSCSI, you will have no need to format the array at this point.
Step 3: Next, we configured the iSCSI target. On our Thecus NAS, we first had to click the Space Allocation link under the Storage menu in the left pane. Then we clicked the Add button under the 'iSCSI target' tab; a new window popped up, in which we had to set the desired size of the iSCSI target, enable it, and give it a name. At this point, you can also enable CHAP (Challenge Handshake Authentication Protocol) authentication if you wish to add a layer of security, but we chose not to. Another note: If you decide not to dedicate all of the available space to a single iSCSI target, you can assign individual LUN (Logical Unit Number) identifiers to multiple targets should you want to connect multiple systems to a single NAS device or server, and give each client system its own iSCSI target.
Hit the Target
With the iSCSI target created, you must now connect to it through the iSCSI initiator on the client Windows PC. To do so, click Start, type iSCSI into the Search/Run field, and press Enter (or go to Start > Control Panel > System and Security > Administrative Tools > iSCSI Initiator). If you see a message indicating that the iSCSI service is not running, go ahead and allow it, and the iSCSI initiator will open.
Select the Discovery tab, and then click the Discover Portal button. In the window that opens, enter the IP address of your NAS device or server hosting the iSCSI target (ours was 192.168.1.100) in the necessary field. Leave the port setting alone, assuming that you didn't specify a custom iSCSI port earlier; by default, iSCSI will use port 3260. Note that if you enabled CHAP authentication earlier, you should click the Advanced button here and enter the CHAP login credentials in the necessary fields. Otherwise, just click OK, and the IP address of your NAS or server should appear in the list of Target portals.
If the target is not found and listed, confirm that you entered the IP address correctly and that the necessary port is open in any firewall application you may be running.
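If you want to rule out network problems before digging into the initiator settings, a quick connectivity test against the default iSCSI port is enough. This small Python sketch simply checks whether the portal accepts a TCP connection on port 3260; the host address is an example:

```python
import socket

def iscsi_portal_reachable(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    # Attempt a plain TCP connection to the iSCSI portal; success means the
    # port is open and not blocked by a firewall along the way.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: iscsi_portal_reachable("192.168.1.100")
```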
Once the server is in the list of Target portals, click the Targets tab at the top. The iSCSI target you created earlier should show up in the groups of discovered targets in the middle of the window. Click the target to highlight it, and then click the Connect button. In the Connect To Target dialog box that opens, check Add this connection to the list of Favorite Targets... and click OK. Then click OK in the iSCSI Initiator Properties window to close it.
Now that the client system is connected to the iSCSI target, you must format the target. To do so, follow the same procedure that you would for any local drive. Click the Start button, and then right-click Computer and select Manage from the menu. In the Computer Management utility that opens, click Disk Management in the Storage subsection in the left pane. You should immediately see an Initialize Disk dialog box. Ensure that the disk is checked in the 'Select disks' field, and then choose your preferred partition type (we used MBR) and click OK. Follow the on-screen prompts to specify the volume size, assign a drive letter, and choose a file system and volume label. Click Finish. Once the formatting is complete, a new drive letter should be available and ready to use. You can now transfer files and run programs from your NAS drive (no matter where it may be) as though it were just another drive in your PC.
To quantify the performance benefits of using a remote NAS drive connected via iSCSI, we ran a couple of disk benchmarks on our setup. Since we had dedicated half of the available storage space on our NAS to the iSCSI target and the other half to an EXT4 network share, we were able to have the iSCSI initiator connected and a drive letter mapped to the NAS to test speeds when accessing the NAS via iSCSI versus a standard mapped network drive. Here are our results.
As you can see above, the ATTO Disk Benchmark didn't show much of a performance difference between the mapped network drive and iSCSI, although the mapped drive appeared to offer slightly more bandwidth overall. However, this is a relatively light-duty benchmark that tests only sequential transfers.
The CrystalDiskMark benchmark tests both sequential and random transfers using a couple of different file sizes. In this benchmark, the iSCSI target performed significantly better overall. Write speeds were similar between iSCSI and the standard mapped network drive, but read speeds were roughly 30 to 40% better with iSCSI. As these results show, the ability to access or format your NAS as a local drive and run programs from it isn't the only benefit you can derive from iSCSI--the technology also allows your system to read data from the drive faster than it could otherwise. If you work with NAS drives at home or at the office, iSCSI offers an excellent (and free) way to boost performance.
This story, "How to speed up your NAS with iSCSI" was originally published by PCWorld. | <urn:uuid:0d68c0be-d659-4c65-aa51-db004860cb94> | CC-MAIN-2017-09 | http://www.itworld.com/article/2726932/storage/how-to-speed-up-your-nas-with-iscsi.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00003-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.917141 | 2,195 | 3.21875 | 3 |
AutoIt, a scripting language for automating Windows interface interactions, is increasingly being used by malware developers thanks to its flexibility and low learning curve, according to security researchers from Trend Micro and Bitdefender.
"Recently, we have seen an uptick in the amount of nefarious AutoIt tool code being uploaded to Pastebin," Kyle Wilhoit, a threat researcher at antivirus vendor Trend Micro, said Monday in a blog post. "One commonly seen tool, for instance, is a keylogger. Grabbing this code, anyone with bad intentions can quickly compile and run it in a matter of seconds."
[ MORE: The year's worst data breaches ]
"In addition to tools being found on sites like Pastebin and Pastie, we are also seeing a tremendous increase in the amount of malware utilizing AutoIt as a scripting language," Wilhoit said.
The use of AutoIt in malware development has steadily increased since 2008, Bogdan Botezatu, a senior e-threat analyst at antivirus vendor Bitdefender said Tuesday via email. The number of malware samples coded in AutoIt has recently peaked at more than 20,000 per month, he said.
"In its early days, AutoIt malware was mostly used for advertising fraud or to create self-propagation mechanisms for IM [instant messaging] worms," Botezatu said. "Nowadays, AutoIt malware ranges from ransomware to remote access applications."
One particularly sophisticated piece of AutoIt-based malware discovered recently was a version of the DarkComet RAT (remote access Trojan program), Wilhoit said. This malware opens a backdoor on the victim's machine, communicates with a remote command and control server and modifies Windows firewall policies, he said.
The DarkComet RAT has been used in targeted, APT-style, attacks in the past, including by the Syrian government to spy on political activists in the country. What's interesting about the variant found by Trend Micro is that it's written in AutoIt and has a very low antivirus detection rate.
The use of scripting languages to develop sophisticated malware is not a widespread practice, because most of these languages require an interpreter to be installed on the machine or produce very large stand-alone executable files, Botezatu said.
However, there have been exceptions. For example, the Flame cyberespionage malware used the LUA scripting language to automate some tasks without being detected by antivirus products, Botezatu said.
AutoIt is extremely intuitive and easy to use, produces compiled binaries that run out of the box on modern Windows versions and is well documented, the Bitdefender researcher said. Also, there is already a lot of malicious AutoIt code available on the Web for reuse, he said.
"Most importantly, malware created in AutoIt is extremely flexible and can be easily obfuscated, which means that a single breed of malware written in AutoIt can be repackaged and re-crafted in a number of ways to prevent detection and extend its shelf life," Botezatu said.
As scripting languages like AutoIt continue to gain popularity, more malware developers are expected to migrate toward them, Wilhoit said. "The ease of use and learning, as well as the ability to post code easily to popular dropsites make this a great opportunity for actors with nefarious intentions to propagate their tools and malware." | <urn:uuid:b490efe7-4440-4ed6-852b-69ff8e986a3e> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2166047/byod/autoit-scripting-increasingly-used-by-malware-developers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00055-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945445 | 693 | 2.515625 | 3 |
As part of Green House Data’s recent acquisition of FiberCloud, the company gained three data centers in the state of Washington, each connected via redundant fiber.
These network links are further improved through Multiple Protocol Label Switching (MPLS) network technology, which increases data center Quality of Service by allowing administrators better control over traffic shaping and faster receipt of data packets at endpoints.
MPLS is a network protocol that increases speeds through traffic shaping. It forwards the majority of packets at Layer 2, the switch level; normally, data would have to be passed up to Layer 3, the routing level. (Networks are often described with a seven-layer OSI model, from the physical layer, Layer 1, which carries bits, up to the application layer, Layer 7, which carries application data.)
The ingress router, where the data enters the network, labels the packet header (called the label stack), and this label is stripped at the egress router when it exits the network.
MPLS is sometimes referred to as a “Layer 2.5” protocol because its place in the stack is somewhat ambiguous, sitting between the strictly defined data link layer (Layer 2) and network layer (Layer 3).
By adding a shorter path label instead of having the router read full length network addresses, every router on the network path doesn’t have to lookup the address in a routing table. It also means packets can transfer on any network regardless of network protocol, reducing dependence on certain link modes.
MPLS can even be stacked, so the top level label is used to deliver the packet to a destination, where that label is stripped and a second label is then used for the next destination, and so forth.
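The push, swap, and pop behavior described above can be modeled in a few lines. The following Python sketch is a toy model of an MPLS label stack, with hypothetical label values and swap tables:

```python
def ingress(packet: dict, label: int) -> dict:
    # Push: the ingress router prepends a label to the (possibly empty) stack.
    packet["labels"] = [label] + packet.get("labels", [])
    return packet

def transit(packet: dict, swap_table: dict) -> dict:
    # Swap: each transit router replaces only the top label -- it never has
    # to parse the Layer 3 header or look up a full network address.
    packet["labels"][0] = swap_table[packet["labels"][0]]
    return packet

def egress(packet: dict) -> dict:
    # Pop: the egress router strips the top label; if a second label remains
    # (stacked paths), it steers the packet toward the next destination.
    packet["labels"].pop(0)
    return packet
```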
Different label-switched paths can be used to shape network traffic, so administrators can control the flow of data on the network via MPLS. Pre-defined paths can be set for latency thresholds, jitter, packet loss, and downtime, which helps meet agreed-upon Service Level Agreements.
The three primary advantages of MPLS in a data center service provider environment are traffic engineering (controlling how traffic is routed through the network, managing capacity, and prioritizing some services over others); carrying both data transport and IP routing on the same infrastructure; and improved network resiliency.
A NASA spacecraft is sending back evidence that a deep crater on Mars once held a groundwater-fed lake.
The information is further evidence that the Red Planet has a history of water. Last fall, NASA scientists discovered evidence indicating Mars once sustained a vigorous, thousand-year water flow.
The water data fits perfectly with NASA's mission to discover whether Mars can, or has ever been able to, support life. Since water is one of the key ingredients to supporting life as we know it, scientists are heartened by the information and looking for other critical elements, such as carbon-containing chemicals that can be a source for organic life.
"This new report and others are continuing to reveal a more complex Mars than previously appreciated, with at least some areas more likely to reveal signs of ancient life than others," said Rich Zurek, a project scientist with NASA's Mars Reconnaissance Orbiter. The orbiter sent back data about the floor of what's being called the McLaughlin Crater.
The crater, which is 57 miles wide and 1.4 miles deep, has a bottom comprised of layered, flat rocks that contain carbonate and clay minerals that form in the presence of water.
NASA scientists said they believe the carbonates and clay formed in a groundwater-fed lake that could have been a habitat for life.
"Taken together, the observations in McLaughlin Crater provide the best evidence for carbonate forming within a lake environment, instead of being washed into a crater from outside," said Joseph Michalski, a research scientist with the Planetary Science Institute in Tucson, Ariz.
NASA has been highly focused on Mars for years.
Equipped with 10 scientific instruments, including chemistry instruments, environmental sensors and radiation monitors, NASA's Curiosity rover has the most advanced payload of scientific gear ever used on the surface of Mars. The payload is more than 10 times larger than those of earlier Mars rovers.
Now NASA is preparing to send yet another rover to Mars, with launch set for 2020.
Last summer, the space agency also announced that it plans to explore the interior of Mars to discover why that planet developed so differently from Earth. That mission, dubbed Insight, is designed to discover whether the core of Mars is solid or liquid like Earth's, and why Mars' crust is not divided into tectonic plates that drift as they do on Earth.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is email@example.com. | <urn:uuid:b58c4a3a-b0dd-43f8-ace8-047e198d4a45> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2494470/emerging-technology/nasa-finds-evidence-of-ancient-crater-lake-on-mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00403-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944582 | 541 | 3.84375 | 4 |
XFS: It's worth the wait
In 1994, Silicon Graphics Inc. (SGI), of Mountain View, Calif., released a new journaled file system on IRIX, the company's System V-based version of UNIX. This advanced file system, called XFS, replaced SGI's old EFS (Extent File System), whose design was similar to the Berkeley Fast File System. Coordinating with many other kernel developers, SGI is currently working to tightly integrate the XFS file system with the Linux operating system so that we can take advantage of the many benefits of XFS over the current ext2 file system. This article discusses XFS and its technical specifications.
Origin of XFS
SGI designed XFS with a few very important features in mind, and for very specific reasons. In 1990, SGI realized that it would need to create something to replace EFS; EFS could not handle the demands of new and forthcoming applications. The issues facing any file system at that time were demands for increased disk capacity and bandwidth, and for the parallelism required by new applications such as film, video, and large databases. Because EFS couldn't hope to handle these needs efficiently, SGI created XFS to support new applications in a few key areas: fast crash recovery, large file systems, and large directories and files.
In 1999, SGI began to turn an eye to Linux as a viable and attractive operating platform to support. Due to the nature of Linux, and because SGI knew it had something to offer that would provide Linux with the same file-system capabilities as those found in IRIX, SGI released Open XFS to the Linux community.
Overview of XFS features
XFS provides some basic and powerful features that meet the requirements for any large file system, file, or directory. Let's take a look at some of these features:
XFS uses B+ trees extensively in place of the traditional linear file system structure. B+ trees use a highly efficient indexing method to index directory entries, manage file extents, locate free space, and keep track of the locations of file index information. As a result, reading file systems and retrieving information from them happens quickly--without using large amounts of system resources.
Currently, the XFS team is developing enhancements to the Linux page cache so XFS can be tightly integrated with the Linux kernel. This work is being done so XFS relies solely on the page cache to store both file data and file system metadata. This work can also be used to enhance other file systems to improve overall system performance, because it is being developed at a kernel level. These features will most likely be unavailable until Linux 2.5, except as a part of XFS itself.
XFS also dynamically allocates disk blocks to inodes. If an application uses a small number of files that are very large, very little disk data is used to store the actual files--and the remainder of the disk is freed for more data. If an application uses many small files, more disk space is made available for directories and files. This process is handled dynamically, with no need for user intervention or configuration; you can create your initial file system without specifying block sizes according to what type of application will be using it. For example, you no longer need to create a file system with a smaller block size for efficient use by a mail server. XFS handles all of this internally with an advanced space management technique that utilizes contiguity, parallelism, and fast logging.
Many powerful support utilities come with XFS and enhance it remarkably. These include the following:
- A very fast mkfs utility to make the file system
- Advanced dump and restore utilities for backups
- xfs_db for debugging
- xfs_check for checking the file system
- xfs_repair for file system repairs
- xfs_fsr for defragmenting XFS file systems
- xfs_bmap, which can be used to interpret the metadata layouts for the file system
- grow_fs, which will enlarge XFS file systems online
XFS also provides file system journaling. This means that XFS uses database recovery techniques to recover a consistent file system state after a system crash. Using journaling, XFS is able to accomplish this recovery in under a second, regardless of the file system size. Traditional linear file systems without journaling, however, must run the fsck command over the entire file system to check it after a system crash; this process is rapid on smaller file systems, but can take a lot of time (in some cases measured in hours) on larger file systems. XFS is able to accomplish this fast recovery by logging all file transactions with information on free lists, inodes, directories, and so on. After a crash, the logs are analyzed, and XFS can quickly determine which transactions must be done in order to synchronize the file system to the state it was in prior to the crash.
|XFS Technical Specifications
The following list summarizes most of the features that XFS provides. Because the Linux implementation of XFS is still in the development stages, the features listed may or may not be applicable to the Open XFS for Linux specification. These features are available to XFS for IRIX, and they give a reasonable idea of what we can expect from a Linux implementation of XFS:
File system scalability is the ability of the file system to provide support for very large file systems, large files, large directories, and large numbers of files while still providing good I/O performance. The scalability of a file system depends somewhat on how it stores information on files.
To illustrate this point, let us compare XFS (a 64-bit file system) to any other 32-bit file system. Because XFS uses 64 bits to store inode numbers and addresses for each disk block, a single file can theoretically be as large as 9 million terabytes. A 32-bit file system, however, cannot usefully exceed file sizes of 4GB. I don't honestly know anyone who needs a file to be 9 million TB (or even 4GB!), but by providing such a high level of scalability, XFS ensures that it will not become an obsolete or unusable file system for many years to come. For individuals in high-level science applications (for example, NASA), or those in the video or audio industries where file sizes can reach ridiculous sizes, XFS is necessary to make their work easier and plausible.
Large directories are also an issue with traditional linear file systems. Applications such as Sendmail or news servers often result in spool directories with thousands of files. Looking up a filename in such a directory can take a long time, because typically the directory must be read from the beginning until the desired file is found. Because XFS uses a B+ tree structure, it makes directory searching extremely fast. Filenames in the directory are converted to a four-byte hash value and are used to index the B+ tree. Using this method, all directory functions (searching, creating, and removing) are very efficient and fast.
Using the same idea, XFS supports large numbers of files efficiently because inodes are allocated dynamically and multiple file operations are performed in parallel. The only limitation for XFS in regards to the number of files in a file system is the space available to hold them. Because XFS dynamically allocates inodes, free space usage is extremely efficient, regardless of the file size. With traditional file systems--in which the number of inodes is specified during file system creation--you are limited by that initial number of inodes. You can increase or decrease the inode size and number during the file-system creation, but then you end up locking the system into a specific state of usability. If you use a large number of inodes up front, you consume a lot of disk space that may never be used. But if you use a smaller number of inodes, any small files stored on the file system will use the full inode block size and waste space that could have been saved by using a smaller inode size (which results in more inodes).
Why choose XFS?
As we've seen, XFS is a flexible, powerful, and fast file system. Current development of file systems for Linux include a number of forthcoming journaling file systems. Available right now is the ReiserFS journaling file system, and coming soon is ext3, which is a backward-compatible journaling file system based on ext2. IBM also released an initial release of its Enterprise JFS, another journaled file system written initially for AIX.
So, in light of these forthcoming alternatives, why should you be concerned with XFS? If ReiserFS is currently available and these others are coming out, why should you choose XFS over any of them?
The main factor is maturity. ReiserFS and ext3 are still in-development immature file systems. XFS is mature--it's been running on IRIX machines since 1994. SGI developed it six years ago to be a robust, long-standing, viable alternative to linear file systems. In short, SGI knows how to make a good file system.
Yes, we may have to wait another few months before XFS is a realistic alternative to ReiserFS, which is currently available; but I think the wait will be worth it. I've illustrated the many benefits of XFS over traditional file systems. Because it has commercial backing and--perhaps more important--because commercial dollars are invested in the project, XFS for Linux will quickly attain the same level of reliability it has had on IRIX for years. To get more information on XFS or to contribute to the project, visit the project Web site at http://oss.sgi.com/projects/xfs/. | <urn:uuid:2297ab70-3f3d-4b20-a80e-a84c605f5319> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netos/article.php/623661/XFS-Its-worth-the-wait.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00523-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940979 | 2,035 | 2.78125 | 3 |
Trojan:Win32/Peals.B!plock is a threat identified by Microsoft Security Software. This is a typical malware that targets the core system of Windows in order to complete its tasks. Trojan:Win32/Peals.B!plock was made to execute a series of commands once it gets inside the system. It will gather data like system settings, Windows version, network configuration, and so on. Collected data will be sent to remote attacker for analysis.
In general, system will get infected with Trojan:Win32/Peals.B!plock if malicious code is executed on the computer. Source of this trojan may vary due to the changing ways how it is deployed. Typically, spam email messages disguising as open letter from reputable institution are used to deceive recipients. Body of the message contains enticing phrases that tries to convince user into opening the attached file.
Malicious links from social media sites and instant messaging program are also seen as method used in distributing Trojan:Win32/Peals.B!plock. Illegally distributed software and media materials may also contain code that can lead to the infection of this malware.
In order to run itself on Windows start-up, Trojan:Win32/Peals.B!plock will make a copy of itself under system files. Then, registry entry is created to call the file on each Windows boot-up. Apart from that, this malware will also drop non-malicious files on various folders of the compromised PC.
Trojan:Win32/Peals.B!plock occasionally connects to a remote host to execute tasks like the following:
- Notify attacker on the new infection
- Sends gathered data from the infected computer
- Download and execute additional files including an updated version of the trojan
- Accept command from a remote attacker
There is not much obvious symptom from this malware. Trojan:Win32/Peals.B!plock operates silently in the background. However, Microsoft Security Software may alert you on the presence of this trojan.
How can you remove Trojan:Win32/Peals.B!plock?
To totally remove Trojan:Win32/Peals.B!plock from the computer and get rid of relevant virus and trojan, please execute the procedures as stated on this page. Make sure that you have completely scan the system with suggested malware removal tools and virus scanners.
Windows XP, Windows Vista, and Windows 7 Instructions:
1. Open Microsoft Security Essentials by going to Windows Start > All Programs. If the tool is not yet installed on the computer, please download Microsoft Security Essentials from the link below. Save the file on your hard drive.
MSE Download Link (this will open on a new window)
Complete installation guide and usage are also provided on the same link. It is essential in removing Trojan:Win32/Peals.B!plock effectively. If Microsoft Security Essentials is already installed on the PC, please proceed with the steps below.
2. On Microsoft Security Essentials Home screen, please choose Full under Scan Options.
3. Click on Scan Now button to start detecting Trojan:Win32/Peals.B!plock items, viruses, and malware on the PC. Scan may take a while, please be patient and wait for the process to end.
Windows 8 Instructions:
Windows Defender is a free tool that was built help you remove Trojan:Win32/Peals.B!plock, viruses, and other malicious items from Windows 8 system. Follow these procedures to scan your computer with Windows Defender:
1. Tap or click the Search charm, search for defender, and then open Windows Defender.
If Windows Defender is not yet installed on the computer, please proceed to download page using the link below. It also contains detailed instruction to install and use the program effectively. Proper usage is required to totally remove Trojan:Win32/Peals.B!plock
Windows Defender Download Link (this will open on a new window)
2. On the Home tab, click Full under Scan Options. Click Scan now to start scanning for presence of Trojan:Win32/Peals.B!plock. The process may take a while to complete.
3. After the scan, delete/quarantine identified threats wether it is relevant to Trojan:Win32/Peals.B!plock or not. You may now restart Windows to complete the virus removal process. | <urn:uuid:85c97cda-34bc-4bcc-81a9-6e18f42fe0b0> | CC-MAIN-2017-09 | https://malwarefixes.com/threats/trojanwin32peals-bplock/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00223-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.889052 | 924 | 2.703125 | 3 |
Cloud computing naysayers have long cited security and privacy as their number one concerns. While more and more companies are adopting cloud services, many corporations and small businesses are still hesitant to embrace the cloud because of concerns about lax security and hacker attacks.
Companies like Microsoft, Google, and Amazon claim to have extremely strong security and have never reported a major security breach. But smaller companies like Dropbox and Zappos have, but the breaches were typically due to internal programming bugs. The question is, should consumers believe that their data is safe with major cloud players? Can cloud computing giants really deliver on their security promises? The answer, in most cases, is a resounding yes.
The largest U.S. data centers are almost always certified by the federal government under programs like FISMA and SAS 70 Type II certification. Cloud companies that hold these designations have implemented physical and cyber security measures.
Data center security starts with physical security. Large data centers typically employ a sizable number of armed guards, as well as technological measures such as high-resolution video monitoring, motion tracking, and analytics software, biometric and/or electronic keycard locks, and extremely strict policies on who has access to servers and other sensitive equipment. Employees are also subject to background checks and screenings as thorough as possible for non-defense organizations.
Companies employ multiple methods to ensure data security. These typically include both data/disk encryption and “data obfuscation,” a process in which even unencrypted data is made illegible to humans and standard computer programs. Obfuscated data is only rendered in clear text or images once it is relayed from the server backend to proprietary frontend interfaces, such as Gmail, Hotmail, and QuickBooks Online. Companies also go to great lengths to ensure physical data security. Deleted data is destroyed using complex wiping algorithms and then overwritten by other real data. Discarded hard drives are physically destroyed, rendering data recovery impossible.
At the network level, cloud companies deploy both human analysts and highly complex algorithms to analyze network packet traffic and look for any anomalies. Suspicious packets are automatically dropped and IP addresses blocked if necessary. Most companies also employ complex security protocols that require any service contacting data center servers to possess a uniquely assigned internal identity. If a network query cannot identify itself as a legitimate request from an internal service, then the connection is terminated. Other network security measures include complex, multi-level routing to detect and block malicious activity, and advanced firewalls.
At the operating system and physical server level, companies typically develop their own flavours of Linux or UNIX which are unknown outside the company, almost impossible to target with malware and viruses due to both software security measures and their obscurity, and constantly updated. Servers are also only accessible by authorized employees with unique identification numbers, and all activity is logged and monitored by both automated software and human supervisors.
Overall, data center security is extremely sophisticated and constantly evolving, leaving virtually all hackers in the dust and making it all but impossible for internal employees to inappropriately access customer information. No contemporary computer system can be completely secure, but most businesses’ data is far less secure on their own servers and computers than it is in a federally certified data center.
By Robert Shaw | <urn:uuid:70719485-13f3-4a4c-8ccb-e25e91f2d8b6> | CC-MAIN-2017-09 | https://cloudtweaks.com/2012/11/how-cloud-computing-companies-make-their-data-centers-hacker-proof/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00575-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948295 | 655 | 2.703125 | 3 |
Many security experts have talked about quarantining infected computers, but Microsoft has proposed a plan that each PC would be required to present a "health certificate" or else be considered too sick to connect to the Internet.Scott Charney, Microsoft’s vice president of trustworthy computing, presented his idea of "implementing a global collective defense of Internet health much like what we see in place today in the world of public health... Just as when an individual who is not vaccinated puts others’ health at risk, computers that are not protected or have been compromised with a bot put others at risk and pose a greater threat to society. In the physical world, international, national, and local health organizations identify, track and control the spread of disease which can include, where necessary, quarantining people to avoid the infection of others."
Charney gave his speech at the International Security Solutions Europe (ISSE) Conference in Berlin, Germany, and posted his "vision" on his blog. Other countries like Australia and the Netherlands are attempting similar security models; Charney uses examples like France’s Signal Spam or Japan’s Cyber Clean Center as cyber models to keep only healthy computers online.
Comparing the proposal to a global collective defense for health is not necessarily comforting. How many older computers would be digitally quarantined for false positives? Think back a year to the H1N1 hysteria in which unvaccinated persons were a threat to everyone's good health. If a computer cannot issue a "health certificate" and is cut off the Internet, wouldn't that be similar to not allowing a sick person transportation to a doctor's office? How does the sick computer get well without the tools or "medicine" available at Dr. Net?
Should ISPs like Comcast be responsible for cyber-patrolling and sending out bot-notifications to all its customers? Krebs on Security reported that the FCC may encourage ISPs to be more proactive in cleaning up bot infected computers. How does an entity go about it, by throwing scareware warnings on startup screens or simply no Net access? Does this lead to downloading software to monitor PC health? This could very well be a disaster, as it would be way too easy to abuse. An ISP could decide a computer was sick and couldn't connect to the Net if that computer uses too much bandwidth. I've seen domains be shutdown as hosts insisted they were under DDoS attacks . . . but the reality of the situation was Slashdotting or the Digg effect. That may be close, but the intent was not malicious.
Graham Cluely, of security firm Sophos, told BBC, "Microsoft doesn't have a faultless record when it comes to security. It has improved over the years, but every month they have to release a package of updates. There may be some who would say that Microsoft shouldn't be on the internet until they get their own house in order."
Whose software gets access to your data to scan your computer for good health? Who decides who gets to play doctor and peek under the sheet? Violating privacy and civil liberties by installing a possible backdoor? Microsoft Security Essentials is not a bad product, but hello? C'mon Microsoft! Harden your OS or ban Windows from the Net since that is where botnets, viruses, trojans and malware thrive.
Microsoft plans to advocate for legislation and policies to help advance the model in a way that "advances principles supporting user control and privacy." However, unless there is a giant collective NO to more privacy and freedom violations, online regulations and cyber-patrols may inevitably open users up to more surveillance by authorities.
Charney wrote, "Privacy concerns must be carefully considered in any effort to promote Internet security by focusing on device health. In that regard, examining health is not the same as examining content; communicating health is not the same as communicating identity; and consumers can be protected in privacy-centric ways that do not adversely impact freedom of expression and freedom of association."
What do you think of Microsoft's proposal that if a computer is not well enough to be issued a health certificate, then it's no Internet access for that PC? Is this the answer to clean up botnets or an invitation to Big Brother?
Like this? Check out these other posts:
- All of today's Microsoft news and blogs
- FBI Spied and Lied, Misled Justice Department on Improper Surveillance of Peace Groups
- EFF Warns of Untrustworthy SSL, Undetectable Surveillance
- Microsoft's Davis on Privacy: Your Digital Life Data is Bankable Currency
- ACLU Report: Spying on Free Speech Nearly At Cold War Level
- DHS to Launch SAR Database. In Suspicion and Surveillance We Trust?
- Facial recognition: Identifying faces in a crowd in real-time
- Microsoft's Live@edu email not encrypted on cloud servers
- Cyber-Warfare: U.S. Military Hackers and Spies Prepare to Knock the World Offline
Follow me on Twitter @PrivacyFanatic | <urn:uuid:db315954-7c91-4130-b678-7cb17608bc1b> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2227388/microsoft-subnet/microsoft-proposes-each-pc-needs-a-health-certificate-or-no-net-access-allowed.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00451-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946389 | 1,031 | 2.640625 | 3 |
Science.gov 4.0 delves deep into the Web
- By Trudy Walsh
- Feb 16, 2007
The latest version of Science.gov, the search portal that trawls the Web for scientific information in 30 federal scientific databases and more than 1,800 Web sites, features a relevancy ranking architecture that can retrieve the full text of documents.
Launched today, Version 4.0 uses DeepRank, a relevancy ranking algorithm that returns more targeted results than previous versions.
DeepRank uses information gathered from the full-text document to perform relevancy ranking. Earlier versions of Science.gov relied on MetaRank, which ranked queries based on metadata, bibliographic information such as title, author, date or abstract, and QuickRank, which relied on the document's title and short snippets of information.
DeepRank actually downloads and indexes documents, said Walter Warnick, director of the Energy Department's Office of Scientific and Technical Information. Commercial search engines such as Google crawl the Web by attempting 'to visit each Web page they can find and make an index of that page. Science.gov does federated searching,' searching pre-identified databases. 'When the hits come back, they have to be sorted,' Warnick said. 'Otherwise patrons will be overwhelmed with hundreds of thousands of hits.'
All three relevancy ranking algorithms'DeepRank, MetaRank and QuickRank'were developed by Deep Web Technologies of Santa Fe, N.M.
Science.gov is free and requires no registration. The portal is hosted by the Energy Department's Office of Scientific and Technical Information. Members of the Science.gov Alliance include the Agriculture, Commerce, Defense, Education, Energy, Health and Human Services and Interior departments, and the Environmental Protection Agency, the Government Printing Office, NASA, and the National Science Foundation. Some support is also provided by the National Archives and Records Adminstration.
Trudy Walsh is a senior writer for GCN. | <urn:uuid:4b91b8d2-cb0d-499a-bfc4-92a84652072a> | CC-MAIN-2017-09 | https://gcn.com/articles/2007/02/16/sciencegov-40-delves-deep-into-the-web.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00451-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.871661 | 403 | 2.5625 | 3 |
A Legacy of Innovation: Timeline of Motorola history since 1928
Since 1928, Motorola has been committed to innovation in communications and electronics. Our company has achieved many milestones in its 85-plus-year history. We pioneered mobile communications in the 1930s with car radios and public safety networks. We made the equipment that carried the first words from the moon in 1969. In 1983 we led the communications revolution with the first commercial handheld cellular phone and system, and introduced a handheld scanner that set the standard for the industry. Today, as a global industry leader, excellence in innovation continues to shape the future of the Motorola brand.
1928: Founding of Company
On September 25, 1928, Paul V. Galvin and his brother, Joseph, incorporated Motorola's founding company—the Galvin Manufacturing Corporation—in Chicago, Illinois, USA.
1928: Battery Eliminator
Galvin Manufacturing Corporation's first product was a 1928 battery eliminator. This power converter allowed battery-powered radios to run on household electricity. The company's first customer was Sears, Roebuck and Co., which sold battery eliminators to consumers.
1930: First Motorola Brand Car Radio
In 1930 Galvin Manufacturing Corporation introduced the Motorola radio, one of the first commercially successful car radios. Company founder Paul V. Galvin created the brand name Motorola for the car radio -- linking "motor" (for motorcar) with "ola" (which implied sound). Thus the Motorola brand meant sound in motion.
1930: First Motorola Public Safety Radio Sales
Galvin Manufacturing Corporation began selling Motorola car radios to police departments and municipalities in November 1930. Among the first customers (all in the U.S. state of Illinois) were the Village of River Forest; Village of Bellwood Police Department; City of Evanston Police; Illinois State Highway Police; and Cook County (Chicago area) Police.
1930: International Motorola Sales
On June 18, 1930, Galvin Manufacturing Corporation sold two Motorola car radios to W. Oldenburger in Mexico City. This was the first recorded sale of a Motorola-branded product outside the United States.
1936: Motorola Police Cruiser Radio Receiver
In 1936 Galvin Manufacturing Corporation introduced the Motorola Police Cruiser radio receiver, a rugged one-way car radio designed to receive police broadcasts. The radio was tuned to a single frequency specified by the customer. The company had been providing Motorola radios for public safety use since 1930.
1937: Motorola Home Entertainment Radios
Galvin Manufacturing Corporation entered the home entertainment business in 1937 with a line of Motorola phonographs and home radios.
1938: National Motorola Advertising
The first Motorola national advertising campaign was underway in the U.S. by 1938. The campaign included print media, road signs and billboards.
1939: Motorola AM Two-Way Radio Equipment
In 1939 Galvin Manufacturing Corporation introduced a complete line of low-cost, dependable Motorola AM two-way radio equipment, including the model T6920 mobile transmitter. The company aimed to make radio equipment affordable for more public safety agencies to help them improve service to their communities.
1940: Handie-Talkie SCR536 Radio
In 1940 Galvin Manufacturing Corporation engineers developed the Handie-Talkie SCR536 AM portable two-way radio. This handheld radio became a World War II icon. The Handie-Talkie and other radios Galvin Manufacturing developed for the U.S. military at this time did not carry the Motorola brand.
1940: Motorola Research and Development Program
In 1940 Galvin Manufacturing Corporation increased its research and development program when Daniel E. Noble, an engineering professor and FM two-way radio pioneer, joined the company as director of research.
1941: Motorola FM Two-Way Radio Equipment
Galvin Manufacturing Corporation introduced a commercial line of Motorola FM mobile (car) two-way radio equipment in 1941. Police cars in the City of Philadelphia, Pennsylvania, USA, were equipped with the first production of Motorola FM equipment.
1943: World's First FM Portable Two-Way Radio
In 1943 Galvin Manufacturing Corporation designed the world's first FM portable two-way radio, the SCR300 backpack model, for the U.S. Army Signal Corps. Weighing 35 pounds (15.9 kg), the "walkie-talkie" radio had a range of 10 to 20 miles (16-32 km).
1944: Motorola Two-Way Radios for Taxis
In October 1944 Galvin Manufacturing Corporation installed Motorola radios in Yellow Cab Co. taxis in Cleveland, Ohio, the first commercial FM two-way taxi communications system in the United States.
1946: Motorola Car Radiotelephone
On October 2, 1946, Motorola communications equipment carried the first calls on Illinois Bell Telephone Company's new car radiotelephone service in Chicago. Due to the small number of radio frequencies available, the service quickly reached capacity.
1947: Company Name Change
In 1947 Galvin Manufacturing Corporation became Motorola, Inc.
1947: Portable Two-Way Radios for Business
Motorola introduced portable two-way radios designed especially for the industrial market in 1947.
1947: Motorola Dispatcher Two-Way Radios
The 1947 Motorola Dispatcher line of vehicular two-way radios used new radio channels for industrial customers in the United States.
1947: Motorola Golden View Televisions
Motorola introduced a line of Golden View televisions in 1947, beginning with the VK101 Consolette model. The Golden View VT71 table model television was designed to be affordable and more than 100,000 units sold in one year.
1953: Motorola Foundation
In 1953 Motorola established the Motorola Foundation to support leading universities in the United States. The foundation later expanded to support science, technology, engineering and math (STEM) education, and critical community needs globally where the company operated. Motorola Solutions Foundation continued this mission beginning in 2011.
1955: Stylized "M" Motorola Logo
In June 1955 Motorola introduced a new brand logo, the stylized "M" insignia, or "emsignia." Two aspiring triangle peaks arching into an abstracted "M" formed the basis of the new mark. It was chosen to typify the progressive leadership-minded outlook of the electronics company. The logo was designed by Morton Goldsholl, a Chicago designer.
1955: World's First Commercial High-Power Transistor
A 1955 Motorola germanium transistor for car radios was the world's first commercial high-power transistor. It was also Motorola's first mass-produced semiconductor.
1955: Motorola Handie-Talkie Paging System
Motorola's 1955 Handie-Talkie radio paging system provided individual paging inside hospitals, factories and office buildings, reducing noise from public address systems. The system included a selector console, an FM transmitter, and individual Handie-Talkie radio paging pocket receivers.
1956: Robert W. Galvin, President, Motorola, Inc.
Robert W. Galvin, son of company founder Paul V. Galvin, became president of Motorola, Inc. in 1956. After his father's death in 1959, Bob assumed full leadership of the company.
1958: Motorola Motrac Vehicular Two-Way Radio
In 1958 Motorola introduced the Motrac radio, the world's first vehicular two-way radio with a fully transistorized power supply and receiver. Its low power use enabled users to transmit without running their vehicles' engines.
1960: Motorola Astronaut TV
The 1960 Motorola Astronaut television, a 19-inch model, was the world's first large-screen, transistorized, cordless portable television.
1962: Motorola HT200 Portable Two-Way Radio
Motorola introduced the transistorized Handie-Talkie HT200 portable two-way radio in 1962. Small and lightweight at the time, it weighed 33 ounces (935 grams) and was nicknamed the "brick" because of its shape and durability.
1966: World's Smallest Prototype Pocket Television
In 1966, Motorola developed the world's smallest portable television receiver at the time. Four penlight batteries powered the 1.125 inch (2.86 cm) black and white experimental miniature set, referred to as the "Tiny Tim TV."
1969: First Words From the Moon
A Motorola radio transponder relayed the first words from the moon to Earth in July 1969. The transponder aboard the Apollo 11 lunar module transmitted telemetry, tracking, voice communications and television signals between Earth and the moon.
1972: Motorola MODAT Vehicular Data Radio System
Motorola's 1972 MODAT mobile data radio system allowed users in vehicles to transmit and receive data from dispatch computers. Public safety officers could enter license plate information during traffic stops.
1973: World's First Portable Cellular Demonstration
On April 3, 1973, Motorola publicly demonstrated the world's first portable cellular telephone and system. The first public calls using Motorola DynaTAC (DYNamic Adaptive Total Area Coverage) technology occurred in New York City. Motorola engineers had been experimenting with radio communications in the 800 and 900 MHz bands since the 1960s.
1974: Motorola MC6800 Microprocessor
In 1974 Motorola introduced the 8-bit MC6800 microprocessor. It was used in automotive, computing and video game applications.
1975: Motorola MX300 Portable Radios
Motorola's 1975 MX300 series of portable two-way radios operated in the 900 MHz band. They included status, identification and emergency alert features that were compatible with computer-aided radio dispatch systems.
1977: First Digital Encryption Technology for Two-Way Radio Networks
Motorola's DVP Digital Voice Protection system, introduced in 1977, was the first digital encryption technology to provide two-way radio users with a very high degree of voice communications privacy. The first system was installed in Gabon, Africa, for an OPEC meeting. The FCC granted Salt Lake City, Utah, a developmental license, making it the first U.S. city to install the system for public safety use on standard channels.
1978: Motorola RDX1000 Portable Data Radio
Motorola introduced the RDX1000 handheld two-way data radio in 1978. The RDX portable data terminal system combined scanning and communications technology. Data could be captured by scanning, voice or keyboard input, and then transmitted wirelessly to a central computer. Potential applications included inventory control, freight traffic management, and industrial uses.
1983: World's First Commercial Portable Cellular Phone
The world's first commercial handheld cellular phone, the Motorola DynaTAC phone, received approval from the U.S. Federal Communications Commission on September 21, 1983. The 28-ounce (794-gram) phone became available to consumers in 1984.
1983: Motorola KDT800 Portable Two-Way Data System
In 1983 Motorola developed a radio network, later named ARDIS, that allowed IBM service technicians to use Motorola KDT800 portable two-way data radios to communicate with host computers. The radios functioned as wireless computer terminals.
1986: Six Sigma Quality Process
Motorola invented the Six Sigma quality improvement process in 1986. Six Sigma provided a common worldwide language for measuring quality and became a global standard.
1991: National Medal of Technology to Robert W. Galvin
Robert W. Galvin, a long-time Motorola leader and son of the company's founder, received the 1991 National Medal of Technology from U.S. President George Bush "for advancement of the American electronics industry through continuous technological innovation, establishing Motorola as a world-class electronics manufacturer."
1991: World's First Narrowband Digital Public Safety Radio System
Motorola's ASTRO two-way radio system, introduced in the U.S. in 1991, was the world's first narrowband digital public safety radio system. The New Hampshire, USA, State Police began extended field tests of ASTRO portables, mobiles, base stations, consoles, and a wide area system in December 1992.
1995: World's First Two-Way Pager
In 1995 Motorola introduced the world's first two-way pager, the Tango two-way personal messaging pager. It allowed users to receive text messages and e-mail, and reply with a standard response. It also could be connected to a computer to download long messages.
1997: Motorola TETRA System, Norway
In 1997 a Motorola commercial TETRA (Trans European Trunked Radio) digital radio system began operations at Oslo Airport in Gardermoen, Norway.
2000: World's First 700 MHz Public Safety Wideband High-Speed Data Field Trial
In 2000 Motorola tested the world's first 700 MHz wideband high-speed data system for public safety users, enabling advanced mission-critical solutions. Pinellas County, Florida, USA, police, fire and EMS services deployed the trial system in 2001.
2004: Motorola National Medal of Technology
Motorola was awarded the 2004 National Medal of Technology "for over 75 years of technological achievement and leadership in the development of innovative electronic solutions, which have enabled portable and mobile communications to become the standard across society." Motorola received the award, the United States' highest honor for technological innovation, in a White House ceremony in February 2006.
2005: Motorola MOTOMESH Broadband Radio Network
In 2005 Motorola's MOTOMESH wireless mobile network was one of the first multiradio mesh networks to combine 4.9 GHz licensed mobile broadband radios and unlicensed Wi-Fi radios into a single access point. Mesh networking allowed public safety users to rapidly create a network of wireless devices linked in a relay system.
2006: Motorola MOTOTRBO Professional Digital Radios
Motorola introduced MOTOTRBO professional digital radio systems in 2006. The system offered businesses integrated voice and data applications and increased system capacity.
2008: APX Multi-Band Two-Way Radios
Motorola introduced the APX family of Project 25 multi-band two-way radios in 2008. Designed with suggestions from first responders, APX radios worked in the 700/800 MHz and VHF bands, and had custom-designed microphones, integrated GPS, and text messaging.
2008: World's First LTE 700 MHz Data Demonstration
On November 3, 2008, Motorola announced it had completed the world's first over-the-air Long-Term Evolution (LTE) data session in the 700 MHz band. The test was achieved in Motorola labs and outdoors in Illinois, USA, using prototype equipment. On November 10 Motorola demonstrated the first public safety wireless broadband applications over a live 700 MHz LTE connection. A vehicle equipped with 700 MHz OFDM and being driven in San Diego, California, transmitted video and dispatch data to an IACP trade show at the city's convention center.
2008: Industry First Project 25 Interoperability Gateways
In 2008 Motorola announced the first deployment of Inter RF Subsystem Interface (ISSI) gateways between live Project 25 public safety networks. The prototype installation in Arizona, USA, demonstrated the ability to provide interoperability among existing communications systems. It was the culmination of months of multi-agency collaboration.
2010: ES400 Global Enterprise Digital Assistant
Motorola introduced the ES400 Enterprise Digital Assistant (EDA) in 2010. Designed for mobile workers, the EDA combined voice, data, scanning and GPS in a durable, light-weight device.
2010: WiNG 5 WLAN Network
Motorola announced its 802.11n WiNG 5 WLAN wireless network architecture in 2010. The new architecture featured intelligent access points, reducing the number of controllers needed. Direct routing of packet data expanded the number of users on the network without losing quality of service during peak usage times.
2011: Motorola, Inc. Separation
On January 4, 2011, Motorola, Inc. separated into two independent, publicly-traded companies: Motorola Solutions, Inc. and Motorola Mobility, Inc. Motorola Solutions (NYSE:MSI) provided mission-critical communication products and services for enterprise and government customers. Motorola Mobility (NYSE:MMI) made mobile cellular devices and cable video management equipment.
2011: Greg Brown, CEO and Chairman, Motorola Solutions, Inc.
On January 4, 2011, Greg Brown became chief executive officer of Motorola Solutions, Inc. He was elected chairman effective May 3, 2011. He previously was co-chief executive officer of Motorola, Inc.
2011: First U.S. Statewide Broadband LTE Public Safety Network
In 2011 the State of Mississippi awarded a contract to Motorola Solutions to create the United States' first statewide broadband LTE public safety network.
2012: World's First Handheld Public Safety LTE Device
In 2012 Motorola Solutions introduced the LEX700 mission critical handheld, the world's first handheld public safety LTE device. The device combined rugged hardware and powerful software with the ability to connect with public safety LTE, cellular, IP and P25 networks. | <urn:uuid:7add4650-e6cf-49af-92da-7789cce8e1d7> | CC-MAIN-2017-09 | https://www.motorolasolutions.com/en_us/about/company-overview/history/timeline.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00572-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.897853 | 3,459 | 2.78125 | 3 |
How Does a Magnetic Stripe Encoder Work?February 8, 2017
Written by Ryan McCarthy
A magnetic stripe encoder is a hardware device that can write information or data onto a plastic PVC card. Magnetic stripe encoding allows ID badges to serve as more than just simple visual identification. Also called a “swipe card,” magnetic encoded cards are used in a variety of multi-functional card programs like:
- access control
- time and attendance tracking
- school lunch programs
The term “magnetic stripe” or “magstripe” refers to the black or brown magnetic stripe on one side of the card. The stripe itself is made of magnetic particles of resin. The amount of resin particle material within the stripe determines the coercivity of the stripe. The higher the coercivity, the harder it is to encode and erase the information on the stripe.
How Does a Magstripe Encoder Work?
There are two different magnetic stripe cards to choose from: high coercivity (“HiCo”) and low coercivity (“LoCo”) cards. Both types of cards can hold the same amount of data; the main difference is the security and durability either offers.
HiCo stripes are encoded at 2750 Oersted (the unit of magnetic coercive force used to define difficulty of erasure of magnetic material) and are generally black in color. They store information on a more secure basis than low-coercivity magnetic stripes because of the higher level of magnetic energy required to encode them.
Information is harder to erase on HiCo cards; therefore, they are most frequently used in applications where cards are swiped often and require a long life (e.g., credit card applications).
LoCo stripes are encoded at 300 Oersted. Low-coercivity stripes are generally brown and store information less securely than high-coercivity magnetic stripes. LoCo magnetic stripe cards are typically used in applications like metro transit ticketing or hotel room access control.
What Information is Stored on Magnetic Stripe Cards?
If you’re not printing on either of these magnetic PVC cards but you need to add digital information to your cards, I recommend investing in the MSR206-33 Three-Track Magnetic Stripe Reader & Encoder. This full-feature device reads and writes both high- and low-coercivity magnetic stripes (300-4000 Oersted). It can encode up to three tracks of data and is backed by a three-year manufacturer warranty.
For example, the following is the amount of data that can be encoded to a magnetic stripe (per ISO 7811 format):
- Track 1: 210 bits/inch (BPI), 7 bits/character (MPC), maximum of 79 alpha-numeric characters.
- Track 2: 75 bits/inch (BPI), 5 bits/character (MPC), maximum of 40 numeric characters.
- Track 3: 210 bits/inch (BPI), 5 bits/character (MPC), maximum of 107 numeric characters.
Another benefit is the MSR206-33 magnetic encoder is that it allows you to “read” what’s encoded on the magnetic stripe of a card. It’s ideal for credit card verification and other types of card-related applications which don’t require printing on the actual card.
Additional Considerations When Buying a Magstripe Encoder
If you’re exploring the option of printing and encoding your ID cards, we have several solutions designed for multiple requirements. In addition, most of our ID card printers are upgradeable to a magnetic encoder module.
Finally, it’s very important to select the right software when you’re implementing a magnetic stripe card to your application. Before purchasing software, make sure to confirm it’s compatible with encoding magnetic stripe cards.
|If you have questions about selecting the right card, hardware and software for your magnetic encoding application, contact a knowledgeable ID Professional at (800) 321-4405 x2 or chat now. We’re here to help!| | <urn:uuid:2990e15d-89e3-49ee-9eb0-542077054eda> | CC-MAIN-2017-09 | https://www.idwholesaler.com/blog/how-does-a-magstripe-encoder-work/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00568-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.890285 | 849 | 2.578125 | 3 |
When you select a password, you might choose to store it in a password manager, write it down, or commit it to memory (see How to remember passwords for some advice). Sometimes, however, things go wrong: You find yourself without access to your password manager, you lose the paper on which you recorded your passwords, or you forget a password you thought you memorized. Or maybe someone tries to break into one of your accounts, and after a few unsuccessful attempts at entering your password, the site locks out further access until you can confirm your identity.
In all those cases, online services need a secondary way of granting you access to your account or your data when you don't have (or can't use) your password. Sometimes -- especially in lower-security situations such as access to an online publication or discussion forum -- the provider lets you click a link that results in your existing password, a new password, or password-reset instructions being sent to the email address you have on file. When those simple mechanisms are considered too insecure, the site may ask you to respond to verification questions for which you've previously provided the answers.
Unfortunately, password-reset messages and verification questions come with their own problems and risks. You can reduce your chances of being hacked -- or being unable to respond correctly to one of these questions -- by following a few simple tips.
Prevent password-reset mischief
Of all your passwords, the one for your email account may be the most valuable. That's because whoever has access to your email account will be able to read and click links in any password-reset messages you receive (such as when you click an 'I Forgot My Password' link). A hacker who guessed or stole just that one password could unlock many other accounts and do all sorts of damage. You can limit your risk here in a couple of ways.
Use a dedicated password-reset account: Consider setting up a new email account for yourself (using a free service such as Gmail) with an address that you'll never share or post publicly. Use this account only when prompted to supply an email address for the purpose of verifying or resetting your passwords. That way, even if someone breaks into your main email account, the security of your other accounts won't be compromised.
Take extra care with your email account password: Be sure to choose an especially secure password for your email account. Make sure to set your email client to communicate securely with the mail serverusing Secure Sockets Layer, or SSL, protocols for exampleso that your password never travels over the air unencrypted. In Apple's Mail, select Mail > Preferences, click Accounts, choose an email account from the list, and click Advanced. Here you'll see the option Use SSL.
Question the questions
Security questions -- such as the timeless classic 'What is your mothers maiden name?' -- are supposed to have answers that you'll never forget but that most other people won't know or be able to guess. Unfortunately, most of the questions from which you can choose aren't secure at all.
Your mother's maiden name is a matter of public record, and nearly anyone can learn it online in a few minutes. If you ever wrote a blog entry or a Facebook post about your first pet, your favorite teacher, or other common security question topics, those facts are in the public domain too. To make matters worse, some questions invite ambiguous answers, which could work against you. Where did you meet your spouse? That might be in New York or at a baseball game or at Yankee Stadium, for example. Years from now, will you remember which answer you gave?
Devise memorable lies: To address such problems, there's only one right way to answer verification questions -- lie. And don't just lie, but come up with one or more answers that follow the same rules as other passwords to prevent guessability; use either a reasonably long (but memorable) phrase or a series of random characters. So, what was the name of my first pet? Why, it was bookends-qualitative. My mothers maiden name? Her dad was Mr. E27jrdU!8. My favorite car? I loved my 1986 Toyota Recalibration Cantaloupe. It doesn't matter what answers you give, as long as you and you alone know what they are, and can supply the same ones you entered previously if asked.
I know one security expert who says he normally uses the same pseudo-random answer everywhere, although some companies (including Apple) require you to provide different answers to each of several questionsmeaning you have even more password-like data to keep track of. Of course, you can write down your answers or store them in a password manager, but then the same problems that prevent you from accessing your password could prevent you from accessing your security answers.
You might make up a little story for yourself about fictional parents, cars, pets, and the like that you can memorize and then draw on when asked for security answers on different sites. Ultimately, since you're not going to be giving truthful answers, you should go out of your way to remember which lie(s) you told.
Keep them phone friendly: Remember that you could wind up in a situation where youll have to supply these answers over the phone. If that should happen, both you and the person on the other end will have an easier time coping with a series of plain-English words than a bunch of random characters.
How to change your security questions and answers
Each service that uses security questions has its own procedure for choosing the questions and answers (and for changing them after the fact). Check the FAQ pages on the websites for your bank and other important accounts to see how to modify your responses.
Update your Apple info: To change the questions or answers for an Apple ID (which you use for iCloud, among many other purposes), go to the Apple ID page, click Manage your Apple ID, enter your username and password, and click Sign in. On the left, choose Password and Security. Answer your existing security questions, and click Continue. Then you can choose new questions and answers (remember, no two answers can be the same) and also edit your Rescue Email Address if you like. Click Save when youre done.
Update your Google info: If you have a Google account (for Gmail and other services), log in as you normally would. Click the gear icon in the upper-right corner of the window and choose Settings from the pop-up menu. Click Accounts and Import followed by Change password recovery options. Under Security question, click Edit. Choose one of the existing security questions or write your own, and fill in your answer. If you also want to change your secondary address, click the Edit link in the 'Recovery email address' section and fill in the new address. Then click Save.
This story, "When password security questions aren't secure" was originally published by Macworld. | <urn:uuid:b5529df4-c665-41ae-82d3-038d55fc12ce> | CC-MAIN-2017-09 | http://www.itworld.com/article/2716217/security/when-password-security-questions-aren-t-secure.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00492-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941814 | 1,418 | 2.828125 | 3 |
Microsoft's data center cooling a breath of fresh air
We all know that heat is the enemy of computers. In fact, if we think of computers in terms of an ecosystem, heat is probably their only naturally occurring predator. And it’s ironic that they generate it themselves, even though it’s simple physics. You bring electricity into a system to do work for you, and it gets transformed into something else without changing the total amount of energy involved — in this case, mostly heat.
Heat is a unique enemy of processors. It’s actually more akin to a poison, first slowing down its victim, then causing it lots of subtle damages and finally killing it. Data centers, which pack thousands of computers into a relatively small space, are uniquely susceptible to the risks of heat. They deal with it in various ways, and have in the past been criticized by some to be inefficient beasts of burden for it.
As agencies consolidate data centers and virtualize servers, finding new ways to keep them cool could add one more way to reduce costs and increase efficiency.
Many companies are offer examples, streamlining their data centers with new techniques and processes that IDC Government Insights says feds will need to emulate in order to increase their own effective use of computers, data centers and the cloud.
Recently, Google pulled back the curtain on how it manages the heat at some of the largest data centers in the world. The company’s techniques involve stripping almost every unnecessary component and scrap of metal away from the processors inside their data centers. The computers in the racks at Google centers are little more than pallets holding motherboards.
Even the internal walls of the data center are constructed of fabric — enough to direct air flow in the proper direction, but not enough to add to the complex problems associated with heat management. They are also cheap, and easy to reconfigure on the fly. Then Google runs a lot of cool water into the facility, sometimes having the liquid-carrying pipes within inches of the processors themselves.
That’s a pretty efficient model of doing things, but Microsoft is now going one further, literally setting their servers outside in roofless data centers. According to Data Center Knowledge, the idea behind Microsoft’s new billion-dollar roofless data center facility in Boydton, Va., came from Christian Belady, general manager of Microsoft Data Center Services. He thought that computers should be able to brave the outdoor elements, and set up a server rack in a pup tent back in 2008. It ran for eight months with 100 percent up time. That demonstrated that outdoor computer cooling and housing was theoretically possible.
Now, it’s more than just dropping a computer in a field and hoping that it doesn’t get rained or snowed on, or marked by passing animals. Because if you leave it in the elements, that will happen. And it will break.
But Microsoft has been designing smaller and smaller containers to hold its servers for years, Data Center Knowledges reports. Called IT-PACs, for pre-assembled components, the shipping-crate-like boxes can each hold hundreds of servers. Cool air from the outside is brought into the unit through vents on the side, where it passes through a wet membrane that cools the air down before being used to ultimately cool the servers. This method reportedly uses just 10 percent of the water needed to cool most data centers of the same size.
Future Microsoft data centers may be little more than concrete slabs on the ground, with the IT-PACs sitting on top. And although there is some concern that Virginia might prove too hot for this method to work — the company also has experimented with outdoor cooling on a more limited basis in Washington State, Chicago and Ireland — Microsoft seems confident that it will do just fine in The Old Dominion.
I guess we’ll see what happens when the new sparse and efficient data center meets its first brutal southern summer. But in any case, this is a great example of one possible path for agencies to follow as they strive to increase their own efficiency with data centers.
Posted by John Breeden II on Feb 08, 2013 at 9:39 AM | <urn:uuid:90094c29-4d70-4c01-b107-c302bfa33970> | CC-MAIN-2017-09 | https://gcn.com/blogs/emerging-tech/2013/02/microsoft-data-center-cooling-breath-fresh-air.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00012-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94457 | 853 | 2.859375 | 3 |
LAS VEGAS -- French company Induct demonstrated for the first time in the U.S. a driverless eight passenger robotized shuttle, design for transportation in city centers and campus settings.
The all-electric shuttle, called Navia, looks a bit like an oversized golf cart, but instead of seats, passengers lean against padded inner sides. Today, the vehicle travelled a closed course at the CES conference here, stopping at designated spots to allow riders to exit the vehicle before continuing along its route.
Unlike Google's efforts to build a driverless car, Induct chose a shuttle because it can be placed into immediate use without the danger of interacting with other major roadway traffic.
"We've tested it for the past year and a half in Europe, Asia and the U.S.," said Max Leferve, who co-founded the company with his father Pierre. "The tech in Google's car is very expensive. We used the most affordable sensors ... to create a vehicle we can sell."
Leferve said his company built the vehicle smaller so as to facilitate faster loading and offloading of passengers. He also said it's 40% to 60% less expensive than a typical shuttle bus, which can cost up to $200,000 per year to run, including the pay of a driver.
"This vehicle costs $250,000 for a four year lease," he said.
While Leferve's company built the vehicle, it won't be manufacturing the fleet. The company plans to sell the intellectual property for others to build and sell.
Leferve said the company has adopters in the U.S., but didn't reveal who they are.
The Navia uses technology called SLAM (Simultaneous localization and mapping), which builds a map within an unknown environment and can be updated at will.
"It sees where you're driving and creates a map," Leferve said.
The Navia self-driving, all-electric shuttle
The vehicle is programmed through an onboard touch screen display. When in program mode, the vehicle is taken on a route by a driver, learning it as it goes along. Stops are then preset, at buildings on a campus, for example, and riders can use the touch-screen display to designate a stop for themselves. A set of gates slide closed while the vehicle is in motion, and they open for stops.
The Navia has four laser sensors, one on each corner of the vehicle. The lasers scan up 25 times per second at distances of up to 200 yards, aligning the vehicle to its pre-set course and while remaining wary of any obstacles. If an object suddenly enters the path of the shuttle, such as a pedestrian, it will automatically stop.
The vehicle runs on a lithium-ion battery that can power the shuttle for up to seven hours.
Leferve said his company has also developed a mobile app that allows pedestrians to call the vehicle to a pre-designated stop. Induct is also working on a website to allow commuters to call for the vehicle at a designated place and time along its preset route.
The Navia currently travels at 15 miles per hour (mph), but it was been tested at up to 25 miles per hour and Leferve hopes to test the technology on a faster moving vehicle, perhaps even fast enough to travel on secondary roadways.
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian, or subscribe to Lucas's RSS feed . His email address is firstname.lastname@example.org.
Read more about location-based services in Computerworld's Location-Based Services Topic Center.
This story, "Driverless Shuttle Aimed at Campuses, Inner Cities" was originally published by Computerworld. | <urn:uuid:20f8f20e-f1a2-4b47-a49f-8e17b84e44c5> | CC-MAIN-2017-09 | http://www.cio.com/article/2379825/automotive/driverless-shuttle-aimed-at-campuses--inner-cities.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00364-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966038 | 793 | 2.53125 | 3 |
"The Hobbit," J.R.R. Tolkien's famous story of which the movie version opens Dec. 14, was first published in 1937. The world of Middle Earth was set in an indeterminate time, but looked remarkably like an idealized early 19th-century England, though well-stocked with wizards, dwarfs, elves, dragons, trolls, goblins and of course hobbits. But techwise, it was, and is, the Stone Age of Middle Dearth.
If he were alive today, Tolkien would be creating content on an iPad, crowd-sourcing secondary characters, tweeting arcane but cool references to obscure Anglo-Saxon and Nordic narratives, contributing heavily to Wikipedia, posting photos of his English garden to a Flickr account, and texting Peter Jackson.
And writing "The Hobbit 2.0" ...
In a hole in the ground there lived a hobbit. Not a nasty, dirty, wet hole, filled with the ends of worms and an oozy smell, nor yet a dry, bare, sandy hole with nothing in it to sit down on or to eat: it was a hobbit-hole, and that means super-insulated comfort. It had a perfectly round polyethylene foam core fiberglass door like a porthole, painted green, with a shiny yellow brass knob in the exact middle. The door opened on to a tube-shaped hall like a tunnel: a very comfortable tunnel without smoke, with paneled walls in sustainably sourced mango hardwood, and floors tiled and carpeted in chemical-free hemp (certified woven without use of child labor), provided with polished chairs, and lots and lots of pegs for hats and coats, for the hobbit was fond of visitors.
... This hobbit was a very well-to-do hobbit, and his name was Baggins. The Bagginses had lived in the neighbourhood of The Hill for time out of mind, and people considered them very respectable, not only because most of them were rich, but also because they never had any adventures or did anything unexpected. This is a story of how a Baggins had an adventure ...
... What is a hobbit? I suppose hobbits need some description nowadays, since they have become rare and shy of the ~~Big People~~ Average Sized People, as they call us. They are (or were) a ~~little~~ height-challenged people, about half our height, and smaller than the bearded ~~Dwarves~~ Persons of Short Stature (POSS). Hobbits ... are inclined to be fat in the stomach but still very fit; ... wear no shoes, because their feet grow natural leathery soles and thick warm brown hair like the stuff on their heads (which is curly); have long clever brown fingers, good-natured faces, and laugh deep fruity mellow laughs (especially after a Dietary Guidelines for Middle Earthers-based dinner, which they have ~~twice~~ just once a day when they can get it, with healthy snacks between meals).
Just before tea-time there came a tremendous ring on the front-door bell, and Bilbo set down his Kindle Fire eReader, on which he was perusing the non-fiction bestseller "A Dance with Dragons: A Song of Ice and Fire: Book Five," and went to the door. "It's about time you arrived," he said.
There stood a ~~dwarf~~ POSS, a blue beard tucked into a golden belt, and very bright eyes under his dark-green hood. He hung his hooded cloak on the nearest peg, and as he turned around, Bilbo held up a smartphone with a 10-megapixel camera and LED flash and snapped his picture, leaving the stunned POSS blinking. The facial recognition app quickly identified his visitor. "You're Dwalin, at my service, I'm sure," said Bilbo briskly. "Go into the parlour and help yourself to tea and cakes."
And so one by one, and two by two, the POSSes arrived, each photographed and identified, and each seated for tea and a catered meal of sustainably fished, chemical-free sushi. Finally, Thorin, the most important of the POSSes, had had enough. "Where's Gandalf, the Gray Wizard? He summoned us here. And who are you?" he demanded.
Bilbo pulled from inside his shirt a large gray smartphone and laid it on the table in front of him. "What's that?" asked Thorin suspiciously.
"That is the new Samsung Galaxy Wizard smartphone, with a 5.8-inch Super AMOLED sun-readable HD display, LTE support, complete with NFC and mobile wallet, which we'll be needing where we're going," said Bilbo. "This is the only magic we need."
"You mean it doesn't run iOS?" asked Fili, dejectedly.
Thorin's fist crashed onto the table. "Where. Is. Gandalf?" he growled.
"That's my Gmail handle," replied Bilbo.
"But Gandalf summoned us ..." Thorin began, to be interrupted by Bilbo.
"I summoned you. By tweet and text and email. Now let's get down to business. I know about the gold."
A stunned silence met this calm announcement. "You obviously don't read WikiLeaks," Bilbo said.
"You don't look like much of a burglar," Thorin snarled.
"You don't need a burglar," Bilbo said. "You need a hacker. You need someone's who plugged in but unwired, online, mobile, adaptable, not to mention cool." His clever Hobbit fingers brushed over the Wizard's high-def screen. "I've sent you the complete agenda for the meeting, a PowerPoint presentation, a link to our Google Maps route showing real-time updates on known troll locations, and TripAdvisor listings for inns, bed and breakfasts, and, if needed, campgrounds with hot showers. And of course a draft of our contract, which will require your digital signatures."
Thorin stared, even as the other POSSes pulled out smartphones and tablets and began paging through the documents and websites, nodding.
"Wait," the POSS chief said. "I've got this map ..."
Bilbo's nose wrinkled at the sight of the stained, dusty and in truth rather odorous parchment, with its faded scrawls. He waved it away impatiently. "I've already mapped out the GPS waypoints and Google Earth gives us recent-enough terrain views. It's much more accurate," the Hobbit declared. "Plus it's annotated with translations of those runes."
"What about the Great Goblin under the Misty Mountains?" asked Bombur.
"He'll be too busy to bother with us," Bilbo said, with something like a smirk.
"Busy with what?" demanded Thorin.
"He's being video-linked for a live interview with Ellen DeGeneres. His pre-camera makeup prep will take hours. We'll sneak past with no goblin the wiser," said Bilbo.
"Oh! Do you think I can get her digital signature?" asked a wide-eyed Kili.
"What about this Beor, the shape changer?" asked Thorin. "He can help us get to Mirkwood."
"Think about it: shape changing? Seriously, the guy has body image issues. He's unstable. I've filed an electronic report with Social Services. They'll make sure he gets the treatment he needs," Bilbo said. "And find adopted families for his animals."
"Mirkwood has spiders," gulped Bofur. "Really big ones."
"We'll be carrying a Uniden Portable Wireless Video Surveillance Bundle, with extra cameras, to set up around our campsite each night," said Bilbo. "Plus we'll be in constant contact, via real-time video chat link, with the experts at University of California Riverside's Spiders Site.
"But they're spiders!" said Bofur.
"They devour less desirable bugs, like flies, mosquitoes and a lot more, and they can be quite interesting to watch," said Bilbo, reading from the Wizard's screen. "Learn their names and find out as much as you can about them. Keep an online journal. If you are seriously afraid of spiders, it would be wise to attack the fear through counseling or self help methods."
"Uh," said Bofur.
"I've already contacted the Lake People," Bilbo continued. "They'll be hosting our interactive, real-time travel portal, ThereAndBackAgain.com. All of Middle Earth will be able to follow us. The lake dwellers will be getting the lion's share of the online ad revenues and sponsorships, not to mention the franchise for the new lakeside casino resort complex."
"We should be getting a share of all that," said Gloin indignantly.
"We're getting the gold," Bilbo reminded him.
"Oh, right," said Gloin.
"Speaking of gold, what about the dragon?" Thorin exclaimed, and the other POSSes looked up their handhelds.
Bilbo's fingers flew over the Wizard's screen, and he held it up. The others leaned closer. "This 3D simulation (thank heavens for quad-core processors) shows the precise location of the hidden side entrance through the Lonely Mountain to the dragon's lair," he explained.
"But what about the dragon?" Thorin growled.
"The Elf lord, Elrond of the Last Homely House, will have a MQ-1C Gray Eagle drone loitering over the main entrance to the lair," Bilbo explained. "It's armed with eight AIM-92 Urlugrist [Sindarin Elvish for "firedragon-cleaver"] missiles. We'll have a satellite link to the Elf base, and real-time video from the drone. As soon as Smaug emerges from the cave, he's toast."
"What if the lakemen and elves want the gold, too?" asked Dori.
"I've got my lawyer on speed dial. He's already prepared a motion of a stay of battle to be filed electronically," Bilbo said. "Right. It's getting late and we have an early day tomorrow. To bed."
"I'm looking forward to breakfast," said Thorin.
"There's a great organic vegetarian cafA(c) on the edge of town. We'll be stopping there," said Bilbo.
John Cox covers wireless networking and mobile computing for Network World. Twitter: @johnwcoxnww Email: email@example.com
This story, "'The Hobbit 2.0' -- How Mobile Technology Would Improve J. R. R. Tolkien's Famous Work" was originally published by Network World. | <urn:uuid:eeeb5986-e45b-4b6d-92e9-8efdf9d66772> | CC-MAIN-2017-09 | http://www.cio.com/article/2389672/wireless-networking/-the-hobbit-2-0-----how-mobile-technology-would-improve-j--r--r--tolkien-s-famou.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00364-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.965278 | 2,296 | 2.765625 | 3 |
Bad Sector Recovery
Hard drives are built so that they never return unreliable data. If a hard drive cannot guarantee 100 percent accuracy of the requested data, it simply returns an error and gives back no data at all.
This article explains how bad sector recovery actually works and why it needs to be done with great caution.
Understanding Bad Sectors
General causes of bad sector formation are physical or magnetic corruption. Physical corruption is easy to understand—it occurs when there is physical damage done to the media surface. Magnetic corruption occurs when a hard drive miswrites data to a wrong location. While the latter may seem to be less damaging, it is actually as dangerous as physical damage, as miswritten data may damage not only adjacent sectors but also servo sectors.
Regardless of the cause of damage, there are several possible outcomes:
- Address Mark field corruption
- Data corruption
- ECC field corruption
- Servo sector corruption
- Or any combination of these
What is common in all these types of corruption is that your operating system or normal data recovery tools cannot read the data from those sectors anymore.
Let’s find out exactly what happens when a tool tries to read a sector that has one of the above-mentioned problems.
Address Mark corruption
When Address Mark is corrupted, the hard drive simply cannot find the requested sector. The data might still be intact, but there is no way for the hard drive to locate it without the proper ID. Some modern hard drives do not actually use sector ID or Address Mark in the sector itself; instead, this information is encoded in the preceding servo sector.
Data corruption

To verify data integrity, a hard drive will always validate it with the error checking and correction algorithm using the ECC code written after the data field (see above diagram). When data is corrupted, the hard drive will try to recover it with the same ECC algorithm. If correction succeeds, the drive will return the sector data and will not report any error. However, if correction fails, the drive will only return an error and no data, even if the data is partially intact.
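The verify-then-correct flow can be sketched with a toy stand-in for the ECC machinery. Real drives use codes such as Reed-Solomon; here CRC-32 plays the role of the check value, and "correction" is brute-forced by flipping each bit once — purely illustrative, not how firmware actually does it:

```python
import zlib

def write_sector(data):
    """Store user data together with its check value (stands in for the ECC field)."""
    return data, zlib.crc32(data)

def read_sector(data, ecc):
    """Return data only if it verifies; try correction first, else signal an error (None)."""
    if zlib.crc32(data) == ecc:
        return data                        # clean read, no correction needed
    # Correction attempt: flip each bit once and re-check.
    buf = bytearray(data)
    for i in range(len(buf) * 8):
        buf[i // 8] ^= 1 << (i % 8)
        if zlib.crc32(buf) == ecc:
            return bytes(buf)              # single-bit error corrected
        buf[i // 8] ^= 1 << (i % 8)        # undo the flip and keep searching
    return None                            # uncorrectable: error and no data
```

Because CRC-32 detects all one-, two- and three-bit errors at sector lengths, a one-bit flip is always recovered here, while a two-bit flip always fails cleanly — mirroring the "error and no data" behavior described above.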
ECC field corruption
Although this is rare, the ECC code can also get corrupted. In this case, the drive reads perfectly good data from the sector and checks its integrity against the ECC code. The check fails due to the bad ECC code, and the drive returns an error and no data at all, because there is no way to verify data integrity.
Servo sector corruption
There are up to a few hundred servo sectors on a single track. Servo sectors contain positioning information that allows the hard drive to fine-tune the exact position of the head so that it stays precisely on track. They also contain the ID of the track itself.
Servo sectors are used for head positioning in the same way a GPS receiver uses satellites—to exactly determine the current location. When a servo sector is damaged, the hard drive can no longer ensure that the data sectors following the servo sector are the ones it is looking for and will abort any read attempt of the corresponding sectors.
How Bad Sector Recovery Works
Once again, hard drives are built to never return data that did not pass integrity checks.
However, it is possible to send a special command to the hard drive that specifically instructs it to disable error checking and correction algorithms while reading data. The command is called Read Long, and it has been part of the ATA/ATAPI standard since its first release back in 1994. It allowed reading the raw data plus the ECC field from a sector and returning them to the host PC as is, without any error checking or correction attempt. The command was dropped from the ATA/ATAPI-4 standard in 1998; however, most hard drive manufacturers kept supporting it.
Later on, when hard drives became larger in capacity and LBA48 was introduced to accommodate drives larger than 128 GiB, the command was officially revived in a SMART extension called SMART Command Transport or SCT.
Obviously, since the drive does not have to verify the integrity of data when the data is requested via the Read Long command, it would return the data even if it is inconsistent (or, in other words, the sector is “Bad”). Hence, this command quickly became standard in bad sector recovery.
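A recovery tool built on this idea is essentially a two-step read loop. The sketch below assumes hypothetical `dev.read(lba)` and `dev.read_long(lba)` helpers (the real command travels over an ATA pass-through interface); the important part is that every sector recovered through the fallback path is flagged as untrusted:

```python
def image_drive(dev, sector_count):
    """Copy a drive sector by sector; fall back to Read Long on bad sectors.

    Returns (image, suspect) where suspect lists every LBA whose data
    bypassed the drive's integrity checks and may contain flipped bits.
    """
    image, suspect = [], []
    for lba in range(sector_count):
        try:
            image.append(dev.read(lba))           # normal read: data is verified
        except IOError:
            raw = dev.read_long(lba)              # raw data + ECC bytes, unverified
            image.append(raw[:dev.sector_size])   # strip the trailing ECC field
            suspect.append(lba)                   # mark as unreliable for the operator
    return b"".join(image), suspect
```

Keeping the suspect list alongside the image is what lets a tool later mark which files contain unverified data instead of silently presenting everything as good.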
There is also another approach which is based on the fact that some hard drives leave some data in the buffer when a bad sector is encountered. However, our tests have shown that chances of getting any valid data this way are exactly zero.
Debunking Bad Sector Recovery
So to “recover” data from a bad sector, one would simply need to issue the Read Long command instead of the “normal” Read Sectors command. That is really it! It is so simple any software developer who is familiar with hard drives can do it. And sure enough, more and more data recovery tools now come with a Bad Sector Recovery option. In fact, it has come to the point where if a tool does not have a bad sector recovery feature, it automatically falls into a second-grade category.
Error checking and correction algorithms were implemented for a reason, which is data integrity. When a hard drive reads a sector with the Read Long command, it disables these algorithms, and hence there is no way to prove that you get valid data. Instead, you get something which may or may not resemble your customer's data.
Tests in our lab have shown that, in reality, this approach yields random bytes far more often than anything else. Yes, there are cases where it recovers original data from a sector, but those cases are extremely rare in real data recovery scenarios, and even then, only a part of the recovered sector will contain valid data.
Even when we got some data off the damaged sector, what exactly should we do with its other (garbled) part? And how exactly do we tell which part of the sector has real data in it and which is just random bytes? Nobody is going to manually go through all the sectors in a HEX editor and judge which bit is valid and which is not. Even if someone did, there is no way to guarantee that what they see is valid data.
And this is where the real problem starts.
Dangers of Read Long approach
Imagine a forensic investigator recovering data off a suspect's drive that has some bad sectors on it. To get more data off the drive, the investigator enabled the Bad Sector Recovery option in his data acquisition tool. In the end, his tool happily reported that all the sectors were successfully copied, so he began extracting data from the obtained copy.
While looking for clues, he found a file that had social security numbers in it. He then used these numbers in one way or another for his investigation.
What he did not know is that one of the sectors that contained these numbers was recovered via the Read Long command, and some bits were flipped (which is very common for this approach). So instead of 777-677-766, he got 776-676-677, causing him and other people a whole lot of unnecessary trouble.
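The corruption in this example is as small as errors get: in ASCII, "6" (0x36) and "7" (0x37) differ only in their lowest bit, so a single flipped bit silently turns one valid-looking digit into another. The positions below are chosen to reproduce the article's example:

```python
def flip_bit0(text, positions):
    """Flip the least significant bit of the characters at the given positions."""
    buf = bytearray(text.encode("ascii"))
    for i in positions:
        buf[i] ^= 0x01
    return buf.decode("ascii")

# Five one-bit errors turn one plausible number into another:
print(flip_bit0("777-677-766", [2, 6, 8, 9, 10]))  # -> 776-676-677
```

Nothing about the output looks damaged, which is exactly why unverified Read Long data is dangerous in an investigation.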
Another example: when recovering a damaged file system, even slightly altered data in an MFT record can mislead the file recovery algorithm and in the end do much more harm than if there was no data copied at all in that sector.
Once again, an error checking and correction algorithm is in place for a great reason. There is absolutely no magic in bad sector recovery; it is impossible to recover something that just isn’t there.
There are tools that claim better bad sector recovery because they utilize a statistical approach, an algorithm where the tool reads the bad sector a number of times and then reconstructs the “original” sector by locating the bits that occur most often in the sector. While these tools claim this approach could improve the outcome, there is no evidence to back up the validity of such claims. Furthermore, rereading the same spot many times while the hard drive is failing is a good way to cause permanent damage to the media or heads.
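The statistical approach those tools advertise amounts to a per-bit majority vote across repeated reads. It is simple to implement, which is partly why the claim sounds plausible; the sketch below is illustrative only — as noted, there is no evidence it improves real recoveries, and hammering a failing drive with re-reads is risky:

```python
def majority_vote(reads):
    """Reconstruct a sector by taking, for each bit, the value seen in most reads."""
    n = len(reads)
    out = bytearray(len(reads[0]))
    for i in range(len(out)):
        for bit in range(8):
            ones = sum((r[i] >> bit) & 1 for r in reads)
            if ones * 2 > n:          # bit was 1 in a strict majority of the reads
                out[i] |= 1 << bit
    return bytes(out)
```

Note that the vote only converges on the original data if the drive's errors are random and independent across reads — an assumption a failing head rarely honors.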
To summarize, if you are after valid data, avoid using any bad sector recovery algorithms. These algorithms will never offer data integrity no matter how complex their implementation is. And when you absolutely must recover data from bad sectors, make sure you use a tool that properly accounts for these recovered sectors, marking the files containing such sectors. This way, the operator has the ability to disregard such “unreliable” files and manually verify file integrity if it is an important one.
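Accounting for recovered sectors at the file level is a straightforward intersection of the suspect-sector list with each file's extents. Assuming the tool already knows which LBA runs each file occupies (how depends on the file system), the flagging step looks roughly like this:

```python
def flag_unreliable_files(file_extents, suspect_sectors):
    """Return the names of files that contain at least one suspect sector.

    file_extents maps a file name to a list of (first_lba, sector_count) runs.
    """
    suspect = set(suspect_sectors)
    flagged = []
    for name, extents in file_extents.items():
        if any(first <= s < first + count
               for s in suspect
               for first, count in extents):
            flagged.append(name)
    return flagged
```

The operator can then disregard or manually verify exactly the flagged files instead of trusting the whole image.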
Dmitry Postrigan is the founder and CEO of Atola Technology, a Canadian company that makes high-end data recovery and forensic equipment.
Technology In The Classroom
With the back-to-school season now upon us, parents, students and teachers everywhere are once again struggling with the perpetual challenge of making kids job-ready in a high-speed and fast-changing environment. There is little doubt in anyone's mind that information technology in the classroom plays a central role in all areas of life, but sadly, access to this technology and the subsequent reaping of its benefits often stays out of reach, either due to lack of funding, lack of familiarity, or outright resistance.
In a recent USA Today special feature, in which CloudTweaks played a central role, Dr. Steve Paine, President of the Partnership for 21st Century Skills (P21), pointed out that “sixty-five percent of today’s students will have a job that has not been invented yet.”
In the same feature, Vint Cerf, one of the “fathers” of the Internet, and Grammy-winning musician John Legend, both attested to the power of interactive and customized learning, as a university professor and a former underprivileged student respectively.
CloudTweaks writer Adam Hausman provided a case for the effectiveness of the cloud as a secure and central repository for all of the essential files and documents that go into effective learning.
The resistance to the implementation of interactive technologies in the public school system stems from a collection of substantive adult fears that revolve primarily around control. Many teachers are not comfortable with technology and see it as a distraction from their existing lesson plan rather than a central component of it. Although they would like to fulfill their calling to teach young minds, they do not feel comfortable with the sophistication and the high rate of change that modern technology delivers. Parents also are concerned about the lack of control that occurs when young children are “let loose” on the Internet. Concerns about access to pornography, scam sites, or simply ideas beyond their level of familial comfort cause parents and their related organizations to impose pressure on teachers to limit access to online material. To many, the various technologies of the online world represent a vast unknown, one which didn’t exist when they were growing up, and therefore seems unnecessary.
But the fact remains that students today not only need to be prepared for both traditional jobs and the jobs of tomorrow, they also need to be given a chance to learn, regardless of socio-economic situation or personal learning style. The USA Today article points out this is not just a nice-to-have idea. The quality of education in the U.S. is declining while other countries are aggressively taking advantage of the comparative low cost and the huge potential of online learning, giving their students a keen advantage in the global workplace.
As always, CloudTweaks played a central role in the co-ordination and content development of this special USA Today supplement. Our team of experts and writers constantly help our clients to stay abreast of the changes and benefits of interactive technology and cloud solutions, to help ensure that anyone who needs to learn about technology, including established professionals, can obtain what they need to know in real time, and in a way that makes sense. In November 2013 we will be teaming up with Fortune Magazine/Time Inc to create a special supplement that discusses the outlook for the cloud in 2014, including stories on security, commerce and innovation.
Email CloudTweaks for more information about this special project, and how your organization can be part of it…
By Steve Prentice
Intranets are private networks that use standard Internet access methods. By definition, an intranet site is restricted to use within an organization. No public access is allowed. Unlike firewall-protected networks, intranets can't be seen by the outside world. Out of sight is out of mind -- and out of reach.
Users access the intranet using standard Internet browsers and file transfer methods. These universal client tools are safe and reliable because they are field-tested and used by millions of Internet users worldwide. The intranet provides a single point of internal distribution for a wide variety of information. Standard home pages provide links to data of nearly any type. Home page data can be more accurate and timely than paper because it is maintained directly.
Electronic data distribution is another step toward reducing the mountain of paper that circulates through an organization. Project histories, phone books and work schedules can be made available to the enterprise without incurring printing costs. Administrative functions such as company calendars, employment policies and holidays are readily accessible to all users. The graphical interface provides for easy viewing of charts, graphs and maps.
In most organizations, paper documents become obsolete soon after they are printed. With an intranet, the online copy is the one that is current. Traditionally, large inventories of preprinted forms for insurance benefits and other company business were made available to employees. Locating one of these forms was often a time-consuming adventure. A single change on a paper form rendered it obsolete. The remaining forms usually wound up as a large deposit in the local landfill. Electronic forms can be readily changed without waste or printing delays.
Intranets are not intended to replace groupware products like Lotus Notes and Novell GroupWise. These products offer collaboration and document database systems that are currently beyond the reach of intranet systems. Existing groupware systems are enhanced by an intranet publishing system.
Lack of replication is another intranet limitation. Lotus has a highly developed replication scheme in Notes that is currently unrivaled. Changes made to data on one Notes server are transparently propagated to all other Notes servers.
Internet-style e-mail and news groups won't replace existing mail and work group systems on the corporate LAN. None of the POP3/SMTP-based mail packages offer any significant advantages over existing mail systems. Although NNTP (Network News Transfer Protocol) performs well for news groups, it won't supplant Lotus Notes as a collaborative working environment.
Industrial-strength client/server computing does not yet exist on intranet technology. HTML code is too weak for this use and CGI back-end applications are being replaced by newer technologies. Java offers much promise, but has yet to deliver.
Intranets operate with the same TCP/IP protocol suite used on the Internet. Every workstation accessing the intranet must have a unique IP address. Installing and administering these addresses is an important step in the planning process.
Most of the popular Web server packages include DHCP (Dynamic Host Configuration Protocol) to ease the burden of IP address administration. DHCP dynamically assigns IP addresses on demand from a predefined pool rather than manually installing a dedicated IP address to each desktop.
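The on-demand model DHCP uses can be pictured as a lease pool: addresses come out of a predefined range when a client asks and go back when released, so no per-desktop bookkeeping is needed. This is a conceptual sketch of that bookkeeping, not the DHCP wire protocol:

```python
import ipaddress

class LeasePool:
    """Hand out addresses from a predefined range on demand, DHCP-style."""
    def __init__(self, first, last):
        a = ipaddress.ip_address(first)
        b = ipaddress.ip_address(last)
        self.free = [str(ipaddress.ip_address(i)) for i in range(int(a), int(b) + 1)]
        self.leases = {}                      # client MAC address -> leased IP

    def request(self, mac):
        if mac in self.leases:                # a renewing client keeps its address
            return self.leases[mac]
        self.leases[mac] = self.free.pop(0)   # assign the next free address
        return self.leases[mac]

    def release(self, mac):
        self.free.append(self.leases.pop(mac))  # returned addresses are reused
```

Real DHCP servers add lease timers and broadcast negotiation on top, but the administrative win is the same: one pool definition instead of hundreds of hand-installed addresses.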
For NetWare users, Novell's new IntranetWare includes its IPX/IP gateway that allows IPX networks to connect to TCP/IP resources. The gateway operates as a proxy server by providing TCP/IP connections to workstations running only IPX.
NetWare workstations send all their TCP/IP traffic to the gateway using IPX packets. The gateway translates these requests to TCP/IP packets and routes them to the desired intranet server. Only one IP address must be maintained, the one used by the IPX/IP gateway. One benefit of the IPX/IP gateway is that no IP stack is required on the workstation.
Intranets by themselves do not have the stringent security requirements of networks that connect to the world at large. However, the time may come when the internal network becomes connected to the outside world. Because of this, the intranet should be designed with the organization's security policy clearly in mind. Security is easier to administer when implemented at the start of a project, rather than as an afterthought.
Firewalls and proxy servers are the enforcement tools for an organization's security policy. Without a well-defined policy these devices are ineffective. Lax or nonexistent security policies expose the network to break-in and abuse.
Invasions occur when an organization connects hundreds of LAN users to the outside world without having security in place. This type of seat-of-the-pants flying is begging for problems. Experienced break-in artists can exploit these weaknesses and gain entry to the internal network.
Unsecured systems are ripe for abuse and the software developers know it. They have created products like the Optimal Networks Internet Monitor to allow administrators to police their users. This software product works as a Big Brother that eavesdrops on all IP connections. It tracks users who visit inappropriate sites or surf all day on company time. Afterthought security measures like these consume both time and resources.
The hardware requirements for intranets are modest. Software costs vary by the product, and must be carefully considered. Windows NT server licensing has been labeled as predatory because of the high cost for connections. By comparison, Novell's Web server product offers unlimited HTML connections using only a two- license version of IntranetWare.
Training funds are required for server administration and HTML publishing skills. Departmental groups responsible for maintaining their own home pages will need training in basic graphics-publishing techniques.
Most Windows users will soon be operating on either Windows 95 or NT. Both operating systems have a TCP/IP stack built in. This saves the expense of purchasing and installing a separate TCP/IP stack on each desktop.
Microsoft shops will naturally install the highly respected Internet Information Server (IIS) that comes bundled with Windows NT Server. IIS is a full-featured, mature Web server that integrates seamlessly into an NT network.
Novell users have a choice of using the Web server bundled with IntranetWare 4.11 or running the Microsoft IIS server as an NDS object. The Novell server currently lacks several of the bells and whistles found in the Microsoft offering, but is more attractively priced. Novell has the additional advantage of outperforming the NT-based IIS server on similar hardware. UNIX shops should take full advantage of any of the top-quality UNIX server packages available.
Intranets are not a replacement for the file, print and database functions currently existing in your network. When used within their design criteria, intranets provide an excellent means of inexpensively publishing company data.
Bruce Gavin is a Novell CNE. You can reach him at <70137,email@example.com>.
Computer users who get headaches, eye strain, dry eyes and difficulty focusing could be suffering from a form of repetitive strain injury of the eyes called Computer Vision Syndrome (CVS).
David Summers, an optometrist at Melina Joy Opticians in Heathfield, Sussex, urged anyone who uses a computer as part of their job to follow health and safety recommendations by taking regular screen breaks.
"Setting a timer for taking regular screen breaks can make a huge difference," he said. "For every 20 minutes you look at the computer screen, look away for 20 seconds."
This helps the eyes refocus; difficulty focusing on more distant objects is another symptom of CVS. "People who stare at a computer screen all day often find it difficult to focus on distant objects. Sometimes, they find it hard to read train timetables on the platform."
Blinking regularly, using eye drops and making sure there is no screen glare and the work area is well lit, can all help to reduce problems with eye strain and dry eyes, he said.
Summers said people who use reading glasses can now buy lenses that allow them to work more comfortably with a computer terminal. One example, the Nexyma, from German spectacle lens specialist Rodenstock, is a variable reading/intermediate distance lens.
Tips on tackling Computer Vision Syndrome
- Take regular screen breaks
- Look away from the computer screen for 20 seconds every 20 minutes
- Blink regularly
- Use eye drops to improve eye comfort
- Ensure work area is well lit and no glare is coming off the screen
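The 20-minute rhythm in the tips above is easy to automate. This sketch just computes the break schedule; wiring each time to an actual notification (a beep, a desktop popup) is left to taste:

```python
import datetime

def break_times(start, interval_minutes, count):
    """Return the next `count` screen-break times, `interval_minutes` apart."""
    step = datetime.timedelta(minutes=interval_minutes)
    return [start + step * i for i in range(1, count + 1)]

# A workday starting at a hypothetical 9:00 gives breaks at 9:20, 9:40, 10:00...
start = datetime.datetime(2024, 1, 1, 9, 0)
for t in break_times(start, 20, 3):
    print(t.strftime("%H:%M"))
```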
The IEEE has approved a new standard that kicks WiMAX up into the service provider environment.
IEEE 802.16m is actually an extension to existing standards, with the official name of "Amendment to IEEE Standard for Local and Metropolitan Area Networks, Part 16: Air Interface for Broadband Wireless Access Systems – Advanced Air Interface."
The organization explained that IEEE 802.16m provides the performance improvements necessary to support future advanced services and applications for next-generation broadband mobile communications.
It incorporates technologies such as multi-user MIMO, multicarrier operation and cooperative communications. It supports femtocells, self-organizing networks and relays.
If you want to build your own Internet of Things, try the toy monkey hack.
Digi International, a company that makes software and hardware systems for the Internet of Things (IoT) has a step-by-step guide to create an alert device using its postage stamp-size XBee wireless radio.
You will need a few other parts, and a little soldering, to attach a battery-powered cymbal-playing monkey to a wireless network and the Internet. But once it's connected, the fun begins. In this case (see instructions and video), the cymbal-banging toy monkey can be integrated into an alert system to notify an IT department, for instance, that "the call center is down."
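Triggering hardware like this from an alert system usually comes down to pushing a small API frame to the local XBee over a serial port. The builder below follows the published XBee API framing (0x7E delimiter, 16-bit length, 0xFF-minus-sum checksum) and uses a Remote AT Command frame (type 0x17) to drive a remote pin high. The destination address and the choice of pin D0 are illustrative, and actually writing the bytes out (e.g. with pyserial) is omitted:

```python
def remote_at_frame(dest_addr64, command, parameter, frame_id=1):
    """Build an XBee API frame for a Remote AT Command (frame type 0x17)."""
    payload = bytes([0x17, frame_id])
    payload += dest_addr64.to_bytes(8, "big")   # 64-bit destination address
    payload += b"\xFF\xFE"                      # 16-bit address: unknown/broadcast
    payload += b"\x02"                          # option: apply changes immediately
    payload += command.encode("ascii") + parameter
    checksum = 0xFF - (sum(payload) & 0xFF)
    return b"\x7E" + len(payload).to_bytes(2, "big") + payload + bytes([checksum])

# Set pin D0 on the monkey's radio to "digital output, high" -> cymbals crash.
# (0x0013A200 is Digi's address prefix; the rest of the address is made up.)
frame = remote_at_frame(0x0013A200DEADBEEF, "D0", b"\x05")
```

A monitoring script would build this frame whenever a health check fails and write it to the coordinator radio's serial port.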
The cymbal monkey hints at the creativity involved with the IoT.
There are a wide array of sensors that have, in total, more capability than humans in sensing the environment. Sensors can capture motion and direction, magnetic fields, sound and light, and many other things. In the environment, sensors can be used to detect chemicals and pollutants. Biosensors to detect the presence of bacteria in food supplies may become a major growth area.
You can mix and match sensors, connect them to a wireless network, and send the data to clouds and applications for analysis. A major connecting point to the IoT won't be via cymbal-banging monkeys but, more likely than not, government deployments.
Although IT budgets for towns and cities are increasing at about 1.5% a year, worldwide spending on IoT will reach $265 billion this year and will grow by 11% each year for the next five years, said Ruthbea Yesner Clarke, the Smart Cities Strategies director at IDC. This figure includes spending by military and national governments.
IoT adoption is helped by vendors that often sell sensing devices with a cloud platform for data delivery. The vendors install and maintain the systems, Clarke said. "A cloud supported system is really spurring adoption," she said.
The ability to combine sensors in IoT deployments has many possibilities. Streetline, a company that installs sensors in pavement that help drivers find parking spots via an app, said this month that it is adding sound sensors to provide real-time data on noise levels, as well as road temperature sensing, which can help determine, for instance, when salting should be deployed.
What Clarke expects to see ahead is more coordination between disparate systems. An acoustic monitoring system, for instance, would get linked to video surveillance and street lighting systems, all of which come into play when gunfire is detected.
Silver Spring Networks, a company that develops networks for utilities, is working to reduce the cost of streetlights in Paris, the "City of Lights," by about 30% over the next decade. This week the company announced a sensor network for broad use.
The intent of its new network, the SilverLink Sensor Network, is to provide common networking architecture and security provisioning to readily enable the addition of new sensors and applications, said Eric Dresselhuys, executive vice president of global development at Silver Spring, and one of its founders.
Dresselhuys said governments everywhere "are increasingly worried about the livability of cities." Communities want to improve traffic management and better use their resources.
Creating connected devices is getting traction in the Maker Movement community, but the IoT components that Digi sells also appeal to IT shops in industries and governments that are "trying to build quick solutions and rapid prototypes to show what's possible," said Joel Young, Digi's CTO and SVP of research and development.
Digi, which has been building machine-to-machine connected systems since 1985, makes systems used in a wide range of industries. It also offers cloud services that connect devices to applications and enable, for instance, a traffic light to call for its own repair.
When he looks at the changes coming via the IoT, Young said people may not even notice. "It's the kind of thing that sneaks up on you," he said.
This article, The Internet of Things in five words: sensor, monkey, radio, cloud, Paris, was originally published at Computerworld.com.
Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed.
That's because the Government Accountability Office reports that two of the federal government's most prominent security efforts, the Trusted Internet Connections (TIC) initiative and Einstein (officially known as the National Cybersecurity Protection System), have largely gone unused, keeping the threat of cyberattacks on federal systems very real.
According to the GAO: As of September 2009, none of the 23 federal agencies it looked at had met all of the requirements of the TIC initiative. Although most agencies reported that they have made progress toward reducing their external connections and implementing critical security capabilities, most agencies have also experienced delays in their implementation efforts. TIC is supposed to secure and consolidate federal agencies' external network connections, including Internet connections, set baseline security and improve the government's response to infiltrations. Early this year the Office of Management and Budget directed agencies to deploy a standard set of security tools and processes on all of their Internet connections, which may explain why many agencies haven't caught up.
In the same time frame, fewer than half of the 23 agencies had deployed Einstein, and Einstein 2 had been deployed to only six agencies. Agencies that participated in Einstein 1 improved identification of incidents and mitigation of attacks, but the Department of Homeland Security, which oversees these efforts, will continue to be challenged in understanding whether the initiative is meeting all of its objectives because it lacks performance measures that address how agencies respond to alerts. Einstein technology is intended to provide DHS with Internet monitoring capability, including intrusion detection.
While the GAO doesn't specifically link the lack of TIC and Einstein implementations to specific problems, it notes that federal security breaches have potentially allowed sensitive information to be compromised, and systems, operations, and services to be disrupted. For example:
- The Department of State experienced a breach on its unclassified network, which daily processes about 750,000 e-mails and instant messages from more than 40,000 employees and contractors at 100 domestic and 260 overseas locations.
- The Nuclear Regulatory Commission confirmed that in January 2003, the Microsoft SQL Server worm known as "Slammer" infected a private computer network at the idled Davis-Besse nuclear power plant in Oak Harbor, Ohio, disabling a safety monitoring system for nearly 5 hours.
- Officials at the Department of Commerce's Bureau of Industry and Security discovered a security breach in July 2006. In investigating this incident, officials were able to review firewall logs for an 8-month period prior to the initial detection of the incident, but were unable to clearly define the amount of time that perpetrators were inside its computers, or find any evidence to show that data was lost as a result.
With agencies still in the process of implementing TIC and DHS in the early stages of deploying Einstein 2, the success of such large-scale initiatives will be in large part determined by the extent to which DHS, OMB, and other federal agencies work together to address the challenges of these efforts, the GAO stated.
This report comes on the heels of another GAO study that found about 69% of the IRS' previously noted security flaws remain unfixed and continue to jeopardize the confidentiality, integrity, and availability of the tax agency's systems. The problems put the IRS at increased risk of unauthorized disclosure, modification, or destruction of financial and taxpayer information, the GAO concluded.
The GAO recently issued another report stating that disruptive cyber activities are expected to become the norm in future political and military conflicts.
From the GAO: "The growing connectivity between information systems, the Internet, and other infrastructures creates opportunities for attackers to disrupt telecommunications, electrical power, and other critical services. As government, private sector, and personal activities continue to move to networked operations, as digital systems add ever more capabilities, as wireless systems become more ubiquitous, and as the design, manufacture, and service of information technology have moved overseas, the threat will continue to grow."
Follow Michael Cooney on Twitter: nwwlayer8
In a presentation at the Intel Developer Forum today, Intel CTO Justin Rattner described three major innovations that Intel has been working on, the first of which is what the company calls a "terascale research chip"—a processor with 80 cores on one piece of silicon that can deliver up to a teraflop (1 trillion floating-point operations per second) of computing power. The other two innovations had been previously announced: a technology called through silicon vias (TSVs), and an optical interconnect technology.
The 80-core processor consists of 80 simple floating-point cores that each implement a small, stripped-down, non-x86 ISA. These cores are arranged in a tile pattern and connected to each other by means of an on-chip network. Note that these cores are almost certainly in-order, and are certainly less complex than the Cell processor's SPEs. The whole thing is very reminiscent of Sun's Niagara, and in fact I've heard that internally Intel uses their own little water-based metaphor for it; they call it the "sea of cores" approach.
From what I can tell, the cores' connection to memory is a bit odd, and involves a combination of the aforementioned on-chip network and a new technology called through silicon vias.
Like a Dagwood sandwich with multiple toothpicks
The TSVs are a kind of 3D interconnect technology that involves stacking chips directly on top of each other. There are contacts on the adjacent faces of each chip that act as vertical wires, or "vias," and that connect the two chips together. To get a mental picture of what I'm describing, just think of a Dagwood sandwich with toothpicks sticking in it, such that the different layers of the sandwich are connected via the toothpicks. The layers of the sandwich (bread, meat, lettuce, etc.) would be the silicon chips, and the toothpicks would be the wires that the chips use to talk to each other. Now, imagine a side of chips with the sandwich...
Anyway, by stacking memory directly on top of a massively multicore processor and then having wires come up through the different points of the processor and connect directly to the memory chip, Intel claims that they can get transfer rates between the processor and memory of up to a terabyte per second.
Though it's not spelled out in the press release and I haven't seen it described this way elsewhere, here's how I imagine that it works. The 80 cores are arranged in a tile configuration and are connected by an on-chip network, as described above. Then, Intel places a via (or a bundle of vias) on the network at intervals across the chip, so that the few tiles sitting near each via can have a fast connection to whatever part of memory that that via connects to. Of course, the tiles in one region of the chip would have a slower connection to the more distant regions of memory through other vias that are further away than they would to the via that's nearby.
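To make that picture concrete, here is a toy model (the grid size and via positions are invented for illustration, and are not taken from Intel's actual design) that scores each tile by its Manhattan distance over the on-chip network to the nearest via bundle:

```python
# Toy model of tiles reaching memory through their nearest via bundle.
# The 10x8 grid and via coordinates are hypothetical.
GRID_W, GRID_H = 10, 8
VIAS = [(2, 2), (7, 2), (2, 5), (7, 5)]  # made-up via locations

def hops_to_memory(x, y):
    """On-chip-network hops from tile (x, y) to the closest via."""
    return min(abs(x - vx) + abs(y - vy) for vx, vy in VIAS)

worst = max(hops_to_memory(x, y)
            for x in range(GRID_W) for y in range(GRID_H))
print("worst-case hops to the nearest via:", worst)
```

Under this layout, a tile sitting on a via reaches memory in zero hops, while the worst-placed tiles pay a few extra network hops, which is exactly the non-uniform access pattern described above.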
Silicon "laser device"
It's one thing to move data at a terabyte/second in between a CPU and a pool of closely coupled RAM, but it's another trick entirely to get such a package to talk to the rest of a computer system at a fast enough rate. This chip and memory combination could starve conventional socket and interconnect technology pretty quickly, which is why Intel is working on using lasers to connect the silicon sandwiches together at very high bandwidths.
I won't spend any time describing this particular technology, because our own Clint Ecker did an excellent job writing it up earlier this month.
What's the point of an 80-core processor?
You're probably wondering what the point of an 80-core processor is, when PS3 programmers are moaning about having to code for a chip with a mere seven small, in-order floating-point cores. This question has a few answers, depending on how you approach it.
In the near-term, the point of this terascale chip is that it's a research project. The individual cores are very simplified, and they don't implement a standard ISA, because right now they're there for research purposes. (I'd expect the cores to get more complex, and maybe to offer more than just floating-point, in a production model.) So the chip as a whole provides a platform for tooling around with massively multicore architectures, and figuring how to organize them, connect them to memory, program them, and generally bring ideas from the drawing board into the lab. In other words, this chip is a prototype, and it points in a direction that Intel thinks they'll eventually take.
From a manufacturing and hardware design standpoint, the main problems that go with making use of an 80-core processor are interconnect- and memory latency-related. So Intel is clearly trying to solve those with TSVs and the laser interconnect technology, so that they can make usable systems built around such massively multicore chips.
This brings me to the long-term part of the question about the point of an 80-core processor. Software developers will point out that the only computing problems that could use the muscle of an 80-core chip like this exist in the rarified realm of high-performance computing, where programmers simulate weather patterns and nuclear blasts and whatnot. In the consumer software market, software architects are struggling to make use of the embarrassment of computational riches provided by dual-core processors, quad-core processors, and (most recently) GPUs.
All of this is true, as far as it goes, but I can't help but think that if such systems are widely available in the next decade, entrepreneurs will come up with ways to make money from them. The nagging issue here is that I have no idea what a mass-market 80-core software application looks like, and neither does Intel (or Microsoft, or Sun, or IBM, etc.).
So to sum up, in the short-term, the terascale chip is a research platform for working out the kinks of massively multicore system and software design. In the long-term, this endeavor definitely has an air of "if we build it, will they come?" about it. But too many hardware makers are moving in this direction for the rest of the industry not to follow them. So even though Intel is forging ahead into uncharted territory with this "sea of cores" initiative, they're not doing so alone. | <urn:uuid:7e8808fc-2a6d-4191-9463-9b0642a9642b> | CC-MAIN-2017-09 | https://arstechnica.com/gadgets/2006/09/7840/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00056-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95741 | 1,330 | 2.984375 | 3 |
Army models insect flight for future 'bugbots'
Swarms of tiny, autonomous robots could infest tomorrow's battlefields
- By Henry Kenyon
- Mar 25, 2011
If Army researchers are successful, future warfighters will use small robots that mimic the locomotion of creatures such as lizards and houseflies to scout enemy positions. Modern robots are very successful at what they do, but they are still limited by their size, their lack of autonomy and the types of movements they can perform.
Borrowing from Mother Nature’s design sheet is one of the main goals of the Army Research Laboratory’s Micro-Autonomous Systems Technology program. One purpose of the effort is to develop small robots that can work collectively and autonomously in complex urban environments.
It currently takes many soldiers to control a single robot and protect the machine's operator. MAST seeks to flip that equation from many soldiers per robot to many robots per soldier, explained William Nothwang, the lead scientist for the MAST-Microelectronics Center at ARL.
The Army is especially interested in creating small robots that fly like insects. Researchers are currently studying how houseflies fly by modeling the dynamics of their wings, which have tiny structures called haltere that act like pendulums to help them maintain stability when they fly and maneuver.
MAST’s ultimate goal is to deploy swarms of tiny robots to search caves and buildings. Each micro machine would be equipped with a sensor, either a video camera or a chemical agent detector. The robots would communicate with each other, passing data through the swarm and back to a command center as a single unified message.
The big challenge is replicating insect flight in a bug-sized robot. ARL scientists noted that there have been successful attempts at making mechanical insect wings on a larger scale. The goal is to reduce the size of those wings and to develop the sensors needed to measure and control the complex dynamics necessary to keep such a tiny machine in the air, down to something that is less than a centimeter long.
Although the first steps have been taken by researchers, Nothwang cautioned that there is still much work to do before a haltere-based wing system is small enough to keep a military-grade bugbot aloft. | <urn:uuid:8e92ef10-afa8-4598-ab18-8148431aeca4> | CC-MAIN-2017-09 | https://gcn.com/articles/2011/03/25/army-studies-insects-to-design-bugbots.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00528-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954766 | 469 | 3.15625 | 3 |
Virtualization and Cloud Computing: Does One Require the Other?
Many people believe that cloud computing requires server (or desktop) virtualization. But does it? We will look at using virtualization without cloud computing, cloud computing without virtualization, and then look at using both together. In each case, we'll look at where each deployment might be most useful, some use cases for it, and some limitations.
This white paper examines the relationship between cloud computing and virtualization.
Virtualization without Cloud Computing
Most organizations are virtualized without cloud computing. According to recent surveys, approximately 60 percent of all servers today are virtualized. Virtualization is deployed in businesses of all sizes and affects all industries, organizations, governments, and so forth. Virtualization projects typically start with compute (i.e., server) virtualization, as it is usually the easiest to virtualize and provides the greatest return on investment. This is what is most commonly thought of as "virtualization."
However, more can be virtualized. Both networking and storage can be virtualized. Network Functions Virtualization (NFV) refers to the virtualization of traditional networking functions such as switching, routing, and load balancing. It can include firewalls, Intrusion Detection or Prevention Systems (IDS/IPS), antivirus management and more. Often, NFV is combined with Software Defined Networking (SDN) to automate management of the various physical and virtual network components.
Many vendors also offer Software-Defined Storage (SDS), including traditional vendors, such as EMC, as well as companies that have specialized in SDS for years, such as Data Core. The idea is to use commodity storage devices, often installed in servers, and virtualize access to them so the local storage inside each server gets pooled together and becomes visible as shared network storage.
When virtualized compute, networking, and storage are combined, the result is the Software Defined Data Center (SDDC), which promises a great deal of automation and scalability. Many companies will go to this point and stop.
What is left undone if cloud computing is not also introduced? The self-service provisioning of the VMs necessary for the business workloads to run. It often takes days or even weeks for a VM to go through the approval processes at an organization and for a virtualization administrator to get the necessary VMs created and made available to the users. This decreases a company's agility and often leads users to find a cloud platform on their own, outside the control of IT. This can lead to security issues for the organization, as well as less demand for IT resources, which, if taken to the extreme, would drastically reduce or eliminate the need for IT at the company.
So what are some good use cases for using virtualization without cloud computing? Small businesses that don't have an extended VM provisioning process. Medium-sized businesses may also be OK with virtualization only, especially if they don't have developers or others that need VMs provisioned quickly. | <urn:uuid:c01529e0-861b-4726-89e6-2f21a5455b78> | CC-MAIN-2017-09 | https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/virtualization-and-cloud-computing-does-one-require-the-other/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00052-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948396 | 686 | 2.71875 | 3 |
A lot of iPhones have been sold since the original iPhone was released almost nine years ago. While some older devices are still being used or collected by Apple fans, most of them end up in the recycling heap. A Bloomberg article published on Wednesday goes into some of the details about the iPhone recycling process.
Essentially, iPhones given to Apple to recycle are torn to shreds. The company doesn’t reuse chips or other components for fear of feeding the secondary market with fake Apple products. The shredded material is processed; hazardous waste is dealt with properly, and materials such as gold, copper, aluminum and glass are recycled.
According to Bloomberg, Apple collected more than 40,000 tons of e-waste in 2014 from recycled devices. The company collects and recycles 85 percent by weight, exceeding the 70 percent standard set by the electronics recycling business.
Why this matters: Apple is approaching one billion iPhones sold. Apple is just as secretive about its recycling process as it is with its product development process, so perhaps this is a sign that the company takes the recycling of old devices seriously. Lisa Jackson, Apple’s head of environmental affairs, told Bloomberg that the company is investigating methods that will allow the company to reuse components instead of shredding them.
This story, "An Apple plant in Hong Kong shreds iPhones into tiny pieces" was originally published by Macworld. | <urn:uuid:8d145b60-2173-442d-b239-15f9fa964328> | CC-MAIN-2017-09 | http://www.itnews.com/article/3034251/iphone-ipad/an-apple-plant-in-hong-kong-shreds-iphones-into-tiny-pieces.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00104-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961294 | 280 | 2.84375 | 3 |
To make your home or small business Wi-Fi network safe, the single most important thing to do is implement the authentication security capabilities built into your access point and Wi-Fi adapters. Authentication lets you control who can access your wireless network, and encrypts (scrambles) your wireless network data transfers so no one can eavesdrop on you.

First set up the access point by connecting it with a network cable to your PC, then use a browser to set up the security. Check your manual for detailed instructions. You will be setting up your network password, so be sure to write it down and keep it somewhere you can find it. You then need to set up all of your wireless PCs with the matching security settings and password so they can use the network. Usually you can do this by clicking the little antenna icon in the lower right of your desktop and then following the guided setup process.

Use the newer Wi-Fi Protected Access (WPA) encryption. It shows up as either WPA-Personal or WPA-PSK (for Pre-Shared Key) on your setup screens; they are the same. The newest security on the scene is WPA2. Avoid using the older Wired Equivalent Privacy (WEP), because it can be easily broken in a few minutes with widely available tools. One note is that all the devices on your network have to use the same security, so if you have some older devices that don't support WPA, you should replace them.
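As a side note on how WPA with a pre-shared key actually uses that password: IEEE 802.11i derives the real 256-bit key from your passphrase and the network name (SSID) using PBKDF2-HMAC-SHA1 with 4096 iterations. A minimal Python sketch of that derivation (the SSID and passphrase shown are made-up examples):

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> str:
    """Derive the 256-bit WPA pre-shared key from a passphrase and SSID.

    IEEE 802.11i maps the human-readable passphrase to the real key with
    PBKDF2-HMAC-SHA1, using the SSID as the salt and 4096 iterations.
    """
    key = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                              4096, dklen=32)
    return key.hex()

# Example only -- use your own network name and a long passphrase.
print(wpa_psk("correct horse battery staple", "HomeNetwork"))
```

Because the SSID is the salt, the same passphrase yields a different key on a different network name, which is one more reason not to leave the router's default SSID in place.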
Science and Technology Quiz - Questions & Answers
Are you looking for quiz questions and answers about science and technology? You've come to the right place. Check out this Science and Technology quiz and see if you can answer the following questions on science, technology and electronics. Play this quiz to see how good you are at Science and Technology questions.
Science and Technology Quiz Questions
Here is the list of quiz questions and answers about Science and Technology. Can you answer the following questions on electronics, science and technology?
Question: Which is a type of Electrically-Erasable Programmable Read-Only Memory?
Question: What is made from a variety of materials, such as carbon, and inhibits the flow of current?
Question: "FET" is a type of transistor. Its full name is ________ Effect Transistor.
Question: A given signal's second harmonic is twice the given signal's __________ frequency. Fill in the blank?
Question: Voltage is sometimes referred to as EMF, or Electromotive ________?
Question: When measuring the characteristics of a small-signal amplifier, say for a radio receiver, one might be concerned with its "Noise ________"?
Question: The average power (in watts) used by a 20 to 25 inch home color television is ________?
- Over 1000
Question: The most common format for a home video recorder is VHS. VHS stands for ________?
- Video Home System
- Very High Speed
- Video Horizontal Standard
- Voltage House Standard
Question: If the picture is stretched or distorted up and down like a fun house mirror the circuit to adjust or repair is ________?
Question: The electromagnetic coils on the neck of the picture tube or tubes which pull the electron beam from side to side and up and down are called a ________?
Question: The input used by an antenna or cable to a TV set uses ________ frequencies?
Question: The transformer that develops the high voltage in a home television is commonly called a ________?
- Tesla coil
- Van de Graaf
Question: Most modern TV's draw power even if turned off. The circuit the power is used in does what function?
- Remote Control
- Color Balance
- High Voltage
Question: In a color television set using a picture tube a high voltage is used to accelerate electron beams to light the screen. What is that voltage?
- 500 Volts
- 5 Thousand Volts
- 25 Thousand Volts
- 100 Thousand Volts
Question: The NTSC (National Television Standards Committee) is also used in the country of ________?
Question: In the USA, the television broadcast standard is ________?
Question: Which is NOT an acceptable method of distributing small power outlets throughout an open plan office area?
- Power Poles
- Power Skirting
- Flush Floor Ducting
- Extension Cords
Question: In the UK, what type of installation requires a fireman's switch?
- Neon Lighting
- High Pressure Sodium Lighting
- Water Features
- Hotel Rooms
Question: What will a UPS be used for in a building?
- To provide power to essential equipment
- To monitor building electricity use
- To carry messages between departments
- To control lighting and power systems
Question: Larger buildings may be supplied with a medium voltage electricity supply, and will required a substation or mini-sub. What is the main item of equipment contained in these?
Question: Some lasers are referred to as being CW. What does CW mean?
- Circular wave
- Constant white
- Continuous wave
- Clear white
Question: What is the process responsible for producing photons in a diode laser?
- Fermi level shift
- Majority carrier injection
- Carrier freeze out
- Electron-hole recombination
Question: What are three types of lasers?
- Gas, Metal Vapor, Rock
- Pointer, Diode, CD
- Diode, Inverted, Pointer
- Gas, Solid State, Diode
Question: What was the active medium used in the first working laser ever constructed?
- A Diamond Block
- Helium-Neon Gas
- A Ruby Rod
- Carbon Dioxide Gas
Question: After the first photons of light are produced, which process is responsible for amplification of the light?
- Blackbody radiation
- Stimulated emission
- Planck's radiation
- Einstein oscillation
Question: Once the active medium is excited, the first photons of light are produced by which physical process?
- Blackbody radiation
- Spontaneous emission
- Synchrotron radiation
- Planck's oscillation
Question: The first step to getting output from a laser is to excite an active medium. What is this process called?
Question: What does AM mean?
- Angelo Marconi
- Anno median
- Amplitude modulation
Question: What frequency range is the High Frequency band?
- 100 kHz
- 1 GHz
- 30 to 300 MHz
- 3 to 30 MHz
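As a sanity check on band definitions like the one above, frequency converts to wavelength by dividing the speed of light by the frequency; the 3 to 30 MHz HF band corresponds to wavelengths of roughly 100 m down to 10 m:

```python
C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength for a given frequency."""
    return C / freq_hz

# HF spans 3-30 MHz, i.e. wavelengths of roughly 100 m down to 10 m.
print(f"{wavelength_m(3e6):.1f} m to {wavelength_m(30e6):.1f} m")
```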
Question: What does EPROM stand for?
- Electric Programmable Read Only Memory
- Erasable Programmable Read Only Memory
- Evaluable Philotic Random Optic Memory
- Every Person Requires One Mind
Question: What does the term PLC stand for?
- Programmable Lift Computer
- Program List Control
- Programmable Logic Controller
- Piezo Lamp Connector
Question: Which motor is NOT suitable for use as a DC machine?
- Permanent Magnet Motor
- Series Motor
- Squirrel Cage Motor
- Synchronous Motor
Question: What does VVVF stand for?
- Variant Voltage Vile Frequency
- Variable Velocity Variable Fun
- Very Very Vicious Frequency
- Variable Voltage Variable Frequency
Question: The sampling rate (how many samples per second are stored) for a CD is ________?
- 48.4 kHz
- 22,050 Hz
- 44.1 kHz
- 48 kHz
Question: A Compact disc (according to the original CD specifications) holds how many minutes of music?
- 74 mins
- 56 mins
- 60 mins
- 90 mins
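The two CD answers above hang together arithmetically: stereo 16-bit audio sampled at 44.1 kHz comes to 176,400 bytes per second, so 74 minutes occupies roughly 780 MB, about the capacity of a standard disc:

```python
SAMPLE_RATE = 44_100    # samples per second, per channel (Red Book audio)
CHANNELS = 2            # stereo
BYTES_PER_SAMPLE = 2    # 16-bit samples
MINUTES = 74            # original CD playing time

bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
total_bytes = bytes_per_second * MINUTES * 60
print(f"{bytes_per_second} bytes/s, about {total_bytes / 1e6:.0f} MB "
      f"for {MINUTES} minutes")
```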
Question: Sometimes computers and cash registers in a foodmart are connected to a UPS system. What does UPS mean?
- United Parcel Service
- Uniform Product Support
- Under Paneling Storage
- Uninterruptable Power Supply
Question: What does AC and DC stand for in the electrical field?
- Alternating Current and Direct Current
- A Rock Band from Australia
- Average Current and Discharged Capacitor
- Atlantic City and District of Columbia
Question: Which consists of two plates separated by a dielectric and can store a charge?
Have a science and technology question? Ask it! We are always available to answer your questions and help you understand science and technology. This science and technology quiz will be updated on a regular basis.
NASA's Kepler telescope in trouble
- By Frank Konkel
- May 17, 2013
NASA's Kepler space telescope has found more than 2,700 possible Earth-like planets. (NASA image)
Like an old man with arthritic joints, NASA's planet-seeking Kepler telescope can't move like it used to, and its most recent setback has put the four-year-old observatory into protective safe mode, potentially forever.
NASA officials announced on May 15 that the second of Kepler's four reaction wheels had failed. The first reaction wheel stopped working in July 2012. The wheels position and stabilize the telescope to point precisely at stars, and it needs at least three of the devices to be operational.
It was the second time in the past month that NASA scientists found the telescope in thruster-controlled safe mode as a result of a malfunction in the reaction wheels. Although that mode minimizes fuel consumption, NASA officials said the craft only has enough fuel for a few months.
Officials are assessing their options, which include putting the craft into a "point rest state" that leaves Kepler floating in space while scientists figure out whether they can fix the wheel, or using Kepler's operational thrusters and its two remaining wheels to turn it into a general data collector rather than having it zero in on specific stars.
Kepler orbits the sun some 40 million miles away, so astronauts cannot repair it in space.
Nevertheless, NASA officials remain optimistic. "We are not down and out," said Charles Sobeck, deputy project manager for Kepler at NASA's Ames Research Center. "The spacecraft is safe and stable. We'll proceed with our investigation."
An impressive legacy
Kepler was launched in March 2009 to seek Earth-like, potentially habitable planets. The $550 million telescope has spotted more than 2,700 potential exoplanets, with 132 confirmed as planets by ground-based telescopes so far. Scientists have said, however, that they expect more than 90 percent of Kepler's finds to be verified as planets.
In April, NASA scientists confirmed Kepler-62e and 62f as planets within the habitable zone of their solar system's parent star, Kepler-62 -- the first time Earth-sized planets were detected in a region of space scientists believe would be hospitable for life as we know it.
Kepler fulfilled its life expectancy in 2012, and though it might not seem old, among operational man-made objects in space, it is a veteran. And scientists still have massive amounts of planetary data from Kepler's observations to examine.
Frank Konkel is a former staff writer for FCW. | <urn:uuid:daaf3144-62db-4397-8177-b7e2629b1f0a> | CC-MAIN-2017-09 | https://fcw.com/articles/2013/05/17/nasa-kepler-trouble.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00097-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.956398 | 530 | 3 | 3 |
With no end in sight for multicore CPUs and manycore GPUs, and supercomputers with hundreds of thousands of processors being envisioned, the parallel programming problem looms large indeed. IDC’er Steve Conway, writing for Scientific Computing, reminds us just how bad the problem has become:
To date, three real-world applications have broken the petaflop barrier (10^15 calculations/second), all on the Cray "Jaguar" supercomputer at the Department of Energy's Oak Ridge National Laboratory. A slightly larger number have surpassed 100 teraflops (10^14 calculations/second), mostly on IBM and Cray systems, and a couple of dozen additional scientific codes are being groomed for future petascale performance. All of these applications are inherently parallel enough to be laboriously decomposed — sliced and diced — for mapping onto highly parallel computers.
His point is that high-performance computing applications, in general, are remarkable underachievers, given the top-end hardware available today. According to IDC surveys, over half of the applications don't scale beyond 8 processors, and a scant 6 percent can use more than 128 processors. Besides the disconnect between growing hardware and software parallelism, Conway also points to a couple of other problems afflicting today's HPC systems, namely slower processor clock speeds and the growing imbalance between processor cores and bandwidth (memory and I/O). These attributes also need to be taken into account when devising software for modern HPC machines.
Not surprisingly, Conway thinks HPC software will have to be rewritten — as disruptive a prospect as that is — to take advantage of the current crop of multi-teraflop and petaflop systems, much less the future multi-petaflop and exaflop machines. Being a good glass-half-full analyst, he also sees opportunity, noting that those who are able to create the next generation of software tools and applications that can keep pace with the hardware will find themselves at the top of the HPC heap. | <urn:uuid:dc88a94c-807e-4e6b-a79f-0a9cb0c400f2> | CC-MAIN-2017-09 | https://www.hpcwire.com/2010/08/10/hpc_software_losing_ground_to_hardware_parallelism/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00097-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925201 | 425 | 2.6875 | 3 |
Many hotels have sustainability certifications and it’s easy to think of this as one of those things you have to do just to match the competition. While there is some truth to that perception, it turns out that gaining an environmental certification has the unexpected benefit of encouraging guests to conserve resources. This seems to be part of the increased focus on conservation that comes with meeting certification requirements. A new study from Cornell provides a strong indication of the connection between “eco-certification” and hotel efficiency. What’s unusual about this study is that it analyzes the hotel’s own operational efficiency separately from guests’ activities. Surprisingly, it turns out that the two measurements sometimes go in different directions.
Rather than focus on just one of the many eco-certification programs, researchers Jie Zhang, Rohit Verma, and Nitin Joglekar looked at hotels that had been awarded the “eco-friendly hotel” designation by Sabre (www.sabretravelnetwork.com), as indicated by the “Ecoleaf” symbol on Sabre’s Travelocity site. This indicator is given to hotels that have earned any of several certifications, including LEED, Energy Star, Ecotourism Australia, and Green Tourism Business Scheme. Hotels cannot certify themselves in this program, and all certifications are aligned with the Global Sustainable Tourism Criteria.
Although nearly 5,000 hotels worldwide have gained Sabre’s eco-hotel designation, the researchers restricted their study to about 2,900 hotels serving a range of market segments in 49 U.S. states. Using data from PKF Hospitality Research (www.pkfc.com/research), the study matched Ecoleaf hotels with similar non-certified hotels to compare the efficiency of resource use in hotel operations and by guests themselves. The study measured expenses for water, sewer, electricity, and supplies used by the hotels’ rooms, F&B, maintenance, and engineering departments.
The green effect: saving greenbacks
In general, the eco-certified hotels have higher resource efficiency for both hotel operations and customer activities, as compared to hotels that are not certified. This is indicated by lower resource expenses both for operations activities and customer-driven factors. Although the studies did not have a goal of explaining the reason for this effect, we can guess that the certified hotels have tightened their scrutiny of all resource use, thereby cutting operating costs. With regard to guest expenses, either guests are themselves more careful in a hotel that is clearly “green,” or such hotels attract a type of guest who naturally attempts to conserve resources.
Things get complicated, however, when you start looking at hotels in different market segments. For one thing, the efficiency effect changes according to a hotel’s market segment. As we move up the chain scale (based on ADR), the guest expenses go down, while the operational expenses go up. At the same time, earning a certification has a much greater effect on operational savings for upscale hotels than for budget hotels, probably because upscale hotels have greater expenses in the first place. Likewise, the larger the hotel, the lower the operations efficiency (but the greater chance for savings).
One other factor that this study acknowledges but cannot take into account is that the various certifications have widely different criteria. This means that two hotels that have both qualified for some type of eco-certificate may be focusing on two entirely different sets of resource categories (and both would be listed by Sabre and Travelocity). Even given this “noise,” the fact remains that customers do notice when a hotel has been certified. Plus, when they are staying in an eco-certified hotel their resource usage seems to be restrained, in the spirit of that certification. Along that line, managers will want to be as transparent as possible about their sustainability actions, so that guests will notice and respond appropriately, both in terms of their own resource use and in terms of their patronage of the hotel.
What was your first job?
Picking green beans at the Cornell Agricultural Experiment Station
Who inspires you?
Anyone who has persevered in the face of adversity
What are your hobbies?
Bicycle riding and running
What technology excites you?
We live in an age of miracles: we can electronically communicate, look up information, and so forth in a heartbeat. Medical advances are also quite impressive of late.
Words of Wisdom:
Whether you’re speaking of publishing or hospitality the answer
is the same, don’t let technology get in the way of being a mensch.
What is one other field that you would like to try?
Who would you invite to lunch?
Mark Twain & Theodore Roosevelt
Some Like It Hot
Favorite vacation spot:
New York Adirondacks, but I love the Grand Canyon, Crater Lake, Acadia, and the Oregon Coast.
Glenn Withiam is the director of publications for the Cornell Center for Hospitality Research. To download complimentary copies of any of the research reports from the Center for Hospitality Research, visit www.hotel
Raise your hand if you've ever tried to connect to a neighbor's WiFi network in an emergency (or even by accident). If we were in the same place, about a third of the room would have their hands up, according to a survey conducted by wireless industry group Wi-Fi Alliance. This is despite the fact that many of you know to keep your own networks locked down—so much so that even giving out your password to friends feels like a risky venture.
Wi-Fi Alliance interviewed 1,054 Americans over the age of 18 about their WiFi practices during the month of December 2010 and found that 32 percent tried to connect to a WiFi network that wasn't theirs. That's up from just 18 percent in December of 2008, showing that in just two years, the number of people trolling for open WiFi networks almost doubled. This is undoubtedly thanks to the growing popularity of laptops and mobile devices that can connect to WiFi—now, you can connect to other people's networks even when you're out and about, and not just at home.
When managing their own networks though, 40 percent of survey respondents said that they would be more likely to trust someone with a key to their homes than the password to their WiFi access points. Even crazier: more than a quarter said that sharing their WiFi password felt more personal than sharing a toothbrush. (Don't get me wrong, I don't like to give out my WiFi password either. But no one shares my toothbrush—ever.)
Wi-Fi Alliance seems to think that these two behaviors—trying to get onto other people's networks, but not letting others onto their own networks—are contradictory, likely because joining someone else's network increases the possibility of exposing your own surfing habits to strangers. Why would you so fiercely protect your own network if you're going to be joining others willy-nilly?
"Most consumers know that leaving their WiFi network open is not a good thing, but the reality is that many have not taken the steps to protect themselves," Wi-Fi Alliance marketing director Kelly Davis-Felner said in a statement. "Most public hotspots leave security protections turned off, so while connecting to a public WiFi hotspot is great for general internet surfing, users should not transmit sensitive data, such as bank account login information."
When using a device that might connect to a WiFi network, Wi-Fi Alliance says to turn off the device's ability to auto-connect so that you always make a conscious decision to join a network that you're familiar with. And when managing your own network, the group advises WiFi users to implement WPA2 protections on their own networks and use strong passwords—at least eight characters with a mixture of upper and lower case letters, numbers, and symbols. No dictionary words!
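The group's password advice can be sketched as a simple checker. This is a hypothetical helper following the guidance above, not a Wi-Fi Alliance tool, and passing it is no guarantee of a strong passphrase (length and unpredictability matter more than character classes):

```python
import re

def meets_wifi_guideline(password):
    """Check the rule of thumb above: at least 8 characters with a mix
    of upper- and lower-case letters, digits, and symbols.  (This does
    not catch dictionary words, which the advice also warns against.)"""
    checks = [
        len(password) >= 8,
        re.search(r"[a-z]", password),      # lower case
        re.search(r"[A-Z]", password),      # upper case
        re.search(r"[0-9]", password),      # digit
        re.search(r"[^a-zA-Z0-9]", password),  # symbol
    ]
    return all(bool(c) for c in checks)

print(meets_wifi_guideline("password1"))   # False -- no upper case or symbol
print(meets_wifi_guideline("C0ffee!Run"))  # True
```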
"Much like the seatbelts in your car, [WiFi security] won't protect you unless you use it," Felner warned.
If you want to stay extra-obscure, you could also change the settings on your router so that it doesn't even broadcast the SSID to other users. Bonus security points go to the people who require each device's MAC address to be approved on the network before they can connect, but those people lose friend points for making things so tedious. Share the love, man! | <urn:uuid:0ff3c48f-38fa-46fe-b26e-c54541ba8fa5> | CC-MAIN-2017-09 | https://arstechnica.com/gadgets/2011/02/wifi-users-guard-their-own-networks-happy-to-use-others/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00149-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.96834 | 673 | 2.640625 | 3 |
The Value of Cooperation
As computers get faster, this gain in capacity (which technologists call "processing gain") grows. Like a better antenna ("antenna gain"), processing gain means we can have more capacity in a particular radio architecture than otherwise would be possible.

Technologists are increasingly discussing a related kind of gain called "cooperation gain." Again, think about a party. If I need to tell you that it's time to leave, I could choose to shout that message across the room. Shouting, however, is rude. So instead, imagine I choose to whisper my message to the person standing next to me, and he whispered it to the next person, and she to the next person, and so on. This series of whispers could get my message across the room without forcing me to shout.

Radio technology thus tries to find the mix of antenna gain, processing gain, and cooperation gain that maximizes total capacity for the system. And as Wi-Fi and meshed wireless networks are increasingly demonstrating, that increase in capacity can be realized without any centralized controller deciding who gets to say what when, or without allocating exclusive rights to "spectrum." Instead, with the proper protocols and an etiquette between different protocols, radios can simply "share" spectrum without central coordination.

But won't such "sharing" lead to congestion? Won't this "commons" lead to a "tragedy of the commons"? The answer, surprisingly, is "not necessarily." No doubt a bad architecture will quickly bust. But many believe that there are good architectures for spectrum sharing that would have the property of increasing spectrum capacity as the number of users increases. It's too early to know whether such systems will scale, but it's not too early to see that their ability to exist depends upon lots of spectrum remaining free for experimentation. Wi-Fi is the first successful example of these spectrum-sharing technologies.
Within thin slices of the spectrum bands, the government has permitted "unlicensed" spectrum use. The 802.11 family of protocols has jumped on these slivers to deliver surprisingly robust data services. These protocols rely on a hobbled version of spread-spectrum technology. Even in this crude implementation, the technology is exploding like wildfire. And this is just the beginning. If the Federal Communications Commission frees more spectrum for such experimentation, there is no end to wireless technology's potential. Especially at a time when broadband competition has all but stalled, using the commons of a spectrum to invite new competitors is a strategy that looks increasingly appealing to policy makers.

For more information, see the papers collected at cyberlaw.stanford.edu/spectrum. For a commercial implementation of "meshed" technologies, see www.meshednetworks.com.

Lawrence Lessig is a regular columnist for CIO Insight Magazine, and a professor of law at Stanford University Law School. He is the author of The Future of Ideas: The Fate of the Commons in a Connected World and Code and Other Laws of Cyberspace.
Radios can achieve a similar gain from cooperation. Rather than blasting a message at high power so that you can hear it at the other end of the city, I could instead whisper the message to a receiver near me, and it could whisper the message to the next receiver, and so on. Through their cooperation, these nodes operating in a "mesh" could reduce the power required by any particular transmission. And if the power of any particular transmission is reduced, then the total capacity again would increase. | <urn:uuid:52219899-f8e0-45b6-806b-42dc1137bdee> | CC-MAIN-2017-09 | http://www.eweek.com/c/a/IT-Infrastructure/Wireless-Spectrum-Defining-the-Commons-in-Cyberspace/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00569-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929013 | 714 | 2.5625 | 3 |
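The power saving from relaying can be made concrete with an idealized path-loss model. Assuming required transmit power grows with the square of distance (the free-space case; real channels are often steeper, which makes relaying look even better), splitting one long hop into N short hops cuts total transmit power by a factor of N:

```python
def transmit_power(distance, alpha=2.0):
    """Required transmit power, proportional to distance**alpha.
    alpha=2 is idealized free-space propagation."""
    return distance ** alpha

def total_power(distance, hops, alpha=2.0):
    """Total power to relay a message across `distance` in `hops`
    equal-length segments -- the 'whisper chain' from the article."""
    return hops * transmit_power(distance / hops, alpha)

d = 1000.0                   # one long "shout" across 1000 m
print(total_power(d, 1))     # 1000000.0 -- single high-power transmission
print(total_power(d, 10))    # 100000.0  -- ten short hops, 10x less total power
```

This is only a sketch of the cooperation-gain intuition: it ignores protocol overhead, interference between hops, and the cost of keeping relay nodes powered.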
Google has partnered with scientists at the University of California, Santa Barbara to build new processors for use in quantum computing systems.
Quantum computers aim to use properties of subatomic particles to perform calculations millions of times faster than conventional computers, although there are lots of obstacles to overcome for that to happen.
Google’s Quantum Artificial Intelligence team will work with researchers at UC Santa Barbara to build new quantum information processors to help make quantum computers a reality.
Today’s computers use electrical transistors to represent the ones and zeros of binary computing, but quantum computers will use qubits, or quantum bits, which rely on laws of quantum mechanics to achieve various states.
And while a transistor can only be in one of two states—on or off, representing a 1 or a 0—quantum bits can hold multiple states simultaneously, meaning they can be a 1 or a 0, or both at the same time. That could allow them to perform multiple calculations in parallel, vastly increasing their processing power.
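The "multiple states simultaneously" idea can be illustrated with a toy state-vector model. The function and data-structure names below are illustrative only, and this is not how superconducting hardware is actually programmed; it just shows that n qubits span 2^n amplitudes at once:

```python
import itertools
import math

def hadamard_all(n):
    """Equal superposition over all 2**n bit patterns of an n-qubit
    register: every classical state carries the same amplitude."""
    amp = 1 / math.sqrt(2 ** n)
    return {bits: amp for bits in itertools.product("01", repeat=n)}

state = hadamard_all(3)  # 3 qubits -> 8 basis states held at once
print(len(state))                                    # 8
print(round(sum(a * a for a in state.values()), 6))  # 1.0 (probabilities sum to one)
```

Adding one qubit doubles the number of simultaneous amplitudes, which is the intuition behind the "multiple calculations in parallel" claim; it is also why simulating large quantum registers on classical machines becomes intractable.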
Qubits are also highly unstable, however, and can alter their state at the tiniest change in temperature or magnetism. Physicists at UC Santa Barbara are at the forefront of trying to solve those problems, so it's easy to see why Google wants to work with them.
The two groups will work on processors based on superconducting electronics, Google said in a blog post. That involves cooling materials to temperatures near absolute zero, where electrical resistance and magnetic fields are minimized.
Microsoft is also researching quantum computing and published a paper and a video recently that explain in plain English how it works. | <urn:uuid:e23ccc96-25ef-4cc3-b61f-dc9267e1ccfd> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2601382/components-processors/google-to-build-quantum-computing-processors.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00569-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929262 | 330 | 3.8125 | 4 |
A preliminary study from the University of Michigan Transportation Institute took a look at autonomous vehicle crashes reported by Google, Delphi, and Audi, all of which have licenses to operate self-driving vehicles in a number of states. The researchers, Brandon Schoettle and Michael Sivak, compared that data to adjusted statistics pertaining to conventional vehicles. The study showed that self-driving vehicles were actually involved in more accidents on average, per million miles traveled, than their conventional forebears.
However, the researchers cautioned that those numbers may not tell the whole story. The data was pulled from 11 crashes among three makers of self-driving cars whose fleets only cumulatively drove 1.2 million miles on public roads. The data for conventional cars, however, was derived from a sample of the reported crashes that occurred over 3 trillion miles of annual driving. (The researchers did, however, adjust the data to account for unreported crashes.)
Because of the statistical uncertainty that comes with comparing a census of autonomous vehicle crashes with a sample of conventional vehicle crashes, the researchers couldn’t say for sure that self driving vehicles are more likely to be involved in crashes than conventional cars.
What the analysis did find, however, was that every crash an autonomous vehicle was in was caused by a driver of a conventional car. In addition, 73 percent of the crashes involving an autonomous vehicle happened when the car was going 5 mph or less, or while the car was stopped.
While 15.8 percent of crashes involving conventional cars involved a fixed object and 14 percent of crashes involving conventional cars involved a non-fixed object (like a pedestrian jay-walking), autonomous cars only ever collided with another vehicle. In addition, 3.6 percent of conventional vehicle collisions were head-on crashes, but autonomous vehicles have only thus far suffered a rear-end collision, a side-swipe, or an angled collision.
Less than 1 percent of conventional vehicle crashes involved a fatal injury, but no autonomous vehicles have recorded a fatal injury. 28 percent of conventional vehicle crashes resulted in a non-fatal injury; only 18.2 percent of autonomous vehicle crashes resulted in the same kind of injury. By far, the largest result of autonomous vehicle crashes was property damage, at 81.8 percent.
Overall, the preliminary data showed that self-driving vehicles resulted in fewer injuries per crash than conventional vehicles, but only when the data was adjusted for unreported crashes in conventional vehicles.
Still, the preliminary study shows that self-driving cars are not a panacea for vehicle collisions, at least while traffic on the roads is mixed between autonomous and self-driving cars. | <urn:uuid:8aa00a82-1c72-4b5c-b403-fb11474e861e> | CC-MAIN-2017-09 | https://arstechnica.com/cars/2015/10/study-of-self-driving-cars-shows-other-drivers-are-good-at-hitting-them/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00569-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.970797 | 535 | 2.9375 | 3 |
As today’s cyber attacks continue to increase in frequency and complexity, organizations must respond with new tactics to protect against these attacks. All techniques are not created equal, however, and today's cyber security expert has to understand what tools can and can't help in their cyber security efforts.
This paper explains how sandboxing works, the failings of most sandbox-based approaches, and what organizations should look for in VM-based analysis of cyber threats to improve their security approach.
In this paper:
As shocking as the report may have seemed to the public, it only confirmed what Australia’s security experts have long known. Cyber attacks are growing more frequent. They are growing more effective. And they are growing more serious...Many of these incidents involve advanced attacks. Sponsored by foreign governments and well-organized cybercriminals, these attacks are easily slipping past standard security tools. Anti-virus (AV) software, traditional and next-generation firewalls, intrusion-prevention systems (IPS), and other tools are useless against them.
Download the White Paper | <urn:uuid:f2afef1e-255a-492e-b57f-ff469fbfde3f> | CC-MAIN-2017-09 | https://www2.fireeye.com/thinking-outside-the-sandbox-wp.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00569-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.963656 | 219 | 2.5625 | 3 |
Changing the Friendly Name of an SSL Certificate
SSL Certificates are not required to have friendly names and are not part of the SSL Certificate. However, in environments that require multiple SSL Certificates, the lack of friendly names or poorly used friendly names can make managing your SSL Certificate more difficult.
If you are using multiple SSL Certificates in your environment, good friendly names can help you easily identify each certificate at a glance. You can use friendly names to remind you when a certificate expires, to provide information about who issued the certificate, and to distinguish multiple certificates with the same domain name.
On IIS and Exchange servers, when assigning your SSL Certificates to a website or a domain, friendly names are extremely helpful because certificates are displayed by their friendly names.
How to Edit an SSL Certificate's Friendly Name with the DigiCert Utility
On the Windows server where your SSL Certificates are located, download and save the DigiCert® Certificate Utility for Windows executable (DigiCertUtil.exe).
Run the DigiCert® Certificate Utility for Windows (double-click DigiCertUtil).
In the DigiCert® Certificate Utility for Windows, click SSL (gold lock), right-click on the SSL Certificate whose friendly name you want to change, and then click Edit friendly name.
In the Friendly Name box, enter a unique friendly name for the certificate to help you distinguish this certificate from the other certificates on your server.
Example Naming Conventions:
Domain Name: yourDomain-digicert-(expiration.date) Company Name: yourCompany-digicert-(expiration.date) Certificate Type: wildcard-digicert-(expiration.date)
Note: If you are using a Wildcard certificate with multiple websites, you may want to begin your friendly name with a wildcard character * (e.g. *your.domain-digicert-(expiration.date)). This naming convention makes it easier to identify the wildcard certificate so that you can assign it to multiple websites.
When you are finished, click Save. | <urn:uuid:0f5c4841-9370-4cd5-8712-c6dca768a180> | CC-MAIN-2017-09 | https://www.digicert.com/util/utility-edit-friendly-name.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00445-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.846337 | 431 | 2.75 | 3 |
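If you manage many certificates, the naming conventions above can be generated consistently with a small helper. The function and argument names here are illustrative assumptions, not part of the DigiCert utility:

```python
from datetime import date

def friendly_name(kind, label, expires, issuer="digicert"):
    """Build a friendly name following the conventions above, e.g.
    'yourDomain-digicert-(2025.06.30)'.  Wildcard names get a leading
    '*' so they are easy to spot when assigning them to multiple sites."""
    prefix = "*" if kind == "wildcard" else ""
    return f"{prefix}{label}-{issuer}-({expires:%Y.%m.%d})"

print(friendly_name("domain", "example.com", date(2025, 6, 30)))
# example.com-digicert-(2025.06.30)
print(friendly_name("wildcard", "example.com", date(2025, 6, 30)))
# *example.com-digicert-(2025.06.30)
```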
Data Obfuscation and Data Value – Can They Coexist?
Data is growing exponentially. New technologies are at the root of the growth. With the advent of big data and machine data, enterprises have amassed amounts of data never before seen. Consider the example of Telecommunications companies. Telco has always collected large volumes of call data and customer data. However, the advent of 4G services, combined with the explosion of the mobile internet, has created data volume Telco has never seen before.
In response to the growth, organizations seek new ways to unlock the value of their data. Traditionally, data has been analyzed for a few key reasons. First, data was analyzed in order to identify ways to improve operational efficiency. Secondly, data was analyzed to identify opportunities to increase revenue.
As data expands, companies have found new uses for these growing data sets. Of late, organizations have started providing data to partners, who then sell the ‘intelligence’ they glean from within the data. Consider a coffee shop owner whose store doesn’t open until 8 AM. This owner would be interested in learning how many target customers (Perhaps people aged 25 to 45) walk past the closed shop between 6 AM and 8 AM. If this number is high enough, it may make sense to open the store earlier.
As much as organizations prioritize the value of data, customers prioritize the privacy of data. If an organization loses a customer’s data, it results in a several costs to the organization. These costs include:
- Damage to the company’s reputation
- A reduction of customer trust
- Financial costs associated with the investigation of the loss
- Possible governmental fines
- Possible restitution costs
To guard against these risks, data that organizations provide to their partners must be obfuscated. This protects customer privacy. However, data that has been obfuscated is often of a lower value to the partner. For example, if the date of birth of those passing the coffee shop has been obfuscated, the store owner may not be able to determine if those passing by are potential customers. When data is obfuscated without consideration of the analysis that needs to be done, analysis results may not be correct.
There is a way to provide data privacy for the customer while simultaneously monetizing enterprise data. To do so, organizations must allow trusted partners to define masking generalizations. With sufficient data masking governance, it is indeed possible for data obfuscation and data value to coexist.
Currently, there is a great deal of research around ensuring that obfuscated data is both protected and useful. Techniques and algorithms like "k-anonymity" and "l-diversity" ensure that sensitive data is safe and secure. However, these techniques have not yet become mainstream. Once they do, the value of big data will be unlocked.
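A minimal sketch of the k-anonymity idea mentioned above: generalize a quasi-identifier (here, exact age into ten-year bands, which still supports the coffee-shop owner's 25-to-45 question) and then check that every combination of quasi-identifier values occurs at least k times. The record fields and band width are illustrative assumptions, not any vendor's implementation:

```python
from collections import Counter

def generalize_age(age, band=10):
    """Replace an exact age with a ten-year band, e.g. 34 -> '30-39'."""
    lo = (age // band) * band
    return f"{lo}-{lo + band - 1}"

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values appears at
    least k times -- the basic k-anonymity property."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values()) >= k

people = [{"age": a, "zip": "10001"} for a in (23, 27, 31, 34, 42, 48)]
masked = [{"age_band": generalize_age(p["age"]), "zip": p["zip"]} for p in people]

print(is_k_anonymous(people, ["age", "zip"], 2))       # False: every exact age is unique
print(is_k_anonymous(masked, ["age_band", "zip"], 2))  # True: each band holds two people
```

The trade-off the article describes is visible here: banding protects individuals but discards precision, so the generalization must be chosen with the partner's intended analysis in mind.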
Cost accounting refers to a set of activities that includes the collection and analysis of data related to an organization's production or service delivery processes. The purpose of these activities is to identify the various fixed and variable costs, and to determine which costs can be minimized or removed to achieve better profitability.
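As a toy illustration of the fixed/variable split, a break-even calculation shows the kind of question a cost accounting system helps answer. The figures below are invented for the example:

```python
def break_even_units(fixed_cost, price, variable_cost):
    """Units that must be sold before revenue covers fixed costs plus
    per-unit variable costs."""
    margin = price - variable_cost          # contribution per unit
    if margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    return fixed_cost / margin

# Hypothetical service line: $50,000 in fixed costs, a $120 charge,
# and $70 in variable cost per unit delivered.
print(break_even_units(50_000, 120, 70))   # 1000.0 units to break even
```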
There are several benefits of a cost accounting system. First of all, a good system minimizes the time and efforts that are usually employed in the cost accounting process. It also brings consistency in the operations and ensures that the information captured is also stored so that it can be referred to in future.
Cost accounting also makes it easier to track the hidden costs that go unrecorded, and thereby unnoticed. Over a period of time, these hidden costs can cause substantial loss to an organization. The system brings into notice these hidden costs and allows the management to take a decision, accordingly.
Cost accounting is different from financial accounting. While the main purpose of financial accounting is to present the financial position of an organization, cost accounting brings the costs involved in the production/service delivery to the management. Results obtained from the former accounting system can be made available to the general public and stakeholders. Results obtained from the latter system, however, are meant for internal use and by specific individuals or departments.
In a healthcare facility, the importance of an efficient cost accounting system cannot be undermined. The healthcare services market needs to be timely and consistent, in which cost accounting plays a significant role. By eliminating the unwanted expenses and processes, healthcare facilities can drastically bring down the healthcare costs.
In the Asian region, the cost accounting system market is witnessing growth on account of improved care quality and clinical outcomes, high returns on investment on the systems implemented in a facility, and an increasing need to integrate the healthcare systems.
This market is segmented on the basis of companies, components, deployments, end-users, and macro indicators.
The Asian cost accounting system market report is based on the information collected through extensive primary and secondary research. Data and facts have been collected and presented in a logical manner to illustrate the current and future trends of this market. The report analyzes the market shares of leading companies and the strategies being implemented by them to enhance their market share and presence. These strategies include mergers & acquisitions, partnerships, new product launches, capacity expansions, investments in R&D, and others.
Please fill in the form below to receive a free copy of the Summary of this Report
Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement
North American Non-Clinical Information System Market
North America is the largest market for non-clinical information systems globally, and is expected to grow at a CAGR of 8.1% from 2013 to 2018, to reach a value of $8,905.5 million in 2018. This market is segmented into sub-segments, components, deployments, end users, applications, and geographies.
European Non-Clinical Information Systems Market
The European non-clinical information systems (NCIS) market has been segmented by types, deployment, components, end users, applications, and geographies. Globally, this is the second-largest NCIS market, and is expected to grow at a CAGR of 6.3% from 2014 to 2019.
Asian Non-Clinical Information Systems Market
Asia is the fastest-growing market for non-clinical information systems, and was valued at $1,336.4 million in 2013. It is expected to grow at a CAGR of 7.2%, from 2013 to 2018, to reach a value of $1,892.2 million in 2018. This market can be segmented by companies, deployments, components, end users, and macro indicators. | <urn:uuid:a6db7731-1663-4b00-9fdf-bd16017d4eb4> | CC-MAIN-2017-09 | http://www.micromarketmonitor.com/market/asia-cost-accounting-system-5637520024.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00321-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945152 | 777 | 2.859375 | 3 |
Liquid-crystal displays (familiar to most as LCDs) rely on the light modulating properties of liquid crystals to bring images to life on a wide variety of screens. From computer monitors to televisions to instrumental panels and signage, LCDs are a pervasive element of modern life.
LCDs employ high-tech films, which must be both thin and robust. The problem is that these films degrade over time as liquid-crystal “mesogens,” which make up the films, redistribute to areas of lower energy in a process called dewetting. Eventually the film ruptures.
Recently a team of scientists at Oak Ridge National Laboratory put the lab’s Titan supercomputer – packed with 18,688 CPUs and an equal number of GPUs – to work to better understand the mechanics of this process, as reported on the OLCF website.
Some of the important uses of high-tech films include protecting pills from dissolving too early, keeping metals from corroding, and reducing friction on hard drives. When the films are manufactured using liquid crystals – macromolecules with both rigid and flexible elements – the innovation potential goes through the roof.
The rigid segments support interaction with electric currents, magnetic fields, ambient light, temperature, and more. This has led to the material’s wide prevalence in 21st-century flat-panel displays. Researchers are actively looking to expand the use of liquid-crystal thin films for nanoscale coatings, optical and photovoltaic devices, biosensors, and other innovative applications, but the tendency toward rupturing has stymied progress. By studying the dewetting process more closely, scientists are paving the way for a better generation of films.
For several decades, the prevailing theory held that one of two mechanisms could account for dewetting, and that these two mechanisms were mutually exclusive. Then, about 10 years ago, experiments showed that the two mechanisms coexist in many cases, explains postdoctoral fellow Trung Nguyen of Oak Ridge National Laboratory (ORNL). Nguyen, who was co-principal investigator on the project with W. Michael Brown (then at ORNL, but now working at Intel), ran large-scale molecular dynamics simulations on ORNL’s Titan supercomputer detailing the beginning stages of ruptures forming in thin films on a solid substrate. The work appears as the cover story in the March 21, 2014, print edition of Nanoscale, a journal of the Royal Society of Chemistry.
“This study examined a somewhat controversial argument about the mechanism of the dewetting in the thin films,” stated Nguyen.
The two mechanisms thought to be responsible for the dewetting are thermal nucleation, a heat-mediated cause, and spinodal dewetting, a movement-induced cause. Theoretical models posited decades ago asserted that one or the other would be responsible for dewetting thin film, depending on its initial thickness. The simulation validated that the two mechanisms do coexist, although one does predominate depending on the thickness of the film – with thermal nucleation being more prominent in thicker films and spinodal dewetting more common in thinner films.
The impetus for the ruptures is the liquid-crystal molecules striving to recover lower-energy states. While still in the research stages, it is thought that this finding may boost innovation in using thin films for applications such as energy production, biochemical detection, and mechanical lubrication. The research was facilitated by a 2013 Titan Early Science program allocation of supercomputing time at the Oak Ridge Leadership Computing Facility. Nguyen’s team went through ORNL’s Center for Accelerated Applications Readiness (CAAR) program, which gives early access to cutting-edge resources for codes that can take advantage of graphics processing units (GPUs) at scale. Under the CAAR program, Brown reworked the LAMMPS molecular dynamics code to leverage a large number of GPUs.
Titan, the most powerful US supercomputer and the world’s second fastest, has a max theoretical computing speed of 27 petaflops and a LINPACK measured at 17.59 petaflops. The Titan Cray XK7 system is also the first major supercomputing system to utilize a hybrid architecture using both conventional 16-core AMD Opteron CPUs plus NVIDIA Tesla K20 GPU parts.
The researchers utilized Titan to simulate 26 million mesogens on a substrate micrometers in length and width, employing 18 million core hours and harnessing up to 4,900 of Titan’s nodes. The study lasted three months, but would have taken about two years without the acceleration of Titan’s GPUs.
“We’re using LAMMPS with GPU acceleration so that the speedup will be seven times relative to a comparable CPU-only architecture – for example, the Cray XE6. If someone wants to rerun the simulations without a GPU, they have to be seven times slower,” Nguyen explained. “The dewetting problems are excellent candidates to use Titan for because we need to use big systems to capture the complexity of the dewetting origin of liquid-crystal thin films, both microscopically and macroscopically.”
This is the first study to simulate liquid-crystal thin films at experimental length- and timescales and also the first to relate the dewetting process to the molecular-level driving force, which causes the molecules to break up.
The Nanoscale paper was also authored by postdoctoral fellow Jan-Michael Carrillo, who worked on the simulation model, and computational scientist Michael Matheson, who developed the software for the analysis and visualization work. | <urn:uuid:38757b16-fb3e-42c7-a781-8e530e7bebcc> | CC-MAIN-2017-09 | https://www.hpcwire.com/2014/04/14/titan-captures-liquid-crystal-film-complexity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00497-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.924012 | 1,174 | 3.734375 | 4 |
NASA engineers are still making efforts to get the crippled Kepler Space Telescope working again, but they're also seeking alternative plans in case that doesn't happen.
Engineers from the space agency have been working for nearly three months to fix the telescope, which was launched in 2009 to search for Earth-like planets. The telescope has been spinning out of control because of trouble with two of the four wheels that control its orientation in space.
NASA is no longer able to manipulate the telescope's positioning, and ground engineers have had a hard time communicating with the spacecraft since the communications link comes and goes as it spins.
Tests performed Friday showed that both of the problematic wheels are turning on command, according to Charles Sobeck, engineer and deputy project manager of NASA's Kepler mission. However, there now is friction when the two problematic wheels turn. If the friction doesn't remain at a constant level, the wheels will be unusable, Sobeck told Computerworld.
To hold the spacecraft at a steady state, the wheels need to move at a constant speed. If the friction level fluctuates, the torque that engineers calculate to keep the wheels spinning at a constant speed will be wrong. Sobeck said NASA cannot send up new software that would adjust the torque on the fly to the rate of friction.
The next test, scheduled for today, will check to see if the friction is constant or if it fluctuates.
Sobeck said that having just one of the two wheels perform well may give the telescope enough accuracy to deliver the high-precision photometry necessary for exoplanet detection.
He added that he has no idea of the odds of success. "All we can do is try and see," he said.
NASA, hedging its bets, issued a call for white papers on potential alternative scientific projects that a limited Kepler could work on.
"The purpose of this call for white papers is to solicit community input for alternate science investigations that may be performed using Kepler and are consistent with its probable two-wheel performance," reads the request. "If one of the two reaction wheels cannot be returned to operation, it is unlikely that the spacecraft will resume the nominal Kepler exoplanet and astrophysics mission."
Sobeck said he hopes the scientific community will come up with some intriguing scientific goals for Kepler.
"We're looking for ideas for science you might do with the Kepler mission that you might not be able to easily do from the ground," he said. "Maybe it could be used to find near-Earth asteroids, giving us a different perspective than we get from the ground."
Submissions are due to NASA by Sept. 3. The space agency plans to begin new science programs with Kepler by next summer.
The space telescope is considered one of NASA's great success stories. After the telescope wrapped up its primary three-and-a-half-year mission and entered a second phase of research last November, NASA scientists had hoped it would continue working for another four years.
Since it began work on May 12, 2009, the telescope has searched more than 100,000 stars for signs of Earth-like planets in the habitable zone, an area that may have water and could potentially support life. The telescope has so far confirmed more than 100 such planets.
Even if Kepler cannot continue its search for Earth-like planets in the universe, it already has sent back enough data to keep scientists busy, Sobeck said.
"It's rewritten the textbooks on exoplanets," he said. "Even if Kepler never sends down any more data, you'll still see science coming out from this over the next several years."
This article, NASA seeks new science projects for crippled Kepler, was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld.
A new material being researched could make shape-shifting robots real, but let's hope they aren't as violent as the Transformers from the movies.
Researchers have developed a flexible material that can stretch when heated and will allow robots to change shapes. The material, which can also be rigid, is a mix of elastomer foam with a soft metal alloy.
With the material, robots could become more versatile. Most robots are rigid, much like human skeletons, but by morphing into new shapes, they could be used for new tasks.
A more extreme scenario presented by Cornell University researchers envisioned flying drones that alter their wing shapes and transform into submarines.
Researchers say the shape-shifting technology could be part of a new field called soft robotics.
Researchers at Cornell University labs are still experimenting with the material, and there's no word on when it'll be ready for the real world. But the research plants the seed for others to consider the possibilities of flexible robots.
The material could have more uses beyond robotics, such as the development of prosthetics and other medical equipment, Rob Shepherd, a researcher and engineering professor at Cornell University, said in a video describing the material.
The material by design is stiff and rigid. It deforms when heated above 144 degrees Fahrenheit with a hot air gun, and it returns to its original shape and regains its rigidity after it cools. The material can stretch up to 600 percent.
To create the material, researchers use 3D printing to make a foam mold and dip it into molten metal. The metal alloy solidifies when the structure is placed in a vacuum and the air is pulled from the foam's pores. That process results in the flexible, soft material.
The research is funded by the U.S. Air Force, the National Science Foundation, and the Alfred P. Sloan Foundation. A paper on the technology will be published in an upcoming issue of Advanced Materials.
Is Cloud Based Anti-Virus an Effective Means of Security?
Anti-virus systems are now a de facto part of modern computing. In a world where every webpage, email and downloaded file has the potential to contain malicious code or trojan horses there is a need to be more vigilant than ever about the security of your computer and your data.
Traditionally, anti-virus programs have been installed directly on a computer’s hard drive – running in the background to alert users to phishing attacks, dangerous files and other online threats. However, as cloud computing continues to expand, we are now seeing the advent of alternative methods of protection – specifically, cloud-based anti-virus programs.
How does it work?
Cloud-based anti-virus typically uses lightweight software that runs on your computer while offloading the majority of data analysis to the provider’s infrastructure. This has the effect of using less of your system’s resources and reducing the likelihood of the anti-virus agent itself being compromised through a design vulnerability.
What are its advantages?
The benefit that will most appeal to users is the always up-to-date nature of a cloud-based system. There is no requirement to ensure that virus definitions are updated before starting a full system scan because the cloud always has the most recent data, and the anti-virus will often scan suspicious files and webpages by using multiple antivirus engines ensuring that nothing slips through the net.
Another major advantage is speed – the lack of a lengthy installation process and removal of resource-heavy software means not only can a computer be scanned much faster, but the computer itself should run more smoothly than with an ‘always-on’ system running in the background.
What are its limitations?
Naturally, the biggest drawback of using a cloud-based solution is that you need to be connected to the internet to take full advantage of it. If you are not online the software cannot query the anti-virus cloud – though this can be addressed by the program storing a local cache of the most relevant queries.
Secondly, network bandwidth limitations will frequently prevent some cloud-based anti-virus software from sending the entirety of files that need to be scanned. The workaround utilised by providers is that the software will submit information about the file in question rather than the file itself – though this method could potentially be exploited by skilled hackers and cyber-criminals.
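That metadata-based approach can be illustrated with a short sketch. This is not any vendor's actual protocol — the in-memory blacklist below stands in for a network query to the provider's reputation service — but it shows why so little bandwidth is needed: only a fixed-size digest ever leaves the machine.

```python
import hashlib

# Stand-in for the provider's cloud reputation database; in a real
# product this lookup would be a network query, not a local set.
CLOUD_BLACKLIST = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def fingerprint(data: bytes) -> str:
    # Only this fixed-size digest is sent to the cloud,
    # never the file contents themselves.
    return hashlib.sha256(data).hexdigest()

def cloud_scan(data: bytes) -> str:
    return "infected" if fingerprint(data) in CLOUD_BLACKLIST else "clean"

print(cloud_scan(b"malicious payload"))  # infected
print(cloud_scan(b"holiday photos"))     # clean
```

The trade-off mentioned above applies here too: a digest says nothing about a brand-new file the cloud has never seen, which is one reason providers combine hash lookups with other analysis.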
Can it replace traditional anti-virus?
There is no doubt that cloud-based anti-virus programs could be used to replace local software – but it isn’t necessarily a recommended strategy. Either way, they certainly give you a ‘second opinion’ by running in conjunction with a more traditional program.
Traditional anti-virus programs use so much of a system’s resources that it is never recommended to install more than one at a time – especially as two local anti-virus programs running simultaneously could also cause system conflicts – but an online anti-virus tool that performs a quick scan of your system without running in the background shouldn’t cause any problems.
By Daniel Price | <urn:uuid:454ae2b2-424c-4601-a23d-e2e3363d555d> | CC-MAIN-2017-09 | https://cloudtweaks.com/2014/05/cloud-based-anti-virus-effective-means-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00493-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94013 | 681 | 3.046875 | 3 |
But what about the jobs? There will actually be more, says Boeing.
Aircraft manufacturer Boeing is in the final phase of testing robots that will assemble its 777 airliner fuselages.
The method, part of Boeing's ongoing technology investment, is dubbed Fuselage Automated Upright Build (FAUB). It uses automated, guided robots that fasten the panels of the fuselage together, drilling and filling the approximately 60,000 fasteners that are today installed by hand.
Boeing said that the method offers benefits including improved employee safety, presumably because no human employees are actually involved in the process now.
The firm, which recently suffered a hack in which data about its military C-17 transport aircraft was stolen, said that more than half of all injuries on the 777 program have occurred during the phase of production that is being automated.
"This is the first time such technology will be used by Boeing to manufacture widebody commercial airplanes and the 777 program is leading the way," said Elizabeth Lund, vice president and general manager, 777 program.
"We’re excited to continue improving the production process here and we’re positioning ourselves to begin building 777X airplanes in the future."
Lund further told Aviation Week that the robots will actually create more job opportunities.
"Will you be able to build the fuselage with fewer people than today? Yes. But there is a lot of work to do at the Everett site, with the 777X coming in and other site changes."
Boeing said that by attaining higher build rates, it can create additional jobs.
Someday the number of scientists -- and I'm using the word loosely here -- who actually believe human activity has had no impact on global warming, and who might even believe that global warming is a myth, will dwindle down to one. And when it does, the professional climate-change deniers and their financial backers will still insist that there is fierce disagreement about global warming. Except there won't be, and there really isn't now. At least not among rational people.

The truth is that the vast majority of qualified climate scientists have concluded that humans have caused the Earth's temperatures to rise. Drafts of a report by the Intergovernmental Panel on Climate Change (IPCC) due out in September "say it is at least 95 percent likely that human activities - chiefly the burning of fossil fuels - are the main cause of warming since the 1950s," Reuters reports. Just 12 years ago, the same panel said there was a 66% chance that humans were causing global warming. Back in 1995, when there actually was a debate (but shouldn't have been), the number was 50%.

None of this will matter to the people who are paid to obstruct efforts to counteract climate change, or to the anti-regulation, Prison Planet crowd. But for the real scientists, the question now is not what causes global warming, but how to assess its impact on a local level. And that has the scientists stumped, according to Reuters:
Drew Shindell, a NASA climate scientist, said the relative lack of progress in regional predictions was the main disappointment of climate science since 2007.
"I talk to people in regional power planning. They ask: 'What's the temperature going to be in this region in the next 20-30 years, because that's where our power grid is?'" he said."We can't really tell. It's a shame," said Shindell.
Or as Reto Knutti, a professor at the Swiss Federal Institute of Technology in Zurich, responded when asked how global warming could affect nature, "You can't write an equation for a tree."
Hadoop is an open source software framework for storing and processing large volumes of distributed data. It provides a set of instructions that organizes and processes data on many servers rather than from a centralized management nexus.
Hadoop data systems are not limited in their scale. More hardware and clusters can be added to handle more load without reconfiguration or purchasing expensive software licenses.
For decades, organizations relied primarily on relational databases (RDBMS) in order to store and query their data. But relational databases are limited in the types of data they can store and can only scale so far before companies need to add more RDBMS licenses and dedicated hardware. Thus, there was no easy or cost-efficient way for companies to use the information stored in the vast majority of non-relational data, often referred to as “unstructured” data.
Thanks to greater digitization of business processes and an influx of new devices and machines that generate raw data, the volume of business data has grown precipitously, ushering in the era of “big data.” The Hadoop project provided a viable solution by making it possible and cost-effective to store and process an unlimited volume of data.
Hadoop is a collaborative open source project sponsored by the Apache Software Foundation. As such, it is not a product but instead provides the instructions for storing and processing distributed data; a variety of software manufacturers have used Apache Hadoop to create commercial products for managing big data. | <urn:uuid:abb5ca50-5236-486a-a699-90ddb8f196cb> | CC-MAIN-2017-09 | https://www.informatica.com/services-and-training/glossary-of-terms/hadoop-definition.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00533-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.914998 | 305 | 3.609375 | 4 |
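Hadoop's processing model, MapReduce, can be pictured with a toy word count. The sketch below runs in a single Python process purely for illustration — Hadoop itself is Java-based and would distribute the map and reduce phases across many servers — but the division of labor is the same: mappers emit key-value pairs independently, and reducers aggregate them per key.

```python
from itertools import groupby

def mapper(line):
    # Map phase: emit (word, 1) pairs; each input split can be
    # processed on a different node with no coordination.
    for word in line.split():
        yield word.lower(), 1

def reducer(pairs):
    # Reduce phase: after the framework groups the pairs by key,
    # sum the counts for each word.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

lines = ["Hadoop stores data", "Hadoop processes data"]
pairs = [kv for line in lines for kv in mapper(line)]
print(dict(reducer(pairs)))
# {'data': 2, 'hadoop': 2, 'processes': 1, 'stores': 1}
```

Because the map step needs no shared state, adding servers adds throughput — which is exactly why Hadoop clusters scale by adding hardware rather than reconfiguring software.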
OWASP, also known as the Open Web Application Security Project, just released the OWASP Top 10 for 2013. The OWASP Top 10 is a list of the most common web application vulnerabilities and flaws found in today's web applications. The list of security flaws is based on several datasets from different firms specializing in web application security and is aimed at helping businesses who own websites and web applications simplify the process of securing them.
The OWASP Top 10 list has been released once every three years since 2004. For more details about all the changes between the OWASP Top 10 of 2010 and 2013, refer to What is New and What Changed in OWASP Top 10 2013. Below is the new list for 2013.
A1 - Injection
LDAP query injection, OS command injection and SQL injection are all different types of injection flaws. An injection occurs when a malicious hacker takes advantage of insecure web application coding and manages to inject commands into input fields, such as a login form, from where he or she then gains access to sensitive data stored in the web application's backend database. Details about a real-life example of an SQL injection attack and the dangerous repercussions it leaves can be found in the blog post Details of South African Whistleblowers Exposed via SQL Injection.
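The difference between injectable and safe code often comes down to a single line. The following minimal Python/SQLite sketch (the table and credentials are invented for illustration) contrasts string concatenation with parameterized queries:

```python
import sqlite3

# Build a throwaway in-memory demo database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_unsafe(name, password):
    # VULNERABLE: user input is concatenated straight into the SQL string,
    # so input can change the query's structure.
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # SAFE: placeholders make the driver treat input strictly as data.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

payload = "' OR '1'='1"  # classic injection string
print(login_unsafe("alice", payload))  # True -- the check is bypassed
print(login_safe("alice", payload))    # False -- the injection fails
```

The parameterized version never lets the payload escape its role as a password value, which is why prepared statements are the standard defense against this class of flaw.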
A2 - Broken Authentication and Session Management
Authentication in web applications is mostly used to grant or prohibit a particular user's access to specific information, and session management is the management of already logged-in users. The most common security risks related to authentication and session management are the stealing of passwords or session tokens and the impersonation of legitimate users. Authentication and session management flaws are typically identified in password reset functionality, by tampering with cookies or session IDs, etc.
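Two common mitigations are unpredictable session tokens and server-side expiry. Here is a minimal sketch — the 15-minute timeout and in-memory store are illustrative choices, not a production design:

```python
import secrets
import time

SESSION_TTL = 15 * 60  # hypothetical 15-minute idle limit
sessions = {}

def create_session(username):
    # 128-bit random token from a CSPRNG: infeasible to guess,
    # unlike sequential or timestamp-derived session IDs.
    token = secrets.token_urlsafe(16)
    sessions[token] = {"user": username, "created": time.time()}
    return token

def get_user(token):
    session = sessions.get(token)
    if session is None or time.time() - session["created"] > SESSION_TTL:
        sessions.pop(token, None)  # drop stale sessions server-side
        return None
    return session["user"]

token = create_session("alice")
print(get_user(token))         # alice
print(get_user("guessed-id"))  # None
```

Expiring tokens on the server matters because a stolen token is only useful for as long as the server keeps honoring it.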
A3 - Cross-Site Scripting
A cross-site scripting (XSS) vulnerability allows a malicious hacker to inject malicious client-side script in a website or web application which is later executed by the victims. Typically, cross-site scripting attacks are used to bypass access controls and to impersonate legitimate users, such as the web application administrator. Some years ago a cross-site scripting vulnerability was used with other vulnerabilities to gain root access on the Apache Foundation servers. For more detailed information about this attack, refer to the blog post XSS to Root in Apache Jira Incident.
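The standard defense against XSS is to encode user input before writing it into a page. A minimal Python sketch (the comment markup is invented) using the standard library's `html.escape`:

```python
import html

def render_comment_unsafe(comment):
    # VULNERABLE: user input is inserted into the page verbatim.
    return "<div class='comment'>%s</div>" % comment

def render_comment_safe(comment):
    # SAFE: special characters are entity-encoded before output,
    # so the browser renders them as text, not markup.
    return "<div class='comment'>%s</div>" % html.escape(comment, quote=True)

payload = "<script>steal(document.cookie)</script>"
print(render_comment_unsafe(payload))  # script tag survives -> runs in the browser
print(render_comment_safe(payload))    # &lt;script&gt;... -> inert text
```

Real applications also need context-aware encoding (attributes, URLs, JavaScript blocks each have their own rules), but output encoding is the core idea.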
A4 - Insecure Direct Object References
Insecure direct object references is a flaw in the design of the web application where access to a sensitive object, such as a directory, a particular record or a database is not fully protected and the object is exposed by the application. A typical example would be when a customer accesses his bank accounts via e-banking and because of a flaw in the web application he is able to see someone else's account as well.
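The fix is an explicit ownership check on every object lookup. A minimal sketch of the e-banking example above (the account records are invented):

```python
accounts = {
    101: {"owner": "alice", "balance": 2500},
    102: {"owner": "bob", "balance": 900},
}

def get_account_unsafe(account_id, current_user):
    # VULNERABLE: whatever ID the client sends is looked up directly,
    # e.g. by editing ?account=102 in the URL.
    return accounts.get(account_id)

def get_account_safe(account_id, current_user):
    # SAFE: verify the requested object actually belongs to the caller.
    account = accounts.get(account_id)
    if account is None or account["owner"] != current_user:
        raise PermissionError("access denied")
    return account

print(get_account_unsafe(102, "alice"))  # bob's account leaks to alice
try:
    get_account_safe(102, "alice")
except PermissionError as exc:
    print(exc)  # access denied
```

The same pattern applies to files, records and directories: authorization has to be re-checked per object, not just per page.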
A5 - Security Misconfiguration
Web application security is not just about secure web application coding. To ensure the security of a web application it is important to also secure the configuration of the web server, secure the operating system of the web server and ensure that it is always updated with the latest security patches. The same applies for the web frameworks being used, such as PHP, .NET etc and any other software being used on the web server.
A6 - Sensitive Data Exposure
Sensitive data stored in databases or any other object should be well protected. Credit card details, social security numbers and other sensitive customer details should be encrypted when stored in a database, even if they are not directly accessible via the web application. The same applies to sensitive data being transmitted to and from the web application, such as credentials or payment details. Such information should be transmitted over a secure, encrypted channel such as TLS.
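For stored passwords in particular, the usual advice is a salted, deliberately slow hash rather than encryption or plaintext. A minimal sketch using the standard library's PBKDF2 (the iteration count is illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Store only a salted, slow hash -- never the plaintext password.
    # The per-user salt defeats precomputed (rainbow-table) attacks.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                         # False
```

With this scheme, even a full database leak hands the attacker only salts and slow-to-crack digests, not usable credentials.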
A7 - Missing Function Level Access Control
An attacker can exploit this type of security flaw by changing the URL in the browser when accessing a web application to try and access a function he does not have access to. If the web application fails to perform proper access control checks specifically for that particular object, the attacker is able to access the function he should not have access to.
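The defense is to enforce the check in the server-side function itself rather than merely hiding the link in the UI. A minimal Python sketch using a decorator (the role names and user records are invented):

```python
from functools import wraps

def require_role(role):
    # The access check is tied to the function itself, so changing
    # the URL in the browser cannot bypass it.
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise PermissionError("forbidden")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_user(user, target):
    return "deleted %s" % target

admin = {"name": "alice", "roles": ("admin",)}
guest = {"name": "mallory", "roles": ()}
print(delete_user(admin, "old-account"))  # deleted old-account
try:
    delete_user(guest, "old-account")
except PermissionError as exc:
    print(exc)  # forbidden
```

Centralizing the check this way also makes it auditable: every protected function declares the role it requires.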
A8 - Cross-site Request Forgery (CSRF)
A cross-site request forgery, also referred to as CSRF, is widely popular with scammers and spammers because, when exploited, the attacker can force a victim's web browser to send a forged HTTP request to a vulnerable web application. Such a forged request is automatically accompanied by the victim's session information, such as cookies and other authentication data, so the vulnerable web application treats the forged request as though it were a legitimate request from the logged-in victim.
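The classic mitigation is a per-session token that the browser does not send automatically, so a forged cross-site request cannot include it. A minimal sketch (the server secret is a placeholder) that derives the token from the session ID with an HMAC:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"server-side-secret"  # hypothetical server-only secret

def issue_csrf_token(session_id):
    # The token is bound to the session; an attacker's page can make
    # the browser send the session cookie, but cannot read or forge
    # this value to embed it in the request body.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id, token):
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, token)

session = secrets.token_hex(16)
token = issue_csrf_token(session)
print(verify_csrf_token(session, token))     # True: legitimate form post
print(verify_csrf_token(session, "forged"))  # False: forged request rejected
```

Frameworks typically embed this token in every form as a hidden field and reject any state-changing request that arrives without a valid one.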
A9 - Using Components with Known Vulnerabilities
It is quite surprising that this class of vulnerabilities is in 9th place, considering that most of today's successful attacks happen because the attacker exploited a known vulnerability. The main reason malicious hackers are still able to exploit known vulnerabilities is that outdated software is still being used; administrators fail to update all of the software used on web servers and by the web applications to the latest, most secure and stable version on time.
A10 - Unvalidated Redirects and Forwards
Website visitors are frequently redirected and forwarded to different pages, and even to third-party websites, depending on the visitor's location, the type of browser being used and several other factors. If the functions handling such redirects do not properly validate their input, a malicious hacker can exploit them and use the legitimate website to redirect its visitors to a phishing website or any other type of malicious website.
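A common defense is to validate redirect targets against an allow-list before sending the browser anywhere. A minimal sketch (the host names are hypothetical):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com", "www.example.com"}  # hypothetical site hosts

def safe_redirect_target(url):
    # Permit relative paths and our own hosts; anything else --
    # including protocol-relative //evil.example URLs -- falls
    # back to the home page.
    parsed = urlparse(url)
    if parsed.netloc == "" or parsed.netloc in ALLOWED_HOSTS:
        return url
    return "/"

print(safe_redirect_target("/account"))                    # /account
print(safe_redirect_target("http://phish.example/login"))  # /
```

An allow-list is preferable to a block-list here because attackers control the full space of possible URLs, and only the site owner knows which destinations are legitimate.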
Use the OWASP Top 10 in your Web Applications SDLC
Your business will see several long-term benefits from using and referring to the OWASP Top 10 list in your web application software development life cycle: it helps ensure that your web applications are not vulnerable, and it trains web developers to write secure code in future development projects.
How to Find OWASP Top 10 Vulnerabilities in Your Web Applications
You can find most of the web application security problems and vulnerabilities listed in the OWASP Top 10 by scanning your web applications with an automated web application security scanner at any stage of the development life cycle.
Netsparker is a false-positive-free web application security scanner that automatically scans your website and identifies web application security vulnerabilities that could leave your sensitive data dangerously exposed to malicious hacking attacks. Download the Trial Edition of Netsparker to check if your websites and web applications are vulnerable to any of the OWASP Top 10 vulnerabilities.
WASHINGTON–Superintelligent computers could outsmart humans, but scientists largely dismiss any parallels to Terminator and a dystopian "rise of the machines" (much like the hapless scientists in the movies, it must be noted). The struggle between the thirst for research and the anxiety over the consequences was clear from "Are Super Intelligent Computers Really A Threat to Humanity?" a panel discussion held at the Information Technology and Innovation Foundation Tuesday morning.
The risks of rogue machinery are not far off from the cautionary tales played out in movies including Metropolis, 2001: A Space Odyssey, Terminator, of course, and most recently, Ex Machina. According to Stuart Russell of U.C. Berkeley, “if the system is better than you at taking into account more information and looking further ahead into the future, and it doesn’t have the exactly the same goals as you…then you have a problem.” A superintelligent computer could avoid being shut down by its creators, and that’s when people might lose control of the machine, Russell warned.
Robert Atkinson, president of the Information Technology and Innovation Foundation, noted how computers were already captivating humans through interactions with personal digital assistants, such as Apple’s Siri. “I looked at how my daughter interacts with Siri. She’s 9 years old. She really thinks Siri is real,” Atkinson said—and Siri is still a very limited technology.
By the time computers can outsmart people, it’ll likely be too late to do anything about it. “Breakthroughs could be happening at any time,” warned Russell.
Here’s the paradox: Even the most pessimistic scientists on the panel did not want to stop research on superintelligent computers, even if it could mean trouble for human beings. Russell wanted research to continue, but with the possibility of halting before things got out of hand. “It seems to me that we need to look at where this road is going. Where does it end? And if it ends somewhere we don’t like, then we need steer it to a different direction,” he said. Atkinson agreed, saying that if the risk is too high, the benefit, no matter how important, should be turned back.
Other scientists on the panel took a less alarmist view. Ronald Arkin, an associate dean in the College of Computing at Georgia Tech, wanted scientists to push forward. “If we don’t fund the basic research, there’s no basic sense of being worried about safety issues at this point of time,” he argued.
Manuela Veloso, a professor at Carnegie Mellon University, said moving into the world of artificial intelligence is no different than other advances in computing. “We just have to sample the world,” she said, “we have to build trust, we have to use, and eventually things become familiar to us.”
“It will be a shame for humans who are so intelligent to not make good use of this technology,” Veloso said.
Are you worried that superintelligent computers will take over the world? Or do you think they could do a better job than humans? Let us know in the comments.
This story, "The Terminator question: Scientists downplay the risks of superintelligent computers " was originally published by PCWorld. | <urn:uuid:26e37cc3-95c0-48ea-8536-1ed647968b25> | CC-MAIN-2017-09 | http://www.itnews.com/article/2942852/the-terminator-question-scientists-downplay-the-risks-of-superintelligent-computers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00233-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957533 | 696 | 2.609375 | 3 |
WASHINGTON -- When engineers talk about roads here, they describe them as networks. There's a good reason for that.
In high-definition displays, roadways appear like wires hanging off the back of server racks. It's a tangled system. The data, vehicles in this case, are illustrated by small, slowly moving dots.
Researchers who study motor vehicle traffic are interested in reducing traffic congestion. Congestion equals loss, they say, the loss of human and economic productivity that comes from sitting in traffic.
"Building a way out of this mess is not going to be a solution," said Gabor Karsai, a professor in the school of engineering at Vanderbilt University and a member of a team using IT to improve traffic flows.
Engineers, auto makers and U.S. transportation officials who gathered at the White House's SmartAmerica conference this week showed various systems and technologies that may be used to make driving safer and more efficient.
On display were technologies ranging from dedicated short range communications (DSRC) that will allow vehicles to share real-time information about location and speed and warn of a need for evasive action, to systems that monitor and regulate traffic flows over a region.
As the importance of IT in transportation increases, the U.S. Department of Transportation has begun discussing the need for standards on data exchanges, said its CIO Richard McKinney.
McKinney, speaking at the SmartAmerica conference, said the transportation industry doesn't want to find itself 10 to 15 years from now with independently developed data standards that hinder communications.
Last week, DOT officials met to examine the role the DOT should play "in the definition of data standards" in "this newborn industry" of digital technologies in transportation.
McKinney said he doesn't know what role the DOT should play, but made it clear that he wants to discuss the topic with industry officials.
"I see that the marriage of information technology with transportation is going to be as transformative as anything," said McKinney. "I'm beginning to see things that I couldn't have imagined as a young man."
The overarching goal is to make driving as safe as air travel, and reduce the 30,000-plus traffic fatalities annually. Among the technologies that could play a major role, is DSRC, which is being used in a pilot test in Southfield, Mich.
DSCR systems enable vehicles to communicate with one another, but it also requires the devices to be deployed along highways as well as in cars. It takes the auto industry five to seven years to add new technologies to vehicles, do DSCR is clearly years away.
But in the more immediate future, there are systems like the one that Vanderbilt's Karsai is working on, along with researchers at the University of California at Berkeley.
The system, which may be deployed in a few years in Southern California, is an integrated infrastructure that will monitor the network, or roads, and then control the traffic flow. Individual traffic lights and freeway ramp lights will talk to each other in a connected system to help ensure that traffic flows smoothly. It will involve regulating ramp lights, which control vehicles entering the highway.
Other systems that may be more pervasive in a few years concern parking, a big time waster for motorists. Technology is now available that can tell drivers where to find parking, how much it will cost and even reserve a space.
The problem, say those working on these systems, is that many garages, which operate on low margins, have not invested in the technology, which also tells drivers which parking spaces may be free.
Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed. His e-mail address is email@example.com.
Read more about government/industries in Computerworld's Government/Industries Topic Center.
This story, "Roads in the Future Will Need Data Standards as Well as Signs, Says DOT's CIO" was originally published by Computerworld. | <urn:uuid:485e2eda-58af-4e18-bd47-5d2441d8c726> | CC-MAIN-2017-09 | http://www.cio.com/article/2375490/government/roads-in-the-future-will-need-data-standards-as-well-as-signs--says-dot-s-cio.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00105-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964451 | 842 | 3.03125 | 3 |
XSLand CSS complement each other on document layout
What is it?
XSL (Extended Style Sheet Language) is used to define the layout of XML documents in a presentation medium such as a web browser window or a printed page. XSL includes the transformation language XSLT, which converts XML into formats such as HTML, PDF and Braille, or into other XML formats such as typesetting languages.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
Where did it originate?
The World Wide Web Consortium's (W3C) proposed recommendation for XSL pointed out that without a style sheet, a processor "could not possibly know how to render the content of an XML document other than as an undifferentiated string of characters". The proposal was submitted in 1997 by authors from a number of organisations including Microsoft, ArborText and the University of Edinburgh. XSL builds on the W3C's work on Cascading Style Sheets.
What is it for?
"It is not a replacement for your Wysiwyg authoring tools [a print preview application for developers, short for 'what you see is what you get'], but it is useful for some problems, such as very large documents or those derived from database content, that are not well served by the current tools," said Stephen Deach, a senior member of the W3C-XSL working group.
XSL provides a comprehensive model and a vocabulary for writing style sheets using XML syntax. There are three elements: XSLT; XPath, a language for defining parts of an XML document; and XSL-FO (Formatting Objects), a language for formatting XML documents.
What makes it special?
The W3C said, "XSL is a language quite different from CSS and caters for different needs. Aimed by and large at complex documentation projects, XSL has many uses associated with the automatic generation of tables of contents, indexes, reports and other complex publishing tasks."
Stephen Deach writes, "CSS was limited to what was needed for browsers and easy for the browser manufacturers to implement."
Although CSS can be used to style HTML and XML documents, XSL can transform XML data into HTML/CSS documents or other formats. The two languages complement each other and can be used together.
How difficult is it to master?
Straightforward - and essential - for those learning XML. However, IBM researcher Jared Jackson said, "This means that developers accustomed to writing in Java code or C who learn XSL often find themselves in foreign territory when using XSL's more advanced features."
Where is it used?
Not just in web and XML document design, but also printing. The W3C said XSL aims to allow the specification of printing of web documents to work as well as a word processor. Future support for high-end print typography is planned.
What systems does it run on?
Fewer suppliers and tools support XSL than CSS, although Microsoft, Adobe and others are committed to supporting final W3C specifications.
See the W3C XML site and the Cover Pages
What is coming up?
XSL 2.0 is making its way through the W3C review process.
The W3C website has links to XSL tutorials, articles and training, including Mulberry Technologies and online XSL guide Zvon.org.
Rates of pay
XSL is used in a wide range of roles including web publishing, .net and Java development and rates vary accordingly. | <urn:uuid:6c6c787e-415c-4edb-b365-a8ba5d0fe094> | CC-MAIN-2017-09 | http://www.computerweekly.com/news/2240057094/XSL-to-improve-web-print-support | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00525-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.922698 | 740 | 3.5 | 4 |
As NASA's Voyager 1 spacecraft travels outside the solar system, scientists hope to learn about the forces pushing on the "bubble" around the sun and how interstellar radiation could affect future space exploration.
"There's never been anything like this," said Ed Stone, chief scientist for the Voyager mission. "Nothing has ever been outside the solar bubble before. Nothing."
Stone is hardly a newcomer to the Voyager mission. He was the chief scientist on the project during Voyager's planning stages in the early 1970s.
Not only did Voyager 1 make history by becoming the first human-made spacecraft to journey beyond our solar system, but it did so on 36- to 40-year-old technology, Stone told Computerworld on Friday.
"This was one of our long-range hopes," Stone said. "We had no way to know at the time that this was possible because we didn't know how far away the edge of the bubble was. When Voyager was launched, the space age was only 20 years old. Most things didn't even last a few years back then. We had no idea if Voyager could last for 36 years and go as far as it has."
"This whole mission has been a major part of my life," Stone said. "I've been so fortunate to be part of this historic journey. This is the first spacecraft to sail in the cosmic sea between the stars."
The Voyager 1 was launched in 1977 with its twin spacecraft, Voyager 2 . On Thursday, NASA announced what had already been suspected -- that the spacecraft had left the solar system and had entered interstellar space in August 2012. The probe has journeyed between 14 billion and 15 billion miles.
"The Voyager team needed time to analyze those observations and make sense of them," Stone said during a press conference Thursday. "But we can now answer the question we've all been asking -- 'Are we there yet?' Yes, we are."
Stone explained that it took scientists months to figure out whether Voyager 1 had left the solar system because the instrument that Voyager used to measures plasma, an ionic gas, stopped working in 1980. Plasma is different depending on whether it is inside or outside the heliosphere, which is like a bubble that surrounds the sun. Without that measurement tool, scientists had to analyze plasma waves, which was a more time-consuming process.
Now that Voyager 1 is outside of the heliosphere, scientists will study, for the first time, galactic cosmic rays, interstellar winds and the movement of the heliosphere.
"For the first time, we're seeing radiation from outside the solar system," Stone said. "We're observing the intensity of radiation outside the bubble. The bubble kind of protects us. It's charged particles and doesn't let the outside radiation in... We will see how our star, within its sphere, is interacting with what's around it."
The interstellar radiation and winds constantly put differing amounts of pressure on the outside of the heliosphere. If that pressure grows, how does it affect the size and shape of the heliosphere, and how does the heliosphere keep out that added interstellar radiation?
Those are questions that scientists want to answer, Stone said. The answers will also affect future deep space travel.
The Earth's atmosphere and magnetic field would protect the planet from any extra interstellar radiation. However, the planet Mars, asteroids or distant moons around other planets would not be protected.
That means any robotic spacecraft or rovers, along with any spacecraft carrying astronauts, would be affected by increased levels of radiation if they were traveling through deep space.
"This would affect any kind of flight outside the Earth's magnetic field," said Stone. "It's very important to know how intense this interstellar radiation is... This is a long-term issue."
NASA scientists also are looking forward to the day when Voyager 2 also leaves the solar system and enters interstellar space. Stone said he expects that will happen in three to four years.
Voyager 2 also has a working plasma measurement instrument.
Having both probes past the heliosphere would give scientists two different sets of data, and a more complex image of space, to study.
"It's a whole new journey of exploration," Stone said. "It's the first journey between the stars. It's like sailing on the ocean for the first time after leaving land. We're out in this cosmic sea. Most of the universe, by the way, is this kind of interstellar stuff. This will give us information about most of the volume of the Milky Way."
This article, NASA's Voyager will teach us about future deep space missions, was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "NASA's Voyager will teach us about future deep space missions" was originally published by Computerworld. | <urn:uuid:b3fa4131-e85b-4706-b55c-1993faf81e25> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2169949/data-center/nasa--39-s-voyager-will-teach-us-about-future-deep-space-missions.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00401-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.96633 | 1,053 | 3.734375 | 4 |
Can AI Mixed With Ambience, Topped With Behavioral Analytics, Be The Perfect Cybersecurity Recipe?
Since cybersecurity threats have become a topic of nightly newscasts, no longer is anyone shocked by their scope and veracity. What is shocking is the financial damage the attacks are predicted to cause as they reverberate throughout the economy.
Cybersecurity Ventures predicts global annual cybercrime costs will grow from $3 trillion in 2015 to $6 trillion annually by 2021, which includes damage and destruction of data, stolen money, lost productivity and theft of intellectual property, personal and financial data, embezzlement and fraud. That doesn’t even include post-attack disruption to the normal course of business, forensic investigation, restoration and deletion of hacked data, systems and reputational harm.
While traditional security filters like firewalls and reputation lists are good practice, they are no longer enough. Hackers increasingly bypass perimeter security, enabling cyber thieves to pose as authorized users with access to corporate networks for unlimited periods of time.
Organizational threats manifest themselves through changing and complex signals that are difficult to detect with traditional signature-based and rule-based monitoring solutions. These threats include external attacks that evade perimeter defenses and internal attacks by malicious insiders or negligent employees.
Along with insufficient threat detection, traditional tools can contribute to “alert fatigue” by excessively warning about activities that may not be indicative of a real security incident. This requires skilled security analysts to identify and investigate these alerts when there is already a shortage of these skilled professionals.
Protect the Data, Not the Network
Security pros need to pick up where those traditional security tools end and realize that it’s the data that is ultimately at risk. The safeguarding of the data is as important, if not more imperative, than just protecting the network or the perimeter. Security shields must be placed where the data resides in the IT enterprise as opposed to monitoring data traveling across the network.
Some cybersecurity sleuths deploy a variety of traps, including identifying an offensive file with a threat intelligence platform using signature-based detection and blacklists that scan a computer for known offenders. This identifies whether those types of files exist in the system which are driven by human decisions.
However, millions of files need to be uploaded to cloud-based threat-intelligent platforms; scanning a computer for all of them would slow the machine down to a crawl or make it inoperable. But the threats develop so fast that those techniques don’t keep up with the bad guys and, also, why wait until you are hacked?
The Mix of Forensics and Machine Learning
Instead of signature and reputation-based detection methods, smart CSOs and CISOs are moving from post-incident to pre-incident threat intelligence. They are looking at artificial intelligence innovations that use machine learning algorithms to drive superior forensics results.
In the past, humans had to look at large sets of data to try to distinguish the good characteristics from the bad ones. With machine learning, the computer is trained to find those differences, but much faster with multidimensional signatures that detect problems and examine patterns to identify anomalies that trigger a mitigation response.
The Good, the Bad and the Ugly
Machine learning generally works in two ways: supervised and unsupervised. With supervised learning, humans tell the machines which behaviors are good and bad (ugly), and the machines figure out the commonalities to develop multidimensional signatures. With unsupervised learning, the machines develop the algorithms without having the data labeled, so they analyze the clusters to figure out what’s normal and what’s an anomaly.
The obvious approach is to implement an unsupervised, machine learning protective shield that delivers a defense layer to fortify IT security. A self-learning system with the flexibility of being able to cast a rapidly scalable safety net across an organization’s information ecosystem, distributed or centralized, local or global, cloud or on-premise. Whether data resides in a large health system or is the ERP system of a large energy company or a financial institution, rogue users are identified instantly.
By applying machine learning techniques across a diverse set of data sources, systems become increasingly intelligent by absorbing more and more relevant data. These systems can then help optimize the efficiency of security personnel, enabling organizations to more effectively identify threats. With multiple machine learning modules to scrutinize security data, organizations can identify and connect otherwise unnoticeable, subtle security signals.
Security analysts of all experience levels can also be empowered with machine learning through pre-analyzed context for investigations, making it easier for them to discover threats. This enables CISOs to proactively combat sophisticated attacks by accelerating detection efforts, reducing the time for investigation and response.
The Digital Eye Sees All
Once a machine learning system is in place, organizations need to identify solutions that employ behavioral analytics which will baseline normal behaviors and identify irregularities. While the technology is advanced, the concept is simple.
A pattern of user behavior is established and stored in the system. To adequately address the threat, CISO’s should consider using solutions which are ambient to completely surround an intrusion while harnessing the power of the machine learned system’s cognitive nature. This combination creates an evolving “virtual intelligent eye” defense shield that provides real-time behavior analysis and anomalous user access monitoring.
This type of solution provides an eye that learns, understands, recognizes and remembers normal user habits and behavior as they use applications in their daily work. The eye generates a digital “fingerprint” based on behavior for every single login, by every user, in every single application and database across the organization.
If your organization deploys this type of comprehensive cybersecurity system, a gloomy doomsday scenario offered up by many cybersecurity ventures will no longer be a concern.
About the Author
Santosh Varughese serves as the Co-Founder and President of Cognetyx. He brings more than 30 years of leadership experience building companies into profitable, high-value enterprises.
His mission for Cognetyx is to deploy new, powerful technologies combining the art and science of machine learning and artificial intelligence to directly address the challenge of quashing the insidious growing problem of healthcare data breaches and privacy violations that affect hundreds of millions of Americans.
He began his career with Royal Dutch Shell designing high-speed fiber optic communications between Cray Supercomputer and IBM mainframes. His next venture led him to Procter & Gamble in International Marketing for Pampers and Luvs in Switzerland, then Germany, to Singapore, and finally, Saudi Arabia.
Varughese’s passion and spirit for innovation have led to his involvement in various startups in fields ranging from global product licensing and distribution, advertising agencies, brick & mortar operations and online ventures including his most recent – co-founding Healthpost.com, which was acquired by The Advisory Board (NASDAQ:ABCO) in May 2014.
Edited by Alicia Young | <urn:uuid:304c18a2-e6f7-4bde-83b9-a1cac7b6093d> | CC-MAIN-2017-09 | http://www.cloudsecurityresource.com/topics/cloud-security/articles/427031-ai-mixed-with-ambience-topped-with-behavioral-analytics.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00101-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.931628 | 1,423 | 2.546875 | 3 |
Technical Characteristics of RFID
The term RFID denotes a range of wireless identification technologies, not a single device type. One way to classify tags is according to their source of power. The most inexpensive and compact tags are passive, meaning that they derive all of their transmission power from the reading device. Passive tags are also the most physically robust RFID tags. Active tags contain batteries, and are capable of broadcasting at much longer distances than passive ones. (Loosely speaking, your mobile phone is a sophisticated active RFID tag.) Semi-active tags make use of battery power to run local circuitry, but use reader power for communication.
Another important axis of classification is the frequency at which an RFID tag operates. In general, lower frequencies have shorter associated ranges, but offer better penetration of materials; higher frequencies offer greater range, but are subject to greater physical interference. The two most important RFID-frequency categories are as follows:
Ultra-High Frequency (UHF): UHF tags operate in the 868-956 Mhz frequency band. This is the same part of the radio spectrum in which cordless phones and some mobile phones operate. UHF RFID tags will see the widest use in supply-chain and retail applications. One of the big benefits of passive UHF tags is that they have a range, in many environments, of over ten feet (and sometimes as much as tens of feet). Additionally, RFID readers can scan hundreds of UHF tags simultaneously.
A major drawback of UHF tags is that they cannot be easily read in the presence of high concentrations of liquids, as found such things as beverage containers and human beings!
High-Frequency (HF): By comparison with UHF tags, passive HF tags have the drawback of low transmission range -- generally on the order of just over a foot. In general, they are also larger than UHF tags; flat HF tags are typically about 50mm by 100mm in size. HF tags, however, have the advantage of being readable in the presence of water.
HF tags operate at 13.56 Mhz, a frequency known as the industrial-scientific-medical (ISM) band. HF tags are popular in some smartcard applications and also for various industrial uses.
Other frequencies: RFID tags also come in a low-frequency (LF) variety operating at 120-140 Khz. These tags tend to be popular for use in building-access badges and animal tagging. RFID tags can also operate at higher UHF frequencies, most notably at 2.45 GHz.
In order for an RFID reader to identify many tags in its read range, it must engage with the tags in what is known as an anti-collision or singulation protocol. If all tags were to transmit to the reader simultaneously, then their signals would interfere with one another, rendering reading ineffective. A singulation protocol addresses this problem by enabling tags to take turns in transmitting to a reader.
For UHF tags, singulation is generally a variant of a protocol known as tree-walking. Briefly stated, in tree-walking, the space of k-bit identifiers is viewed as the leaves in a tree of depth k. A reader traverses the tree, asking subsets of tags to broadcast a single bit at a time. A feature of the basic tree-walking protocol is that the RFID reader broadcasts tag serial numbers over very large distances, which can introduce vulnerability to eavesdropping.
The anti-collision protocol used in HF tags is generally a variant of the classic ALOHA protocol. Briefly stated, tags in the ALOHA protocol transmit their identifiers to the reader at a variety of randomly determined times so as to avoid transmission collisions. ALOHA-based RFID reading leaks less information than most UHF tree-walking protocols. On the other hand, most HF readers are capable of scanning only several dozen tags simultaneously.
The least expensive RFID tags, such as basic EPC tags, are read-only. Writeable tags are more expensive, while rewritable tags (containing EEPROM) are still more expensive. In a highly networked environment, however, large amounts of information can easily be associated with read-only tags in a database; in this case the tag simply serves as a pointer to an associated database entry.
Cryptography and Security
The tags that will be most inexpensive and most prevalent, such as basic EPC tags, lack the computing power to perform even basic cryptographic operations. (They will have about 500-5000 gates, many devoted to the basic tag functions. By contrast, the Advanced Encryption Standard (AES) requires some 20,000-30,000 gates.) Such tags are at best capable of employing static keys, i.e., PINs and passwords as security mechanisms. For example, the "kill codes" used to disable EPC tags for purposes of privacy, are secured by PINs. The limited capabilities of such RFID tags make privacy and security enforcement a special challenge.
More expensive RFID tags are capable of advanced functionality, and often include the ability to perform basic cryptographic algorithms, such as symmetric-key encryption and challenge-response identification protocols. (Public-key cryptographic is expensive, and used on few RFID tags.) | <urn:uuid:701db7d8-7415-4015-b4bc-1bb7e8c89351> | CC-MAIN-2017-09 | https://www.emc.com/emc-plus/rsa-labs/research-areas/technical-characteristics-of-rfid.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00101-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.923712 | 1,075 | 3.796875 | 4 |
About the BlackBerry Keyboard
Use the BlackBerry Keyboard to communicate with the world. With prediction, you can select word suggestions that appear while you type. After you set up your keyboard preferences, you can learn how to cut, copy, and paste text, and delete words. You can type efficiently by creating custom text shortcuts and by using prediction, correction, and spell check. The BlackBerry Keyboard can learn the words and names that you might type by accessing the BlackBerry Hub and the Contacts app. You can also set up or change your typing and keyboard languages.
If your device has both a touch-sensitive physical keyboard and a touch screen keyboard, both keyboards support finger swipe gestures that allow you to choose word suggestions, edit text, and show the number and symbol list quickly. | <urn:uuid:0b511534-fea9-4855-963e-fbd715fdcb12> | CC-MAIN-2017-09 | http://help.blackberry.com/en/keyboard/latest/help/keyboard.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00453-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.910804 | 155 | 2.53125 | 3 |
Cyber-criminals targeting the Internet of Things (IoT) could result in disasters in the physical world, according to security expert Bruce Schneier.
Writing in Motherboard magazine, Schneier said that the Internet of Things had given the internet “hands and feet” and the ability to “directly affect the physical world”.
“What used to be attacks against data and information have become attacks against flesh, steel, and concrete,” he said.
He said that threats such as hacking cars while they drive on a motorway, remotely killing a person by hacking a medical device, or taking control of missile systems were all possibilities.
“The Internet of Things will allow for attacks we can’t even imagine,” he warned.
Government action required on IoT
Schneier, who is CTO of Resilient Systems, which is part of IBM, said that security engineers are working on technologies that can mitigate much of this risk, but many solutions won’t be deployed without government involvement.
“This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organisations to understand.”
He said that governments needed to play a larger role in combatting the threat of hacking within IoT.
“The next president will probably be forced to deal with a large-scale internet disaster that kills multiple people. I hope he or she responds with both the recognition of what government can do that industry can’t, and the political will to make it happen,” he said.
Roy Fisher, a security consultant at MWR InfoSecurity told Internet of Business that IoT in an enterprise environment – i.e. the theory of multiple systems across a large or potentially multinational organisation, has multiple implications in terms of security controls.
“For this to be viable, organisations may need to segregate off environments to localise the risk posed through a potential flaw in one of the components. This will not only require a large overhead from implementation but could also hinder the benefits of IoT,” he said. | <urn:uuid:d6384738-da7f-49e8-aa3a-c31863652a80> | CC-MAIN-2017-09 | https://internetofbusiness.com/schneier-warns-cybersecurity-threat-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00046-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954083 | 439 | 2.78125 | 3 |
Girding The Grid
By Larry Barrett | Posted 2003-09-08
The Northeast blackout revealed an electric system in disrepair. The bright side? It's holding up well for being hatched in 1948.
About 50 million electricity customers throughout the Northeast and Canada learned that lesson the hard way: an Aug. 14 blackout exposed how vulnerable the nation's electric system is to accidental or intentional interruption.
Investigators will mine data from Supervisory Control and Data Acquisition (SCADA) software, much like an airplane's black box, to get a real-time picture of what went wrong. In the meantime, here are the numbers behind the grid:
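SCADA internals vary by vendor and the article gives no implementation detail, but the "black box" role it describes amounts to continuously logging timestamped sensor readings so investigators can replay a window of time after an incident. The sketch below is purely illustrative; the tag names and sample values are invented, not taken from any real utility's system.

```python
import time
from collections import deque

class TelemetryRecorder:
    """Rolling buffer of timestamped sensor readings, queryable after an
    incident -- a toy stand-in for the historian component of a SCADA system."""

    def __init__(self, max_samples=10_000):
        # Oldest samples fall off automatically once the buffer is full.
        self.samples = deque(maxlen=max_samples)

    def record(self, tag, value, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        self.samples.append((ts, tag, value))

    def window(self, start, end):
        """Return every (timestamp, tag, value) sample in [start, end]."""
        return [s for s in self.samples if start <= s[0] <= end]

# Hypothetical readings leading up to a disturbance:
rec = TelemetryRecorder()
rec.record("line_34kV_flow_MW", 512.0, timestamp=100.0)
rec.record("gen_7_freq_Hz", 59.94, timestamp=101.0)
rec.record("gen_7_freq_Hz", 59.10, timestamp=102.0)

# An investigator pulls the two samples around the frequency dip:
events = rec.window(101.0, 103.0)
```

A real historian would persist to disk and timestamp at the source device; the in-memory buffer here only illustrates the record-then-replay idea.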
10: The number of seconds utility operators have to react to a cascading collapse of the power grid.
Energy management systems are primarily used to distribute power across the grid. They aren't built to do the computations necessary to adjust the flow of electricity after rapid-fire catastrophic events, says Chanan Singh, head of the electrical engineering department at Texas A&M. "There just wasn't enough time," Singh says of the August collapse. "That has to be done by human beings."
Because of dense populations in cities, power plants need to balance the distribution of electricity between urban areas. If one generator dips its production, others in the power grid offset the fluctuation. This balancing act takes place in nanoseconds.
Singh says overlapping power sources are good "99% of the time," but a sharp drop in power spurs shutdowns.
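The offsetting behaviour described above can be sketched as a simple proportional redistribution: when one generator trips, the others pick up its output in proportion to their spare capacity, and a sharp enough drop leaves nothing to absorb the loss. This is a conceptual toy, not how real automatic generation control works; all figures are invented.

```python
def redistribute(outputs, capacities, lost_index):
    """When the generator at lost_index trips, spread its output across the
    remaining units in proportion to their headroom (capacity minus current
    output). Returns the new output list, or None if the remaining units
    cannot absorb the loss -- the situation that forces shutdowns."""
    lost = outputs[lost_index]
    new = list(outputs)
    new[lost_index] = 0.0

    others = [i for i in range(len(new)) if i != lost_index]
    headroom = {i: capacities[i] - new[i] for i in others}
    total_headroom = sum(headroom.values())

    if total_headroom < lost:
        return None  # shortfall: shed load or risk a cascading collapse

    for i in others:
        new[i] += lost * headroom[i] / total_headroom
    return new

# Three 500 MW units; the third (300 MW) trips and the others absorb it:
balanced = redistribute([400.0, 300.0, 300.0], [500.0, 500.0, 500.0], 2)
```

With the first two units already at full capacity, the same trip would return `None`: the 99% case versus the case Singh describes.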
$90M: Cost to install an energy management system for a utility.
These systems are designed to balance load, the amount of electricity traveling through the grid, when supply and demand are off-kilter. The system tracks the frequency, voltage and sequence of electricity between two points. If the generators are firing "in-step" in a region, the system adjusts the load on its own. When generators are out of synch, the system alerts operators who decide how to balance the system.
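The in-step-versus-out-of-step decision described above reduces to a threshold check: small frequency deviations are corrected automatically, larger ones escalate to human operators. The sketch below illustrates only that decision logic; the 0.1 Hz tolerance is an invented illustrative value, not a real operating standard.

```python
def ems_action(frequencies, nominal=60.0, step_tol=0.1):
    """Classify a region's generator frequencies (Hz).

    Returns 'auto_adjust' when every unit is within step_tol of nominal
    (generators firing in step: the system balances load on its own), or
    'alert_operators' when any unit is out of synch and a human must
    decide how to rebalance. Tolerance value is hypothetical."""
    if all(abs(f - nominal) <= step_tol for f in frequencies):
        return "auto_adjust"
    return "alert_operators"

# Units in step vs. one unit sagging out of synch:
in_step = ems_action([60.0, 59.97, 60.02])
out_of_step = ems_action([60.0, 59.2, 60.02])
```

A real EMS also tracks voltage and phase sequence between points, as the paragraph above notes; frequency alone is used here to keep the example minimal.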
"It's early in the investigation, but it's apparent the right information didn't get to grid operators in the time it takes to take corrective action," says Jill Feblowitz, an AMR Research analyst.
1: One system failure. One scapegoat.
The transmission lines of FirstEnergy Corp. of Akron, Ohio, have been singled out as the choke point that triggered the blackout. Early reports indicated a lightning strike may have damaged a generator and key transmission lines in Ohio. FirstEnergy officials, who won't reveal their suppliers, admit that at least one part of their system failed.
"We do know that our alarm-screen function failed," says FirstEnergy spokesman Mark Durbin. "But we can't believe that a problem with our transmission lines could have been the cause of such a widespread problem."
Credit Suisse First Boston analyst James Heckler says blaming FirstEnergy for the mass outage is "premature and likely overstated."
138+: The number of U.S. vendors who provide SCADA systems to utilities worldwide.
"There's no standardization of I.T. systems for utility providers," says Scott Castelaz, vice president of corporate development and external affairs at Encorp, a Windsor, Colo., developer of energy management software and hardware. "That becomes a real problem with so many agencies overlapping in densely populated regions."
Analysts say a universal reporting and monitoring system would help operators respond to emergencies: instead of calling each other by phone when something fails, alerts could be sent automatically.
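The phone-tree-to-automatic-alerts idea is essentially publish/subscribe: one utility posts a failure event once, and every connected neighbour receives it immediately. The sketch below is a toy illustration of that pattern; the subscriber labels and event fields are invented for the example.

```python
class AlertBus:
    """Toy publish/subscribe hub standing in for a shared inter-utility
    alerting system. Publishing one event notifies every subscriber."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, event):
        for callback in self.subscribers:
            callback(event)

# Two neighbouring grid operators subscribe to a shared bus:
received = []
bus = AlertBus()
bus.subscribe(lambda e: received.append(("operator_A", e)))
bus.subscribe(lambda e: received.append(("operator_B", e)))

# One utility reports a hypothetical line trip; both neighbours see it at once.
bus.publish({"utility": "example_utility", "event": "line_trip", "line": "345kV_line_1"})
```

In practice this role is filled by standardised inter-control-centre protocols rather than an in-process callback list, but the fan-out behaviour is the same.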
"In some ways, it's [utility providers'] responsibility to connect to each other through their systems," says Feblowitz.
1948: Year that President Truman launched a plan to bring transmission lines and generators to rural America.
After World War II, power companies built transmission lines and generators in places far removed from cities. While rural residents were thrilled, they couldn't consume all the energy at their disposal, so more capacity was built to shuttle the excess power to metropolitan areas. The electric grid has been neglected ever since.
"Most of the grid's infrastructure is more than 50 years old and hasn't been upgraded," says Castelaz. "The power companies are very proud that despite its age, the system still works pretty well most of the time." | <urn:uuid:cb049728-ba9f-4bcb-9335-9e5d69cda009> | CC-MAIN-2017-09 | http://www.baselinemag.com/c/a/Past-News/Girding-The-Grid | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00222-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954791 | 869 | 2.828125 | 3 |