Dynamic Languages: Perl, Ruby, Python
Perl, Ruby, Python — so many languages, so little time. Which should you learn, or learn next? All three are interpreted rather than compiled, and all three are dynamic languages. With the exception of Perl (which some consider a true programming language), they are scripting languages that are easy to learn and use — far simpler, in the end, than C or C++.
Most important, perhaps, is the fact that all three are in demand, and they’re unlikely to fall out of favor in the near future.
Perl is the granddaddy here, the oldest, best-known and most widely used of the three. It was first released in 1987 by the now-famous Larry Wall, and it quickly caught on for its string processing power, among other features. Because it’s old and popular, Perl is well-documented. Indeed, the best-known book on Perl (“Programming Perl,” also called “The Camel Book” for the camel on its cover) was released in 1991. It’s still in use today.
And although you can find enough information on Ruby and Python to fill a warehouse twice over, Perl is still the most fully explained — in the last two decades, even its arcane nooks and crannies have been exposed to someone’s flashlight.
Perl is also more powerful, and more complex, than Ruby or Python. Hence, it's harder to learn and less readable. Because of its age, Perl projects can suffer from cruft, a problem if you're the new kid on the block who's been asked to wade into a project and alter someone else's code.
Ruby: Diamond in the Rough?
Enter Ruby, created by Japan's Yukihiro Matsumoto ("Matz" for short) in 1993. It's much simpler to learn than Perl, even though it draws on Perl's syntax in no small measure. (In fact, many Perl hackers learn Ruby because it's similar.)
Similar, but not the same — Ruby is fairly easy to read, even if you’re not reading your own code. And the language follows the principle of least surprise (POLS), that is, it tends to work as you’d expect, with few curveballs.
Programmers also like Ruby for two more reasons. First, it’s strictly object-oriented. Yes, Perl and Python have objects, but Ruby adheres most closely to the object paradigm. Or, as a common refrain goes, “With Ruby, everything’s an object.”
The second reason Ruby is popular is Ruby on Rails, a Web application framework that makes building even complex Web systems fairly easy. Ruby on Rails sponsor 37Signals even bills it as “Web development that doesn’t hurt.”
Python: No Venom Here
If you like Ruby for its simplicity, you might fall in love with Python. It’s the simplest, most programmer-friendly of the three, and most programmers consider it the most readable too. That can be a big benefit if you’re working in teams and have to deal with other people’s code.
But with readability comes a price: Python can be verbose, and it runs more slowly than Ruby or Perl. Yet, it tends to run fast enough for the job at hand, and if you’re truly looking for speed, why use an interpreted language in the first place? Better to turn to Ferraris such as C or C++.
Where Ruby has Rails, Python has Django, a Web application framework designed to take the sting out of complex online applications. It’s nowhere near as popular as Rails, but some Python programmers swear by it.
If you’re still torn among Perl, Ruby, and Python, try this: Learn just enough of each language to produce a simple script or two. It’s not hard, and you’ll get a taste (if not the full flavor) of how each language works. After all, sometimes a taste is all it takes.
David Garrett is an IT consultant and former IT director who writes about the nexus of business and technology. He can be reached at editor (at) certmag (dot) com.
So what happened?
They bet on trends that looked solid, but weren't. It's a recipe for disaster, and it gets us in trouble over and over; trouble that could have been avoided if we knew the difference between hard and soft trends.
People typically don't believe forecasts because forecasts are based on trends, and people don't trust trends. We think trends are like fads: here today, but for who knows how long?
Science sees the word trend differently. It means a general direction in which something is developing or changing. And one of the principal findings of my 25 years of research is that there are two distinct kinds of trends, which I call soft trends (like the trillion-dollar surplus that never materialized) and hard trends.
A hard trend is a projection based on measurable, tangible, and fully predictable facts, events, or objects. A soft trend is a projection based on statistics that have the appearance of being tangible, fully predictable facts. A hard trend is something that will happen: a future fact. A soft trend is something that might happen: a future maybe.
This distinction completely changes how we view the future. Understanding the difference between hard and soft trends allows us to know which parts of the future we can be right about. It gives us the insight we need to start with certainty, because it shows us where we are dealing with future facts and where we are dealing with hypothetical outcomes, future maybes.
The reason we typically don't trust trends is that we haven't learned how to make the distinction between hard trends and soft trends. Once we know the difference, we know where to find certainty and the future suddenly becomes visible.
That trillion-dollar surplus the government predicted at the end of the nineties was a soft trend, only we treated it like a hard trend. We were not only expecting it to happen, we were acting on it as if it had already happened: hence we were spending like crazy. So much money was coming in during '99, we were going nuts. We were gazing at the soft trend like a rabbit hypnotized by a snake.
Unfortunately, the distinction between hard and soft trends is not always quite so obvious. To many observers, that trillion-dollar surplus looked quite believable. That's the problem with soft trends. Sometimes they have the appearance of being credible. Still, soft is soft, and unless the trend is based on a direction of change that is clearly fixed, there is nothing certain. Saying something could happen is very different from saying it will happen, and that difference makes all the difference.
A hard trend can be either cyclic or linear in nature; both types of change yield hard trends. For example, if the stock market is falling today, we know that in the future, it will go back up again and we know that with certainty. The rise and fall of the stock market is a cyclic change, and a hard trend.
Exactly when will it turn and start going up again, and how high will it go when it does? We don't know. The exact timing and extent of the market's behavior is a soft trend, because our behavior and choices can influence it. What we know is that after it falls, it will rise, and after it rises, it will fall. That may sound like a fairly simplistic hard trend, but it has been reliable enough to make Warren Buffett a very rich man.
On the other hand, if the rate at which our laptop computers can process an audio or video clip has gotten a lot faster in the last few decades, what can we know about the future? They'll be even faster. The increasing speed and capacity of computer processors is not cyclic; it is linear, and a hard trend.
Exactly which manufacturers will be introducing the newest, breakthrough models five years from now? We don't know. The acceleration of the technology is a hard trend, but who takes advantage of that technological advancement and brings it to market, that's a soft trend.
Here is an example of the difference between a hard trend and soft trend: Ten years from now (assuming you are still living), you are going to be ten years older than you are today. That's a hard trend. Why? Because there's nothing you or anyone can do to change that fact.
What will your state of health be like then? Worse? Much worse? Better? About the same? I don't know. Neither do you, and neither does anyone else. It is not definitively knowable, because that is a soft trend. Why? Because you can do things to affect it.
This is, in a nutshell, the power of flash foresight: Knowing how to identify hard trends gives us the ability to see the future. Knowing how to identify soft trends gives us the ability to shape the future.
In 1993, I was invited to address a convention of the National Booksellers Association, attended by a crowd of some ten thousand bookstore owners. My keynote address included these remarks:
These programs are not viruses but DoS tools. DoS tools are programs that can be used to make denial of service attacks against any machine in the Internet - typically a web server.
Depending on the settings of your F-Secure security product, it will either automatically delete, quarantine or rename the suspect file, or ask you for a desired action.
More scanning & removal options
More information on the scanning and removal options available in your F-Secure product can be found in the Help Center.
You may also refer to the Knowledge Base on the F-Secure Community site for more information.
BACKGROUND INFORMATION ON DENIAL OF SERVICE ATTACKS
By Camillo Sars, F-Secure Crypto Research
E-mail: Camillo.Sars@F-Secure.com
Denial of Service (DoS) Attacks are attacks on computer systems that aim to disrupt or terminate services provided by the systems. On the Internet, this usually means (repeatedly) crashing services or exhausting some limited resource. DoS attacks can often be performed over the network, and exploit security flaws that exist in the services.
Typical DoS attacks are:
- Exhausting the network bandwidth of a site.
- Exhausting the [inbound] network connections of a service.
- Crashing a service using some security flaw.
- Crashing the computer running a service using some security flaw.
Recently heavy DoS attacks have been described [1,2]. These attacks use a network of computers to distribute the attack sources over several network locations. These attacks are known as Distributed Denial of Service Attacks.
The best-known Distributed DoS attack tools to date are called "trin00" [3,4] and "Tribe Flood Network" (TFN).
The attack tools for Distributed DoS attacks use a master-slave configuration. The slave processes are installed on a large number of compromised Internet hosts, where they report their successful installation to their master process. The master process thus collects a list of many compromised hosts running the slave process. The resulting master-slave network may include a large number of hosts in widely different network locations.
The slaves carry one or several DoS routines that can be invoked remotely by the master process. The master process can also control the targets and parameters for the attack. Some of the commands are password protected to prevent unauthorized activation or deactivation of the attacks.
Slave processes can be installed on virtually any suitable system, as the loss of a single slave process has very little effect on the overall performance of the network.
The master process can poll the status of its slave processes and keeps a list of known slaves. When the attacker connects to the master, a password is required before access is allowed. Once the correct password has been supplied, the attacker can issue commands to the master. The commands direct all the active slaves of the master process, so large-scale attacks can be launched and terminated very quickly.
Master processes are often carefully protected and installed on systems where detection is unlikely because of bad administration practices or heavy user activity.
An attacker can connect to a master process from virtually any internet host, as the master accepts standard telnet-type connections. A single attacker may control several DoS master processes, giving instant access to huge numbers of slave processes.
Attacked systems will notice a huge increase in network traffic. Depending on the attack, the traffic may come from valid internet addresses or from random addresses created by the slave processes.
If the attacked system is directly vulnerable to any DoS attacks performed by the slave processes, the system will crash or malfunction and cannot be reactivated without immediately crashing again.
If the attacked system does not crash from the attacks, its network capacity will quickly be exhausted. Reports indicate attack rates of several gigabits per second, which far exceed the capacity of most Internet sites.
If you are the target of a large distributed DoS attack, there are no good ways to defend yourself. Several well-known internet sites have been completely cut off by DoS attacks recently, including Yahoo.com.
If your systems have been compromised and attackers are running masters or slaves on your systems, you must take immediate action to fix the security holes that were used to compromise your system. Your systems may be actively participating in DoS attacks as long as the processes exist.
The only way to completely eliminate this kind of attack is to decrease the number of systems that can be compromised to a level that is too low for attackers to set up large distributed DoS networks.
Acknowledgements and References
The information in this document is based on several sources, but most notably on information from the Incidents mailing list. This document is intended for informational purposes only.
1. Incidents Mailing List [INCIDENTS@SecurityFocus.com]. Send a message containing "QUERY INCIDENTS" to [LISTSERV@LISTS.SECURITYFOCUS.COM].
2. "CERT Incident Note IN-99-07", The CERT Coordination Center, 1999.
3. David Dittrich [firstname.lastname@example.org], "The DoS Project's 'trinoo' distributed denial of service attack tool", University of Washington, 1999.
4. ISS X-Force, "Denial of Service Attack using the trin00 and Tribe Flood Network programs", Internet Security Systems Inc., 1999.
5. CNET News.com, "How a basic attack crippled Yahoo", http://news.cnet.com/news/0-1005-200-1544455.html?tag=st, February 2000.
E-Voting Takes Another Hit
A group of computer scientists has shown how voting results, held in electronic voting machines, can be changed using a novel hacking technique. It's yet another reason why we need to have a verifiable, auditable paper trail for electronic voting machines. The technique they used to change votes, dubbed return-oriented programming, was first described by Hovav Shacham, a professor of computer science at UC San Diego's Jacobs School of Engineering. Shacham is also an author of a study that detailed the attack on voting systems presented earlier this week at the 2009 Electronic Voting Technology Workshop / Workshop on Trustworthy Elections (EVT/WOTE 2009).
From a statement:
To take over the voting machine, the computer scientists found a flaw in its software that could be exploited with return-oriented programming. But before they could find a flaw in the software, they had to reverse engineer the machine's software and its hardware, without the benefit of source code.
Essentially, return-oriented programming is a technique that uses pieces of existing system code to exploit the system. In this demonstration, the researchers successfully performed a buffer overflow.
The team of scientists involved in the study included Shacham, as well as researchers from the University of Michigan and Princeton University. The hacked voting system was a Sequoia AVC Advantage electronic voting machine.
Shacham concluded that paper-based elections are the way to go. I wouldn't go that far, but he did:
"Based on our understanding of security and computer technology, it looks like paper-based elections are the way to go. Probably the best approach would involve fast optical scanners reading paper ballots. These kinds of paper-based systems are amenable to statistical audits, which is something the election security research community is shifting to."
I'd settle for a verifiable paper-based audit trail.
Professor Edward Felten, a long-time observer of electronic voting systems, also commented:
"This research shows that voting machines must be secure even against attacks that were not yet invented when the machines were designed and sold. Preventing not-yet-discovered attacks requires an extraordinary level of security engineering, or the use of safeguards such as voter-verified paper ballots," said Edward Felten, an author on the new study; Director of the Center for Information Technology Policy; and Professor of Computer Science and Public Affairs at Princeton University.
In February 2008, Felten demonstrated how he was able to access several electronic voting systems at multiple locations in New Jersey.
NEC is working on a new type of small data center unit that uses convection to slash power usage by a third, even in Japan's hot, sticky climate.
The Japanese electronics giant said Tuesday it has designed a new type of portable data center that can use the temperature difference between hot air exhaled from servers and untreated air from outside to create air flow and lessen the need for dedicated coolers.
While using convection to cool data centers is not a new concept, NEC said that most such facilities in Japan operate under old temperature and humidity standards published by ASHRAE, the American Society of Heating, Refrigerating and Air-Conditioning Engineers. The company said that newer standards, combined with its innovations in combining cooling and air flow, will allow for the greater power savings.
ASHRAE standards published in 2004 called for an operating environment of between 20 C and 25 C, and 40 to 50% humidity, for enterprise servers and storage. But newer ranges published this year expand that to between 15 C and 32 C, and 20 to 80% humidity.
NEC said that under the older ranges, only a tiny fraction of possible location and weather combinations in Japan can provide a suitable environment, meaning the vast majority of data centers are built totally enclosed. But under the new standards, NEC has calculated that it can use tightly controlled convection technology to allow for using outside air over 60% of the year in locations as diverse as urban Tokyo and chilly Sapporo in northern Japan.
The company said it has developed a portable data center module about six meters in length that can hold six racks of servers, with each running up to 8kW of power, which can use the new cooling method. Portable data centers are especially popular in Japan's cities, where the streets are narrow and land is expensive, as they can be squeezed in and installed at low cost.
The company said it aims to have a finalized version by 2013. It said it also hopes to apply the technology to larger, fixed data centers as well in the future.
Garbage-in garbage-out is the calling card for data quality.
As such, organizations have spent a lot of time and effort ensuring systems have high quality trusted data. This strategy worked when stakeholders, partners and customers operated in silos. As the digital ecosystem emerges and the walls come down between systems, data quality issues are once again coming to light as data is shared across business processes and stakeholders.
Unfortunately, by the time bad data surfaces it is often too late: it impairs the ability to get products to market and to know the customer, and it creates regulatory issues.
Learn how to succeed at getting the right data to the right person at the right time in the right format by:
- Recognizing the factors that cause data quality challenges in complex business ecosystems
- Creating processes to address data quality issues and govern globally and locally
- Investing in technologies to create data that is trusted and relevant to the business
Just last week I wrote about the efforts at CERN to preserve the world’s first website and all of the accompanying technology. Those efforts included preserving the experience of surfing those first web sites using one of the earliest browsers, the line mode browser. A couple of days after I wrote my piece, CERN officially launched a line mode browser emulator.
The line mode browser was not the world’s first web browser. That distinction belongs to WorldWideWeb, which was a browser that Tim Berners-Lee created to run on his NeXT computer that had a graphical user interface and a mouse. Since most people in the early 1990s didn’t have access to such a cutting edge computer, CERN graduate student Nicola Pellow developed a browser that could run on the more common computers of the day, with simple, text-only displays and no mouse.
Recently, CERN assembled a dozen developers to create a line mode browser emulator that would run in current browsers. They have done so, and written all about the process of bringing this historic piece of technology back to life, including interviews with the people responsible for making it happen. It’s interesting to learn about how they made it work.
One of their main goals was to recreate the look of the browser, via colors and fonts, and the way the old, dumb terminals would draw one character at a time on the screen. They recreated that effect by covering the page in black and then revealing each character by erasing a character-sized rectangle from that cover, one by one, line by line. Clever!
They also recreated the sound of typing on older keyboards, specifically an IBM RS/6000 keyboard, by using HTML5 audio elements. Even more clever!
You can use the emulator to view that first website, and really experience it as many of those first web surfers did. You can also use it to view any current website, interestingly enough. When you do view a current site through the emulator, you’ll see a lot of code on the screen, because the original line mode browser would simply display the content between unknown tags inline.
The code has also been open-sourced, so you can download it yourself and play with it to your heart’s content.
I encourage you to try the emulator and step back in time to an era when the web was lacking Vines, animated GIFs and cat videos. Good times...
Using off-the-shelf gaming technology that tracks brain activity, a team of scientists has shown that it's possible to steal passwords and other personal information.
Researchers from the University of Oxford, University of Geneva and the University of California at Berkeley demonstrated the possibility of brain hacking using software built to work with Emotiv Systems' $299 EPOC neuro-headset.
Developers build software today that responds to signals emitted over Bluetooth from EPOC and other so-called brain computer interfaces (BCI), such as MindWave from NeuroSky. Of course, if software developers can build apps for such devices, so can criminals.
"The security risks involved in using consumer-grade BCI devices have never been studied and the impact of malicious software with access to the device is unexplored," the researchers said in a paper presented in July at the USENIX computer conference. "We take a rst step in studying the security implications of such devices and demonstrate that this upcoming technology could be turned against users to reveal their private and secret information."
The researchers found that the software they built to read signals from EPOC significantly improved the chances of guessing personal identification numbers (PINs), the general area participants in the experiment lived, people they knew, their month of birth, and the name of their bank.
The Emotiv device, used in gaming and as a hands-free keyboard, uses sensors to record electrical activity along the scalp. Voltage in the brain spikes when people see something they recognize, so tracking the fluctuation makes it possible to gather information about people by showing them series of images.
The researchers conducted their experiments on 28 computer science students. In the PIN experiment, the subjects chose a four-digit number and then watched as the numbers zero to nine were flashed on a computer screen 10 times for each digit. While the images flashed before the subjects, the researchers tracked brain activity through signals from the EPOC neuro-headset.
The same form of repetitive showing of images was used in the other experiments, such as a series of bankcards to determine a subject's bank or images of people to find the one they knew.
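In essence, the analysis behind such an experiment averages the recorded brain response across the repeated presentations of each candidate value and picks the candidate with the strongest recognition response. The Python sketch below is a deliberately simplified, hypothetical illustration of that idea, using random numbers as a stand-in for real EEG epochs; it is not based on the researchers' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
digits = list(range(10))
secret_digit = 7  # known only to the "subject" in this toy example

# Simulate 10 flashes per digit; the recognized digit produces a slightly
# larger average deflection than the others (toy stand-in for EEG epochs).
responses = {d: rng.normal(0.5 if d == secret_digit else 0.0, 1.0, size=10)
             for d in digits}

# Average the response over repeated presentations and rank the candidates
mean_response = {d: r.mean() for d, r in responses.items()}
best_guess = max(mean_response, key=mean_response.get)
print(f"best guess for the digit: {best_guess}")
```

Averaging over many repetitions is what lifts the guess rate above chance, which is why each digit was flashed ten times rather than once.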
In general, the researchers' chance of guessing correctly increased to between 20% and 30%, up from 10% without the brain tracking. The exception was in figuring out people's month of birth. The rate of guessing correctly increased to as much as 60%.
Nevertheless, the overall reliability was not high enough for an attack targeted at a few individuals. "The attack works, but not in a reliable way," Mario Frank, a UC Berkeley researcher in the study, said on Friday. "With the equipment that we used, it's not possible to be sure that you found the true answer."
A criminal would have to build malware that could be distributed to as many people as possible. Such a tactic is used in distributing malware via email, knowing that only a small fraction of recipients will open the attachments. However, that small fraction is enough to create botnets of hundreds of thousands of computers.
With BCI devices, the user base today is too small to launch large-scale attacks. Also, users buy software directly from manufacturers, so it would be difficult for criminals to distribute malware.
However, a security risk could arise in the future, if brain-tracking devices become standard for interacting with computers and online stores are created to sell hundreds of thousands of applications, much like people buy apps for Android smartphones today.
To minimize risk, device manufacturers should start building security mechanisms today, such as limiting the information software can access from the headset to only the data needed to run the app, experts say.
"One thing that could be improved, for instance, is that the device itself does some pre-processing and only outputs the data that is required for the application," Frank said.
Such precautions should be taken today to prevent unnecessary risks in the future, he said.
The Defense Department should collect human genome sequence data from military personnel to help determine their genetic traits, which could improve performance and cut medical costs, a Defense scientific advisory panel recommended in a report released in December 2010.
The report, posted Jan. 13 on the Federation of American Scientists' website by Steven Aftergood, director of the organization's Project on Government Secrecy, said collecting genome data might help uncover phenotypes that pertain to short- and long-term medical readiness, physical and mental performance, and response to drugs and vaccines.
Thomas Murray, president of The Hastings Center, a nonprofit bioethics research institute in Garrison, N.Y., said phenotypes are the physical manifestation of DNA code contained in cells, which determine characteristics such as hair color, height, strength and mental acuity. Phenotypes also reflect exposure to environmental conditions and toxins as well as stress during childhood.
The Defense human genome report, prepared by JASON, an independent scientific advisory group, said the ability to determine phenotypes could help predict the response of individual troops to battlefield stress, including post-traumatic stress disorder, as well as the ability to tolerate conditions such as sleep deprivation and dehydration.
It also could help determine the ability of troops to withstand prolonged exposure to a range of elements, such as heat, cold or high altitude, or their susceptibility to complications like traumatic bone fracture, prolonged bleeding, or slow wound healing, the report said.
The collection of human genome data also could help the Veterans Affairs Department treat soldiers after they leave active duty, the report said, and recommended genome data be incorporated into the Defense and VA electronic health record systems.
While the National Institutes of Health and the Energy Department spent $300 million to initially sequence the human genome a decade ago, the JASON report noted costs have dropped to $20,000 and predicted that soon-to-be-released DNA sequencing systems will drive costs of chemical reagents below $100 per genome sequence.
The lower cost will enable Defense to sequence the data for its active duty force of 1.4 million troops relatively cheaply, the report said. But even at $100 for the chemicals needed to do the sequencing, the bill amounts to $140 million, which does not include data storage or computing costs.
Nor does it cover the costs of genetic counseling, Murray said, referring to the face-to-face session in which every member of the military has their genome explained.
Murray said it could take years before genome studies could have any bearing on troop health and readiness. He also is concerned that Defense could use genomic information to pin the blame for some conditions -- such as combat stress -- on genetic code, rather than taking responsibility for situations that caused those conditions, such as multiple combat tours.
According to Blaine Bettinger, a Syracuse, N.Y.-based intellectual property lawyer who has a doctorate in biochemistry with a concentration in genetics and writes the Genetic Genealogist blog, a mass collection of genome data at Defense could eventually help improve the health of military members and their families. Collecting basic genomic information on such a large population could also "benefit all of humanity," Bettinger said.
But Bettinger warned that collection of such data also could be used against individuals if, for example, they had conditions the military could cite as a reason to limit their careers.
Dr. Robert Cook-Deegan, director of the Center for Genome Ethics, Law & Policy at Duke University's Institute for Genome Sciences & Policy, said the report did not make a deep or persuasive case on how genomic information could help Defense better manage conditions such as PTSD.
In an e-mail to NextGov, Cook-Deegan said, "No one knows how important genetic factors will prove to be, or how that story will play out. The technical capacity for DNA sequencing merely means that the genetic part of that complex story will probably move faster than it otherwise would, but the causal pathway is still pretty complex, and it's not at all clear to me what Defense would do if it could identify some folks more prone to exhibiting symptoms of PTSD than others, and the report does not delve into such issues."
If he had one recommendation for Defense, Cook-Deegan said, it would be to take the lead to help set standards for the data formats that will be needed to make the genomic information portable and interoperable.
Researchers at the Universidad de Cantabria in northern Spain are just beginning to tap into the power of ALTAMIRA, a new 80-teraflop supercomputer cluster installed at the university in 2012.
The IBM-built system is one of the largest in Spain, and will help university researchers delve into a number of fields, including astrophysics, physics, medicine, and climate change.
ALTAMIRA is installed at the Instituto de Física de Cantabria, a joint research center co-founded by the university and the Spanish National Research Council (CSIC).
“Universidad de Cantabria has always been committed to driving scientific progress across Europe by developing and supporting projects within several highly specialized fields,” said Jesús Marco de Lucas, a CSIC researcher at the Cantabria Physics Institute, in an IBM case study on the ALTAMIRA implementation. “To do so, we need a computational infrastructure that is as advanced as our research.”
ALTAMIRA will replace a 4-teraflop HPC resource at the university, and provide the extra computing horsepower needed to model systems and make discoveries in a variety of areas, including processing large maps of the universe, simulating ocean waves and tsunamis, searching for new sub-atomic particles, and supporting the development of personalized medicine.
The cluster, which was funded by the INNOCAMPUS national initiative, will be utilized by CSIC researchers at the university. The plan is to bring outside corporations into ALTAMIRA’s fold as well, including companies in the financial, telecommunications, and energy sectors that are headquartered in Cantabria’s local science and technology park.
“In the Cantabria region there are several institutions that are very interested in energy distribution and consumption,” de Lucas said in the case study. “Universidad de Cantabria wanted to give them an opportunity to gain a better understanding about how to optimize the use of energy.” The system will also be used in support of research conducted elsewhere in Europe.
It’s conceivable that ALTAMIRA will also be processing large amounts of human genetic data for the local hospital too. The university plans to support the personalized medicine endeavors of the Hospital Universitario Marqués de Valdecilla. “Our previous IT system was not powerful enough to support the intense requirements of such research projects,” de Lucas says in the case study.
ALTAMIRA is an IBM iDataPlex cluster composed of 3,840 Intel Xeon cores spread across IBM iDataPlex, BladeCenter, and System x server nodes, and various IBM storage servers. The cluster uses an Infiniband FDR interconnect, the GPFS file system, and runs the Scientific Linux operating system.
The cluster debuted in June 2012 at number 358 on the Top 500 list with a sustained performance of 74.4 teraflops and a peak performance of 79.9 teraflops. It has since been dropped from the Top 500 list, which has a 96.6-teraflop system listed in 500th place. At 79.76 kilowatts, ALTAMIRA's energy consumption was enough to get it listed on the Top Green list of most energy efficient HPC systems.
(Note: This is the sixth and final article in a series on advanced analytics.)
Model-making is at the heart of advanced analytics. Thankfully, few of us need to create analytical models or learn the statistical techniques upon which they're based. However, any self-respecting business intelligence (BI) professional needs to understand the modeling process so he can better support the data requirements of analytical modelers.
An analytical model is simply a mathematical equation that describes relationships among variables in a historical data set. The equation either estimates or classifies data values. In essence, a model draws a "line" through a set of data points that can be used to predict outcomes. For example, a linear regression draws a straight line through data points on a scatterplot that shows the impact of advertising spend on sales for various ad campaigns. The model's formula -- in this case, "Sales = 17.813 + (0.0897 * advertising spend)" -- enables executives to accurately estimate sales if they spend a specific amount on advertising. (See figure 1.)
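To make the idea concrete, here is a minimal Python sketch that fits the same kind of straight line to a handful of hypothetical campaign records and then uses the fitted equation to estimate sales for a planned spend. The spend and sales figures are invented for illustration and will not reproduce the coefficients quoted above.

```python
import numpy as np

# Hypothetical historical campaigns: advertising spend and resulting sales
ad_spend = np.array([50_000, 80_000, 120_000, 150_000, 200_000], dtype=float)
sales    = np.array([22_000, 24_500, 28_700, 31_200, 35_600], dtype=float)

# Fit a straight line (degree-1 polynomial): sales = intercept + slope * spend
slope, intercept = np.polyfit(ad_spend, sales, 1)
print(f"Model: sales = {intercept:.3f} + ({slope:.4f} * advertising spend)")

# Use the fitted equation to estimate sales for a planned spend
planned_spend = 175_000
estimated_sales = intercept + slope * planned_spend
print(f"Estimated sales at {planned_spend:,} spend: {estimated_sales:,.0f}")
```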
Algorithms that create analytical models (or equations) come in all shapes and sizes. Classification algorithms, such as neural networks, decision trees, clustering, and logistic regression, use a variety of techniques to create formulas that segregate data values into groups. Online retailers often use these algorithms to create target market segments or determine which products to recommend to buyers based on their past and current purchases. (See figure 2.)
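As a hedged illustration of the segmentation use case, the sketch below groups a few hypothetical shoppers into target segments with k-means clustering. It assumes scikit-learn is available, and the feature values are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical shopper features: [annual spend, orders per year, avg basket size]
shoppers = np.array([
    [250,  3,  80], [2200, 24,  95], [400,  5,  75],
    [1800, 20,  90], [300,  4,  70], [2500, 30, 110],
])

# Group shoppers into two target segments
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shoppers)
for features, segment in zip(shoppers, kmeans.labels_):
    print(f"shopper {features} -> segment {segment}")
```

A retailer could then profile each segment separately or feed the segment label into a recommendation model.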
Trusting Models. Unfortunately, some models are more opaque than others; that is, it's hard to understand the logic the model used to identify relevant patterns and relationships in the data. The problem with these "black box" models is that business people often have a hard time trusting them until they see quantitative results, such as reduced costs or higher revenues. Getting business users to understand and trust the output of analytical models is perhaps the biggest challenge in data mining.
To earn trust, analytical models have to validate a business person's intuitive understanding of how the business operates. In reality, most models don't uncover brand new insights; rather they unearth relationships that people understand as true but aren't looking at or acting upon. The models simply refocus people's attention on what is important and true and dispel assumptions (whether conscious or unconscious) that aren't valid.
Given the power of analytical models, it's important that analytical modelers take a disciplined approach. Analytical modelers need to adhere to a methodology to work productively and generate accurate models. The modeling process consists of six distinct tasks:
- Define the project
- Explore the data
- Prepare the data
- Create the model
- Deploy the model
- Manage the model
Interestingly, preparing the data is the most time-consuming part of the process, and if not done right, can torpedo the analytical model and project. "[Data preparation] can easily be the difference between success and failure, between usable insights and incomprehensible murk, between worthwhile predictions and useless guesses," writes Dorian Pyle in his book, "Data Preparation for Data Mining."
Figure 3 shows a breakdown of the time required for each of these six steps. Data preparation consumes one-quarter (25%) of an analytical modeler's time, followed by model creation (23%), data exploration (18%), project definition (13%), scoring and deployment (12%), and model management (9%). Thus, almost half of an analytical modeler's time (43%) is spent exploring and preparing data, although this varies based on the condition and availability of data. Analytical modelers are like house painters who must spend lots of time preparing a paint surface to ensure a long-lasting paint finish.
Figure 3. Analytical Modeling Tasks
From Wayne Eckerson, "Predictive Analytics: Extending the Value of Your Data Warehousing Investment," 2007. Based on 166 respondents who have a predictive modeling practice.
Project Definition. Although defining an analytical project doesn't take as long as some of the other steps, it's the most critical task in the process. Modelers that don't know explicitly what they're trying to accomplish won't be able to create useful analytical models. Thus, before they start, good analytical modelers spend a lot of time defining objectives, impact, and scope.
Project objectives consist of the assumptions or hypotheses that a model will evaluate. Often, it helps to brainstorm hypotheses and then prioritize them based on business requirements. Project impact defines the model output (e.g., a report, a chart, or scoring program), how the business will use that output (e.g., embedded in a daily sales report or operational application or used in strategic planning), and the projected return on investment. Project scope defines who, what, where, when, why, and how of the project, including timelines and staff assignments.
For example, a project objective might be: "Reduce the amount of false positives when scanning credit card transactions for fraud." While the output might be: "A computer model capable of running on a server and measuring 7,000 transaction per minute, scoring each with probability and confidence, and routing transactions above a certain threshold to an operator for manual intervention."
Data Exploration. Data exploration or data discovery involves sifting through various sources of data to find the data sets that best fit the project. During this phase, the analytical modeler will document each potential data set with the following items:
- Access methods: Source systems, data interfaces, machine formats (e.g. ASCII or EBCDIC), access rights, and data availability.
- Data characteristics: Field names, field lengths, content, format, granularity and statistics (e.g. counts, mean, mode, median, and min/max values)
- Business rules: Referential integrity rules, defaults, other business rules
- Data pollution: Data entry errors, misused fields, bogus data
- Data completeness: Empty or missing values, sparsity
- Data consistency: Labels and definitions
Typically, an analytical modeler will compile all this information into a document and use it to help prioritize which data sets to use for which variables. (See figure 4.) A data warehouse with well documented metadata can greatly accelerate the data exploration phase because it also maintains much of this information. However, analytical modelers often want to explore external data and other data sets that don't exist in the data warehouse and must compile this information manually.
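Much of the field-level documentation listed above, such as counts, means, medians, min/max values and missing-value rates, can be gathered with a short profiling script. The pandas-based sketch below is one minimal way to do it; the table and column names are invented stand-ins for a real source.

```python
import numpy as np
import pandas as pd

# Stand-in for a candidate source table pulled during data discovery
df = pd.DataFrame({
    "cust_id": [101, 102, 103, 104, 105],
    "balance": [1200.0, np.nan, 860.5, 15000.0, 430.0],
    "region":  ["north", "south", "south", None, "east"],
})

# Field-level profile: type, non-null counts, missing rate, distinct values
profile = pd.DataFrame({
    "dtype":       df.dtypes.astype(str),
    "non_null":    df.count(),
    "missing_pct": (df.isna().mean() * 100).round(1),
    "distinct":    df.nunique(),
})
print(profile)

# Numeric summary statistics (mean, min/max, median) for numeric fields
print(df.describe().T[["mean", "min", "50%", "max"]])
```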
Data Preparation. Once analytical modelers document and select their data sets, they then must standardize and enrich the data. First, this means correcting any data errors that exist in the data and standardizing the machine format (e.g. ASCII vs EBCDIC). Then, it involves merging and flattening the data into a single wide table which may consist of hundreds of variables (i.e., columns). Finally, it means enriching the data with third party data, such as demographic, psychographic, or behavioral data that can enhance the models.
From there, analytical modelers transform the data so it's in an optimal form to address project objectives and meet processing requirements for specific machine learning techniques. Common transformations include summarizing data using reverse pivoting (see Figure 5), transforming categorical values into numerical values, normalizing numeric values so they range from 0 to 1, consolidating continuous data into a finite set of bins or categories, removing redundant variables, and filling in missing values.
Modelers try to eliminate variables and values that aren't relevant as well as fill in empty fields with estimated or default values. In some cases, modelers may want to increase the bias or skew in a data set by duplicating outliers, giving them more weight in the model output. These are just some of the many data preparation techniques that analytical modelers use.
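A few of these transformations are illustrated in the minimal pandas sketch below: filling missing values with a default, normalizing a numeric field to the 0-to-1 range, consolidating a continuous field into bins, and turning a categorical field into numeric indicator columns. The column names and values are hypothetical.

```python
import pandas as pd

df = pd.DataFrame({
    "income":  [42_000, None, 118_000, 65_000, 87_000],
    "age":     [23, 35, 58, 41, 67],
    "channel": ["web", "branch", "web", "phone", "branch"],
})

# Fill missing values with a default (here, the column median)
df["income"] = df["income"].fillna(df["income"].median())

# Normalize a numeric field so values range from 0 to 1
df["income_norm"] = (df["income"] - df["income"].min()) / (df["income"].max() - df["income"].min())

# Consolidate a continuous field into a finite set of bins
df["age_band"] = pd.cut(df["age"], bins=[0, 30, 50, 120], labels=["young", "mid", "senior"])

# Transform categorical values into numeric indicator columns
df = pd.get_dummies(df, columns=["channel"])
print(df)
```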
Figure 5. Reverse Pivoting
To model a banking "customer" not bank transactions, analytical modelers use a technique called reverse pivoting to summarize banking transactions to show customer activity by period.
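In code, a reverse pivot of this kind amounts to a grouped aggregation followed by a pivot: transaction rows keyed by customer and period become one row per customer with one activity column per period. The sketch below assumes a hypothetical transactions table and uses pandas.

```python
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [101, 101, 101, 202, 202, 202],
    "quarter":     ["Q1", "Q2", "Q2", "Q1", "Q1", "Q3"],
    "amount":      [120.0, 80.0, 45.0, 300.0, 55.0, 210.0],
})

# One row per customer, one column per quarter, summed activity in each cell
customer_activity = transactions.pivot_table(
    index="customer_id", columns="quarter", values="amount",
    aggfunc="sum", fill_value=0,
)
print(customer_activity)
```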
Analytical Modeling. Analytical modeling is as much art as science. Much of the craft involves knowing what data sets and variables to select and how to format and transform the data for specific data models. Often, a modeler will start with 100+ variables and then, through data transformation and experimentation, winnow them down to 12 to 20 variables that are most predictive of the desired outcome.
In addition, an analytical modeler needs to select historical data that has enough of the "answers" built in it with a minimal amount of noise. Noise consists of patterns and relationships that have no business value, such as a person's birth date and age, which gives a 100 percent correlation. A data modeler will eliminate one of those variables to reduce noise. In addition, they will validate their models by testing them against random subsets of the data which they set aside in advance. If the scores remain compatible across training, testing, and validation data sets, then they know they have a fairly accurate and relevant model.
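A minimal version of that hold-out check, assuming scikit-learn and an already-prepared modeling table, might look like the sketch below: set aside random subsets in advance, train on one, and compare accuracy across training, test, and validation sets. The generated data is a stand-in for real project data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a prepared modeling table (features X, known outcomes y)
X, y = make_classification(n_samples=1_000, n_features=12, random_state=0)

# Set aside random subsets in advance: train, test, and validation
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_hold, y_hold, test_size=0.5, random_state=0)

model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

# Compatible scores across the three sets suggest the model generalizes
for name, (Xs, ys) in {"train": (X_train, y_train),
                       "test": (X_test, y_test),
                       "validation": (X_val, y_val)}.items():
    print(f"{name} accuracy: {model.score(Xs, ys):.3f}")
```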
Finally, the modeler must choose the right analytical techniques and algorithms or combinations of techniques to apply to a given hypothesis. This is where modelers' knowledge of business processes, project objectives, corporate data, and analytical techniques come into play. They may need to try many combinations of variables and techniques before they generate a model with sufficient predictive value.
Every analytical technique and algorithm has its strengths and weaknesses, as summarized in the tables below. The goal is to pick the right modeling technique so you have to do as little preparation and transformation as possible, according to Michael Berry and Gordon Linoff in their book, "Data Mining Techniques: For Marketing, Sales, and Customer Support."
Table 1. Analytical Models
Table 2. Analytical Techniques
Deploy the Model. Model deployment takes many forms, as mentioned above. Executives can simply look at the model, absorb its insights, and use it to guide their strategic or operational planning. But models can also be operationalized. The most basic way to operationalize a model is to embed it in an operational report. For example, a daily sales report for a telecommunications company might list each sales representative's customers by their propensity to churn. Or a model might be applied at the point of customer interaction, whether at a branch office or at an online checkout counter.
To apply models, you first have to score all the relevant records in your database. This involves converting the model into SQL or some other program that can run inside the database that holds the records that you want to score. Scoring involves running the model against each record and generating a numeric value, usually between 0 and 1, which is then appended to the record as an additional column. A higher score generally means a higher propensity to portray the desired or predicted behavior. Scoring is usually a batch process that happens at night or on the weekend depending on the volume of records that need to be scored. However, scoring can also happen in real-time, which is essentially what online retailers do when they make real-time recommendations based on purchases a customer just made.
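One of many possible implementations of such a batch-scoring pass is sketched below: train a simple propensity model, run it against each record to be scored, and append the resulting value between 0 and 1 as an extra column that can be written back to the database or embedded in a report. The model, fields, and records are all hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training extract with known churn outcomes
train = pd.DataFrame({"tenure_months": [3, 40, 7, 55, 12, 60],
                      "support_calls": [8, 1, 6, 0, 5, 1],
                      "churned":       [1, 0, 1, 0, 1, 0]})
model = LogisticRegression().fit(train[["tenure_months", "support_calls"]],
                                 train["churned"])

# Records to score (in practice, every relevant record in the database)
customers = pd.DataFrame({"customer_id":   [9001, 9002, 9003],
                          "tenure_months": [4, 48, 14],
                          "support_calls": [7, 0, 3]})

# Append a churn-propensity score between 0 and 1 to each record
customers["churn_score"] = model.predict_proba(
    customers[["tenure_months", "support_calls"]])[:, 1]
print(customers.sort_values("churn_score", ascending=False))
```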
Model Management. Once the model is built and deployed, it must be maintained. Models become obsolete over time, as the market or environment in which they operate changes. This is particularly true for volatile environments, such as customer marketing or risk management. Also, complex models that deliver high business value usually require a team of people to create, modify, update, and certify the models.
In such an environment, it's critical to have a model repository that can track versions, audit usage, and manage a model through its lifecycle. Once an organization has more than one operational model, it's imperative it implements model management utilities, which most data mining vendors now support.
Analytical models can be powerful. They can help organizations use information proactively instead of reactively. They can make predictions that streamline business processes, reduce costs, increase revenues, and improve customer satisfaction.
To create analytical models is as much art as science. A well-trained modeler needs to step through a variety of data-oriented tasks to create accurate models. Much of the heavy lifting involved in creating analytical models involves exploring and preparing the data. A well designed data warehouse or data mart can accelerate the modeling process by collecting and documenting a large portion of the data that modelers require and transforming that data into wide, flat tables conducive to the modeling process.
Posted November 29, 2011 1:42 PM
Wan D. (CAS Institute of Earth Environment; University of Chinese Academy of Sciences), Mu G. (Xinjiang Institute of Ecology and Geography; Cele Field National Station of Observation and Research for Desert Grass Land Ecosystem in Xinjiang), and 3 more authors.
Environmental Earth Sciences, 2013
Heavy aeolian deposition is one of the most threatening natural hazards to oases in arid areas. How an oasis affects aeolian deposition is tightly related to the local ecological environment. To examine the effects of an oasis on aeolian deposition under different weather conditions, monthly aeolian deposition from April 2008 to March 2009, plus additional samples during dust storms in April and May 2008, were collected at four sites along the Qira oasis. The monthly ADRs (aeolian deposition rates) varied greatly with seasons and sites, ranging from 19.4 to 421.2 g/m2/month and averaging 198.8 g/m2/month. Aeolian deposition in the oasis was composed dominantly of sand and silt. Based on the variations of ADRs from the four sites, it can be found that the oasis exhibits two different effects on aeolian deposition under different weather conditions. During dust storms, the oasis demonstrates a significant shielding effect due to the obstruction of the oasis-protection systems, resulting in most aeolian particles being deposited at the windward side of the oasis. During non-dust-storm periods with weak winds, by contrast, the oasis exhibits an "attracting" effect on aeolian deposition, leading to a higher ADR inside the oasis. Because the annual ADR in Qira is dominated by the non-dust-storm ADR, the oasis seems to become an important aeolian deposition area as a result of this "attracting" effect. © 2012 Springer-Verlag.
Dependency on network resources has grown tremendously over the past ten years. In today's world, a company's success is highly dependent on its network availability. As a result, companies are increasingly less tolerant of network failures. Therefore, network troubleshooting has become a crucial element to many organizations.
Not only has the dependency on networks grown, but the industry also is moving toward increasingly complex environments, involving multiple media types, multiple protocols, and often interconnection to unknown networks. These unknown networks may be defined as a transit network belonging to an Internet service provider (ISP), or a telco that interconnects private networks. The convergence of voice and video into data networks has also added to the complexity and the importance of network reliability.
More complex network environments mean that the potential for connectivity and performance problems in internetworks is high, and the source of problems is often elusive.
Symptoms, Problems, and Solutions
Failures in internetworks are characterized by certain symptoms. These symptoms might be general (such as clients being incapable of accessing specific servers) or more specific (routes not existing in a routing table). Each symptom can be traced to one or more problems or causes by using specific troubleshooting tools and techniques. After being identified, each problem can be remedied by implementing a solution consisting of a series of actions.
This book describes how to define symptoms, identify problems, and implement solutions in generic environments. You should always apply the specific context in which you are troubleshooting to determine how to detect symptoms and diagnose problems for your specific environment.
General Problem-Solving Model
When you're troubleshooting a network environment, a systematic approach works best. An unsystematic approach to troubleshooting can result in wasting valuable time and resources, and can sometimes make symptoms even worse. Define the specific symptoms, identify all potential problems that could be causing the symptoms, and then systematically eliminate each potential problem (from most likely to least likely) until the symptoms disappear.
Figure 1-1 illustrates the process flow for the general problem-solving model. This process flow is not a rigid outline for troubleshooting an internetwork; it is a foundation from which you can build a problem-solving process to suit your particular environment.
Figure 1-1 General Problem-Solving Model
The following steps detail the problem-solving process outlined in Figure 1-1:
Step 1 When analyzing a network problem, make a clear problem statement. You should define the problem in terms of a set of symptoms and potential causes.
To properly analyze the problem, identify the general symptoms and then ascertain what kinds of problems (causes) could result in these symptoms. For example, hosts might not be responding to service requests from clients (a symptom). Possible causes might include a misconfigured host, bad interface cards, or missing router configuration commands.
Step 2 Gather the facts that you need to help isolate possible causes.
Ask questions of affected users, network administrators, managers, and other key people. Collect information from sources such as network management systems, protocol analyzer traces, output from router diagnostic commands, or software release notes.
Step 3 Consider possible problems based on the facts that you gathered. Using the facts, you can eliminate some of the potential problems from your list.
Depending on the data, for example, you might be able to eliminate hardware as a problem so that you can focus on software problems. At every opportunity, try to narrow the number of potential problems so that you can create an efficient plan of action.
Step 4 Create an action plan based on the remaining potential problems. Begin with the most likely problem, and devise a plan in which only one variable is manipulated.
Changing only one variable at a time enables you to reproduce a given solution to a specific problem. If you alter more than one variable simultaneously, you might solve the problem, but identifying the specific change that eliminated the symptom becomes far more difficult and will not help you solve the same problem if it occurs in the future.
Step 5 Implement the action plan, performing each step carefully while testing to see whether the symptom disappears.
Step 6 Whenever you change a variable, be sure to gather results. Generally, you should use the same method of gathering facts that you used in Step 2 (that is, working with the key people affected, in conjunction with utilizing your diagnostic tools).
Step 7 Analyze the results to determine whether the problem has been resolved. If it has, then the process is complete.
Step 8 If the problem has not been resolved, you must create an action plan based on the next most likely problem in your list. Return to Step 4, change one variable at a time, and repeat the process until the problem is solved.
Note If you exhaust all the common causes and actions—either those outlined in this book or ones that you have identified for your environment—you should contact your Cisco technical support representative.
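For readers who prefer to see the loop spelled out, here is a minimal sketch of Steps 2 through 8 in Python-style pseudocode. The function names and the likelihood attribute are illustrative assumptions only; they are not part of any Cisco tool or command set.

def troubleshoot(symptoms, potential_causes, gather_facts, apply_fix, undo_fix, is_resolved):
    # Sketch of the general problem-solving model described above.
    facts = gather_facts(symptoms)                                          # Step 2
    candidates = [c for c in potential_causes if c.consistent_with(facts)]  # Step 3
    candidates.sort(key=lambda c: c.likelihood, reverse=True)               # Step 4: most likely first
    for cause in candidates:
        apply_fix(cause)                                                    # Step 5: change one variable
        results = gather_facts(symptoms)                                    # Step 6
        if is_resolved(results):                                            # Step 7: symptom gone?
            return cause
        undo_fix(cause)                                                     # Step 8: back out, try the next cause
    return None                                                             # exhausted: escalate to technical support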
Preparing for Network Failure
It is always easier to recover from a network failure if you are prepared ahead of time. Possibly the most important requirement in any network environment is to have current and accurate information about that network available to the network support personnel at all times. Only with complete information can intelligent decisions be made about network change, and only with complete information can troubleshooting be done as quickly and as easily as possible.
During the process of network troubleshooting, the network is expected to exhibit abnormal behavior. Therefore, it is always a good practice to set up a maintenance time window for troubleshooting to minimize any business impact. Always document any changes being made so that it is easier to back out if troubleshooting has failed to identify the problem within the maintenance window.
To determine whether you are prepared for a network failure, answer the following questions:
•Do you have an accurate physical and logical map of your internetwork?
Does your organization or department have an up-to-date internetwork map that outlines the physical location of all the devices on the network and how they are connected, as well as a logical map of network addresses, network numbers, subnetworks, and so forth?
•Do you have a list of all network protocols implemented in your network?
For each of the protocols implemented, do you have a list of the network numbers, subnetworks, zones, areas, and so on that are associated with them?
•Do you know which protocols are being routed?
For each routed protocol, do you have correct, up-to-date router configuration?
•Do you know which protocols are being bridged?
Are any filters configured in any bridges, and do you have a copy of these configurations?
•Do you know all the points of contact to external networks, including any connections to the Internet?
For each external network connection, do you know what routing protocol is being used?
•Do you have an established baseline for your network?
Has your organization documented normal network behavior and performance at different times of the day so that you can compare the current problems with a baseline?
If you can answer yes to all questions, you will be able to recover from a failure more quickly and more easily than if you are not prepared. Lastly, for every problem solved, be sure to document the problems with solutions provided. This way, you will create a problem/answer database that others in your organization can refer to in case similar problems occur later. This will invariably reduce the time to troubleshoot your networks and, consequently, minimize your business impact. | <urn:uuid:5091cba8-81e2-4257-aab0-46cff90f332b> | CC-MAIN-2017-04 | http://www.cisco.com/en/US/docs/internetworking/troubleshooting/guide/tr1901.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00205-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926237 | 1,534 | 2.890625 | 3 |
The last three postings provided an introduction to procurement, relating the steps and stages to PMI’s system of terminology. In this and the next several postings, I will explain the boundaries of each of the procurement processes.
The PMI refers to procurement planning as ‘plan procurements.’ In ‘plan procurements,’ the project’s needs (for assets, skills and services) are converted through make-or-buy analysis into procurement decisions.
General project planning leads into procurement planning. Planning begins by converting stakeholder objectives into technical requirements.
Complete and thorough requirements analysis is fundamental to the success of any project because it determines exactly what must be accomplished in order to satisfy stakeholders.
The ‘scope baseline’ is developed from the requirements documents. The baseline is made up of the scope statement, work breakdown structure (WBS) and WBS Dictionary.
Time and cost management planning are driven by expectations set out in the scope baseline.
PMI Planning Steps
- Project Charter
- Collect Requirements
- Define Activities
- Sequence Activities
- Estimate Activity Resources
- Estimate Activity Duration
- Develop Schedule
- Estimate Costs
- Determine budget
- Human Resources
Project analysis leads to the identification of ‘activity resource requirements.’ These ‘resource requirements’ are a complete listing of everything the project needs in order to be completed.
The gap between the assets and skills the organization can dedicate to the project and what the project must have in order to be completed determines what must be procured.
Make-or-buy decisions are related to the dilemma of what can, should and must be purchased if the project is to be completed as required by the stakeholders.
In my next submission I will outline the documents generated by the ‘plan procurements’ process. | <urn:uuid:548c9378-f437-44ed-84cc-2c142f6af63b> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/04/28/procurement-explained-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00509-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898709 | 381 | 2.640625 | 3 |
Analyzing Big Data in DNA to Find Diseases
September 20, 2012
Massive amounts of raw data cause problems for more fields than just computer science. Life scientists struggle to wade through the data generated by sequencing human genes and genetic characteristics. However, according to "Computational Method for Pinpointing Genetic Factors That Cause Disease" on Science Daily, researchers at Roswell Park Cancer Institute and the Center for Human Genome Variation at Duke University Medical Center have developed an approach for analyzing this data to quickly cull out relevant genetic patterns and find variants that lead to particular disorders.
The study is outlined in the September issue of The American Journal of Human Genetics. We learn:
“[Zhu, the paper’s first author, notes,] ‘We’re confident that our method can be applied to genome-wide association studies related to diseases for which there are no known causal variants, and by extension may advance the development of targeted approaches to treating those diseases.’
‘This approach helps to integrate the large body of data available in GWASs with the rapidly accumulating sequence data,’ adds David B. Goldstein, […] Director of the Center for Human Genome Variation at DUMC and senior author of the paper.’”
The technological advancement allowing scientists to pinpoint such causal variants is fascinating. However, as this technology advances, we are left to wonder how insurers will begin to use these predictive methods. Could faulty genes be analyzed in the future to justify declining policies?
Andrea Hayden, September 20, 2012 | <urn:uuid:057f1a0f-e1d0-431c-ab5d-9dd7f29d0fdf> | CC-MAIN-2017-04 | http://arnoldit.com/wordpress/2012/09/20/analyzing-big-data-in-dna-to-find-diseases/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00325-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913309 | 323 | 2.71875 | 3 |
Organizations Using the Internet
Modified 13 July 2002
Cuba timeline, from ``Encyclopedia Britannica''
- 1492 -- Christopher Columbus claims Cuba for Spain
- 1868-1878 -- "Ten Years' War" with Spain for independence
- 1895 -- Second war of independence with Spain
- 1898 -- Spanish-American war started with explosion of U.S.S. Maine in Havana harbor.
- 1899 -- Cuban independence from Spain, followed by U.S. occupation, control of Cuban foreign and internal affairs, and establishment of U.S. naval base at Guantanamo Bay in 1901.
- 1902 -- Tomas Estrada Palma elected first Cuban president
- 1905-1906 -- Period of rebellion
- 1906-1909 -- Second period of U.S. occupation
- 1909 -- New Cuban administration of Jose Miguel Gomez took office, established pattern of graft, corruption, injustice toward Afro-Cubans, and inequalities in distribution of wealth, continued through following presidents and dictators.
- 1952 -- Former Cuban president General Fulgencio Batista overthrew government of president Carlos Prio Socarras, established a dictatorship.
- 1953 -- Communist revolutionary Fidel Castro began organizing rebel force to overthrow Batista. Led attack on military barracks, most of his forces were killed, Castro imprisoned.
- 1955 -- Fidel Castro and his brother Raul released on political amnesty, continued campaign from Mexico
- 1956 -- Castro and armed expedition of 81 men landed on Cuban coast. All were killed except Fidel, Raul, Che Guevara, and nine others. The survivors escaped into the Sierra Maestra range of southwestern Oriente province and waged guerrilla war.
- 1958 -- Castro's tiny force overthrew Batista
- 1960 -- Castro made trade agreement with USSR, ties with US broken.
- 1961 -- U.S.-backed Bay of Pigs invasion failed
- 1962 -- USSR installed ballistic missiles in Cuba, nuclear war between the US and USSR averted when missiles were removed
- 1975-1989 -- Cuban expeditionary forces fought in Angolan civil war on side of Popular Movement for the Liberation of Angola.
- 1978 -- Cuban troops assisted Ethiopia in defeating invasion by Somalia
- 1991 -- USSR collapsed, surprising Castro and ending generous subsidies to Cuba
- Present -- Powerful Cuban-American political lobby maintains U.S. embargo against Cuba and peculiar laws (e.g., Cuban cigars are illegal to possess in the U.S.)
Internet pages include:
- Cuban American Veterans Association http://www.cava.org
- Cuban American National Foundation http://www.canfnet.org/
- Cuba Independiente y Democratica http://www.cubacid.com/
- Free Cuba Foundation http://www.fiu.edu/~fcf/whatdone.html
- Junta Patriotica Cubana http://www.vais.net/~cabenedi/
- Grupo de Apoyo a la Disidencia http://www.gad.org
- Information on anti-castro forces: http://www.Rose-Hulman.Edu/~delacova/belligerence.htm
- PCC -- Partido Comunista de Cuba (Communist Party of Cuba) -- http://www2.cuba.cu/politica/webpcc/
- Information on Cuban espionage in the U.S.: http://www.Rose-Hulman.Edu/~delacova/cuban-espionage.htm
Intro Page Cybersecurity Home Page | <urn:uuid:f95ce6fe-0e2e-4e8b-b9fc-454e2ae51c66> | CC-MAIN-2017-04 | http://cromwell-intl.com/cybersecurity/netusers/Index/cu | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00049-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.814519 | 745 | 3.5625 | 4 |
After the release of the immensely popular film Stand and Deliver, teachers throughout the country wanted to see for themselves what the real Jaime Escalante was doing. How was it possible to get all students, even the "unteachable" ones, to excel and succeed in math and science?
To fill that vacuum, FASE Productions, the media division of the Foundation for Advancements in Science and Education (FASE), created the television series Futures with Jaime Escalante. It combined the excitement of Escalante's classroom with behind-the-scenes views of high-tech jobs, and soon became the most popular classroom series in the history of PBS.
The series won over 50 awards and was hailed by education and business leaders alike as a breakthrough. Amar Bose, acoustics pioneer and MIT professor of computer science, called it "the first effective work that addresses America's alarmingly low standards of mathematics achievement."
With Futures, FASE ushered in an entirely new approach to educational television. Celebrities such as Bill Cosby, Arnold Schwarzenegger, Jackie Joyner-Kersee, Kathy Bates, Weird Al Yankovic, Billy Bob Thornton and Cindy Crawford have appeared in more than 50 FASE Productions programs, showing students that careers in the technology-based workplace of tomorrow run the gamut from protecting endangered animals to designing homes for outer space.
"FASE programs explain science and math in very real terms, and show them the people involved in these subjects in very human terms," said Garland Thompson, editorial director for Career Communications, which publishes the country's leading magazines for African American and Hispanic professionals in science and technology. "Reform efforts and our emerging math and science curriculum are vitally important -- but FASE is the only group consistently on the national stage talking to the kids in language they can understand."
CBS correspondent Ed Bradley acknowledged the Foundation's influence in 1994, as he presented FASE the broadcasting industry's highest honor -- the George Foster Peabody Award -- for the second time within three years. "With all the bad news about television, there is at least one production company regularly winning Peabody Awards for consistent excellence," he said. Since it launched its first programs in 1990, FASE Productions has received nearly 120 awards, including the Parent's Choice Award, the Action for Children's Television Award, the Robert Townsend Social Issues Award, the entertainment industry's Environmental Media Award and the Catholic Broadcasting Association's Gabriel Award.
Making subjects like engineering, optics or water conservation exciting for students is one thing. Changing student attitudes is another. Independent studies of classroom use of Futures have shown that it has a positive, long-lasting effect on student attitudes. African-American students' interest in a career in engineering went from 29 percent to 58 percent, while Hispanic students' interest in a career in architecture went from 28 percent to 65 percent, after they viewed episodes on these fields over the course of a semester.
This is welcome news for students and teachers alike, an indication that national goals to prepare all students for the technological workplace are attainable. "If students are grouped according to the expectation that some can learn challenging math and science and others cannot, then those expectations are likely to be fulfilled," said National Science Foundation's Director Dr. Neal Lane in announcing the results of a recent international study of student achievement in mathematics and science achievement. "All students can rise to the challenge." Through its mathematics and science reform initiatives, NSF has been a key partner in helping FASE develop and distribute its programs.
VIDEO FIELD TRIPS
To bring students a job-site view of the ways professionals solve problems in engineering, design, science, the arts and other fields, FASE Productions created the award-winning series Interactions . One of the first "video field trips," Interactions takes students on-site to meet men and women who explore the ocean, market blue jeans, plan trips to distant planets and protect the environment.
Erik Phelps, regional manager for Community Economic Development at GTE, uses the "Digital Communication" episode of Interactions in seminars for key leaders in government and the private sector, as well as community and school groups. "People ask to take my copy of the video home with them. They say, 'I want my kids to see this,' or 'I want the people in my office to see this.' It explains the subject in a way that they can understand, and brings home the fact that minorities hold creative and responsible positions in this field."
"Teachers are being told to connect their lessons to the working world," said Steve Heard, FASE Productions' executive producer and co-writer of its programs. "But they don't necessarily know what goes on behind the scenes at Johnson Space Center, how Levi markets its jeans, or what steps an industrial designer takes to execute an idea. That's where we come in."
Like Futures, Interactions has gone beyond just being informative and entertaining. It creates measurable effects on students. In an independent study, 600 students in Boston, Chicago and Los Angeles used the series in their classrooms over a period of two weeks. Ninety-five percent reported learning something about the real-world uses of math. All of their teachers found the lessons "effective," with the great majority ranking them "very effective."
LOS ANGELES TO HARLEM
In a national survey of more than 1,000 elementary students, FASE found the great majority unaware of career possibilities beyond such stereotypes as doctor, teacher, fireman or basketball player. Few could say what work their own parents did. To broaden their horizons, FASE Productions created The Eddie Files, which looks at real-world uses of the classroom curriculum through the eyes of "Eddie," a fictional 11-year-old. Eddie, who is never seen on screen, is a student of East Harlem's renowned math teacher, Kay Toliver.
In addition to receiving the prestigious Parent's Choice Award, last month The Eddie Files was named "the most innovative and outstanding instructional telecommunications project" of 1996 by the Association for Educational Communications Technology. These awards were shared by the U.S. departments of Education and Energy and the National Endowment for Children's Educational Television, whose support helped launch the series.
"There is a tremendous need to show underserved populations role models," said Jeanette Pinkston, director of Community Outreach at Georgia Public Television. "When these students view The Eddie Files, they see professionals who look like them actually doing math and science. That's been the missing link."
Kay Toliver, the host of The Eddie Files, has achieved spectacular results by setting high standards and instilling a love of learning that helps her students meet these standards. FASE first profiled her work in its Peabody Award-winning documentary, Good Morning Miss Toliver. The program has become one of the most popular in-service resources in the country, and a regular feature at events focusing on new teaching standards.
The Westinghouse Foundation joined FASE Productions to produce new Eddie Files episodes. Westinghouse, whose 56-year-old "Science Talent Search" program counts five Nobel Prize winners among its past finalists, has a long-standing commitment to science education.
HIGH MARKS FROM HOME VIEWERS
As popular as its programs are in schools, FASE has also made its mark on the home audience. Its landmark special Math ... Who Needs It?! brought Jaime Escalante together with an all-star lineup of comedians including Bill Cosby, Paula Poundstone and Paul Rodriguez, and professionals (including the late Dizzy Gillespie) in fields from music and skateboard design to robotics and roller coaster engineering.
The unprecedented combination of comedy and workplace documentary worked. "Who would believe a TV hour about math could be provocative, informative and very funny?" wrote Peggy Charren, the founder of Action for Children's Television and a long-time advocate for quality children's programming. "Four Stars," said TV Guide. "Math ... Who Needs It?! adds up to one important hour of TV," said the New York Post.
Another powerful family program, Living and Working in Space: The Countdown Has Begun, will be seen on the SciFi Channel this spring. In the special, engineers, interior decorators, space suit designers and visionaries join with some of Hollywood's best known personalities -- including Kathy Bates, Billy Bob Thornton, Jeffrey Tambor and Weird Al Yankovic -- to explore not only the techno future, but the humor and drama of what day-to-day life in space might be. The broadcast, which will occur on National Space Day (May 22), is scheduled to include an appearance by Sen. John Glenn and a live broadcast from the MIR space station.
"FASE is putting a face on space exploration," said Leonard David, director of Space Data Resources and one of the most published space journalists in the country. "During the past six years, FASE has been at the forefront, capturing the excitement of space exploration through intimate looks at the people who are making it happen -- from planetary scientists, astronauts and propulsion specialists to medical doctors and space architects."
REACHING A WORLDWIDE AUDIENCE
FASE programs have been translated into Spanish, French and Arabic, and have reached students in locations as far-flung as Bangkok, Belgrade, Cairo, Uganda, Pakistan, Pretoria and Fiji. FASE's Steve Heard was recently the guest of officials in Ghana to consult on that country's effort to develop a math and science television series modeled after Futures with Jaime Escalante.
FASE Productions complements its production division with an equally strong outreach division that integrates its work with government and private-sector efforts to invigorate education. In cooperation with more than 160 educational, community and professional organizations, FASE Outreach conducts activities ranging from conference presentations to the research and publication of articles on educational technology.
"FASE has made important contributions to our knowledge of the design and effectiveness of television as an educational medium," said Dr. Milton Chen, author of The Smart Parent's Guide to Kids' TV. "This body of work will be of great value to producing organizations, educational agencies, universities and others interested in educational media."
For more information contact the Foundation for Advancements in Science and Education, 4801 Wilshire Blvd., Suite 215 Los Angeles, CA 90010. Call 213/937-9911.
[ May Table of Contents] | <urn:uuid:acde8960-cc1a-47ef-8b40-70692410846b> | CC-MAIN-2017-04 | http://www.govtech.com/featured/Programs-Ignite-Interest-in-Technical-Careers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00049-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953973 | 2,144 | 2.59375 | 3 |
Cookies are text string messages given to a Web browser by a Web server. Whenever you visit a web page or navigate different pages with your browser, the web site generates a unique ID number which your browser stores in a text (cookie) file that is sent back to the server each time the browser requests a page from that server. Cookies allow third-party providers such as ad serving networks, spyware or adware providers to track personal information. The main purpose of cookies is to identify users and prepare customized Web pages for them.
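As a rough sketch of that exchange, the following Python fragment shows a server issuing a unique ID and later reading it back from the request. The cookie name and value here are hypothetical examples, not anything a particular site is known to use.

from http import cookies

# Server side: issue a unique ID the first time a browser visits.
jar = cookies.SimpleCookie()
jar["visitor_id"] = "a1b2c3d4"            # hypothetical unique ID number
jar["visitor_id"]["path"] = "/"
jar["visitor_id"]["max-age"] = 86400      # keep the cookie for one day
print(jar.output())                        # e.g. Set-Cookie: visitor_id=a1b2c3d4; Max-Age=86400; Path=/

# Browser side: the stored value is echoed back on every later request to that server.
echoed = cookies.SimpleCookie("visitor_id=a1b2c3d4")
print(echoed["visitor_id"].value)          # a1b2c3d4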
Cookies can be categorized as:
• Trusted cookies are from sites you trust, use often, and want to be able to identify you and personalize content for you.
• Nuisance cookies are from sites you do not recognize or often use, but which have somehow put a cookie on your machine.
• Bad cookies are those that can be linked to an ad company or something else that tracks your movements across the web. They are called "profiling cookies," "persistent cookies," "long term tracking cookies," "third party tracking cookies" or "tracking cookies".
The type of cookie that is a cause for concern is the last category, because these can be considered a privacy risk. These types of cookies are used to track your Web browsing habits (your movement from site to site). Ad companies use them to record your activity on all sites where they have placed ads. They can keep count of how many times you visited a web page, store your username and password so you don't have to log in, and retain your custom settings. When you visit one of these sites, a cookie is placed on your computer. Each time you visit another site that hosts one of their ads, that same cookie is read, and soon they have assembled a list of which of their sites you have visited and which of their ads you have clicked on. They are used all over the Internet, and advertisement companies often plant them whenever your browser loads one of their banners. Cookies are NOT a "threat". As text files they cannot be executed to cause any damage. Cookies do not cause any pop-ups, nor do they install malware.
As long as you surf the Internet, you are going to get cookies and some of your security programs will flag them for removal. However, you can minimize this by reading "Blocking & Managing Unwanted Cookies | <urn:uuid:d7ec5a2c-1115-48e0-ad71-c02278048782> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/forums/t/114024/ad-aware-se/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00563-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951275 | 483 | 3.28125 | 3 |
In optical fiber communications systems, several transmission bands have been defined and standardized, from the original O-band to the U/XL-band. The E- and U/XL-bands have typically been avoided because they contain high-transmission-loss regions. The E-band covers the water peak region, while the U/XL-band sits at the very end of the transmission window for silica glass.
Intercity and metro ring fiber already carry signals on multiple wavelengths to increase bandwidth. Fibers entering the home will soon do the same. Several types of optical telecom systems have now been developed, some based on time division multiplexing (TDM) and others on wavelength division multiplexing (WDM), either dense wavelength division multiplexing (DWDM) or coarse wavelength division multiplexing (CWDM). This article traces the evolution of optical wavelength bands by describing these three high-performance systems.
Dense Wavelength Division Multiplexing
DWDM systems were developed to deal with the rising bandwidth needs of backbone optical networks. The narrow spacing (usually 0.2 nm) between wavelength bands increases the number of wavelengths and enables data rates of several Terabits per second (Tbps) in a single fiber.
These systems were first developed for laser-light wavelengths in the C-band, and later in the L-band, leveraging the wavelengths with the lowest attenuation rates in glass fiber as well as the possibility of optical amplification. Erbium-doped fiber amplifiers (EDFAs, which work at these wavelengths) are a key enabling technology for these systems. Because WDM systems carry many wavelengths at the same time, the accumulated attenuation must be compensated, which is why optical amplification technology is introduced. Raman amplification and erbium-doped fiber amplifiers are the two common types used in WDM systems.
In order to meet the demand for “unlimited bandwidth,” it was believed that DWDM would have to be extended to more bands. In the future, however, the L-band will also prove to be useful. Because EDFAs are less efficient in the L-band, the use of Raman amplification technology will be re-addressed, with related pumping wavelengths close to 1485 nm.
Coarse Wave Division Multiplexing
CWDM is the low-cost version of WDM. Generally these systems are not amplified and therefore have limited range. They typically use less expensive light sources that are not temperature-stabilized. Larger gaps between wavelengths are necessary, usually 20 nm. Of course, this reduces the number of wavelengths that can be used and thus also reduces the total available bandwidth.
Current systems use the S-, C- and L-bands because these bands inhabit the natural region for low optical losses in glass fiber. Although extension into the O and E-band (1310 nm to 1450 nm) is possible, system reach (the distance the light can travel in fiber and still provide good signal without amplification) will suffer as a result of losses incurred by use of the 1310 nm region in modern fibers.
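A back-of-the-envelope comparison makes the trade-off concrete. In the Python sketch below, the band edges are assumptions for illustration (roughly the C-band for DWDM and a 1270-1610 nm window for CWDM); the channel spacings are the figures quoted above.

def channel_count(band_start_nm, band_end_nm, spacing_nm):
    # Channels that fit on a grid of the given spacing, counting both band edges.
    return round((band_end_nm - band_start_nm) / spacing_nm) + 1

print(channel_count(1530, 1565, 0.2))   # DWDM in the C-band: 176 channels
print(channel_count(1270, 1610, 20))    # CWDM across the full window: 18 channels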
Time Division Multiplexing
TDM systems use either one wavelength band or two (with one wavelength band allocated to each direction). TDM solutions are currently in the spotlight with the deployment of fiber-to-the-home (FTTH) technologies. Both EPON and GPON are TDM systems. The standard bandwidth allocation for GPON requires between 1260 and 1360 nm upstream, 1440 to 1500 nm downstream, and 1550 to 1560 nm for cable-TV video.
To meet the rise in bandwidth demand, these systems will require upgrading. Some predict that TDM and CWDM (or even DWDM) will have to coexist in the same installed network fibers. To achieve this, work is underway within the standardization bodies to define filters that block non-GPON wavelengths to currently installed customers. This will require the CWDM portion to use wavelength bands far away from those reserved for GPON. Consequently, they will have to use the L-band, or the C- and L-bands provided video is not used.
In each case, sufficient performance has been demonstrated to ensure high performance for today’s and tomorrow’s systems. From this article, we know that the original O-band hasn’t satisfied the rapid development of high bandwidth anymore. And the evolution of optical wavelength bands just means more and more bands will be called for. In the future, with the growth of FTTH applications, there is no doubt that C- and L-bands will play more and more important roles in optical transmission system. | <urn:uuid:372c0f9f-a679-4081-ba0d-8599e93da568> | CC-MAIN-2017-04 | http://www.fs.com/blog/from-o-to-l-the-evolution-of-optical-wavelength-bands-2.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00499-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94128 | 958 | 3.625 | 4 |
With the proliferation of information technology (IT) and the efficiency it gives businesses today, securing your network against SPAM, viruses, Malware and Hackers has become vital to survival. The very technology that has allowed businesses to become so efficient and productive also brings the risk of catastrophe, if that technology fails or is compromised. Therefore, it is vital that any business that relies on IT for its productivity make sure that they have taken practical steps to secure their network infrastructure.
There are several areas of concern when considering securing any IT network, including user access control, data backup and protection, firewalls, etc., but for the purpose of this article the focus will be limited to protecting a network from viruses, SPAM, and other Malware.
Most computer users are familiar with the need to protect their computers from viruses, and most companies have antivirus software installed on their servers and workstations. However, one of the biggest threats to network functionality comes from email. Emails are delivered directly to the user's desktop and can pass through firewalls as well as virus scans, depending upon the nature of the Malware they may contain. Then, if a user inadvertently opens the infected email and perhaps clicks on the links it contains, they can infect their workstation and allow the infection to spread to the entire network.
To secure the multiple points of potential infection it is recommended that email be scanned by a third-party SPAM filtering, Malware, and virus protection vendor before being delivered to the company’s Exchange or mail server. The company’s firewall should also be configured to only accept eamil from the third party scanning servers. This will vastly reduce the SPAM getting to the company Mail server, and stop known Malware. Finally, the Exchange or mail server should also have virus and Malware software running on it. Why, since the mail is supposedly checked before delivery by the third party scanning service? Because users can bring in infected laptops and connect to the network behind the firewall, users can also access webmail from their personal accounts and bring Malware into their computers directly. Also, users can get Malware from websites they visit. When this happens the virus or Malware must be stopped on the network side of the firewall and to do that requires that the proper software be installed directly on the Exchange or Mail server. A recommended antivirus anti-malware solution for a typical small business network is illustrated below in figure 1. | <urn:uuid:6ba2b1ba-5b38-42c2-b0a6-01e5b28f9b66> | CC-MAIN-2017-04 | http://www.bvainc.com/securing-your-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00407-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954044 | 499 | 2.703125 | 3 |
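The "accept mail only from the scanning service" rule is simple to express. The sketch below is illustrative only: the address blocks are reserved documentation ranges standing in for whatever ranges the chosen scanning vendor actually publishes, and a real deployment would enforce this on the firewall itself rather than in Python.

import ipaddress

# Hypothetical address blocks published by the third-party scanning service.
SCANNER_NETWORKS = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/25")]

def accept_smtp(source_ip):
    # Allow inbound SMTP only if the sender is one of the scanning servers.
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in SCANNER_NETWORKS)

print(accept_smtp("203.0.113.50"))   # True: relayed through the scanning service
print(accept_smtp("192.0.2.77"))     # False: direct delivery is refused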
A new distributed computing project, called Quantum Cures, was recently established to help find treatments and cures for orphan and rare diseases. The Quantum Cures Foundation is investigating new small molecules as potential drug candidates for diseases such as spina bifida, cleft palates, Hodgkin’s lymphoma, sleeping sickness, and others. These are examples of orphan, rare and neglected (ORN) diseases, which don’t receive a lot of attention from major drug companies even though they may affect millions of people worldwide.
A 2005 report from the National Institutes of Health Office of Rare Diseases stated that there are 6,000–7,000 rare diseases that affect 25 million Americans, and one in 10 Americans will be diagnosed with a rare disease during their lifetime. The FDA classifies the vast majority of rare diseases as serious or life threatening.
Medical researchers have compiled a list of "targets," proteins that are implicated in disease pathways, but when it comes to testing drug compounds against these proteins, computing resources are limited. That's where volunteer computing can really make a difference.
Distributed, or volunteer, computing models, like the well-known SETI@home project, draw on otherwise-idle computing cycles to create a virtual supercomputer. The research possibilities are as unlimited as the untapped computing potential of the volunteer machines.
Quantum Cures Co-Founder Lawrence Husick agrees with this sentiment: “There is substantial interest in and reason to pursue development of treatment and cures for a wide range of diseases which have, up until now, not received the attention they deserve,” he said. “By enlisting the help and computer time from many people, we can begin to deliver the resources needed to find the answers, and improve the quality of life of millions, both today and in the future.”
Quantum Cures plans to enlist the help of tens of thousands of volunteers around the United States who allow their computers’ spare cycles to be used for research purposes. The free program, which has been donated by TeraDiscoveries, will be available on the Quantum Cures website in June. Interested parties can sign up at www.quantumcures.org.
Quantum Cures is also looking for research partners. Researchers working with drug targets for orphan, rare and neglected diseases are invited to submit a proposal to the Quantum Cures Foundation for a molecular design project. If selected, a portion of donated computer time will be put toward the design of new drug and vaccine candidate molecules. | <urn:uuid:55f6c78c-5024-4d53-bb7b-225fb954a077> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/03/13/computing_for_a_cure-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00407-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948735 | 523 | 3.34375 | 3 |
How much information is there in the world? University of Southern California researchers calculated that the world has access to enough data-storage capacity to hold 295 exabytes (295 followed by 18 zeros) of information.
There are reports aplenty about technology driving a data explosion. University of Southern California researchers actually sat down and calculated that humans can store, communicate and compute about 295 exabytes of information, or about 404 billion CDs.
In a study published Feb. 10 in Science Express, an electronic journal that provides select Science articles ahead of print, researchers examined data from more than 1,000 sources to calculate how much data-storage capacity exists. The study, which looked at data from 1986 to 2007, did not try to calculate exactly how much data actually existed.
"This is the first study to quantify humankind's ability to handle information and how it has changed in the last two decades," said lead author Martin Hilbert, a doctoral candidate at the USC Annenberg School for Communication and Journalism.
Hilbert and his team calculated the figure by first estimating the amount of data held on 60 analog and digital technologies during the period from 1986 to 2007. They considered everything from computer hard drives to obsolete floppy discs, and X-ray film to microchips on credit cards, he said.
"Technological information-processing capacities are growing at exponential rates," Hilbert said. General-purpose computing capacity is growing at about 58 percent per year, the study said. Telecommunications grew by 28 percent annually, and storage capacity grew by 23 percent, according to the study.
"What you can do with information is transmit it through space, and we call that communication. You can transmit it through time; we call that storage. Or you can transform it, manipulate it, change the meaning of it, and we call that computation," Hilbert said.
In total, 295 exabytes refers to storage capacity in 2007, according to the researchers. This is about 80 times more information per person than was ever stored in the historic Library of Alexandria in Egypt, Hilbert told eWEEK. The actual number for 2011 is likely to be much higher.
People received 1.9 zettabytes of information through broadcast technology such as televisions and GPS during that 21-year period, the study found. That's equivalent to every person in the world receiving 174 newspapers every day, or every television in the world running for three hours a day, Hilbert said.
More than 65 exabytes of information was shared over two-way communications technology, such as cell phones and e-mail. Communications have increased by an average of 28 percent every year since 1986. About 65 exabytes of information was shared in 2007, the equivalent of every single person sending out the contents of six newspapers every day.
Using word-based chat, Hilbert said, one would need to chat for two months and three weeks nonstop to communicate the information that the average person telecommunicates through multimedia content in a single day.
These are almost unimaginable numbers. Just for comparison, an exabyte is equivalent to 1,000 petabytes, or a million terabytes; written out, it is a 1 followed by 18 zeros. A zettabyte is 1,000 exabytes.
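To put the units in perspective, here is a short Python check of the article's own figures. The roughly 730 MB per CD is my assumption; the article does not say which disc capacity it used.

EXABYTE = 10**18                      # bytes; a 1 followed by 18 zeros
CD_CAPACITY = 730 * 10**6             # bytes; assumed ~730 MB per CD

stored_2007 = 295 * EXABYTE           # storage capacity the study found for 2007
print(stored_2007 / CD_CAPACITY)      # about 4.04e11, i.e. roughly 404 billion CDs

print(EXABYTE == 1000 * 10**15)       # True: an exabyte is 1,000 petabytes
print(EXABYTE == 10**6 * 10**12)      # True: or a million terabytes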
In performing the calculations, the researchers discovered the digital age "began" in 2002, the first year there was more data held in digital storage than in analog storage, Hilbert said. About 75 percent of stored information was in an analog format such as videocassettes and books in 2000. By 2007, the flip was nearly complete, with 94 percent of information stored in digital form, Hilbert said.
The storage types examined read like a list of forgotten devices. In 1986, "vinyl long-play records" made up 14 percent of storage and audiocassettes made up 12 percent, according to the study. Digital storage first became a significant factor in 2000, when it accounted for 25 percent of total storage capacity. The proportion of paper-based storage such as books and newspapers declined from a mere 0.33 percent in 1986 to 0.007 percent in 2007. However, that didn't mean information from paper sources declined; in absolute terms, paper grew from 8.7 to 19.4 optimally compressed petabytes, the study estimated.
Priscila López of the Open University of Catalonia co-authored the study with Hilbert. Hilbert said a copy of the Science article is available on his website. | <urn:uuid:098fa4ea-2394-4979-bae6-72ae5b4cf3f5> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Data-Storage/Global-Data-Storage-Capacity-Totals-295-Exabytes-USC-Study-487733 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00003-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93324 | 944 | 2.96875 | 3
High performance computing is known for its ability to accelerate scientific experiments and discovery through modeling and simulation. The complexity of the mathematical models and the huge amounts of data that must be processed in a short time mandate the use of high-throughput hardware infrastructure and an optimized software stack. HPC applications very quickly consume processing power, memory, storage, and network bandwidth for a relatively short time. Other distributed systems utilize the aggregated power of the cluster but do not necessarily utilize all available resources as aggressively, in such a short amount of time, as HPC applications do.
In essence, this is why cloud services and infrastructure may make perfect sense to most businesses. It is expected that 80% of general-purpose applications will be hosted in clouds by the year 2020. However, for HPC, there needs to be a deeper analysis of how HPC users can make use of cloud architectures.
So, the main objective of cloud computing is to allow end users to plug their applications into virtual machines in a manner quite similar to hosting them on physically dedicated machines. Users should be able to access and manage this infrastructure exactly the same way they would if they had the physical machines on-premises.
In HPC, applications are developed to deal with large numbers of compute nodes, with relatively large memories and huge storage capacities. Keep in mind that cloud services can be provided at two levels: (1) cloud infrastructure, or (2) cloud-hosted Applications. The first type targets advanced users who would like to utilize cloud infrastructure to build their own proprietary software serving their specific needs. The second type targets users who would like to use readymade applications running on top of the cloud infrastructure without digging into the details of the virtualized resources exposed by the cloud, such as storage, processing, interconnection, etc.
In this article I will be focusing mainly on the virtualization of cloud infrastructure and usage pattern of its recourses. I’ll briefly touch upon possible HPC applications that can be offered through cloud Infrastructure and characterize their utilization of resources in such infrastructure.
Before digging into how the cloud infrastructure can expose its services to HPC users, let's focus first on the building unit: a virtualized node. Node virtualization is not a straightforward task for HPC usage patterns. Let me walk you through the possible usage patterns. It may appear to be a low-level analysis, but I think this will give us a deeper understanding of what is actually required to build HPC in the cloud.
I’ll be discussing processing, memory, storage, and network usage patterns. I’ll try to uncover also some of the overall required policies and mechanism for resources management and scheduling. This is very critical aspect in providing the appropriate services to HPC users through the cloud.
HPC applications are, to a great extent, scientific algorithms focusing on simulating mathematical models in earth science, chemistry, physics, etc. In addition to the main objective of utilizing the aggregated processing power of large HPC clusters, these applications focus also on utilizing micro resources inside each processor, especially with multi-core processors. Utilizing multi-threading for a fine-grained parallelism is a very critical component in speeding up these applications. Also, these applications utilize even more specific processor features such as the pipeline organization, branch prediction, and instructions prefetching to speedup execution.
The other family of HPC applications is based on combinatorial algorithms, such as graph traversal, sorting, and string matching. These algorithms utilize basically integer units inside the microprocessor. However, they still utilize multi-threading capabilities inside each compute node to speedup execution.
In general purpose and business-oriented applications, multi-threaded models might be utilized. However, multi-threaded models are deployed to serve high level requests, such as different database transactions in order to gain execution speedup. Threads can be easily mapped to virtual processors and scheduled by the OS to the physical processors.
It is quite challenging to manage virtualization of processors if accelerators are provided in the cloud, such as the Cell processor or GPGPUs. Each process may utilize one or more GPUs to accelerate some compute intensive parts, or kernels. The question is: how can we virtualize and schedule these accelerators? Are they going to accessible directly through hosted applications? Or a lightweight virtualization mechanism responsible mainly for scheduling and accounting the accelerators? Some research efforts such as GViM and GFusion are actively working on the area of accelerators virtualization.
Most HPC applications swing between memory intensity and arithmetic intensity. The more floating point operations (flops) required per byte accessed from the system's main memory, the higher the application's arithmetic intensity, and vice versa. The key here is not only the amount of memory required in a single virtualized node; it is also about the usage patterns related to processing requirements. HPC applications usually use memory in a very demanding pattern: the better the algorithm is designed, the more of its execution lifetime is spent at peak memory bandwidth.
Furthermore, as arithmetic intensity decreases, more pressure is placed on the memory system. The processor spends less time computing and more time moving data to or from the system's memory. Also, advanced HPC developers oftentimes consider physical properties of the memory system to maximize bandwidth, such as the number of banks, the size of the memory controller buffer, latency, maximum bandwidth, etc.
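As a rough illustration of the flops-per-byte idea, here is a hypothetical back-of-the-envelope calculation in Python; the kernel and the sizes are made up for the example rather than taken from any benchmark.

def arithmetic_intensity(flops, bytes_moved):
    # Floating-point operations per byte moved to or from main memory.
    return flops / bytes_moved

# Dense matrix-vector multiply y = A @ x with an n-by-n matrix, double precision.
n = 4096
flops = 2 * n * n                      # one multiply and one add per element of A
bytes_moved = 8 * (n * n + 2 * n)      # A is streamed once; x and y are comparatively small
print(arithmetic_intensity(flops, bytes_moved))   # about 0.25 flops/byte: memory-bound

# The lower this number, the more time the processor spends waiting on memory
# rather than computing, which is the pressure described above.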
I think standard virtualization abstracts away all these hardware properties and assumes the standard memory usage pattern, i.e., small requests that do not form streams of data movement. I also believe that some good research in the area of memory abstraction can be done. Hypervisors need to consider multiplexing the physical memory in a way that would maintain most of its physical properties. This should give more room for memory performance optimization.
HPC applications need two types of permanent storage: (1) I/O, and (2) Scratch storage. The first type stores the input data and final execution output, such as the FFT points, input matrixes, etc. The second storage type is basically used for storing intermediate results, check-pointing or for volatile input sets. I/O storage needs to be stored in a centralized place so that all threads or processes in a cluster can have unconditional access to it. I/O reads or writes take place in bursts. All processes read input data sets almost at the same time and write output also concurrently, assuming good load balancing. This mandates storage devices with very high bandwidth to satisfy many requests at the same time. From my observations, most HPC applications ask for relatively large chunks of data in every I/O attempt, which would reduce the effect of read or write latency on these devices.
I see most cloud systems provide the conventional physically centralized storage devices connected to a high speed interconnection. This architecture might be a good one if the whole HPC system is working on a single problem at a given time. However, if multiple applications are using resources through a cloud, this physical architecture may need to be rethought. Distributed rack-aware file systems, such as Hadoop Distributed File System (HDFS), might be a very good option in some cases. Building multiple storage devices and attaching each one to a few racks or a cabinet is another excellent option. It will match the HPC applications utilizing the cloud architecture; each application will use one or few racks. It makes sense to place storage near the processors. I think possibilities are many and may need a separate article, so I will come back to that later.
The scratch storage by default should be local to each processor. Most HPC architectures provide such scratch storage spaces. Each rack would have one or more hard disks to quickly store and retrieve scratch data. This scratch data is volatile and usually gets erased when application execution ends. I think the best option is to replace these hard disks with newer SSDs to save power and speed up execution, since they are accessed quite frequently.
Using the cloud model, there are three sources of network traffic: (1) Remote user communication, (2) I/O, and (3) Inter-process communication. Remote user communication takes place when large data sets are being sent or received from a remote site. End user usually prepares the input or retrieves the results. It can be optimized again by distributing storage to different NAS devices. However, utilizing systems such as Hadoop Distributed File System (HDFS) may not be the optimum solution if users are reading and writing large chunks of data in most of their HPC applications.
This architecture will overload the internal interconnection and compute nodes as well. Inter-processor communication, on the other hand, is characterized by high frequency and small data chunks. Latency in this case is a very important factor. In addition to low latency networking equipment, this bottleneck can be easily avoided by placing virtual nodes as close as possible to each other, on the same physical node if possible.
Thus far, I have tried to pinpoint some of the qualitative aspects of resource usage patterns. Scheduling and virtualizing resources the same way as is done for general-purpose applications will, I think, produce disappointing results. Cloud infrastructure is still lucrative when comparing its economics to building in-house HPC machines. However, cloud for HPC has to be efficient enough to reach proper performance ceilings without disappointing customers who, at some point, have probably run their HPC applications on dedicated machines.
Subsequent articles, which will be featured here as part of a continued series, will discuss some of my findings in characterizing resources usage of specific HPC applications, such as BLAST, DGEMM, FFT, etc., using the cloud infrastructure.
About the Author
Mohamed Ahmed is an assistant professor at the department of computer science and engineering of the American University in Cairo (AUC). He got his BS and MSc from the AUC. He received his PhD from the University Of Connecticut (UCONN). During his masters he was one of the early researchers who built a component-based operating system using object oriented technologies. He decided to move to the wild world of high performance computing (HPC) working in different sub-domains, such as performance engineering, HPC applications, and cloud computing for HPC systems.
Dr. Mohamed has one provisional patent and several peer-reviewed publications in operating systems engineering, reliability, threading models, and programming models. Dr. Mohamed’s research interests basically fall under HPC. His current focus is in utilizing multi-/many-core microprocessors in massively parallel systems. One of his objectives is to make HPC systems available for both researchers in other science domains and industry in a faction of current cost of HPC infrastructure and ready to use in a very short time. He is currently working on porting applications and algorithms for biology, material sciences, and computational chemistry to new compute acceleration architectures such as GPGPUs.
For more, please see: | <urn:uuid:6d644c8c-c92d-438f-877d-8554019bfe90> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/08/16/challenges_ahead_for_hpc_applications_in_the_cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00517-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941336 | 2,246 | 2.5625 | 3 |
Learn how to:
- Improve business agility and responsiveness with Bluelock Virtual Datacenters
- Improve datacenter security and interoperability with VMware vCloud® Datacenter services
- Support more business demands with tailored cloud solutions
- Increase resource and workload flexibility
Cloud Computing Services
More and more organizations are using cloud computing services to streamline their IT practices, better manage their IT costs and achieve greater agility. Download this whitepaper to learn not only how cloud computing services can benefit you, but also how to decide what should go in the public cloud, what should stay in your private cloud and which types of Virtual Datacenters are best for your needs.
What are cloud computing services?
Cloud computing services help to optimize and protect your cloud computing infrastructure. These can be internally or externally managed and can include virtual machine backups, firewalls, load balancing, monitoring, anti-virus, operating system patching and networking. Cloud services provide infrastructure support to organizations in the cloud.
What is Infrastructure as a Service?
IaaS, or Infrastructure-as-a-Service, is a cloud computing service model where providers offer web-based computing resources, both physical and virtual, on-demand. IaaS resources include virtual machines, servers, storage, load balancers, network hardware and more. | <urn:uuid:1df516ae-e057-42d4-b9f3-42c36a6996b9> | CC-MAIN-2017-04 | http://go.bluelock.com/choosing-the-right-virtual-datacenter/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00546-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911938 | 271 | 2.53125 | 3 |
Ok, this is strange! At least that was my first reaction when I saw that, in one of the CCIE labs I am trying to solve, all the links between routers are addressed with a /31 subnet.
Isn’t that weird that something like this you see for this first time after couple of years in networking. For me it was. It blow my mind out. I asked my more experienced networking colleagues later but for them it seemed new too. They said at first: Ok men, that’s not possible!
Well, try to type it on a router interface and you will see that it is possible. It's strange for sure, but it's possible. The router OS (Cisco IOS in this case) will try to make sure that you use this kind of subnetting only for point-to-point links. That's why it will issue a warning message if you apply this subnet mask on an Ethernet interface. On a serial interface it will go through without the warning.
The idea behind this is of course simple if you put it this way:
On point-to-point links we do not actually need a dedicated broadcast address for the subnet, because there is only one way a packet can be sent across a point-to-point link. All we have is the IP address on the other side of the link. If we want to send a broadcast, it will go there regardless of whether that address is a separate broadcast address or any other address. There cannot be more than one destination, so the router knows that a broadcast will be directed over the same link as normal unicast traffic for the link's destination address.
Why should we have network name defined as first address of a range and not being able to use it on the interface, we want to use that one too.
If we have 256 different addresses in /24 range. Why we need to divide this on 64 subnet with 4 addresses each if we want to use only two addresses on every side on the link. This is the idea. For one /24 subnet we can use /31 subnets for point-to-point links and with that get double the number of point-to-point links that we can cover.
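To put numbers on that, here is a small sketch using Python's ipaddress module (the module and the snippet are mine, purely for illustration — they are not part of the original lab): it splits the same /24 into /30 and /31 subnets and shows that the /31 scheme yields twice as many point-to-point pairs.

```python
import ipaddress

block = ipaddress.ip_network("192.168.0.0/24")

# Classic approach: one /30 per point-to-point link.
# 64 subnets of 4 addresses each, only 2 of them usable as hosts.
p2p_30 = list(block.subnets(new_prefix=30))

# RFC 3021 approach: one /31 per link.
# 128 subnets of 2 addresses each, and both are usable on the link.
p2p_31 = list(block.subnets(new_prefix=31))

print(len(p2p_30), len(p2p_31))      # 64 128
print([str(a) for a in p2p_31[0]])   # ['192.168.0.0', '192.168.0.1']
```

On the router itself it looks like this (note the warning IOS prints when the /31 goes on an Ethernet interface):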
R1(config)#int fa 0/0 R1(config-if)#ip add 192.168.0.0 255.255.255.254 % Warning: use /31 mask on non point-to-point interface cautiously R1(config-if)# | <urn:uuid:4a7e878f-f591-41b9-9285-dca0048fd05d> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2014/point-to-point-subnet | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00454-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93763 | 513 | 2.71875 | 3 |
NIST Releases Federal Risk Assessment Guide
Federal technology standards body issues new guidelines for evaluating cyber security vulnerabilities.
The federal organization for creating technology standards has released new guidance to help agencies assess risk within their IT systems as part of an overall strategy to instill more prevention in federal cybersecurity.
The National Institute of Standards and Technology (NIST) is currently seeking comments through Nov. 4 on its Guide for Conducting Risk Assessments, which updates an original version published nine years ago.
The guide is aimed at helping agencies evaluate the current threat landscape as well as identify potential vulnerabilities and the adverse impacts they may have on agency business operations and missions, according to NIST.
Risk assessment is one of four steps in agencies' general security risk-management strategy, according to NIST. Assessment helps agencies determine the appropriate response to cyberattacks or threats before they happen and guides their IT investment decisions for cyber-defense solutions, according to the organization.
It also helps agencies maintain ongoing situational awareness of the security of their IT systems, something that is becoming more important to the federal government as it moves from a mere reactionary or compulsory security approach to one that proactively addresses risks and takes more consistent, preventative measures.
Indeed, in testimony Wednesday before Congress, a federal IT official noted the government's new focus on risk mitigation as key to its future security measures, particularly as they pertain to cloud computing and its security risks.
The government is "shifting the risk from annual reporting under FISMA to robust monitoring and more mitigation" in an attempt to strengthen the security of federal networks, said David McClure, associate administrator for the General Services Administration's office of citizen services and innovative technologies during a House subcommittee on technology and innovation hearing.
To this end, NIST has been working to provide cybersecurity guidelines and standards to agencies as they work to better lock down federal IT systems.
Changes also have been made to how agencies report their security compliance. Agencies recently were required to report security data to an online compliance tool called CyberScope as part of fiscal year 2011 requirements for the Federal Information Security Management Act (FISMA), a standard for federal IT security created and maintained by NIST.
In the new, all-digital issue of InformationWeek Government: As federal agencies close data centers, they must drive up utilization of their remaining systems. That requires a well-conceived virtualization strategy. Download the issue now. (Free registration required.) | <urn:uuid:e09330ba-57b1-43e4-9e2a-bc5f1e869289> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/nist-releases-federal-risk-assessment-guide/d/d-id/1100284 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00088-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947227 | 557 | 2.546875 | 3 |
Q&A: What You Can Learn by Monitoring Network Flows
We explain how to leverage data from switches and routers using flow monitoring.
Simple Network Management Protocol (SNMP) gives a high-level view of aggregate traffic on the network. Packet sniffers provide an in-depth view of packet content. What most enterprises are missing is the middle view: network flows. Most switches and routers are designed to generate information on exactly what traffic is passing through each port, but accessing this data requires additional software. Enterprise Strategies interviewed Michael Patterson, product manager for the Scrutinizer NetFlow and sFlow Analyzer at Plixer International to find out how to leverage the data available through flow monitoring.
Enterprise Strategies: What is Netflow? Is it a technology, a software agent, a hardware add-on?
Michael Patterson: NetFlow is a technology developed by Cisco Systems and embedded as part of Cisco IOS, the operating system used by Cisco's routers and switches. The network device analyzes the traffic going through it by seven values:
- IP source address: Who is sending the traffic
- Destination IP address: Who is receiving the traffic
- Source and destination ports: Shows what application is using the network
- Layer 3 protocol
- Class of service: For services (such as VoIP) that need priority access
- Router or switch interface
When packets match on all seven of those criteria, they are considered part of the same flow. The device counts the number of packets in a given flow, bundles it with the data on up to 30 other flows (NetFlow v5), and sends it to a server containing a Cisco NetFlow analysis tool to collect and analyze the flow information. A single collector will gather data from multiple network devices.
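As a rough illustration of that matching step — a toy sketch, not a real exporter — the snippet below groups packets by the seven key fields. The FlowKey field names and the sample packet values are my own, chosen only to mirror the list above.

```python
from collections import Counter
from typing import NamedTuple

class FlowKey(NamedTuple):
    """The seven NetFlow key fields described above."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int       # layer 3 protocol number, e.g. 6 = TCP
    tos: int            # class of service (ToS/DSCP byte)
    input_ifindex: int  # router/switch interface the packet arrived on

flow_table: Counter = Counter()

def account_packet(pkt: dict) -> None:
    """Packets matching on all seven fields are counted as the same flow."""
    key = FlowKey(pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"],
                  pkt["protocol"], pkt["tos"], pkt["ifindex"])
    flow_table[key] += 1

# One hypothetical packet: HTTP traffic arriving on interface 2.
account_packet({"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9", "src_port": 51514,
                "dst_port": 80, "protocol": 6, "tos": 0, "ifindex": 2})
```

A real exporter would also track byte counts and timestamps per flow and periodically export expired flows to the collector.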
The current version of NetFlow (v.9) forms the basis of the Internet Engineering Task Force's IP Flow Information Export (IPFIX) protocol. NetFlow v.9 also introduced Flexible NetFlow, which allows a user to select which of the seven key fields to track, and also to track additional fields such as time stamps, next-hop IP addresses, and subnet masks. Since the user can define what type of data and which parameters of that flow to track, it reduces the amount of data reported.
How is NetFlow used and who in IT uses it (network admins, security administrators, etc.)?
NetFlow is used by network administrators, security staff, accounting server admins, and others to view network traffic as it passes through a switch or router. Separate caches can be designated for different types of information. For example, network security can have its own data cache designed for network anomaly detection, while the network administrators could use a different cache optimized for detecting and troubleshooting VoIP quality of service issues.
Is Netflow sufficient these days to monitor network use, troubleshoot networks, and control network security? If not, what else is needed?
NetFlow, although a powerful and useful tool, is only one part of an administrator's complete tool kit, including SNMP, sFlow, and packet analyzers. SNMP, for example, provides a high-level view of the amount of traffic traveling through a port, but not what makes up that traffic or which user or device is generating it. NetFlow will tell you which users and applications are generating the network load. Packet analyzers provide a much more detailed look at the packets. Due to their cost, however, they cannot monitor all network links and are usually only deployed when there is a known problem.
Is IT using Netflow information in ways that it wasn't originally designed for?
Over the years, users have found a wide array of uses for NetFlow including:
- VoIP QoS: With NetFlow, administrators can see what DSCP value packets are using, spot any bottlenecks that are affecting VoIP, and reroute traffic as needed.
- Capacity Planning and Management: SNMP gives overall bandwidth statistics, but not what traffic is using that bandwidth. With NetFlow, administrators can ensure that the traffic is valid, kill any unnecessary applications or services (such as watching YouTube during business hours), and move valid but low-priority services such as Patch Tuesday updates to off hours. By viewing the amount of traffic per user generated by a particular application, you can also see the impact of adding additional users and increase capacity as needed.
- Billing: Because NetFlow tracks the number of packets and bytes by user and protocol, that data can be used for chargeback.
- Security: Network security can have its own data cache designed for network anomaly detection. In addition, there are ways to turn NetFlow triggers into actionable security countermeasures. If you know what traffic is supposed to be traveling across a port, anything not expected is a potential security risk. If the NetFlow monitoring software detects unusual activity, it can send messages to the firewall or NAC to shut down that port or block that traffic. (A minimal sketch of this idea follows below.)
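The fragment below is only a toy version of that "expected traffic" check; the port whitelist and the record format are invented for illustration, and a real collector would act on much richer flow data before alerting or signaling a firewall/NAC.

```python
# Hypothetical whitelist of destination ports expected on a monitored link.
EXPECTED_PORTS = {53, 80, 443}

def unexpected_flows(flow_records):
    """Yield flow records whose destination port is not expected, so the
    collector could raise an alert or ask a firewall/NAC to block the traffic."""
    for rec in flow_records:
        if rec["dst_port"] not in EXPECTED_PORTS:
            yield rec

suspicious = list(unexpected_flows([
    {"src_ip": "10.0.0.5", "dst_port": 443, "packets": 120},
    {"src_ip": "10.0.0.9", "dst_port": 6667, "packets": 4000},
]))
print(suspicious)   # only the second record is flagged
```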
What are the biggest misunderstandings about what Netflow is or how it should be used?
The biggest misunderstanding is that NetFlow is only used for locating bandwidth hogs. Although it does do this, the functions are much broader. A second is that it is limited to Cisco equipment. That was true initially, but NetFlow or other flow technologies (IPFIX, sFlow, Netstream) are now in use on switches and routers by Adtran, Enterasys, Extreme Networks, Juniper, Riverbed, Alcatel, Foundry, HP, 3Com, and others. Some NetFlow collectors will collect data using all of these protocols.
In what situations would you recommend that NetFlow not be used?
NetFlow’s dependence on the switch/router’s processor and memory can limit its deployment because too much NetFlow processing can slow the primary forwarding function. In such a case, you would only use it on key interfaces to avoid having a noticeable impact on the performance of the switch/router and the network. When more detailed information is needed, a packet analyzer should be deployed. Some vendors like Enterasys have implemented NetFlow in hardware, but it is not mainstream like sFlow implementations.
What might be considered the main pitfalls of NetFlow?
NetFlow does not provide visibility into switched or broadcast layer-2 traffic. In addition, because of its overhead, it cannot be used on all network links. FnF (Flexible NetFlow), which is an extension of NetFlow v9, does allow for a packet export as does sFlow. However, few vendors have taken advantage of it.
What tools does Plixer provide and how do they facilitate the use of NetFlow?
Cisco routers, and other equipment, will generate the NetFlow data, but you still need a way to collect and analyze that data. Plixer International, Inc. provides two tools for configuring Netflow commands on the hardware and then monitoring and reporting on the flow data. Flowalyzer is a free toolkit for testing and configuring hardware and software for sending and receiving NetFlow and sFlow data. It can help IT professionals troubleshoot hardware from Cisco, Enterasys, and other vendors, as well as NetFlow collector software, ensuring that whichever flow technology they use is configured properly on both ends.
Plixer's Scrutinizer analyzes and reports on NetFlow data (and other flow protocols such as sFlow, Netstream, jFlow and IPFIX) to provide information on what applications, conversations, flows, and protocols are generating network traffic and to analyze network behavior for troubleshooting purposes. A free version of Scrutinizer is also available from the Plixer Web site (http://www.plixer.com/support/download_request.php). | <urn:uuid:74b080e6-ec41-4024-9ee6-bfd7ecb35b21> | CC-MAIN-2017-04 | https://esj.com/articles/2010/03/02/monitoring-network-flows.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00024-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917693 | 1,607 | 2.765625 | 3 |
NASA has a landing-on-Mars problem, and that strange flying-saucer-shaped test vehicle pictured at right may help them solve it.
Since 1976, when NASA's Viking probe first landed on the Red Planet, the agency has relied on the same parachute design to help all its Mars probes and rovers descend to the planet's surface intact.
So far, that Viking-era parachute system has worked fabulously. Most recently, it helped the 1-ton Curiosity rover survive its 'Seven Minutes of Terror' and land safely on the planet. But if NASA wants to land bigger spacecraft on Mars, the landing system will need to change.
"We were really pushing the envelope on what we can squeeze out of that parachute with Curiosity," said Mark Adler, who is responsible for testing new decelerating technologies at NASA's Jet Propulsion Laboratory in La Cañada Flintridge. "Curiosity is awesome, but we want bigger."
To that end, a team of engineers at JPL has been trying to come up with new ways to slow down a big spacecraft moving faster than the speed of sound, and this must be done in the thin atmosphere of Mars without adding a lot of weight to the spacecraft.
The project is known as the Low Density Supersonic Decelerators (LDSD) mission.
On Wednesday, the team presented to the media an almost-finished rocket-powered vehicle that would allow them to test some of their ideas.
Sewn into the underside of the test vehicle is a giant parachute 100 feet in diameter, about twice as big as the one used by Curiosity.
Around the rim of the vehicle is a drag device inspired by a puffer fish's ability to change its size rapidly without changing its mass. It is essentially an inner tube that can inflate in a fraction of a second, changing the diameter of the saucer from 15 feet to 20 feet, which will help slow it enough to deploy the giant parachute.
Testing these new technologies has proved challenging, however. The new parachute is so big that it won't fit in any of the wind tunnels that NASA traditionally uses to test its parachutes. So, the LDSD team had to try something else.
In a few days, the agency will move the vehicle from the clean room at JPL where it is being built to the Navy's Pacific Missile Range Facility in Hawaii. There, in the first week of June, it will be carried to an altitude of 120,000 feet by a giant balloon. Then rockets on the vehicle will take over, pushing it to an altitude of 180,000 feet and helping to reach supersonic speeds. The thin atmosphere at this altitude is similar to the thin atmosphere on Mars.
Once the vehicle is going 3.5 times the speed of sound, the inner tube should inflate, slowing the vehicle down to 2.5 times the speed of sound, when it is safe to release the parachute.
Will it work? Ian Clark, LDSD principal investigator, said he kind of hoped it wouldn't -- at least not yet.
"It's still extremely experimental, and we are pushing beyond any technologies that we already have," he said. "If it all goes successfully, it means we weren't pushing hard enough."
©2014 the Los Angeles Times | <urn:uuid:ccd81202-82d2-4635-bc24-3be23c624dc9> | CC-MAIN-2017-04 | http://www.govtech.com/federal/NASA-Wants-to-Land-a-Flying-Saucer-on-Mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00326-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960899 | 677 | 3.78125 | 4 |
Extended Run Times Makes Fuel Cells the Power Source of Choice in Portable Devices
The increasing sophistication of portable devices has placed tremendous pressure on power sources to cater to their power output requirements. Since conventional batteries have pulled up short, there is huge demand for innovative power sources such as methanol fuel cells (MFCs) that are robust, easy to use, and affordable, offering extended run times. Fuel cells create but do not store energy the way conventional batteries do, thereby producing only negligible or no harmful emissions, and by-products such as water vapor and trace carbon dioxide. Fuel cells also pack in higher power densities, which make them better suited for power-hungry portable devices. Currently, fuel cell designs are in the prototype stage and act as external power sources for the device. Developers have to ensure that these fuel cells are compact enough for the battery compartments of existing portable devices, produce adequate power, and are lightweight and of flexible design for multiple devices.
However, certain types of MFCs are facing a stiff fight from hydrogen-based formats, which are likely to offer higher power output. Some competing MFC designs include direct methanol, methanol reformat polymer electrolyte membrane (PEM) fuel cells, hydrogen, and solutions including a sodium borohydride mix with other stabilizing and energy-containing additives. "Numerous designs are being tested, and competitors are focusing on different aspects such as the types of fuel or design of cartridge," says the analyst of this research service. "However, the common objective is to secure a viable niche market from the conventional rechargeable battery market."
Military Markets Throw up Opportunities for Further Fuel Cell Development
Fuel cells are dealing with many challenges typical to an emerging technology. Early fuel cell customers will help mold and evolve the technology to a level where it can be launched in the more lucrative mainstream market. "The progression between the early adopter stage and commercial market is oftentimes challenging for emerging technologies such as fuel cells," notes the analyst. "However, the military market provides tremendous support for emerging technology development that can create a shorter early adopter stage as compared to the commercial product marketplace." Power sources for military portable equipment should be able to withstand mechanical abuse, since they will be used for long periods and in extreme weather such as dry desert heat as well as cold sub-terrain and humid conditions. They should also be able to withstand shocks, since most of the equipment on field can be operated under water and are often air dropped.
Fuel cells score over conventional batteries in the military space by offering longer runtime and rapid re-refueling options. "The National Academies' National Research Council has recommended the U.S. Army to test micropower sources for portable electronic devices, since batteries used to power sensors, computers, and communication devices are heavy and encumber the infantry," observes the analyst. "Micro fuel cells are being considered for this purpose due to its benefits of extended runtimes, light weight, and rapid recharge." | <urn:uuid:439789ef-158d-409a-9cd6-80318aee1d45> | CC-MAIN-2017-04 | http://www.frost.com/prod/servlet/report-analyst.pag?repid=N312-01-00-00-00 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00142-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953267 | 611 | 2.9375 | 3 |
Contour Crafting, a technology NASA's eyeing for off-planet housing, is a robotic extruding method. Like smaller 3D printers, the “printer” head follows designs from CAD software. The printer uses cement or a plaster/polymer mix, and designs can be customized while work is underway.
The machines can also automatically embed electrical, plumbing and air-conditioning conduits, and place electronic sensors to monitor a building's temperature and health, says Behrokh Khoshnevis, professor of industrial and systems engineering at USC's Viterbi School of Engineering.
Khoshnevis, who is leading the effort to perfect Contour Crafting construction, believes 3D printed buildings will address the problem of population growth; the technology -- which Khoshnevis expects to be commercially viable within two years -- could finish a house's shell in a day. | <urn:uuid:48cd2df4-38d3-4645-a177-1d14d7faa0ca> | CC-MAIN-2017-04 | http://www.cio.com/article/2368456/hardware/141185-From-burgers-to-buildings-10-things-you-didnt-know-3D-printers-could-make.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00050-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915365 | 178 | 3.265625 | 3 |
Disk-to-disk-to-tape backup is a very simple concept that, implemented correctly, can greatly affect a company's ability to reliably back up and recover data. The column also explained how there are essentially three ways to implement a disk-to-disk-to-tape backup: from the host, from the SAN (in the form of an appliance) or within a tape device. In last month's column I discussed the host-based solutions. This column will take a closer look at the appliance-based option.
Virtual Tape Library
The backup software remains the same. An appliance-based solution is generically called a virtual tape library (VTL). The VTL can emulate many different tape libraries that the backup software recognizes and can be integrated into the existing infrastructure seamlessly.
Virtual tape is a concept that has been in the data center for many years. Originally introduced for IBM mainframes, it is now exploding in the open systems arena. VTLs are logically just like physical tape libraries: they logically appear and operate as physical tape devices including virtual tape drives, data cartridges, tape slots, barcode labels and robotic arms.
A VTL is physically a highly-intelligent optimized disk based storage appliance. Because a VTL completely emulates a standard library, the introduction of virtual tape is seamless and transparent to existing tape backup/recovery applications.
Traditional tape devices have a few problems: they are slow and the only way to solve that problem is to add more and more tape drives. Tape library robotics are prone to failure and the tape media itself is delicate and must be stored in a conditioned, secure environment.
To increase backup performance, backups can be multiplexed across multiple drives and tapes. This increases the odds of a failed backup due to a bad tape, a faulty drive or malfunctioning robotics.
* (See story correction below.)
Restores from tape are also time-consuming. Consider trying to recover a file that was part of a 5-tape multiplexed backup. Each of the five tapes must be located in the library and loaded into tape drives. If the drives have tapes in them already, the tapes must be removed from the drives and moved to free slots before another tape may be loaded. Once the tapes are loaded, they must be advanced to where the file is and then, finally, the file can be read from tape. It can take many minutes just to start the recovery. If the tapes are not in the library, it can take many hours to recover a single file.
On the plus side, tape-based solutions are usually considered to be relatively inexpensive. But when the tape media is considered, the cost can skyrocket.
Some studies show that users will buy thirty times the number of slots' worth of tapes during the life of a library. For a medium-sized, 100-slot library, that's 3,000 tapes. LTO-3 tapes are currently in the $100 price range. Add the fact that extensive human intervention is required to manage and maintain a tape solution, and tape is not so inexpensive after all. It can cost a lot of money to be able to successfully back up your data only 60% of the time.
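Spelling that arithmetic out (the figures are simply the ones quoted above, not new data):

```python
slots = 100                        # medium-sized library
tapes_bought = 30 * slots          # roughly 30x the slot count over its life
lto3_price = 100                   # dollars per LTO-3 cartridge
print(tapes_bought * lto3_price)   # 300000 -> about $300,000 on media alone
```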
Comparing the problems associated with tape solutions with a virtual tape solution shows how a VTL can change how a datacenter can be run.
A virtual tape library solution can perform 10-times tape speeds for backups. Speeding up the backups will greatly shrink the backup window which allows servers to be backed up faster. With existing backups finishing quicker, second and third tier servers that have not been backed up in the past may now fit into the backup schedule.
Recoveries are also significantly faster using a VTL (typically much faster than the backup). A single file can be recovered from a VTL faster than most tape libraries can find and load a tape into a drive. Full backups that span multiple tapes (as opposed to multiplexed backups) also recover very slowly compared to a VTL, since after the data is read from one tape the next tape must be located and loaded, whereas a VTL just keeps streaming data from disk.
All virtual tape loads are immediate so there is virtually no delay when a new tape is loaded. People who resort to multiplexing backups to increase the performance of the backup are usually shocked to discover that their recovery times will be about twice as long as the backup was.
Most virtual tape library solutions also contain RAID protected storage that has redundant, hot-swap components (drives, power, cooling). Backups that use a VTL rarely fail because of a VTL failure. Recoveries will never fail due to a bad or lost tape.
VTLs are just not prone to the types of failures that a traditional tape library has. For example, a backup to disk will never fail because of a bad tape, broken tape drive or broken robotics.
The initial cost of a tape library can be less than the cost of a VTL. But when a three or five year cost-of-ownership is considered (tape media, failed backups, lost data (due to failed recoveries), management costs, etc.) a VTL will be less expensive.
Also consider the lower cost of backup software. Some backup software is tiered based on the number of tape slots. By configuring a virtual library to have few slots, but very large tapes, the software tier can be lowered. For backup software that is tiered on the number of tape drives, configure a virtual library with fewer drives. Some backup software solutions are now adding a virtual tape library option which is priced based on the capacity of the library.
Todays virtual tape libraries range from a customer supplied server with VTL software and separate disk to a completely productized solution where the server, software and disk are all bundled.
There are pros and cons for both extremes.
With an unbundled solution, the user gets to purchase each piece separately. The pieces include the VTL software, server, disk and potentially the SAN infrastructure. Unfortunately, the user must also purchase separate support agreements for the VTL software, server, disk and SAN infrastructure and each piece must be managed and monitored by the IT staff.
With a bundled solution, all the pieces are included, tightly integrated and guaranteed to work together. The solution is managed and monitored as one entity and support is covered by one contract.
With current bundled VTL solutions expanding to a petabyte or more, scaling the solutions is not an issue. Adding an additional VTL for each petabyte of backup is acceptable for most environments. The only negative with a bundled solution is that it is bundled. Some people just do not like that. When it comes to backup, it is best to keep things very simple, so a bundled VTL solution is probably the best bet (there are not very many home-made tape libraries in production, so why make your own VTL solution?).
Adding a virtual tape library into an existing tape environment will always improve the reliability of backups and recoveries. Even if a VTL is configured to backup only as fast as an existing library, the increase in performance for day-to-day data recovery makes the investment a no-brainer.
Add the robust data processing capabilities not available in physical tape libraries (ex. replication, single-instance data storage) and a VTL can open the door for tremendous advances in tape backup methodology and revolutionize traditional operations.
If you are not considering a VTL today, you should tomorrow.
Jim McKinstry is senior systems engineer with Engenio Information Technologies, an OEM of storage solutions for IBM, TeraData, Sun and others.
* The following paragraph, which was orginially part of the article, was wrongly attributed to the analyst firm Gartner. Gartner said they never published these numbers: "The analyst firm, Gartner, has reported that almost 50% of all backups are not recoverable in full, and that approximately 60% of all backups fail in general. These failures are mostly associated with tape, drive or robotic failures." | <urn:uuid:05c5e1dc-e784-48d2-b492-0c7df08f6f3c> | CC-MAIN-2017-04 | http://www.cioupdate.com/trends/article.php/3549851/Tips-on-Disk-to-Disk-Backup-Part-III.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00050-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95122 | 1,634 | 2.875 | 3 |
An independent security consultant demonstrated a "cookiejacking" technique to show how attackers can steal Web cookies to access user accounts online.
An unpatched vulnerability in Internet Explorer allows attackers to steal login credentials to various Websites via cookies, according to a security researcher. Attackers can exploit the Internet Explorer flaw to steal cookies from user computers and use the saved information to access user data. The researcher, Rosario Valotta, demonstrated the exploit at the Hack in the Box security conference in Amsterdam on May 20.

Cookies are text files that Websites constantly save onto computers with information about user activity, such as login credentials, the contents of a shopping cart, or what sites the user has recently visited.

The attacker has to guess the user's username for accounts, but can find passwords by using "an advanced clickjacking technique." Clickjacking occurs when users are tricked into clicking on a button or link that looks innocent, but is crafted to steal information. The "cookiejacking" attack violates IE's cross-zone interaction policy and exploits a zero-day vulnerability that is present in all versions of Internet Explorer and can be exploited on all Windows versions, according to a May 23 post on Valotta's Tentacolo Viola blog.

"Any cookie. Any Website. Ouch," Valotta wrote. The stolen cookies can be used to download malware onto user machines or log in to user accounts. The proof of concept targeted Facebook, Twitter and Google Mail cookies, but Valotta said any Website can be targeted.

Valotta created a game that opened up in a new Internet Explorer window to illustrate his "cookiejacking" technique. While users played the game by clicking and dragging objects, what was really happening was that the cookie file was being opened and the contents of the file were being selected and copied. This way, the attacker can intercept cookies for any sites the user had accessed during that Web session. For the attack to work, the attacker would need to know which Windows version the victim is running, because cookies are stored in different locations.

He put the test case on Facebook and got 80 responses, Valotta said.

Internet Explorer uses "Security Zones" to group Websites according to level of trust, and prevents content from different zones from interacting. Sites that users consider safe, which are assigned to a higher trust zone, shouldn't be sharing information with less trusted sites. When a cookie file is loaded into the browser using an IFrame embedded on a malicious page, it violates the Cross Zone policy, as "an Internet page is accessing a local file," Valotta wrote.

Simply displaying the contents of the cookie file in the IFrame is not enough, since the attacker cannot read that content directly across zones. This is why Valotta created the game to trick users into dragging and dropping game pieces, actually cookie content, into an attacker-controlled HTML element.

The attack "is complicated for the attacker, but not for the victim," Valotta said. The number of things the person needs to obtain before launching a successful attack makes it only a moderate risk for users. Considering that many malicious attacks involve tricking users into giving up usernames, and that there are rogue portals that already check what operating system the victims are running before delivering a customized payload, neither of the "obstacles" will slow down any criminals interested in using this technique. Valotta also pointed out that Internet Explorer automatically returns usernames as plaintext when getting images or other resources from the remote server. All an attacker needs is a script to "sniff" the username.

Microsoft is aware of the issue and will roll out a patch in an upcoming update, a
Microsoft spokesperson told eWEEK. | <urn:uuid:ca8de31d-254a-4395-93f9-ade59e3f70d2> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Security/IE-Flaw-Lets-Attackers-Steal-Cookies-Access-User-Accounts-402503 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00078-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91487 | 758 | 2.6875 | 3 |
Associations with Big Data tend to be pretty clinical – it’s often considered a tool to make more accurate scientific statements, identify trends in social media and news, and develop products by gauging customer response. In other words, the cloud computing tool was largely viewed as a shortcut to making money and creating new offerings for the public, whether that was a breakthrough medication, a new way to communicate wirelessly, or something the world had never even heard of. A less common but equally fascinating use of the technology, however, is as a storytelling mechanism – a capability that can be the most powerful use of all.
The value of storytelling
The concept of storytelling and the value of its teller is a tradition ingrained in basic human culture that has existed for thousands of years. In generations past, before the written word and widespread publishing of books and magazines, storytellers would enthrall listeners with memorized speeches in the manner of Ovid’s “Metamorphoses” and Homer’s “Odyssey.” A recent piece on the Fast CoCreate blog detailed some of the finer points of this tradition.
“Results repeatedly show that our attitudes, fears, hopes, and values are strongly influenced by story,” the source stated. “In fact, fiction seems to be more effective at changing beliefs than writing that is specifically designed to persuade through argument and evidence.”
These statements have plenty of evidence to back them up – stories sell. The movie and publishing industry bring in billions every year, and even our most prevalent social media tools, especially Facebook, are designed to tell the “story” of a user’s life online by highlighting what events and posts have received the most attention. This is just one example of mass data being boiled down to a basic storyline, but it’s a valuable one. Even Snapchat, the ever-present application that is famous for showing a user an image for a few seconds that disappears shortly thereafter, has introduced the “Snapchat Stories” feature that lets users create a narrative from their brief messages.
How Big Data tells a story with accuracy and impact
There’s no doubt that the science behind Big Data is inescapable, but some data scientists have struggled to transform this information into a palatable story for the everyday user to consume. Jeff Bladt and Bob Filbin, data scientists for the activist charity-driven website Dosomething.org, wrote about this process, with which they’re still constantly experimenting, in a recent issue of Harvard Business Review.
“We’re tasked with transforming data into directives,” they explained. “Good analysis parses numerical outputs into an understanding of the organization. We ‘humanize’ the data by turning raw numbers into a story about our performance.”
Their insights made it clear that Big Data is the “what”; however, those making business decisions and products based on the data are the “why.” As such, many companies have been hard at work developing the tools necessary to take the information being collected and translate it into something useful and compelling to sell. This move will undoubtedly make the services offered from the cloud computing system more likely to stand the test of time.
Bladt and Filbin recommended that when developing an analysis of Big Data for a client, professionals should only sync information directly relevant to the business. In addition, they should implement a user-friendly visual presentation such as the popular infographic that is both attractive to the eye and informative.
As in previous generations, science and storytelling need to coexist to remain powerful, a fact that rings true when considering the developing uses of Big Data. | <urn:uuid:e5f62d48-4a41-49e8-b3ef-982a50b474fd> | CC-MAIN-2017-04 | https://www.datapipe.com/blog/2014/07/24/big-data-storytelling/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00078-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947169 | 765 | 3.234375 | 3 |
It might surprise you to learn that a very large percentage of websites have the lifespan of a typical mayfly—24 hours or less. Blue Coat dubbed such sites “One Day Wonders”, and has released a research study on the risks of these fly-by-night sites.
There are a select few Web domains that account for the vast majority of Web traffic—household names like Google, Amazon, and Netflix. There are probably fewer than 100—possibly even fewer than 25—that an average user visits with any regularity. The reality, though, is that there are hundreds of millions of domains in existence, and that many exist for very brief periods of time.
Blue Coat researchers were curious about these short-lived sites, so they analyzed more than 660 million unique hostnames, requested by 75 million users around the world over a 90-day period. What they found is that more than 70 percent of the requested hostnames were “One Day Wonders” that appeared, and disappeared from the Internet within a single day.
According to the Blue Coat study, 22 percent of the top 50 parent domains that most frequently use these "One Day Wonders" were deemed malicious. And the 70-plus percent finding means that roughly 470 million of the 660 million domains Blue Coat analyzed existed for barely more than the blink of an eye in online terms.
There are variety of reasons why legitimate companies might employ short-lived, One Day Wonders domains. There are also some very good reasons why cyber criminals would do so.
One Day Wonders are employed by cyber criminals to manage botnets, or host malware. By creating new, unique domains in sufficiently high volume, cyber criminals can overwhelm security solutions designed to analyze and assess the relative security of websites. Once a website is identified as malicious, security tools begin to detect and avoid it, so the avalanche of One Day Wonders helps the attackers stay one step ahead of your defenses.
The research from Blue Coat is an important step in defending against these threats, though. Those 11 domains (22 percent of the top 50 domains that most frequently use One Day Wonders) account for a huge percentage of the total malicious One Day Wonders sites. The unique domains themselves might be new, and capable of evading detection, but the shady reputation of the parent domain is all we need to assume that any subdomains are a greater-than-average risk.
The Blue Coat study illustrates why some of the traditional security tools offer little protection. Antimalware and firewall tools are generally reactionary, and depend on a threat being identified before the security tool is able to detect and avoid it. Guarding against these flash-in-the-pan threats requires a different approach that relies on real-time threat intelligence rather than yesterday's malicious code signatures.
To learn more, check out the complete report from Blue Coat. | <urn:uuid:f666b150-eddf-4a2c-b421-16617ad60f3d> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2601360/malware-cybercrime/blue-coat-reveals-dangers-of-one-day-wonders.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00472-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940717 | 575 | 2.640625 | 3 |
It would be inappropriate to differentiate between DBMS & RDBMS, as RDBMS is nothing different but a part of DBMS.
DBMS (Database Management System):- is the way you organize your data, or how you store/process it.
RDBMS (Relational Database Management System):- is a classification under DBMS where you organize data in a relational way.
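As a small aside, here is what "organizing data in a relational way" looks like in practice, using Python's built-in sqlite3 module; the table and column names are made up for the example.

```python
import sqlite3

# A relational system stores data as rows in tables and relates
# tables to one another through shared key columns.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, "
           "dept_id INTEGER REFERENCES department(id))")
db.execute("INSERT INTO department VALUES (1, 'Payroll')")
db.execute("INSERT INTO employee VALUES (10, 'Asha', 1)")
rows = db.execute("SELECT e.name, d.name FROM employee e "
                  "JOIN department d ON e.dept_id = d.id").fetchall()
print(rows)   # [('Asha', 'Payroll')]
```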
There are 3 classifications of DBMS:-
1> Hierarchical- Where you organize data in hierarchical fashion. e.g. IMS
2> Relational- Where data is stored in relational fashion like tables. e.g. DB2
3> Network Database- Used in networking. | <urn:uuid:c893456d-0775-48e1-93f4-bf539cf68848> | CC-MAIN-2017-04 | http://ibmmainframes.com/about13723.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00408-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.857256 | 145 | 2.890625 | 3 |
Formulas are used to create filters.
To open the filter feature, click Data from the menu and select Filter or click the Apply Filter button on the toolbar.
To use the formula builder, click the Simple tab.
- When entering a value in the Simple tab, select a column to filter, select an expression to use, enter a value if needed, and select between Constant/Formula/Column from the drop-down list to build a formula.
To create complex formulas, click the Advanced tab.
- Click the Advanced tab in the Apply Filter dialog box and enter a formula such as TIMESTAMP(#A) < TIMESTAMP(NOW()-200000).
You can create a formula that combines multiple conditions using multiple referenced columns as well as uses nested functions or constants.
Text-based expressions must return a Boolean value and they are required to reference columns in the current sheet. When the expression, applied to the current record, returns false, the record will be dropped. Otherwise it remains.
YEAR(#A) >= 2004 && YEAR(#A) <= 2010
SUM(#A;#C) == 100
#A < NOW()-7d
LEN(T(#A)) > 2 | <urn:uuid:6a0f80d5-9221-4810-8836-9da9c25e7043> | CC-MAIN-2017-04 | http://www.datameer.com/documentation/current/Filtering+Data+Using+Formulas | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00408-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.699269 | 258 | 2.5625 | 3 |
Wurie's challenge of the search is one of several cases the U.S. Supreme Court could take up this term in its never-ending pursuit of fast-moving technology. From government snooping to cellphone technology to wearable computers and driverless cars, the tech world is reshaping how people live far faster than the legal world can adapt.
“Technology is not subject to rules governing how fast it can go. Technology goes as fast as it goes,” said Michael Madison, University of Pittsburgh law professor and director of Pitt's Innovation Practice Institute, which trains lawyers to work with entrepreneurs.
Personal cellphone use transformed social interaction over the past decade, yet two of the major cases that allow the government to search phones were decided in 1969 and 1979, the year the first cell network activated — in Tokyo.
In 1979, the Supreme Court ruled that police did not need a warrant to see whom a person dialed because phone companies, not individuals, kept that information. But in the cellphone age, calls include more than numbers; they include location data.
“From that information, (police) can figure out whether you go to a church on Sunday, a mosque on Friday, a synagogue on Saturday. Did you go to an AA meeting? A gay bar?” said Hanni Fakhoury, lawyer for the Electronic Frontier Foundation, which advocates greater privacy protection.
That 1979 case dealt with a few calls over a few days from one person, yet the National Security Agency used it to justify a program that vacuums up and stores for five years much of the call data in the United States.
NSA analysts can see every number a person called in the past five years, check all the calls of those people over the same five years and then check all the people those people called as well, according to a lawsuit challenging the program in the Federal District Court for the District of Columbia.
If that first person called 100 numbers over five years and everyone in the two subsequent layers called 100 people, the NSA's net would ensnare the phone records of 1 million people, Judge Richard J. Leon wrote.
When the case was decided, the notion that government would be capable of such an operation “was at best, in 1979, the stuff of science fiction,” Leon wrote.
He ruled the program likely violated the Fourth Amendment guarantee against unreasonable search and seizure.
Privacy advocates worry that devices such as Google Glass — a computer worn like glasses that includes a tiny screen and a video camera — will make it impossible to know when someone is being watched and recorded. But they've raised a problem for law enforcement, too. A California woman was ticketed in October for violating California's law against having a video screen on in the front of a vehicle.
The woman, Cecilia Abadie, said it was off — something the police officer can't know for sure because the screen is visible only to her.
“We are just getting to the start of what technology can do,” Madison said.
A case the Supreme Court decided in 2012 — United States v. Jones — shows how rapidly changing technology can rocket past the laborious judicial process.
The case began when police put a GPS tracker on a suspect's car without a warrant in 2005 — before Apple released the first iPhone and Facebook expanded beyond college campuses.
Location data in today's smartphones, combined with “check-ins” and other information that people post online voluntarily, made the GPS trackers in the Jones case all but obsolete, said Wesley Oliver, Duquesne University law professor.
By the time the Supreme Court ruled police should have gotten a warrant, officers interested in someone's whereabouts could walk into a courthouse for a subpoena rather than sneak into a parking garage to stick a transmitter on a suspect's car.
“There's always going to be some kind of technology frontier that gets out ahead of where the legal system is,” Madison said.
Automobile technology pushed the boundaries of American law since Henry Ford's Model T became popular.
“When you put automobiles in the hands of an enormous amount of people, a lot of good things happen, but also a lot of horrible things happen. People start to die or be maimed. Judges start taking on lawsuits. You start to build up a body of law,” Madison said.
Carnegie Mellon University scientists outfitted a car with an autonomous navigation system that allowed it to ferry two top transportation officials from Cranberry to Pittsburgh International Airport in September.
A state police spokeswoman told the Tribune-Review then that no laws govern computer-driven vehicles, and she wondered whom police would cite if the driverless car broke a speed limit or rear-ended someone.
“We're not going to have a good sense of that until we have a lot more of them,” Madison said. “What does it feel like to be surrounded by automobiles that don't have anybody behind the wheel? Would you put a kid in an autonomous automobile? ... What about autonomous buses? What about autonomous trucks? This is technology that's being applied to what we've been using for 90 years.” | <urn:uuid:d37d4b56-3578-40e0-a512-99b84073500a> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/Law-struggles-to-Adapt-to-High-Tech-Gadgets.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00316-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959217 | 1,059 | 2.84375 | 3 |
Definition: A formal, abstract definition of a computer. Using a model one can more easily analyze the intrinsic execution time or memory space of an algorithm while ignoring many implementation issues. There are many models of computation which differ in computing power (that is, some models can perform computations impossible for other models) and the cost of various operations.
Specialization (... is a kind of me.)
Turing machine, random access machine, primitive recursive, cellular automaton, finite state machine, cell probe model, pointer machine, alternation, alternating Turing machine, nondeterministic Turing machine, oracle Turing machine, probabilistic Turing machine, universal Turing machine, quantum computation, parallel models: multiprocessor model, work-depth model, parallel random-access machine, shared memory.
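As a concrete illustration of one of the specializations listed above, the sketch below (in Python, purely for illustration) implements a finite state machine that accepts binary strings containing an even number of 1s. Analyzing it only requires counting state transitions — one per input symbol — independent of any particular hardware.

```python
# Transition table for a two-state machine over the alphabet {0, 1}.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(s: str) -> bool:
    """Return True if s contains an even number of 1s (runs in |s| steps)."""
    state = "even"
    for symbol in s:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"

print(accepts("1011"))  # False: three 1s
print(accepts("1001"))  # True: two 1s
```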
See also big-O notation.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 27 February 2004.
Cite this as:
Paul E. Black, "model of computation", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 27 February 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/modelOfComputation.html | <urn:uuid:c269e49f-b7a3-467c-9894-8ff44d10ba2d> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/modelOfComputation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00528-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.803835 | 286 | 2.75 | 3 |
Studies show adults are spending approximately 11 hours each day using electronic media, including listening to the radio, tinkering on their smartphone, and browsing the internet on a desktop computer. Most of these electronics are linked to the internet either through hard wires like fiber optic cables, or mobile networks, and they’re eating up data capacity faster than ever before. That constant connection has some asking: is there enough data capacity to deal with the telecom demand?
Each U.S. household uses a different amount of bandwidth monthly depending on the number of connected devices and what they use those devices for. Gigaom has an interesting breakdown of the broadband usage by demographic. For example, the survey shows that two young parents with a child use about 125 GB of internet a month, but a household with two parents and three pre-teens can exceed 300 GB in a 30-day period. Multiply these figures by the number of internet users in America – about 84% of the total population – and you’ve got a country that demands a lot of bandwidth.
That demand is only going to go up, especially as streaming sites such as Netflix become more popular. Netflix uses 36% of all bandwidth in the States.
The Ways the Internet Reaches Us
To understand the issue, one must first have some background on how the American data supply is currently transmitted. Today, there are three main means of transmission: copper, wireless, and fiber.
- Copper: Copper cabling goes back to the advent of the telephone more than 100 years ago. Just as it can transmit vocal signals, it can also transmit internet data. Known as broadband, this connection is already in most homes and businesses due to the past prevalence of landlines. Copper cable is considered the most cost-effective and accessible way to tap into an internet connection. The fastest broadband connections take about 32 minutes to download a standard HD movie.
- Wireless: Also known as 4G and LTE, wireless technology is what telecom companies such as AT&T and Verizon use to power the data on your smartphone. The average LTE speed in the U.S. is 9 megabits per second, which is substantially lower than in the rest of the world. The fastest connection would take more than an hour to download an HD movie.
- Fiber: Launched in the 1970s, fiber optic has quickly become a popular way to broadcast television channels, in addition to internet and telephone services. Superfast fiber connections can download an HD movie in approximately 25 seconds. (A rough check of these download times appears just below.)
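Those download times can be reproduced with back-of-the-envelope arithmetic. The sketch below assumes a standard HD movie of roughly 5 GB and picks link speeds (about 20 Mbps for fast broadband, 9 Mbps for average LTE, 1.6 Gbps for superfast fiber) chosen to roughly match the figures above; the exact values are assumptions, not measurements.

```python
movie_gigabytes = 5                    # assumed size of a standard HD movie
movie_megabits = movie_gigabytes * 8 * 1000

links = [("fast broadband", 20), ("average U.S. LTE", 9), ("superfast fiber", 1600)]
for label, mbps in links:
    seconds = movie_megabits / mbps
    print(f"{label}: {seconds:.0f} s ({seconds / 60:.1f} min)")
# fast broadband: 2000 s (33.3 min)
# average U.S. LTE: 4444 s (74.1 min)
# superfast fiber: 25 s (0.4 min)
```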
While broadband connections have worked well for years, they simply cannot keep up with America’s current demand for bandwidth.
The Fiber Fix
Just as fiber optics revolutionized the telecommunications industry when it was mainstreamed in the mid-1980s, new developments may also help the American internet connection stay nimble in light of greater demand.
Not only can a single fiber cable transmit more data at once than its copper counterpart, but fiber optic cables are also less susceptible to outside interference and security breaches and are much more durable.
99% of the world’s internet capacity is currently transmitted over fiber cable. While it is regarded as the future of the telecom industry, there remains a shortage of fiber connections directly linking homes and businesses. Despite fiber dominating international and deep sea internet connections, nearly every home remains linked to the web with a copper wire connection. Much of that relates to the economics of switching the connection. The cost of changing a home’s line from copper to fiber is estimated to be between $1,143 USD and $1,479 USD. In most cases, the cost of labor alone justifies keeping the connectivity status quo.
Unlike copper technology, which has stayed similar for a century, fiber optic technology is constantly changing.
Traditionally, copper cables, and now traditional fiber optic connections, offered more than enough bandwidth for communities, and everyone’s internet connection managed to operate smoothly and quickly. That’s changed. There are now so many users that the previously uninterrupted beams of data are beginning to get muddled.
That’s where something called orbital angular momentum fiber comes into play. A Boston University researcher quoted in the previously mentioned article says OAM fiber is a type of fiber optic that sends more flexible bursts of data down the cable. This means the same number of people can use the same amount of data – only it’s not slowed down.
It is fiber research like this that has experts hopeful that America can solve its bogged down bandwidth issue.
With the price of fiber optic technology dropping each year and more Americans gaining access to the service, faster internet and greater bandwidth is within everyone’s grasp. Think of that the next time you fret over bandwidth limits. | <urn:uuid:b63c4b90-d777-4196-afa2-90a4aeb1cc9b> | CC-MAIN-2017-04 | https://fieldnation.com/blog/can-fiber-optics-solve-the-growing-need-for-data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00528-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943661 | 1,042 | 2.859375 | 3 |
A Digital Hybrid is a person that’s part Digital Native and part Digital Immigrant.
A Digital Native is a person who had the opportunity to use digital/computer technologies during his/her formative years. These technologies could be video games, smartphones, computers, other similar technologies, and the internet in general.
A Digital Immigrant is a person who did not have access to these digital technologies during their youth, but now has access in adulthood. The lack of access could be due to age (it didn’t yet exist) or simply due to a lack of access based on their personal circumstances.
A Digital Hybrid is a person who had limited access to digital technologies during his/her youth, but embraced technology in college or in later life as a hobby or profession.
I think the Digital Hybrid distinction is an important middle ground to define because there is a large divide within the Digital Immigrant category between those who are computer literate and those who are not. Using me as an example, I’m a Baby Boomer and my formative years did not include computers of any type. I was first introduced to computer technology in college when I was required to take a class in BASIC programming as part of the standard freshman curriculum. I loved it, majored in it, spent the majority of my professional life in Information Technology (IT) organizations as a programmer, and in time, in IT leadership roles. As a result, I have a very different understanding than someone my age that chose a different professional path with no connection to technology.
This is an important distinction for Digital Natives to understand when speaking with people seemingly less technical, based on their generation, but technically knowledgeable by choice.
Digital Hybrids typically have a combination of native and immigrant technological strengths and weaknesses and have a unique ability to blend technical and manual processes. This blended background provides Digital Hybrids with the ability to understand both natives and immigrants, as well as bridge the gap between the two. Using me as the example, I understand why Digital Natives go to the internet first for everything from research to hotel reservations to making plans with their friends and why many Digital Immigrants love to use a stylus because it feels like a pencil in your hand.
I know Digital Natives who use their smartphone for emails, calendaring, to-do lists and just about everything else. I also know Digital Immigrants who have no desire for a smartphone and do everything using an old-style address book. As for me, in typical Digital Hybrid style, I do everything on my smartphone except for my to-do list, which I keep with pencil and paper.
If, like me, you are a Digital Hybrid, here are some suggestions that you may find of value.
• Don't underestimate the power of having both a digital and non-digital mentality. It will help you design better business processes that require a combination of manual and automated components.
• Use this digital/non-digital understanding to your professional advantage by using it to communicate effectively with people of both technical and non-technical orientations.
• Don't cling to old technologies just because you're comfortable with them. Be willing to step outside your comfort zone and dive headfirst into new technologies if they offer a professional advantage.
• Don't be intimidated by new industry-changing technological advances. They may be less intuitive to you because they're not your first technology, but once understood, you will be in an ideal position to interface this new technology with existing mainstream products.
In closing, regardless of your technical orientation, now that you are familiar with the concept of Digital Hybrids, use this knowledge to better understand the people you work with and how to best communicate with them on technology related topics.
If you have any questions about your career in IT, please email me at eric@ManagerMechanics.com or find me on Twitter at @EricPBloom.
Until next time, work hard, work smart, and continue to build your professional brand.
Read more of Eric Bloom's Your IT Career blog and follow the latest IT news at ITworld. Follow Eric on Twitter at @EricPBloom. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:7e294722-6b2e-4587-88a0-bd9bb09b9fdd> | CC-MAIN-2017-04 | http://www.itworld.com/article/2703908/careers/are-you-a-digital-hybrid-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00160-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950059 | 881 | 2.984375 | 3 |
Windows 95; Windows 98; Windows Me; Windows NT; Windows 2000; Windows XP (32-bit); Windows XP Professional (64-bit); Windows Vista (32-bit); Windows Vista (64-bit); Windows 7 (32-bit); Windows 7 (64-bit); Windows 8 (32-bit); Windows 8 (64-bit);
This article describes a number of procedures for determining the printer's wireless signal strength – via the network settings page, the printer display or control panel, or the printer's web page (EWS). Gains or losses in signal strength affect bandwidth throughput. The strength of the radio frequency (RF) is measured by the printer's Wi-Fi adapter, printer server, wireless print server, or internal network adapter (INA).
NOTE: Antenna placement can greatly impact performance. However, the printer's reading does NOT indicate the host computer's signal strength to the access point, and it is important to distinguish between the two when trying to alleviate communication issues. See the Wireless Glossary for an explanation of some of the technical terms used here.
Before you begin
Make sure the printer is turned on and that paper is loaded into the auto-sheet feeder (ASF).
Wireless Setting | Default Value | Configured Value (Example)
BSS Type | Ad hoc (computer-to-printer). This value will change to Infrastructure after configuration with an access point. (Commonly known as your connection type or mode.) | Infrastructure
SSID | Print Server | macdaddy3 (after configuration with an access point)
Wireless Security Mode | Disabled until configured. Security recommended: WPA Personal. | WPA-PSK (optionally WEP, WPA2)
Signal Strength | Not listed | -40 dBm
Current Access Point | Not listed | 0080C80A2236 (MAC address of the access point the print server is communicating with)
Current Channel | Not listed | 6
Quality | Not listed | Excellent
Pairwise Cipher | Not listed | TKIP (part of PSK Security Mode)
Groupwise Cipher | Not listed | TKIP (part of PSK Security Mode)
Trouble Code | Can't find Wireless Network | None
Procedure: Printing out the Network Settings Page (X3500 – X4500 Series printer displayed in the example)
Blue – first line of printer display
Red – second line of display
1. Press Settings to display Settings/ on the display.
2. Press to display Settings/ Network Setup.
3. Press to display Network Setup/ Print Setup Page.
4. Press to observe the message Load Plain Letter paper, and press .
5. You will see Preparing page before the printer starts printing.
Printer Model | Article
Z1400 & Z1500 Series | Click here
Z2400 Series | Click here
X6500 Series | Click here
X4850 & X7550 Series | Click here
X4600 Series | Click here
X9350 Series | Click here
X3500, X4500 & X6500 Series Procedure
You will find equivalent settings in the same place on other models; however, the navigation buttons will be different.
Blue - first line of printer display
Red - second line of display
1. Press Settings to display Settings / Maintenance> on the LCD.
2. Press to display Settings / Network Setup.
3. Press to display Network Setup / Print Setup Page.
4. Press to display Network Setup / Wireless Setup.
5. Press to display Network Setup / Network Name.
6. Press to display Network Setup / Wireless Signal Quality.
7. Press to observe the message Unacceptable, Poor, Fair, Good, or Excellent.
1. Open your web browser (e.g., Internet Explorer, Firefox).
2. Enter your printer's IP address into the web address field (sometimes referred to as the URL field). Example: http://<your printer's_IP_address>, where <your printer's_IP_address> is the value found next to Address on the Network Settings Page.
3. Click on Reports.
4. Click on Print Server Setup Page.
5. Observe the values next to Signal Strength and Quality.
Is your Signal Strength Low?
Several variables affect signal strength and consistent communication with the printer. Below is a brief list of things to consider when determining the placement of the printer or the positioning of the antenna.
NOTE: Please refer to your wireless router/access point documentation for additional suggestions on maintaining signal strength and combating interference.
Obstructions: Place the printer in a location that does not prevent the antenna from transmitting/receiving signals; areas behind large objects may hinder signal transmission to the printer. Suggestions: Provide a more direct path to the printer, adjust both printer and access point antenna angles, or consider the purchase of high-gain antennas for your access point. NOTE: This phenomenon is often called RF shadow, which is dead space around objects where the access point radio frequency (APRF) signal cannot reach the desired wireless device.
Distance: The advertised range of most access points is 300 feet. A combination of the other three variables (Obstructions, Interference, Antenna Placement) can reduce this to as little as 50 feet. Suggestions: Move the printer closer to the access point, move the access point closer to the printer, consider the purchase of high-gain antennas for your access point, or try adjusting antenna angles. One other option is the purchase of a wireless repeater (range extender). NOTE: Some consumer-grade access points can function as repeaters, but this will slow down the overall data throughput.
Interference: Neighboring wireless networks or other devices using similar frequencies (2.4 GHz cordless phones, microwaves, baby monitors, etc.) can affect signal quality and printing performance. Suggestions: Relocate the printer or access point. Also, consider changing the frequency channel of your access point. NOTE: Lexmark is not responsible for the configuration of your access point. If you are unsure how to change channels while ensuring wireless clients maintain wireless connectivity, please consult your wireless router/access point customer-support resources.
Antenna Placement: Vertical polarization (antenna vertical/perpendicular to the ground) is typical for antenna placement, but changes in elevation between the access point and printer, or reflection, refraction, diffraction, absorption and scattering of RF waves off various obstructions, can greatly diminish signal strength. Suggestions: Depending on the location of the printer, you may have to adjust the angle of the printer or access point antenna(s). NOTE: Dual antennas on access points create antenna diversity, which helps to control signal multipath caused by the many types of obstructions and the numerous effects they have on wireless signal transmission.
Wireless & Networking Glossary
Click here for explanations of wireless and networking terms.
Still need help?
Please contact Lexmark Technical Support for additional assistance. NOTE: When calling for support, you will need your printer model type and serial number (SN).
Please call from near the printer in case the technician asks you to perform a task involving the device. | <urn:uuid:f61c9a50-1f0c-4ba2-897d-23a28ba41a21> | CC-MAIN-2017-04 | http://support.lexmark.com/index?modifiedDate=06%2F04%2F13&page=content&actp=LIST_RECENT&id=HO3186&locale=en&userlocale=EN_US | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00518-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.823713 | 1,432 | 2.515625 | 3 |
Numerical data often can be modeled as a number of independent (predictive) variables (aka columns/features/attributes) along with one dependent (response) variable. In a recent post about Multiple Correlation, you learned how you can identify independent variables that are most relevant for the response variable.
For example, in a data set with eight numeric variables describing properties of a vehicle, Multiple Correlation showed that the four variables acceleration, distance, horsepower and weight contain the best information for predicting the values of mpg (miles per gallon).
Multiple Regression is a technique where you now use these variables to learn a model that enables you to predict the value of the response variable for a new record where you only know the values of the independent variables (but the value of mpg is unknown).
I will briefly explain the mathematical background of how to learn such a multiple regression model, walk through the details of how this can be implemented in Datameer on big data with a set of custom linear algebra functions, and show how the derived model can be used in Datameer to make predictions on new data. By the end of this post, you'll know how to scale on potentially big data (billions of records), both when learning the model and when making predictions on new records.
Now wait a minute, you’re probably saying. Isn’t the point of this post that you shouldn’t *have* to know how something like multiple regression works, mathematically speaking? You’re absolutely right. But, we want to “show our work” anyway for those of you who are interested in taking a look under the hood. For those of you who don’t necessarily need to know how it works and just want to be able to use it in Datameer — skip ahead to the last section — “Get The Multiple Regression Application”.
Multiple Linear Regression attempts to fit a series of independent variables (each denoted as X) and a dependent variable (Y) into a linear model.
This means we want to find the best way to describe the Y variable as a linear combination of the X variables. Using matrix algebra, we can describe this problem as a general linear system:
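y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_p x_{ip} + \varepsilon_i, \quad i = 1, \dots, n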
Written in shorter form, the equation becomes:
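Y_{(n \times 1)} = X_{(n \times (p+1))} \, \beta_{((p+1) \times 1)} + \varepsilon_{(n \times 1)}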
The large letters are the matrices and the smaller letters describe the dimensions of each term. We are solving for the beta vector. After some transformations, this can be expressed as:
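\beta = (X^{T} X)^{-1} X^{T} Y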
β is a vector and from this vector we can take our required values to construct the desired equation.
This equation can then be used to make predictions on data where the values of Y are unknown.
As input data in this example, we use an illustrative data set of 384 records describing the properties distance, horsepower, weight, acceleration and mpg (miles per gallon) of a vehicle. Mpg represents the dependent variable (Y). We also introduce a constant intercept column (equal to 1) to initialize β0.
To be able to compute the values of the β vector in Datameer on big data, we utilize the formula described above:
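\beta = (X^{T} X)^{-1} X^{T} Y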
We essentially break it up into three steps.
The first step is to compute the inverse of the transpose product of the independent variables input data. We do this in Datameer by applying the custom function GROUP_MATRIX_TRANSPOSE_PRODUCT on all independent variables and directly apply the custom function MATRIX_INVERSE on that:
This results in a list of lists representing a 5-by-5 matrix, with each inner list being a row of that matrix. This is how the custom function formats its output as a matrix representation. Note that this scales on big data – the custom functions can deal with arbitrarily many rows of data in X. We call this result of the left part of the above formula the "betaCoefficientInverse".
The next step is to compute the product of the transposed input data with the Y vector (X^T Y).
Similar to the step above, we create a group-by sheet and apply the custom function GROUP_MATRIX_TRANSPOSE_VECTOR_PRODUCT, which returns a one-column matrix with five entries:
We call this result matrix "untransformedBeta". Note that each inner list in this result matrix is a single-element (single-column) row vector.
In a final step we join these two intermediate result matrices into one sheet to then apply the custom function MATRIX_PRODUCT to compute the final result:
This final result contains the regression model consisting of five entries – the intercept and the factors that can now directly be used to make predictions on new data.
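If you want to sanity-check these three steps outside of Datameer, the same computation can be sketched in a few lines of NumPy; the function and variable names below are illustrative only and are not part of the Datameer workbook.

import numpy as np

def fit_multiple_regression(X, y):
    # X: n x 5 matrix whose first column is the constant intercept column,
    #    followed by distance, horsepower, weight and acceleration.
    # y: vector of n observed mpg values.
    # Step 1: (X^T X)^-1   (GROUP_MATRIX_TRANSPOSE_PRODUCT + MATRIX_INVERSE)
    beta_coefficient_inverse = np.linalg.inv(X.T @ X)
    # Step 2: X^T y        (GROUP_MATRIX_TRANSPOSE_VECTOR_PRODUCT)
    untransformed_beta = X.T @ y
    # Step 3: multiply the two intermediate results   (MATRIX_PRODUCT)
    return beta_coefficient_inverse @ untransformed_beta

def predict(X_new, beta):
    # Predicted mpg values: intercept plus the weighted sum of the factors.
    return X_new @ beta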
To use that model, we join it with each row of our new data set we want to apply it on, in order to predict the value of Y (mpg in our running example) for each record. The prediction itself is done by simply applying the formula described in the math section above:
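\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4

where x_1 through x_4 are the record's values for the four independent variables and \beta_0 through \beta_4 are the five entries of the model computed above.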
Since the model is represented as a list of lists, we apply the LISTELEMENT function in order to retrieve the intercept and the factor values.
The exact formula in Datameer is:
Note that in this example, we apply the model to the same input data it was trained on. Obviously, this is done without using the known values of mpg. However, we still copied the known value of mpg into the result sheet, enabling you to compare the predicted values with the actual values. Not very surprisingly, the performance of the model on the training data looks quite convincing. In a real-world application, you would choose a setup with a hold-out set or cross-validation to determine the actual model performance, which is something that can be conveniently done with Datameer as well.
As promised, we did the heavy lifting and turned all the above into a single app to help you achieve the steps outlined above, without actually having to build it out yourself. Ready to give it a shot? To install, simply follow these steps: | <urn:uuid:c801ab8b-80db-46e2-9b0b-5b27f36bbfa7> | CC-MAIN-2017-04 | https://www.datameer.com/company/datameer-blog/predictive-analytics-multiple-regressions-datameer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00518-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.86446 | 1,261 | 3.296875 | 3 |
Last Friday the University of Southern California (USC) announced it was establishing a quantum computing center under the school's prestigious Viterbi School of Engineering's Information Sciences Institute. Partnering with USC will be Lockheed Martin, which earns the company a spot in the center's official name: the USC Lockheed Martin Quantum Computing Center. There's no word on exactly how much money is being invested by either USC or Lockheed.
According to the USC press release: “With the construction of the multi-million dollar quantum computing center, USC now has the infrastructure in place to support future generations of quantum computer chips, positioning the school and its partners at the forefront of quantum computing research.”
Quantum computers are able to represent bits as both zero and one simultaneously, which enables such systems to perform calculations that are not feasible for classical binary computers, such as integer factorization and complex decision problems.
The notable aspect about this new USC facility is that it intends to be an operational quantum computing center; that is, it will run commercial quantum computers. Since Canadian startup D-Wave Systems is the only vendor claiming to have quantum computers, the center will employ the company’s superconducting quantum computer technology to power its initial system.
Back in May, D-Wave sold its first quantum computer to Lockheed Martin, which intended to use it for its "most challenging computation problems." The system was based on the company's latest 128-qubit chip, which needs to be cooled to near absolute zero (-459F) to operate. The Lockheed sale gave D-Wave a big boost to its credibility, not to mention its prospects for attracting other customers.
In the past, the company has come under scrutiny, with critics claiming the technology does not deliver true quantum computing. But a May 2011 article in Nature validated at least some of D-Wave’s claims.
In any case, the USC center will give D-Wave some additional visibility, and, given the more open academic setting, a public platform to demonstrate the technology to other potential customers. | <urn:uuid:22d58216-13ab-4693-bc3e-d1516b62fbf3> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/10/31/d-wave_entangles_with_usc_and_lockheed_at_new_quantum_computing_center/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00245-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943591 | 424 | 2.6875 | 3 |
A long time ago, we reviewed some devices which should be in any hacker's toolbox. One of these devices was a USB Rubber Ducky — a device which resembles a regular USB flash drive. When connected to a computer, it claims to be a keyboard and quickly enters all its commands. It's a pretty cool thing and very useful for pentests, but why pay 40 dollars or more if a regular USB flash drive can be taught the same tricks?
Don't forget that making the described changes to your USB flash drive may not only void the device's warranty but may also kill it. Experiment at your own risk!
Last year's Black Hat was full of many interesting reports. One of the most discussed was a report on the fatal vulnerability of USB devices, which allows regular USB flash drives to be turned into a tool for spreading malware. The attack was called BadUSB, but later jokes appeared on the Internet referring to USBola, comparing this attack to the well-known virus.
Similar ideas for using HID devices for malicious purposes have been around for a while. It's a sin not to use the fact that the OS trusts devices connected to a USB interface. If we search the magazine's archives, we can find an article on a similar topic describing the technique of using a special Teensy device to control a PC running Windows 7 (actually, with any OS). The device disguised itself as a regular USB flash drive. All this suggested that the same trick could also be played with flash drives.
A USB is a really universal interface. Just think how many devices we connect it to and how many devices it works with! Mouses, keyboards, printers, scanners, gamepads, modems, access points, web cameras, telephones, etc. Without thinking, we plug the USB into the socket and the OS automatically determines the type of device and loads the required drivers.
But how does it do it?
How flash drives work
In fact, the OS knows nothing about the connected device. It has to wait until the device tells it what kind it is. Let's consider a simple example. When we plug a USB flash drive into a USB socket, the flash drive informs the operating system of its type and volume. It is worth remembering our shrewd Chinese colleagues, who learned how to produce higher capacity flash drives (some almost 2 TB). To figure out how this is possible, let's remember (or learn) how the OS recognizes USB devices.
USB device initialization algorithm
The purpose of USB devices is defined by class codes communicated to the USB host for installation of the necessary drivers. The class codes allow the host to work with single-type devices from different manufacturers. The device may support one or several classes, the number of which is determined by the number of USB endpoints. When connected, the host requests a range of standard details from the devices (descriptors), which it uses to decide on how to work with it. The descriptors contain information about the manufacturer and device type, which the host uses to select the program driver.
A regular USB flash drive will have class code 08h (Mass Storage Device — MSD), while a web camera equipped with a microphone will have two codes: 01h (Audio) and 0Eh (Video Device Class).
When connected, the USB device is registered, receives an address and sends its descriptor/descriptors to allow the OS to install the necessary drivers and send back the required configuration. After that, the host immediately starts working with the device. Once the work is completed, the device is de-registered. It is important to note that the devices may have several descriptors, they can also de-register and register as a different device.
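As a quick illustration of how visible these descriptors are, the short Python sketch below (using the third-party pyusb package) lists the class code each connected device reports; it is purely illustrative and is not part of the toolchain described later in this article.

import usb.core  # requires the pyusb package and a libusb backend

MASS_STORAGE_CLASS = 0x08  # the MSD class code mentioned above

for dev in usb.core.find(find_all=True):
    note = "<- mass storage" if dev.bDeviceClass == MASS_STORAGE_CLASS else ""
    print("VID=%04x PID=%04x device class=0x%02x %s"
          % (dev.idVendor, dev.idProduct, dev.bDeviceClass, note))
    # Many devices report class 0x00 here and declare their real class codes
    # per interface instead; the host simply trusts whatever is reported.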
If you open the body of a USB flash drive, in addition to the mass storage visible to the user, there is a controller responsible for the above-described actions.
Bad USB or some history
At last year's Black Hat conference, two researchers (Karsten Nohl and Jakob Lell) shared their experience of installing a custom upgrade to the firmware of a USB flash drive controller. After a while, this USB flash drive was registered as a keyboard and entered the selected commands. Due to the serious nature of the problem, the pair decided not to make the code available. However, soon after, two other researchers (Adam Caudill and Brandon Wilson) presented to the whole world at the Derbycon conference an operable PoC tailored to the Phison 2251-03 microcontroller. The code is available on GitHub.
As you might have guessed, today we will try to turn a regular USB flash drive into a pentester's secret weapon!
First of all, we will need a suitable device. As the code has been uploaded for the specific microcontroller only, we have two options — either find a USB flash drive managed by this controller, or perform some very challenging work researching and upgrading the firmware of another microcontroller. This time, we will select an easier way and try to find a suitable USB flash drive (here is the list of vulnerable equipment). The controller is quite popular, so, miraculously, I found a suitable USB flash drive among the dozen I have at home.
Starting the magic
Having found the suitable device (which we won't miss if it fails), we can start its transformation. First of all, we need to download the sources which the guys made available. Actually, the content is described in detail on their official wiki page, but, just in case, I will remind you what they have uploaded to GitHub:
- DriveCom — an app for communicating with Phison USB flash drives;
- EmbedPayload — an app for embedding Rubber Ducky inject.bin key scripts into custom firmware for subsequent execution when the USB flash drive is connected;
- Injector — an app that extracts addresses from the firmware and embeds the patching code in the firmware;
- firmware — custom 8051 firmware written in C;
- patch — collection of 8051 patches written in C.
When you use Ducky scripts, you should remember that the DELAY command, which performs a delay for a set number of milliseconds, will work a little differently on the USB flash drive than on Rubber Ducky, so you will have to adjust the delay time.
Preparing the system
Having downloaded the archive with sources from GitHub, you will find that most of them have been written in C# and require compilation, so you will need Visual Studio. Another tool you will need is the Small Device C Compiler, or SDCC. Install it in C:\Program Files\SDCC; you will need it to compile the firmware and patches.
Having compiled all the tools contained in the archive, check again if this USB flash drive is suitable for firmware upgrade:
DriveCom.exe /drive=F /action=GetInfo
where F is the letter of the drive.
Getting the burner image
The next important step is to select an appropriate burner image (an 8051 binary file, responsible for dumping activities and uploading firmware to the device). They are typically named BNxxVyyyz.BIN, where:
xx is the controller version (for instance, for PS2251-03 it will be 03),
yyy is the version number (not important), and
z reflects the memory page size and can look like:
- 2KM — for 2K NAND chips;
- 4KM — for 4K NAND chips;
- M — for 8K NAND chips.
You can look for a suitable burner image for your USB flash drive here.
Dumping the original firmware
Before commencing your dirty experiments which could kill the USB flash drive, it is strongly recommended to dump the original firmware, so that if something goes wrong you can try to recover the device. First, switch the device to boot mode:
tools\DriveCom.exe /drive=F /action=SetBootMode
Then, use the DriveCom utility, passing the drive letter, the path to the burner image, and the path to the file where the original dumped firmware will be saved. It will look like this:
tools\DriveCom.exe /drive=F /action=DumpFirmware /burner=BN03V104M.BIN /firmware=fw.bin
If you have done everything correctly, the source firmware will be saved to the fw.bin file.
To check what controller is installed on the USB flash drive, you can use the utility usbflashinfo.
Preparing the payload
Now it's time to think about the functions we want our USB flash drive to have. Teensy has a separate Kautilya toolkit, which can be used to automatically create payloads. For USB Rubber Ducky, there is a whole website, with a friendly interface, which lets you create any scripts for your device online. This is in addition to the list of finished scripts, which are available on the project's GitHub. Fortunately, Ducky scripts may be converted into binary to embed them then into firmware. To do this, we will need a utility Duck Encoder.
As for the scripts, there are several options:
- you can write the required script yourself, as the used syntax is easy to master (see the project's official website);
- use finished ones uploaded to GitHub. As they have a reverse shell and other goodies — you will only have to make minor corrections and convert them into binary form;
- or use the above-mentioned website, which will lead you step-by-step through all the settings and will let you download the finished script in the form of a Ducky script (or already in converted binary form).
To convert the script into binary, execute the following command:
java -jar duckencoder.java -i keys.txt -o inject.bin
where keys.txt is the Ducky script and inject.bin is the resulting binary file.
Flashing the firmware
As soon as we have the finished payload, it's time to embed it into the firmware. This is done with the following two commands:
copy CFW.bin hid.bin
tools\EmbedPayload.exe inject.bin hid.bin
Please note that the firmware is first copied to hid.bin, and only then is it flashed. This is because the payload can only be embedded into the firmware once, so the original CFW.bin must be left untouched.
After this manipulation, we will have a hid.bin custom firmware file with an embedded payload. You will only have to place the obtained firmware on the flash drive:
tools\DriveCom.exe /drive=F /action=SendFirmware /burner=BN03V104M.BIN /firmware=hid.bin
where F is the drive letter.
In addition to using the HID nature of the USB flash drive and turning it into a keyboard which types our payloads, there are some other tricks that can be done. For instance, you can create a hidden partition on the device, decreasing the space seen by the OS. To do this, you will first need to determine the number of logical blocks on the device:
tools\DriveCom.exe /drive=E /action=GetNumLBAs
Then find the base.c file in the patch folder, uncomment the line #define FEATURE_EXPOSE_HIDDEN_PARTITION and add another #define directive, which sets a new LBA number: #define NUM_LBAS 0xE6C980UL (this number must be even, so if you got, say, 0xE6C981 at the previous step, you can decrease the number to 0xE6C940, for example).
Having edited the sources, you need to place the firmware which you want to patch into the patch folder, name it fw.bin and run build.bat, which will create a modified fw.bin file in patch\bin\. You can now flash this to the USB flash drive.
The options Password Patch and No Boot Mode Patch are done in the same way; you can read more about them on the project's GitHub. My primary goal was to teach the USB flash drive to perform pre-set actions, which we have accomplished.
We have reached our goal. Moreover, I hope you now understand that USB flash drives (and other USB devices) can no longer be seen simply as a drive that stores your information. In fact, it is almost a computer, which can be taught to execute specific tasks. Although, PoC has so far only been made available for a specific controller, you can be sure that, as you are reading this article, someone is definitely working on others.
So, be careful when you plug in a USB device and keep your eyes open.
If the experiments have gone wrong and the USB flash drive behaves in a weird way, you can try to bring it back to life by manually switching it into boot mode and using the utility to restore the original firmware. To do this, before you connect it, you need to close contacts 1 and 2 (sometimes 2 and 3) of the controller, which are located diagonally from the point (see image). Then you can try to bring the device back to life by using the official utility MPAL | <urn:uuid:5130b237-6136-4684-8d67-f7b1d96c0207> | CC-MAIN-2017-04 | https://hackmag.com/security/rubber-ducky/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00061-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921661 | 2,759 | 3.140625 | 3 |
14 Amazing DARPA Technologies On Tap
Go inside the labs of the Defense Advanced Research Projects Agency for a look at some of the most intriguing technologies they're developing in computing, electronics, communications, and more.
DARPA has partnered with Raytheon Missile Systems to develop wireless hotspots capable of transmitting data at a minimum of 1 Gbps through aerial, mobile, and fixed endpoints, over a 25-mile range. The technology, which will be tested on an unmanned aerial vehicle, aims to connect remote soldiers with support from nearby bases and intelligence resources. Image credit: DARPA
Military Transformers: 20 Innovative Defense Technologies
DARPA Demonstrates Robot 'Pack Mules'
DARPA Seeks 'Plan X' Cyber Warfare Tools
DARPA Cheetah Robot Sets World Speed Record
DARPA Demos Inexpensive, Moldable Robots
DARPA Unveils Gigapixel Camera
DARPA: Consumer Tech Can Aid Electronic Warfare
The Hidden Dangers of Web Services
It's meant to bring a certain agnosticism to technology and standardize the transmission of data and information through XML and SOAP. But despite what web services and the resulting service-oriented architecture (SOA) deliver now and promise to deliver in the near future, the technology still has a major hurdle to overcome: security.
The security concerns that come bundled with SOA are similar to the threats that rose with the growth of the Internet. In fact, there are several security parallels to be drawn for systems between Internet connectivity and SOA.
Both allow more direct communication with potentially vulnerable code residing on the back end. Both were adopted faster than anyone could manage the risk: SOA in particular creates a level of connectivity that the makers of the legacy applications it often exposes never designed for.
Mainframe terminal application developers, for example, were most concerned with performance, accidental entries, and reliability because their users were trusted and hard-wired. Connectivity removes trust from the operating environment.
The SOA risks are often very individualized, taking different forms at every instance and in every company. The result is, in many cases, security landmines waiting to be triggered by the treading attacker.
If web services are implemented with security in mind though, they can fundamentally reduce risk through proper filtering and limiting the exposed surface of the application to the outside world.
Data Gone Wild
Most software security vulnerabilities come from assumptions about data not being enforced. As an example, consider an application that processes customer information, and one particular field, Customer Surname, that is assumed to be no longer than 20 characters.
The application typically receives this data from a client application that checks to make sure no more than 20 characters are passed to the server application. Now imagine putting a web services interface on that server application.
Data is transmitted as part of an XML document, with likely no constraint. If a client sends a request with more than 20 characters, the result could be a potentially exploitable application fault such as a buffer overflow.
This means when software is exposed through web services we need to take care that data is properly constrained. The ideal case is, of course, to validate data within the application but, for legacy systems, this filtering may not exist and no longer be a feasible option.
Beyond data being sent to the server, when a web service is created an implicit contract is forged between the provider and user application about the format and range of data exchanged.
Back-end server code may be changed in ways that alter response data. A client may be built assuming it will receive a fixed-size response. If the implementation changes, client applications may be at risk. When implementing web services, then, it is critical to establish a set of data boundaries and to ensure those boundaries remain consistent even after plumbing changes in the underlying code.
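As a minimal sketch of such a boundary check, assume a hypothetical Python service handler that receives the Customer Surname field before passing it to the legacy back end; the names and the limit below are illustrative rather than taken from any specific product.

MAX_SURNAME_LENGTH = 20  # the limit the legacy application assumes

def validate_customer_surname(value):
    # Enforce the implicit contract at the service boundary so oversized or
    # malformed input never reaches code that assumes a 20-character field.
    if not isinstance(value, str):
        raise ValueError("Customer Surname must be a string")
    if not 1 <= len(value) <= MAX_SURNAME_LENGTH:
        raise ValueError("Customer Surname must be 1-%d characters long"
                         % MAX_SURNAME_LENGTH)
    return value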
Walk Down the Right Paths
When moving from a controlled client to a web services approach one faces the possibility that functions may be called out of sequence. Think about a simple every-day process like making a cup of coffee and imagine if we had software functions for the key activities:
Executed in this order, we get a decent cup of coffee. The process has some tolerance for failure too in that the order of pour_coffee() and pour_cream() arent particularly important but the fact that they both precede stir() is.
While existing software might have forced users to do these steps in order, slapping a web services front-end onto the application means careless users or attackers might be able to execute them out of order, because they may be able to access these functions directly, with unspecified results.
If you carry this analogy to financial transactions, there may be some disastrous implications of allowing activities to be done out of sequence.
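One illustrative way to guard against out-of-order calls is to track state on the server side. The toy Python sketch below is built around the coffee functions named above and is not taken from any real web services framework.

class CoffeeSession(object):
    # Track which steps have already run so stir() can refuse to execute
    # before both pours have happened.
    def __init__(self):
        self.poured_coffee = False
        self.poured_cream = False

    def pour_coffee(self):
        self.poured_coffee = True

    def pour_cream(self):
        self.poured_cream = True

    def stir(self):
        if not (self.poured_coffee and self.poured_cream):
            raise RuntimeError("stir() called before both pours completed")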
One of the biggest mistakes when rolling out web services is failing to look at how the system will be deployed. A common pitfall is to assume your application, transactions, and data will be protected by existing enterprise defenses.
An interesting property of web services transactions is data often flies by such defenses without any real inspection. Given that transactions happen as globs of XML, network defenses lack the contextual information to determine if a specific message is hostile or contains data that can potentially cause the application to fail.
Another common snag is failing to test the security of both the client and the server. One must always protect the server from malicious data being sent from a user but it is equally important to make sure client applications are robust and can handle responses to web services requests that may be generated by an attacker impersonating the server.
Web services continue to change the way applications interact. The approach offers a solution to the problem of having applications communicate without needing to consider their implementation.
Like any new technology, it brings benefit as well as risk to the IT environment.
As more information flows through HTTP, we need to consider not just the code that's written but how that shift in communications conflicts with or negates network defenses that may already be in place.
As with any new technology trend it is important to remember the law of attacker economics: The more a technology gets used, the more it will be attacked. If built with security in mind, though, SOA has the ability to reduce risk by shielding an application instead of simply exposing it to would-be attackers.
Dr. Herbert Thompson is chief security strategist at Security Innovation. | <urn:uuid:7d3678a7-1977-4d8d-97dc-8c4a0bc6f7e6> | CC-MAIN-2017-04 | http://www.cioupdate.com/print/trends/article.php/11047_3629386_2/The-Hidden-Dangers-of-Web-Services.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00025-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942007 | 1,098 | 2.78125 | 3 |
When I was a young programmer, there existed a group of people known as Operators.... They were responsible for keeping the OS updated, monitoring the system for errors, printing
Many examples/tips to make programming on the IBM system i much more enjoyable. These tips written in RPGLE, SQLRPGLE, CLLE are free to use and modify as see fit.
Various examples of using SQL on IBM system i - DB2 tables. This section includes both stand alone examples and embedded within SQLRPGLE & CLLE using QSHELL.
Subfiles are specified in DDS for a display file to allow you to handle multiple records of the same type on the display. A subfile is a group of records read/written to a display.
This section provides introductory, conceptual, and guidance information about how to use OS/400 application programming interfaces (APIs) with your application programs.
Data queues are a type of system object (type *DTAQ) that you can create and maintain using OS/400 commands and APIs. They provide a means of fast asynchronous
The integrated file system is a part of OS/400© that lets you support stream input/output and storage management similar to personal computer and UNIX© operating systems
Qshell is a command environment based on POSIX and X/Open standards made up of the shell interpreter (or qsh) and QSHELL utilities (or commands)
This program SQLRPGLE searches all tables in a library for a selected field. It writes these fields to a table
This example program is a very basic project tracking application. I think the most useful part of this example is
We have put together a program that will enable you to send iSeries reports to anyone you wish via email.
Add or subtract a duration in years, months, days, hours, minutes, seconds, or microseconds Determine the duration between two dates,
This program displays a screen, gets input then submits itself to batch passing paramaters to print a report.
We have been collecting your city and zipcodes and are now ready to start posting the compiled lists. Let us | <urn:uuid:3da96164-bbad-4b03-ae49-f0bb991893fd> | CC-MAIN-2017-04 | http://www.code400.com/mylinks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00511-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.901533 | 433 | 2.609375 | 3 |
Scientists in Europe say they have developed a cloud service that lets intelligent robots dial in to get help with circumstances they may have not encountered before or a problem they cannot solve.
Developed by the European RoboEarth project, the Rapyuta database is a repository of information, stored in a standardized language, that robots can access for information and use to offload complicated computations that may take more memory than an individual robot can handle, the RoboEarth outfit says.
[RELATED: 11 cool robots you may not have heard of]
[MORE: The year in madly cool robots]
The database will not only let robots look up new information but its creators say it will make robots cheaper by not requiring them to store large amounts of data on their own systems, reducing memory requirements. The system could also allow the development of robotic teams to address a problem or handle a new activity.
"Rapyuta helps robots to offload heavy computation by providing secured customizable computing environments in the cloud. Robots can start their own computational environment, launch any computational node uploaded by the developer, and communicate with the launched nodes using the WebSockets protocol," according to RoboEarth. As wireless data speeds increase more and more robotic thinking could be offloaded to the Web, researchers stated.
"The system could be particularly useful for drones, self-driving cars or other mobile robots who have to do a lot of number crunching just to get round," Mohanarajah Gajamohan, technical head of the project at the Swiss Federal Institute of Technology in Zurich told the BBC in a report on the database.
The name Rapyuta is inspired by the movie "Tenku no Shiro Rapyuta" (Castle in the Sky), where Rapyuta is the castle in the sky inhabited by robots.
Check out these other hot stories: | <urn:uuid:71be77c1-b680-4215-abfa-b38339d21a1e> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2224245/wireless/robots-get-an-open-source-web-based-helpline.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00355-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935542 | 376 | 3.09375 | 3 |
As NASA's Voyager 1 spacecraft travels outside the solar system, scientists hope to learn about the forces pushing on the "bubble" around the sun and how interstellar radiation could affect future space exploration.
"There's never been anything like this," said Ed Stone, chief scientist for the Voyager mission. "Nothing has ever been outside the solar bubble before. Nothing."
Stone is hardly a newcomer to the Voyager mission. He was the chief scientist on the project during Voyager's planning stages in the early 1970s.
Not only did Voyager 1 make history by becoming the first human-made spacecraft to journey beyond the solar system, but it did so on 36- to 40-year-old technology, Stone told Computerworld on Friday.
"This was one of our long-range hopes," Stone said. "We had no way to know at the time that this was possible because we didn't know how far away the edge of the bubble was. When Voyager was launched, the space age was only 20 years old. Most things didn't even last a few years back then. We had no idea if Voyager could last for 36 years and go as far as it has."
"This whole mission has been a major part of my life," Stone said. "I've been so fortunate to be part of this historic journey. This is the first spacecraft to sail in the cosmic sea between the stars."
Voyager 1 was launched in 1977 with its twin spacecraft, Voyager 2. On Thursday, NASA announced what had already been suspected -- that the spacecraft had left the solar system and had entered interstellar space in August 2012. The probe has journeyed between 14 billion and 15 billion miles.
"The Voyager team needed time to analyze those observations and make sense of them," Stone said during a press conference Thursday. "But we can now answer the question we've all been asking -- 'Are we there yet?' Yes, we are."
Stone explained that it took scientists months to figure out whether Voyager 1 had left the solar system because the instrument that Voyager used to measure plasma, an ionic gas, stopped working in 1980. Plasma is different depending on whether it is inside or outside the heliosphere, which is like a bubble that surrounds the sun. Without that measurement tool, scientists had to analyze plasma waves, which was a more time-consuming process.
Now that Voyager 1 is outside of the heliosphere, scientists will study, for the first time, galactic cosmic rays, interstellar winds and the movement of the heliosphere.
"For the first time, we're seeing radiation from outside the solar system," Stone said. "We're observing the intensity of radiation outside the bubble. The bubble kind of protects us. It's charged particles and doesn't let the outside radiation in... We will see how our star, within its sphere, is interacting with what's around it."
The interstellar radiation and winds constantly put differing amounts of pressure on the outside of the heliosphere. If that pressure grows, how does it affect the size and shape of the heliosphere, and how does the heliosphere keep out that added interstellar radiation?
Those are questions that scientists want to answer, Stone said. The answers will also affect future deep space travel.
The Earth's atmosphere and magnetic field would protect the planet from any extra interstellar radiation. However, the planet Mars, asteroids or distant moons around other planets would not be protected.
That means any robotic spacecraft or rovers, along with any spacecraft carrying astronauts, would be affected by increased levels of radiation if they were traveling through deep space.
"This would affect any kind of flight outside the Earth's magnetic field," said Stone. "It's very important to know how intense this interstellar radiation is... This is a long-term issue."
NASA scientists also are looking forward to the day when Voyager 2 also leaves the solar system and enters interstellar space. Stone said he expects that will happen in three to four years.
Voyager 2 also has a working plasma measurement instrument.
Having both probes past the heliosphere would give scientists two different sets of data, and a more complex image of space, to study.
"It's a whole new journey of exploration," Stone said. "It's the first journey between the stars. It's like sailing on the ocean for the first time after leaving land. We're out in this cosmic sea. Most of the universe, by the way, is this kind of interstellar stuff. This will give us information about most of the volume of the Milky Way."
This article, NASA's Voyager will teach us about future deep space missions, was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is email@example.com. | <urn:uuid:22cd55bb-a870-4350-93b6-6e37783f46e1> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2485068/emerging-technology/nasa-s-voyager-to-prepare-us-for-future-deep-space-missions.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00263-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967714 | 1,014 | 3.734375 | 4 |
GCN LAB IMPRESSIONS
IBM's 'racetrack' could change the world of computer memory
- By Greg Crowe
- Dec 08, 2011
Earlier this week IBM scientists had a sort of mini-expo at the Institute of Electrical and Electronics Engineers International Electron Devices Meeting. Among the stuff they demonstrated was their progress toward a new kind of memory that is both higher in capacity and faster than anything currently on the market.
Originally announced as a practical possibility in 2008, this “racetrack” architecture, once perfected, could change the computer industry for the better, which is, I suppose, a scientist’s job, after all. Ultimately, the work could lead to data-centric computing that would allow access to massive amounts of stored information just like that – in less than a nanosecond, IBM says.
This type of memory uses nanowires made of a nickel-iron alloy, which are not made the way larger wires would be, by drawing hot metal through smaller and smaller dies. These are far too small for that – about 20 nanometers thick.
Wires this small are actually made the same way integrated circuits are made – by depositing a layer of metal onto a silicon wafer, and etching away metal until only the wires are left. This is kind of like Michelangelo’s methods, but with microscopic metal instead of marble.
Pulses of electrons can be delivered through the wire by a write device to make electromagnetic “stripes,” which represent the data. When they are spinning one way, the resulting magnetic field is treated as a “1,” and when it is going the other way, it is a “0.” A current moves the stripes along the wire so a read device can read the stripes in succession, hence the racetrack moniker.
Data can be written to and read from each stripe in less than a nanosecond – a billionth of a second – and since the wires are so small, you can have quite a lot of them on one chip.
The only limitation is how many of these stripes they’ll be able to cram on to each wire. For the recent demonstration, they had exactly one on each, but that was sufficient to demonstrate that the concept was working. Now it’s just a question of finding the metal alloy that has the right magnetic properties to maximize stripe capacity.
So, someday soon, we'll be using computers with huge amounts of really fast memory. Maybe they'll also be powered by graphene-sheet batteries so we can really feel like we're in the future. Of course, by then, we will be.
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:00f6e2ba-3a5d-4687-ad1d-cfa6938c1846> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/12/08/ibm-racetrack-memory-data-in-nanosecond.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00015-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965561 | 578 | 3.453125 | 3 |
With six fatal attacks in the past two years, Australia has a bigger shark problem than any other country. The latest effort to reduce that rate involves sharks tweeting their whereabouts.
As reported by NPR, a project undertaken by the Australian government has tagged 338 sharks with acoustic transmitters that send a message to a computer system which automatically tweets when the sharks wearing them swim within a half mile of populated beaches. Those who follow the Twitter account Surf Life Saving WA can receive updates on nearby sharks that may be a threat, including details like the shark's breed and size.
Fisheries advise: tagged Tiger shark detected at 2km off Scarborough receiver at 09:41:00 PM on 2-Jan-2014— Surf Life Saving WA (@SLSWA) January 2, 2014
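The detection-to-tweet pipeline behind such messages is easy to picture; the Python sketch below is purely illustrative, with hypothetical object fields and a hypothetical post_tweet() helper rather than the real Surf Life Saving WA system.

ALERT_RADIUS_MILES = 0.5  # tweet when a tagged shark is this close to a beach

def on_detection(shark, receiver, post_tweet):
    # Called whenever an acoustic receiver picks up a tagged shark's signal.
    if receiver.distance_to_beach_miles <= ALERT_RADIUS_MILES:
        post_tweet("Fisheries advise: tagged %s shark (%.1f m) detected near %s"
                   % (shark.species, shark.length_m, receiver.beach_name))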
Although she warns beachgoers that not all sharks in the area are tagged, shark expert Alison Kock told NPR that the geotagging program "is exactly what we need more of when it comes to finding solutions to human-wildlife conflict."
However, others have questioned whether the tag-and-tweet system will actually improve safety, NPR reported.
"It can, in fact, provide a false sense of security — that is, if there is no tweet, then there is no danger — and that simply is not a reasonable interpretation," Kim Holland, a marine biologist who leads shark research at the University of Hawaii, told NPR. | <urn:uuid:2c5f7c66-18e9-40f5-ab86-06ee97288740> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2226059/opensource-subnet/sharks-are-using-twitter.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00529-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939371 | 286 | 3.0625 | 3 |
Orjuela L.I.,National University of Colombia |
Orjuela L.I.,Grupo Of Entomologia Instituto Nacional Of Salud |
Ahumada M.L.,Grupo Of Entomologia Instituto Nacional Of Salud |
Avila I.,Instituto Departamental Of Salud Of Narino |
And 3 more authors.
Malaria Journal | Year: 2015
Background: Anopheles calderoni was first recognized in Colombia in 2010 as this species had been misidentified as Anopheles punctimacula due to morphological similarities. An. calderoni is considered a malaria vector in Peru and has been found naturally infected with Plasmodium falciparum in Colombia. However, its biting behaviour, population dynamics and epidemiological importance have not been well described for Colombia. Methods: To assess the contribution of An. calderoni to malaria transmission and its human biting behaviour and spatial/temporal distribution in the southwest of Colombia, human landing catches (HLC) and larval collections were carried out in a cross-sectional, entomological study in 22 localities between 2011 and 2012, and a longitudinal study was performed in the Boca de Prieta locality in Olaya Herrera municipality between July 2012 and June 2013. All mosquitoes determined as An. calderoni were tested by ELISA to establish infection with Plasmodium spp. Results: Larvae of An. calderoni were found in four localities in 12 out of 244 breeding sites inspected. An. calderoni adults were collected in 14 out of 22 localities during the cross-sectional study and represented 41.3% (459 of 1,111) of the collected adult specimens. Other species found were Anopheles albimanus (54.7%), Anopheles apicimacula (2.1%), Anopheles neivai (1.7%), and Anopheles argyritarsis (0.2%). In the localities that reported the highest malaria Annual Parasite Index (>10/1,000 inhabitants) during the year of sampling, An. calderoni was the predominant species (>90% of the specimens collected). In the longitudinal study, 1,528 An. calderoni were collected by HLC with highest biting rates in February, May and June 2013, periods of high precipitation. In general, the species showed a preference to bite outdoors (p < 0.001). In Boca de Prieta, two specimens of An. calderoni were ELISA positive for Plasmodium circumsporozoite protein: one for P. falciparum and one for Plasmodium vivax VK-210. This represents an overall sporozoite rate of 0.1% and an annual entomological inoculation rate of 2.84 infective bites/human/year. Conclusions: This study shows that An. calderoni is a primary malaria vector in the southwest of Colombia. Its observed preference for outdoor biting is a major challenge for malaria control. © 2015 Orjuela et al. Source
Rodriguez J.C.P.,Ministry of Social Protection |
Uribe G.A.,Ministry of Social Protection |
Araujo R.M.,Pan American Health Organization |
Narvaez P.C.,National Institute of Health |
And 2 more authors.
Memorias do Instituto Oswaldo Cruz | Year: 2011
Malaria is currently one of the most serious public health problems in Colombia with an endemic/epidemic transmission pattern that has maintained endemic levels and an average of 105,000 annual clinical cases being reported over the last five years. Plasmodium vivax accounts for approximately 70% of reported cases with the remainder attributed almost exclusively to Plasmodium falciparum. A limited number of severe and complicated cases have resulted in mortality, which is a downward trend that has been maintained over the last few years. More than 90% of the malaria cases in Colombia are confined to 70 municipalities (about 7% of the total municipalities of Colombia), with high predominance (85%) in rural areas. The purpose of this paper is to review the progress of malaria-eradication activities and control measures over the past century within the eco-epidemiologic context of malaria transmission together with official consolidated morbidity and mortality reports. This review may contribute to the formulation of new antimalarial strategies and policies intended to achieve malaria elimination/eradication in Colombia and in the region. Source
Chaparro P.,National Institute of Health of Colombia |
Chaparro P.,National University of Colombia |
Padilla J.,Ministry of Health and Social Protection of Colombia |
Vallejo A.F.,Caucaseco Scientific Research Center |
And 3 more authors.
Malaria Journal | Year: 2013
Background: Although malaria has presented a significant reduction in morbidity and mortality worldwide during the last decade, it remains a serious global public health problem. In Colombia, during this period, many factors have contributed to sustained disease transmission, with significant fluctuations in an overall downward trend in the number of reported malaria cases. Despite its epidemiological importance, few studies have used surveillance data to describe the malaria situation in Colombia. This study aims to describe the characteristics of malaria cases reported during 2010 to the Public Health Surveillance System (SIVIGILA) of the National Institute of Health (INS) of Colombia. Methods. A descriptive study was conducted using malaria information from SIVIGILA 2010. Cases, frequencies, proportions, ratio and measures of central tendency and data dispersion were calculated. In addition, the annual parasite index (API) and the differences between the variables reported in 2009 and 2010 were estimated. Results: A total of 117,108 cases were recorded by SIVIGILA in 2010 for a national API of 10.5/1,000 habitants, with a greater number of cases occurring during the first half of the year. More than 90% of cases were reported in seven departments (=states): Antioquia: 46,476 (39.7%); Chocó: 22,493 (19.2%); Cordoba: 20,182 (17.2%); Valle: 6,360 (5.4%); Guaviare: 5,876 (5.0%); Nariño: 4,085 (3.5%); and Bolivar: 3,590 (3.1%). Plasmodium vivax represented ∼71% of the cases; Plasmodium falciparum ∼28%; and few infrequent cases caused by Plasmodium malariae. Conclusions: Overall, a greater incidence was found in men (65%) than in women (35%). Although about a third of cases occurred in children <15 years, most of these cases occurred in children >5 years of age. The ethnic distribution indicated that about 68% of the cases occurred in mestizos and whites, followed by 23% in Afro-descendants, and the remainder (9%) in indigenous communities. In over half of the cases, consultation occurred early, with 623 complicated and 23 fatal cases. However, the overall incidence increased, corresponding to an epidemic burst and indicating the need to strengthen prevention and control activities as well as surveillance to reduce the risk of outbreaks and the consequent economic and social impact. © 2013 Chaparro et al.; licensee BioMed Central Ltd. Source
Herrera S., Caucaseco Scientific Research Center
Herrera S., Malaria Vaccine and Drug Development Center
Ochoa-Orozco S.A., Caucaseco Scientific Research Center
Ochoa-Orozco S.A., Malaria Vaccine and Drug Development Center
And 5 more authors.
PLoS Neglected Tropical Diseases | Year: 2015
Malaria remains endemic in 21 countries of the American continent, with an estimated 427,000 cases per year. Approximately 10% of these occur in the Mesoamerican and Caribbean regions. During the last decade, malaria transmission in Mesoamerica showed a decrease of ~85%, whereas in the Caribbean region, Hispaniola (comprising the Dominican Republic [DR] and Haiti) presented an overall rise in malaria transmission, primarily due to a steady increase in Haiti, while the DR experienced a significant decrease in transmission over this period. The significant malaria reduction observed recently in the region prompted the launch of an initiative for Malaria Elimination in Mesoamerica and Hispaniola (EMMIE) with the active involvement of the National Malaria Control Programs (NMCPs) of nine countries, the Regional Coordination Mechanism (RCM) for Mesoamerica, and the Council of Health Ministries of Central America and the Dominican Republic (COMISCA). The EMMIE initiative is supported by the Global Fund to Fight AIDS, Tuberculosis and Malaria (GFATM), with the active participation of multiple partners, including Ministries of Health, bilateral and multilateral agencies, and research centers. EMMIE’s main goal is to achieve elimination of malaria transmission in the region by 2020. Here we discuss the prospects, challenges, and research needs associated with this initiative that, if successful, could represent a paradigm for other malaria-affected regions. © 2015 Herrera et al.
Vallejo A.F., Malaria Vaccine and Drug Development Center
Garcia J., Malaria Vaccine and Drug Development Center
Amado-Garavito A.B., Malaria Vaccine and Drug Development Center
Arevalo-Herrera M., Caucaseco Scientific Research Center
And 3 more authors.
Malaria Journal | Year: 2016
Background: The use of molecular techniques has put the spotlight on the existence of a large mass of sub-microscopic malaria infections among apparently healthy populations. These sub-microscopic infections are considered an important pool for sustained malaria transmission. Methods: In order to assess the appearance of Plasmodium vivax gametocytes in circulation, gametocyte density, and the parasite's infectivity to Anopheles mosquitoes, a study was designed to compare three groups of volunteers: individuals experimentally infected with P. vivax sporozoites (early infections; n = 16) and naturally infected patients (acute malaria, n = 16; asymptomatic, n = 14). In order to determine gametocyte stage, a quantitative reverse transcriptase PCR (RT-qPCR) assay targeting two sexual stage-specific molecular markers was used. Parasite infectivity was assessed by membrane feeding assays (MFA). Results: In early infections, P. vivax gametocytes could be detected starting at day 7 without giving rise to infected mosquitoes during 13 days of follow-up. Asymptomatic carriers, with presumably long-lasting infections, presented the highest proportion of mature gametocytes and were as infective as acute patients. Conclusions: This study shows that the potential role of P. vivax asymptomatic carriers in malaria transmission should be considered when new policies are envisioned to redirect malaria control strategies towards targeting asymptomatic infections as a tool for malaria elimination. © 2016 Vallejo et al.
8 Principles of Better Unit Testing
Writing good, robust unit tests is not hard -- it just takes a little practice. These pointers will help you write better unit tests.
By Dror Helper
Writing unit tests should be easy for software developers – after all, writing tests is just like writing production code. However, this is not always the case. The rules that apply for writing good production code do not always apply to creating a good unit test.
Not many software professionals recognize that they need to follow different rules for writing unit tests, and so software developers continue to write bad unit tests, following best practices for writing production code that are not appropriate for writing unit tests.
What makes a good unit test?
Unit tests are short, quick, automated tests that make sure a specific part of your program works. They test specific functionality of a method or class and have a clear pass/fail condition. By writing unit tests, developers can make sure their code works before passing it to QA for further testing.
For example, the following unit test checks for a valid user and password when the method CheckPassword returns true:
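(A minimal sketch in C#, assuming an NUnit-style test framework; the LoginManager class here is a hypothetical stand-in for whatever production class is under test.)

```csharp
using NUnit.Framework;

// Hypothetical class under test, defined inline to keep the example self-contained.
public class LoginManager
{
    public bool CheckPassword(string user, string password) =>
        user == "admin" && password == "admin123";
}

[TestFixture]
public class LoginManagerTests
{
    [Test]
    public void CheckPassword_ValidUserAndPassword_ReturnsTrue()
    {
        var loginManager = new LoginManager();

        bool result = loginManager.CheckPassword("admin", "admin123");

        Assert.IsTrue(result);
    }
}
```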
In other words, a unit test is just a method written in code.
A "good" unit test follows these rules:
- The test only fails when a new bug is introduced into the system or requirements change
- When the test fails, it is easy to understand the reason for the failure.
To write good unit tests, the developer that writes the tests needs to follow these guidelines:
Guideline #1: Know what you're testing
Although this seems like a trivial guideline, it is not always easy to follow.
A test written without a clear objective in mind is easy to spot. This type of test is long, hard to understand, and usually tests more than one thing.
There is nothing wrong with testing every aspect of a specific scenario/object. The problem is that developers tend to gather several such tests into a single method, creating a very complex and fragile “unit test.” For example:
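(A hypothetical sketch of such an overloaded test, reusing the LoginManager class from the sketch above: three unrelated behaviors are verified in one method.)

```csharp
using NUnit.Framework;

[TestFixture]
public class LoginManagerOverloadedTest
{
    [Test]
    public void TestLoginManager()
    {
        var loginManager = new LoginManager();

        // Valid credentials, a wrong password and an unknown user are all
        // checked in a single test: when it goes red, which behavior broke?
        Assert.IsTrue(loginManager.CheckPassword("admin", "admin123"));
        Assert.IsFalse(loginManager.CheckPassword("admin", "wrong"));
        Assert.IsFalse(loginManager.CheckPassword("nosuchuser", "admin123"));
    }
}
```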
One trick is to use the scenario tested and expected result as part of the test method name. When a developer has a problem naming a test, that means the test lacks focus.
Testing only one thing creates a more readable test. When a simple test fails, it is easier to find the cause and fix it than to do so with a long and complex test.
The example above is actually three different tests. Once we define the objective of each test, it is easy to split the code tested:
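(One possible split, continuing the hypothetical LoginManager example; each test now names its scenario and expected result, so a red bar points straight at the broken behavior.)

```csharp
using NUnit.Framework;

[TestFixture]
public class LoginManagerFocusedTests
{
    [Test]
    public void CheckPassword_ValidCredentials_ReturnsTrue()
    {
        Assert.IsTrue(new LoginManager().CheckPassword("admin", "admin123"));
    }

    [Test]
    public void CheckPassword_WrongPassword_ReturnsFalse()
    {
        Assert.IsFalse(new LoginManager().CheckPassword("admin", "wrong"));
    }

    [Test]
    public void CheckPassword_UnknownUser_ReturnsFalse()
    {
        Assert.IsFalse(new LoginManager().CheckPassword("nosuchuser", "admin123"));
    }
}
```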
Guideline #2: Unit tests should be self-sufficient
A good unit test should be isolated. Avoid dependencies such as environment settings, registry values, or databases. A single test should not depend on running other tests before it, nor should it be affected by the order of execution of other tests. Running the same unit test 1,000 times should return the same result every time.
Using global states such as static variables, external data (e.g. registry, database), or environment settings may cause "leaks" between tests. Make sure to properly initialize and clean up each of the "global states" between test runs, or avoid using them completely.
Guideline #3: Tests should be deterministic
The worst test is the one that passes some of the time. A test should either pass all the time or fail until fixed. Having a unit test that passes some of the time is equivalent to not having a test at all.
For example, the following test passes most of the time:
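(A hypothetical sketch of such a test: the OrderProcessor class and its timings are invented, and the assertion depends on how fast and how busy the machine happens to be rather than on whether the code is correct.)

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using NUnit.Framework;

// Stand-in for real work whose duration varies with machine speed and load.
public class OrderProcessor
{
    public void ProcessOrder() => Thread.Sleep(new Random().Next(100, 900));
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void ProcessOrder_CompletesWithinHalfASecond()
    {
        var stopwatch = Stopwatch.StartNew();

        new OrderProcessor().ProcessOrder();

        stopwatch.Stop();
        // Passes on a fast, idle machine; fails on a slow or heavily loaded one.
        Assert.IsTrue(stopwatch.ElapsedMilliseconds < 500);
    }
}
```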
The test above can fail when running on a slow computer and pass later on another machine. A development team "learns" to ignore such failures, rendering the test ineffective. A non-deterministic test is of little value because, when it fails, there is no definitive indication that there is a bug in the code.
Another “practice” that must be avoided is writing tests with random input. Using randomized data in a unit test introduces uncertainty. When that test fails, it is impossible to reproduce because the test data changes each time it runs.
Guideline #4: Naming conventions
To know why a test failed, we need to be able to understand it at a glance. The first thing that you notice about a failed test is its name -- the test method name is very important. When a well-named test fails, it is easier to understand what was tested and why it failed.
For example, when testing a calculator class that can divide two numbers, there are several options.
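(A sketch with a hypothetical Calculator class: the first name says nothing when the test goes red, while the second encodes the scenario and the expected result.)

```csharp
using System;
using NUnit.Framework;

// Hypothetical class under test.
public class Calculator
{
    public int Divide(int dividend, int divisor) => dividend / divisor;
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void DivideTest1()   // unclear: which scenario, which expectation?
    {
        Assert.Throws<DivideByZeroException>(() => new Calculator().Divide(10, 0));
    }

    [Test]
    public void Divide_DivisorIsZero_ThrowsDivideByZeroException()   // reads like a specification
    {
        Assert.Throws<DivideByZeroException>(() => new Calculator().Divide(10, 0));
    }
}
```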
Guideline #5: Do repeat yourself
One of the first lessons I learned in Computer Science 101 is that writing the same code twice is bad. In production code, you should avoid duplication because it causes maintainability issues. Readability is very important in unit testing, so it is acceptable to have duplicate code. Avoiding duplication in tests creates tests that are difficult to read and understand:
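(A hypothetical illustration, reusing the LoginManager sketch: every duplicated line has been factored into a data table and a loop, and the result is harder to read and harder to diagnose when a single row fails.)

```csharp
using NUnit.Framework;

[TestFixture]
public class LoginManagerDataDrivenTest
{
    [Test]
    public void CheckPassword_AllScenarios()
    {
        var scenarios = new[]
        {
            new { User = "admin",      Password = "admin123", Expected = true  },
            new { User = "admin",      Password = "wrong",    Expected = false },
            new { User = "nosuchuser", Password = "admin123", Expected = false },
        };

        var loginManager = new LoginManager();
        foreach (var s in scenarios)
        {
            // When this assertion fails, neither the test name nor the failure
            // message says which scenario broke or why that scenario matters.
            Assert.AreEqual(s.Expected, loginManager.CheckPassword(s.User, s.Password));
        }
    }
}
```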
In other words, having to change 4-5 similar tests is preferable to not understanding one non-duplicated test when it fails. Eliminating duplication is usually a good thing -- as long as it does not obscure anything. Object creation can be refactored into factory methods, and custom assertions can be created to check complex objects -- as long as the test's readability does not suffer.
Guideline #6: Test results, not implementation
Successful unit testing requires writing tests that fail only in the case of an actual error or a requirement change. There are a few rules that help avoid writing fragile unit tests: tests that fail because of an internal change in the software that does not affect the user.
Since unit tests are usually written by the same developer who wrote the code and knows how the solution was implemented, it is difficult not to test the inner workings of how a feature was implemented. The problem is that the implementation tends to change, and then the test will fail even if the result is the same.
Another issue arises when testing internal/private methods and objects. There is a reason these methods are private: they are not meant to be "seen" outside of the class and are part of its internal mechanics. Only test private methods if you have a very good reason to do so. Trivial refactoring can cause compilation errors and failures in such tests.
Guideline #7: Avoid overspecification
It is tempting to create a well-defined, controlled, and strict test that observes the exact process flow by setting up every single object and checking every single aspect of the scenario being tested. The problem is that this "locks" the scenario under test, preventing it from changing in ways that do not affect the result.
For example, try to avoid writing a test that expects a certain method to be called exactly three times. There are reasons for writing very precise tests, but usually such micromanagement of test execution will only lead to a very fragile test. Use an Isolation framework to set default behavior of external objects and make sure that it is not set to throw an exception if an unexpected method was called. This option is usually referred to as "strict" by several Isolation frameworks.
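As a rough illustration of the difference, here is a sketch using the Moq library (chosen purely as an example; the article does not prescribe a particular framework, and the IMailSender and RegistrationService types are hypothetical):

```csharp
using Moq;
using NUnit.Framework;

public interface IMailSender { void Send(string to, string subject); }

public class RegistrationService
{
    private readonly IMailSender _mailer;
    public RegistrationService(IMailSender mailer) { _mailer = mailer; }
    public void Register(string email) => _mailer.Send(email, "Welcome");
}

[TestFixture]
public class RegistrationServiceTests
{
    [Test]
    public void Register_SendsWelcomeMail()
    {
        // Default (loose) mock: calls that were not explicitly set up are ignored.
        var mailer = new Mock<IMailSender>();

        new RegistrationService(mailer.Object).Register("user@example.com");

        // Assert on the outcome that matters. An overspecified version, such as
        // Times.Exactly(3) or new Mock<IMailSender>(MockBehavior.Strict), would
        // break whenever the implementation changes in ways the user never sees.
        mailer.Verify(m => m.Send("user@example.com", It.IsAny<string>()), Times.AtLeastOnce());
    }
}
```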
Guideline #8: Use an Isolation framework
Writing good unit tests can be hard when the class under test has internal or external dependencies. In order to run a test, you may need a connection to a fully populated database or a remote server. In some cases, you may need to instantiate a complex class created by someone else.
These dependencies hinder the ability to write unit tests. When such dependencies need a complex setup for the automated test to run, the result is fragile tests that break, even if the code under test works perfectly.
A mocking framework (or Isolation framework) is a third-party library and a huge time saver. In fact, the savings in lines of code between using a mocking framework and writing hand-rolled mocks for the same code can go up to 90 percent. Instead of creating our fake objects by hand, we can use the framework to create them with only a few API calls. Each mocking framework has a set of APIs for creating and using fake objects without the user needing to maintain irrelevant details of the specific test. If a fake is created for a specific class, then when that class adds a new method, nothing needs to change in the test.
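A rough sketch of where the saving comes from, again using Moq only as an example and the hypothetical IMailSender interface from the previous sketch:

```csharp
using Moq;

// Hand-rolled fake: one more class to write, name and keep in sync with IMailSender forever.
public class FakeMailSender : IMailSender
{
    public string LastRecipient;
    public void Send(string to, string subject) => LastRecipient = to;
}

public static class FakeFactory
{
    // Framework-generated fake: a couple of API calls, and nothing extra to maintain
    // when IMailSender gains a new method.
    public static IMailSender CreateFrameworkFake() => new Mock<IMailSender>().Object;
}
```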
The Bottom Line
Writing good, robust unit tests is not hard. It just takes a little practice. This list is far from comprehensive, but it outlines a few key points that will help you write better unit tests. In addition, remember that if a specific test keeps failing, investigate the root cause, and find a better way to test that feature.
- - -
Dror Helper is a software architect at Better Place. He was previously a software developer at Typemock. You can contact the author at his blog, http://blog.drorhelper.com. | <urn:uuid:e4e141c4-8744-4777-b4a7-7ef3206dc315> | CC-MAIN-2017-04 | https://esj.com/articles/2012/09/24/better-unit-testing.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00557-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930824 | 1,842 | 3 | 3 |
National Guard Bureau
The National Guard Bureau (NGB) is the federal instrument responsible for the administration of the National Guard of the United States, established by the United States Congress as a joint bureau of the Department of the Army and the Department of the Air Force. NGB is a joint activity of the Department of Defense and acts as a conduit between the states and the Departments of the Army and Air Force. In this capacity, NGB administers policies and oversees federal funding for the National Guards of the states, territories, and the District of Columbia that affect the federal mission of the National Guard. The NGB is headed by a Chief, who is a full general and a member of either the Army or the Air National Guard.
Originally published February 6, 2008
The enormous interest in master data management (MDM) that has appeared in the past couple of years has not yet generated a great deal of methodological progress. Hopefully, as data professionals, consultants, and vendors grapple with the complex issues involved, the situation will improve. A central problem, however, is that there is little agreement about what master data is. It is usually defined by examples, like product, customer, or account, as if to say “I know it when I see it”. Alternatively, master data is defined using generalities such as that it is simply highly shared data, or that it is data used by an application, but which is not produced by the application.
Definitions do matter. They tell us something fundamental about what is being defined. In the case of master data, there is a special need for a greater understanding because MDM is still at an early level of maturity. For several years, I have been using an approach to categorizing data that provides a detailed definition of master data. I have found this approach useful in that it can be practically applied to master data management problems.
A fundamental question about data is whether it is homogenous. In other words, are the boxes we see in a data model, or the tables contained in a physical database, all the same in terms of their properties, behaviors, and management needs as data? The fact that we are even talking about master data management indicates that there are qualitative differences among entities (at the logical level) or tables (at the physical level). There is, in fact, strong evidence that we can categorize data within a taxonomy that recognizes the different roles that data plays in the operational transactions of the enterprise.
Figure 1 shows a taxonomy of data related to segregating the management needs of data from the perspective of the use of data in operational transactions. It divides data into six distinct categories.
Figure 1: The Six Layers of Data
The first category of data in this scheme is metadata. What is meant by this is the metadata that truly describes data. For a logical data model, this will be the descriptive information about entities, attributes, and relationships. For a physically implemented database, this will be information about tables and columns. The latter is found in the system catalog of a database, but it is increasingly being materialized as tables in databases too.
Metadata, as the term is used here, is important because it has semantic content that needs to be managed. Tables and columns have meanings. The metadata has to be ready before a database can be implemented and should remain unchanged for the lifespan of the database. If it has to change, there is likely to be significant impact. For instance, if the datatype of Customer Last Name has to be increased from Char(20) to Char(40), then many programs, screens, and reports will be affected.
Below metadata in the hierarchy shown in Figure 1 is reference data. “Reference data” is used to mean many things today, but in the sense used here, it describes what are usually termed “code tables”. These are also called “lookup tables” and “domain values”. Reference data tables usually consist of a code column and a description column. Typically, these tables have just a few rows in them. In general, the data in these tables changes infrequently. Because of this apparent structural simplicity, low volume, and slow rate of change, these tables get very little respect. However, they can represent anywhere from 20% to 50% of the tables in an implemented database. Also, although they receive little attention, IT professionals fear changing the values in them.
Reference data tables share something with metadata – their physical values have semantic content. For instance, a customer preferred status of “bronze” may mean that a customer with this status has 30 days to pay their bills and can only be extended $1,000 of credit. No other kind of data in a database has this property. The semantic property is why this data is used to drive business rules. If business rule logic refers to actual data values, it is a near certainty that these values will come from reference data tables. Reference data can be defined as follows:
Reference data is any kind of data that is used solely to categorize other data found in a database, or solely for relating data in a database to information beyond the boundaries of the enterprise.
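To make the business-rule point concrete, here is a short, purely illustrative sketch in C#; the "bronze" status, 30-day terms and $1,000 limit come from the example above, while the class and member names are hypothetical:

```csharp
// Hypothetical application code driven by a reference-data value.
public class Customer
{
    public string PreferredStatus;   // code value drawn from a reference-data (lookup) table
    public int PaymentTermsDays;
    public decimal CreditLimit;
}

public static class CreditRules
{
    public static void Apply(Customer customer)
    {
        // The rule refers directly to the code value: "bronze" carries meaning,
        // so changing or retiring that code silently breaks rules like this one.
        if (customer.PreferredStatus == "bronze")
        {
            customer.PaymentTermsDays = 30;
            customer.CreditLimit = 1000m;
        }
    }
}
```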
Next in the hierarchy of Figure 1 is enterprise structure data. This is data that allows us to report business activity by business responsibility. Examples are Chart of Accounts and Organization Structure. One of the main issues with this kind of data is managing hierarchies, which may be incomplete or “ragged”. Additionally, this category of data is notoriously difficult to manage when it comes to change. For instance, a product line may be reassigned from one line of business to another. Inevitably, historical reports have to be produced from the perspective of the product line being the responsibility of either line of business. One example would be the need to see the performance of the recently assigned line of business as if it had been responsible for the product line for the past 5 years.
Operational transactions always have parties to them. These are the things that have to be present for a transaction to occur, and are represented in Figure 1 by the transaction structure data layer. The most common entities given as examples of this category of data are product and customer. It can be defined as follows:
Transaction structure data is data that represents the direct participants in a transaction, and which must be present before a transaction executes.
Thus, we have to know something about a product and a customer before we can actually sell the product to the customer.
Transaction structure data typically consists of entities with large numbers of attributes, which makes them very easy to spot in data models. This class of data inevitably has problems of identity management. It is easy to appreciate for customers, whose names may be incorrectly captured or change. Yet even products can change their identifiers as they pass through their life cycle or are rebranded. Standardization of identity is extremely difficult to achieve for this class of data, even though it is the subject of many initiatives in this regard.
Another characteristic of transaction structure data is the fact that it is usually implemented as single tables that contain hidden subtypes. Certain columns in a product table, for example, will only apply to certain kinds of products, or to products at a certain point in their life cycle, or to some kind of externally imposed grouping such as dangerous products. Sorting out what columns are relevant to a particular product record is a difficult and frequently neglected MDM challenge.
Transaction activity data, the fifth layer in Figure 1, is the normal "event" data that we see in operational transactions in an enterprise. It has been the focus of IT from the early days of automation. Transaction audit data, the final layer in Figure 1, tracks the state changes in transaction activity data. It is what is usually found in transaction logs, although this kind of table is frequently seen in databases too.
At this point, a definition of master data can be provided. It is the aggregation of reference data, enterprise structure data, and transaction structure data. As has been shown, each of these is rather different in its properties, behaviors and management needs. However, they do form a group that is distinct from the other three layers in Figure 1.
Accepting that there are different kinds of data with different management needs is important. It means that “one-size-fits-all” approaches to MDM are likely to be unsatisfactory. It also means that the perspective that there is nothing special about master data, and that MDM is just the application of the same old data management techniques, is wrong. Both the “one-size-fits-all” and the “same-old, same-old” views still enjoy considerable acceptance. This is true even among MDM vendors and consultants, although, for obvious reasons, they tend to only express these views in private.
What the taxonomy in Figure 1 shows is that there really are different categories of data, and that it really does make sense to think of master data as different from other kinds of data and as having specific management requirements. The case for MDM is thus a genuine one.
Mass production of incredibly powerful quantum computers may be only 10 years away thanks to researchers at the University of New South Wales who have demonstrated a quantum bit based on the nucleus of a single atom in silicon.
The breakthrough is a significant step forward from the creation of the world’s first quantum bit in September last year.
UNSW professor Andrew Dzurak said last year, researchers wrote and read back quantum information on an electron that was bound to an atom.
“This year, we have drilled down inside the atom, writing and reading information on the nucleus of an atom, which is a million times smaller,” Dzurak said. “When we work with the nucleus, we have a more accurate quantum bit than we had in September last year.
“The previous quantum bit, although demonstrated, didn’t have the accuracy necessary to do reliable calculations; now we have a quantum bit that can do that.”
Dzurak said having more accurate quantum bits will enable scientists to “scale up” and make more viable quantum machines.
“We have moved to a more advanced level [in quantum computing], with a [quantum bit] that is hundreds of thousands of times more accurate than previously,” he said. “We achieved a read-out fidelity of 99.8 per cent, which sets a new benchmark for qubit accuracy in solid state devices.”
Dzurak said that quantum technology can be manufactured now, but commercial quantum-based machines are still 10 years away.
He compared the cycle of quantum computer development to the discovery of the first transistor in a silicon chip – which was first demonstrated in 1947 – and how it took “a couple of decades” before integrated circuits and modern computers were created.
He says scaling from one quantum computer to hundreds of thousands takes a “significant engineering life span.”
Quantum bits, or qubits, are the building blocks of quantum computers and offer enormous advantages for searching databases, breaking modern encryption and modelling “atomic-scale” systems such as biological molecules and drugs. These qubits are coupled together to create massive increases in computing power.
The new quantum process
The new discovery was published on Thursday in Nature and describes how information is stored and retrieved using the magnetic spin of a nucleus.
“We have adapted magnetic resonance technology, commonly known for its application in chemical analysis and MRI scans, to control and read-out the nuclear spin of a single atom in real-time,” said UNSW associate professor Andrea Morello.
According to the researchers, the nucleus of a phosphorus atom is an extremely weak magnet, which can point in two natural directions, either “up” or “down.” In the quantum world, the magnet can exist in both states simultaneously – a feature known as “quantum superposition.”
These natural positions are equivalent to the “zero” and “one” of a binary code, as used in existing classical computers, UNSW scientists said. In this experiment, the scientists controlled the direction of the nucleus, “writing” a value onto its spin and then “reading” that value out – turning the nucleus into a functioning qubit.
The accuracy of this qubit rivals what many consider to be today’s best quantum bit – a single atom in an electromagnetic trap inside a vacuum chamber, the researchers said.
“Our nuclear spin qubit operates at a similar level of accuracy but it’s not in a vacuum chamber – it’s in a silicon chip and can be wired up and operated electrically like normal integrated circuits,” said Morello.
“Silicon is the dominant material in the microelectronics industry, which means our qubit is more compatible with existing industry technology and is more easily scalable.” | <urn:uuid:84b035e2-0cc5-4796-9b12-17510b6701b4> | CC-MAIN-2017-04 | http://www.computerworld.com.au/article/459422/scientists_demonstrate_key_component_quantum_machine/?fp=16&fpid=1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00154-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937253 | 813 | 3.734375 | 4 |
Of a global population of 7.395 billion people, 3.419 billion are internet users and, of those, 2.307 billion communicate via social media (according to the study "Digital in 2016" by We Are Social). Since such a large portion of the world's population communicates over the internet, cyber security is a priority for users.
Adaptive Kernel Live Patching: An Open Collaborative Effort To Ameliorate Android N-Day Root Exploits reveals that the biggest threat to Android users comes from kernel vulnerabilities. It is common for underground businesses to use kernel vulnerabilities in their malware and APTs. It is extremely difficult to patch vulnerable devices at scale because a large number of vendors do not provide up-to-date kernel source code for all of their devices. This talk presents the adaptive Android live patching framework, which provides live patching for kernels and offers multiple advantages for developers.
Moving beyond the encryption discussion, into the social engineering realm, Exploiting Curiosity and Context: How to make People Click on a Dangerous Link Despite Their Security Awareness describes how hackers will attempt to retrieve your personal information by leading you to click on malware-infected links sent to your email or through social media. This presentation looks at experiments conducted to determine the reason online users clicked on a link from an unrecognizable source. Malicious links were sent to 1,700 university students via email or a Facebook message from an unknown sender, claiming the link led to pictures of a party from the previous week. Once the data was collected, participants were sent a survey to assess their overall security awareness and asked about their clicking behavior. This talk offers a deep look at the factors that can make almost anyone click on a dangerous link.
Another threat that can compromise online user security through digital interactions is spear phishing. Weaponizing Data Science For Social Engineering: Automated E2E Spear Phishing On Twitter informs attendees of the threats a neural network presents by tweeting phishing posts targeting specific users. Social media in general, especially Twitter, offers hackers an opportunity to access extensive user data. Twitter’s vulnerabilities consist of its bot-friendly API, colloquial syntax, and shortened link features that make the platform ideal for spreading malicious content. In order to generate appealing tweets for a specific user, the machine is trained to conduct spear phishing pen-testing to extract topics from the user's timeline and from the people the user follows or retweets. This talk discloses the dangers of spear phishing on Twitter.
Digital communication, while growing in popularity, has its tradeoffs in regards to security. If you're interested in gaining insight into how skilled hackers attempt to retrieve personal information online, check out Advanced Open Source Intelligence (OSINT) Techniques. This Training provides multiple free online resources that surpass typical searching restrictions, making it easy to dig for user information from any platform, including social channels. Participants will be able to recognize the strategies hackers potentially use to access their private information and in turn, understand how to enhance security for their own digital interactions.
Beyond just digital communications, Black Hat USA 2016 will cover the full spectrum of information security so be sure to check out all of our Briefings and Trainings and plan to join us at Black Hat USA 2016 July 30-August 4 at Mandalay Bay in Las Vegas, Nevada. | <urn:uuid:c016ffd8-7e54-45b3-8d08-4a494528e950> | CC-MAIN-2017-04 | https://www.blackhat.com/latestintel/06062016-digital-communication-security.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00154-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906379 | 683 | 2.59375 | 3 |
Signs of Groundwater on Mars?
January 22, 2013
NASA's Mars Reconnaissance Orbiter has provided new evidence of a wet underground environment on Mars, according to NASA.
Researchers analyzing spectrometer data from the orbiter, which looked down on the floor of the 57-mile-wide, 1.4-mile-deep McLaughlin Crater, think the crater once allowed underground water that would otherwise have stayed hidden to flow into the crater's interior.
"Layered, flat rocks at the bottom of the crater contain carbonate and clay minerals that form in the presence of water. McLaughlin lacks large inflow channels, and small channels originating within the crater wall end near a level that could have marked the surface of a lake," NASA reported.
The above photo of layered rocks on the floor of McLaughlin Crater, taken by the High Resolution Imaging Science Experiment camera on NASA's Mars Reconnaissance Orbiter, shows sedimentary rocks that contain spectroscopic evidence for minerals formed through interaction with water.
Image credit: NASA/JPL-Caltech/Univ. of Arizona | <urn:uuid:0963dd7f-75d9-4d2c-8b12-1f534dc5f58a> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-Signs-of-Groundwater-on-Mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00062-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913499 | 224 | 4.03125 | 4 |
Dr. Tanya Byron's report Safer Children in a Digital World, released Thursday, has concluded that a general lack of confidence and awareness among parents is leaving children vulnerable to risks.
Speaking this morning, UK Prime Minister Gordon Brown gave his backing to the report, saying that everything possible should be done to give parents and teachers the right information.
"If our children were leaving the house, or going to a swimming pool or going to play in the street, we would take all the care possible about their safety -- is there proper policing, is there proper safety?" said Brown. "When a child goes on to the computer and on to the Internet or on to a video game we should be thinking in the same way."
The independent report challenges industry to take greater responsibility in supporting families, with recommendations for improved access to parental control software and better regulation of online advertising. "It's really difficult for parents because we didn't grow up in the computer age," the Prime Minister said.
Byron described her recommendations to the Prime Minister as "pretty tough," adding that the issue of digital safety in the UK should be taken "really seriously."
"The Internet and video games are now very much a part of growing up and offer unprecedented opportunities to learn, develop and have fun," Byron said. "However, with new opportunities come potential risks. My recommendations will help children and young people make the most of what all digital and interactive technologies can offer, while enabling them and their parents to navigate all these new media waters safely and with the knowledge that more is being done by government and the Internet and video game industries to help and support them."
In order to improve children's online safety, Byron makes a number of groundbreaking recommendations, including:
She also recommends reforming the video game classification system with one set of symbols on the front of all boxes which are the same as those for film, and lowering the statutory requirement to classify video games to 12+, so that it is the same as British film classification and easier for parents to understand.
Brown believes Britain "can lead the world" in online safety. "Other countries have got the same problems and all of us as parents are worried about our children so let's see if we can make a difference in this," he said.
The Department for Children, Schools and Families and the Department for Culture, Media and Sport will now work together with other key departments, including the Home Office and the Department for Business, Enterprise and Regulatory Reform, to take forward Byron's recommendations.
View the report here. | <urn:uuid:694d2c61-a138-4052-be8c-cfa4c73e2217> | CC-MAIN-2017-04 | http://www.govtech.com/security/British-Prime-Minister-Backs-Internet-Safety-Report.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00062-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970617 | 515 | 2.578125 | 3 |
In this case, DERA has come up with a way to combat email-borne viruses. Realizing that you will not always be able to intercept and stop a virus as it makes its way into the network, you can try to prevent it from "germinating" and spreading to other systems. This is a good idea: defense in depth. If you don't catch incoming viruses, at least you will be less likely to propagate them to other systems. Unfortunately, the method suggested by DERA is extremely simplistic and will no doubt annoy users to no end.
DERA's idea is that when you send out an email you will receive a message asking you if you really want to send it. Thus, viruses that generate emails to spread themselves will be stopped because the user will realize they have not intended to send a message, and they will not click "OK."
The first problem, then, is the intrusiveness of that prompt. The second problem is that writing a virus with the ability to respond to this message will not be terribly difficult. As this software becomes more popular, virus writers will compensate for it by automatically replying to the messages. Of these two problems, I suspect the user interface will be the major downfall. Security measures almost never work if they are intrusive, because users will first try to circumvent them, and then they will loudly complain if they cannot. Also, these systems only work if the client uses the company mail servers. If someone has Outlook set up to use Hotmail as well, for example, the virus may successfully spread through that account.
There are also much better ways to address this problem that will not be as "in your face." For example, you could have the mail server hold all outgoing email for several seconds or minutes before sending it, apply rate limits on the amount of mail a user can send out, or flag messages and raise an alarm when a user sends too many that are identical or nearly identical. This is made easier by the fact that most viruses send themselves out as attachments, which makes them easier to spot. Hooking into your authentication system is another option: if a user is not logged in but their machine is trying to send out email, that is obviously suspicious activity. More intelligent approaches such as these, while harder to implement, are probably going to be more effective because they will not annoy users to the same degree.
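As a very rough sketch of that last flagging idea (the types and thresholds here are hypothetical, and this is not a drop-in filter), a mail relay could hash each outgoing message body per sender and raise an alarm once the same content shows up too many times:

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

public class OutgoingMailMonitor
{
    private readonly Dictionary<string, int> _countsBySenderAndBody = new Dictionary<string, int>();
    private const int AlarmThreshold = 20;   // identical messages per sender; tune for your site

    // Returns true when the message should be held for review instead of sent immediately.
    // A real filter would also age out old entries so legitimate bursts eventually clear.
    public bool ShouldFlag(string sender, string body)
    {
        string key = sender + ":" + Hash(body);
        _countsBySenderAndBody.TryGetValue(key, out int count);
        _countsBySenderAndBody[key] = ++count;
        return count > AlarmThreshold;
    }

    private static string Hash(string body)
    {
        using (var sha = SHA256.Create())
        {
            return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(body)));
        }
    }
}
```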
Of course, this all ignores many of the simple steps you can take to block the spread of these viruses. Simply blocking .vbs extensions at the mail server (both incoming and outgoing) will very quickly reduce your risk exposure by a significant degree. Firewalling outgoing connections to port 25 (SMTP, the mail transfer protocol) and forcing users to use the company's mail servers will at least ensure that their outgoing messages must pass through your filters, and you will have a log of them. For most UNIX systems, there are a number of free log monitoring utilities you can use to alert you if a user suddenly starts to send out a lot of email.
Remember, security doesn't have to reduce usability. | <urn:uuid:477d6de2-4cd4-46c9-a4a8-a93db1119033> | CC-MAIN-2017-04 | http://www.cioupdate.com/reports/article.php/772871/Security-Column-Managing-Outgoing-Viruses.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00548-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961246 | 621 | 2.59375 | 3 |
"Change we can believe in" was President Barack Obama's campaign slogan, and whether anyone believes in it, change is exactly what the U.S. is getting. One example is the federal government's policy on energy. The Obama administration crafted the comprehensive New Energy for America Plan, the centerpiece of which is putting 1 million electric vehicles on U.S. roads by 2015. But that aggressive plan raises a concern: Can the country's aging electric grid support these new plug-in hybrid electric and plug-in electric vehicles?
Experts would say yes. In fact, with the right technology, electric vehicles could do more to help the grid than harm it. Through vehicle-to-grid (V2G) technology, plug-in vehicles are capable of adding power capacity to the grid during high demand - known as peak shaving - and also storing renewable energy that can be returned to the grid during peak hours. V2G technology also may benefit consumers who could sell that excess power back to grid operators.
For V2G technology to work, however, plug-in electric vehicles must be grid-integrated. This would require car manufacturers to make vehicles with two-way connections that let them take energy from the grid for charging and give back excess power. They'll also need a control system that grants grid operators access to vehicles' batteries and a way to track energy exchange between the vehicle and grid. Finally concerns remain about the electric grid's stability, despite demonstrations of how the grid and plug-in vehicles can have a mutually beneficial relationship.
Photo: Cadillac Converj concept car/Photo by Kenavt/Wikipedia
The nation's power grid is designed to support peak energy loads, so when electricity demand is low - typically between midnight and 6 a.m. - unused energy is produced by coal- and gas-fired power plants. Charging plug-in vehicles during off-peak hours could use that excess energy, which is what some researchers call "filling the trough." And if plug-in vehicles charge while demand is low, it wouldn't be necessary to increase the grid's delivery capacity.
Additionally grid-integrated vehicles would include a timer to control when charging cycles begin and end. Controlled charging would mitigate too many people charging at any given time, which could overwhelm the grid. "If you have some kind of controlled charging, impacts on the grid will largely be positive," said Paul Denholm, a senior energy analyst for the National Renewable Energy Laboratory.
Willett Kempton, a University of Delaware professor and father of V2G technology, said a Super Bowl broadcast will tax the grid more than plug-in vehicles. "On average a vehicle pulls something like 400 watts, which is about the same as a plasma TV," he said. "The thing about the Super Bowl is everybody turns their TVs on at the same time ... more of a problem than cars, which are plugged in at varying times throughout the day and night."
So the question isn't just if the grid is up to supporting plug-in vehicles, it's also whether these vehicles are up to supporting the grid.
Cars have come a long way since Henry Ford's first Model T produced in 1908. The modern array of energy-efficient vehicles is a showcase of technological advancement.
Plug-in hybrids, like the Chevrolet Volt, have an electric motor and an internal combustion engine, similar to conventional hybrid vehicles. But they differ because their high-capacity lithium-ion batteries can be recharged through an external electrical outlet. The internal combustion engine kicks in when the batteries are depleted, giving the vehicle more range. Full electric plug-in vehicles, like the new Nissan Leaf, are powered solely by rechargeable lithium-ion battery packs, which are recharged from an external power source.
In either case, the potential benefit to the grid lies in their high-capacity battery technology. According to the U.S. Department of Energy, lithium-ion batteries store three times more energy per pound than the nickel-metal hydride batteries used in the Toyota Prius. But researchers and manufacturers say lithium-ion batteries haven't been developed to support V2G services, and using the batteries to push power back to the grid could significantly shorten their lifespan.
Thomas Turrentine, director of the Plug-In Hybrid Electric Vehicle Research Center at the University of California, Davis, one of the leading institutes in plug-in hybrid development, echoed this sentiment. "For the manufacturers of these vehicles, they're probably not very excited about people using their batteries to provide those services," he said. "Manufacturers first just want to get the vehicles on the road and working properly before they take the next step toward those types of services."
Another stumbling block is a lack of standards for how plug-in vehicles will link to the grid. Tracy Woodard, director of government affairs with Nissan North America in Nashville, Tenn., cited this as one reason Nissan didn't make the Leaf grid-integrated and V2G-compatible. "There's not a communication standard right now between cars and the utilities," Woodard said, "and we really need to see a common standard before we really proliferate that."
Kempton said a committee is working on implementing a communications standard for V2G, including 20 participants from the University of Delaware. He said he anticipates standards will be developed soon.
"From the standpoint of Nissan or any OEM [original equipment manufacturer], of course if they're going to make 50,000 cars a year, which I think Nissan plans to do with the Leaf, they'd like to have a standard already agreed upon, and I can understand that," Kempton said. "On the other hand ... there are other auto manufacturers that are saying, 'We want to try this out on 600 vehicles.'" He said these manufacturers will be prepared when standardization finally arrives.
Some say lack of standards could delay mass production of V2G compatible plug-ins for years. But Kempton said grid-compatible plug-ins could arrive sooner than many observers expect. "I have a fleet running right now, cars that are doing not only grid integration but they're actually also doing vehicle-to-grid," he said. "So that doesn't seem like it's five years away to me."
The framework for energy exchange between vehicles and the grid, according to Kempton, is firmly established although it hasn't yet been deployed on a large scale. He's managing a tracking system that monitors what energy is pulled from and pushed to the grid by vehicles that are part of a larger fleet.
But power from plug-ins won't be attractive unless it's aggregated across multiple vehicles. Grid operators want to attain energy in megawatts (MW), and single vehicles can only deliver power in kilowatts (kW) or tenths of kilowatts. For instance, Kempton estimated that 200 or 250 Nissan Leafs would be needed to generate 1 MW.
For consumers who want to sell power to the grid, energy from their vehicles would have to be packaged with power from other vehicles. This fleet would be managed by an individual who would monitor when and where cars are plugged in, and how much money each vehicle earned. This kind of service is ideal for large companies like FedEx or UPS, as well as government entities with large vehicle fleets like the U.S. Postal Service. Parked cars can turn a profit through V2G connections.
"Right now, we've got four vehicles that are online and they're providing a total of 42 kW to the grid operator [and] getting paid for it," Kempton said. The University of Delaware has six vehicles combined with a larger
fleet to produce 1 MW, which is sold to a grid operator. "Other than the size, which is so small it hardly matters, we are providing a valuable service to the grid; we are getting paid for it, and we have money to redistribute to cars when you get to a size that it's worth setting up an accounting system," he said.
But whether grid-integrated, V2G-compatible vehicles are close to taking to U.S. highways in droves, academics and manufacturers agree that they're coming. Turrentine said the market will be ripe for plug-in electric vehicles once manufacturers start producing the models and integrating them with the grid so they can provide V2G services. "The market for plug-in hybrids in particular ... will be quite solid, especially with the incentives, so we will see consumers buying these vehicles," he said.
Nissan's Woodard agreed. "I definitely see it coming; it's just a matter of getting it implemented." | <urn:uuid:ea4e58bf-11c4-4a2a-93b5-32083b200094> | CC-MAIN-2017-04 | http://www.govtech.com/featured/New-Federal-Energy-Policy-May-Result.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00364-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969044 | 1,774 | 2.8125 | 3 |
Big Data, Small Data, what's the difference? Volume. From an Information Governance perspective, Big Data needs to be managed by the same principles and processes as any other data; the only difference is volume. Records, information, and data produced or processed by an organization have to be managed in a consistent manner regardless of the media or the size and volume of the records. Information must be managed in a compliant manner for its entire lifecycle, and either the information or evidence of its disposition must be produced when required. Big Data is a concept being bandied about the marketplace with a focus on enterprise search and data mining for developing useful business intelligence and harvesting the value inherent in an organization's data store. From an Information Governance perspective, all data needs to be managed in a manner consistent with rules, regulations, and business requirements; that management needs to be applied consistently across data repositories; and processes and activities need to be documented and audited for compliance.
In my opinion, Big Data is an IT buzz word attempting to create demand for enterprise search products and services to enable organizations to process large amounts of real time data and information. There is nothing really new here from the perspective of Information Governance or Records Management. This data will have some value to the organization and will have to be classified, managed, retained, and disposed of according to Information Governance principles, same as the rest of the data, information, and records of an organization. The difference is volume, but we already knew that. Here's more on Information Governance and Archive Systems' Strategic Consulting Services.
VP of Information Governance at Archive Systems | <urn:uuid:6c20b64a-4af5-4de3-8790-7fd9a37fc5d6> | CC-MAIN-2017-04 | http://www.archivesystems.com/blog/6/what-does-big-data-mean-for-information-governance | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00208-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931487 | 323 | 2.59375 | 3 |
Speech interfaces are becoming de rigueur because users find them easy to use. But implementing these interfaces is not easy. Speech technologies are complex and require expert knowledge. AT&T should know. The AT&T WATSON℠ speech engine represents decades of speech research and over 20 years of continuous use in the company’s large-scale dialog systems.
For the first time, this advanced technology is being made available to outside developers as the Speech API. Now developers have an easy way to incorporate fast, accurate speech recognition to build voice-enabled applications.
Understanding massive data sets is a continuing research challenge. As the scale of data increases, the limitations of static 2D and 3D data representations become more apparent.
In this video, created as part of AT&T’s continuing STEM outreach, researchers commandeer a Kinect to more directly interact with data. Through the use of simple gestures, it becomes easy to pull data closer or manipulate it to view it from a global or local perspective.
In an episode of Touch, a Fox TV drama about human interconnectivity, a father aims his smart phone at a building to retrieve a message left by his autistic son. This scene both advances the story and demonstrates Air Graffiti™, prototype technology from AT&T Research to tag a physical location "in the air" with videos, photos, and songs.
Story and technology are even more tightly integrated in the Daybreak web series debuting May 31.
The problems studied by Pătraşcu involve efficient data structures and understanding how computers can most efficiently represent and manipulate data. His contributions to fundamental results on lower bounds for data structures revolutionized and revitalized a field that was silent for over a decade.
Women and minorities have traditionally avoided majoring in science, technology, engineering, and math (STEM). But as the need grows for more scientists and engineers, STEM fields must become more inclusive.
In this US News and Report interview, Alicia Abella, executive director of technical research at AT&T Labs, describes what colleges, professionals, and parents can do to encourage women to consider STEM majors.
Charles Kalmanek, Vice President of Research, AT&T Labs, Inc., has been named to the Open Internet Advisory Committee. Set up by the Federal Communications Commission (FCC), this committee will study the impact of the FCC's 2010 net-neutrality order and make recommendations for preserving the open Internet.
Representatives from Netflix, Disney, Mozilla, and other companies will also serve on the committee.
Ever drive off without your wallet, laptop, or other needed item? Got My Stuff, a new, car-based project from AT&T Research may fix that problem. It works like this.
When you turn your car key, Got My Stuff checks to make sure you don't forget items you need for your destination. But why limit such a useful service to the car? Got My Stuff may soon move to the home, the workplace, and other locations where it’s possible to forget something.
6 challenges to the future of IT infrastructure
- By Kathleen Hickey
- Jul 15, 2014
Big data will continue to present a major challenge for scientific research in the years to come, according to a white paper prepared by CERN openlab, a public-private partnership between the European Organization for Nuclear Research (CERN), IT companies, and a number of European laboratories and researchers from the Human Brain Project.
The partners defined six major challenges covering the most crucial needs of IT infrastructures: data acquisition, computing platforms, data storage architectures, compute provisioning and management, networks and communication, and data analytics.
The report also broke the scientific communities’ big data challenges into several categories: collecting and analyzing the data to support scientific discoveries; developing cost-effective and secure computer infrastructures for handling large amounts of data; performing accurate simulations; and sharing data across thousands of scientists and engineers.
These emerging issues require a new skill set for scientists and engineers. “It is vital that new generations of scientists and engineers are formed with adequate skills and expertise in modern parallel programming, statistical methods, data analysis, efficient resource utilization and a broader understanding of the possible connections across seemingly separate knowledge fields,” noted the report.
The report presents a number of use cases in different scientific and technological fields for each of the six challenge areas.
1. Data acquisition
Researchers need access to high-performance computing resources that can handle ever larger data sets, and a means of collaborating with dispersed scientific teams. However, firewalls that protect email, Web browsing and other applications can cause packet loss in TCP/IP networks, dramatically slowing data transfers to the point of making online collaboration unviable. Routers and switches without enough high-speed memory to handle large bursts of traffic can cause the same problems.
Scientific research will require more sophisticated and flexible means to collect, filter and store data via high speed networks. The authors expect that future computing systems should be able to be rapidly reconfigured to take into account changes in theories and algorithms or to exploit idle cycles. Additionally, costs and complexity must be reduced by replacing custom electronics with high-performance commodity processors and efficient software.
2. Computing platforms
The massive amount of space and energy required to power supercomputers has been a limiting factor in growing processing power. Throughput can now be increased only by exploiting multi-core platforms or general-purpose graphics processors (GPGPUs), the report stated, but existing software must be optimized or even redesigned to do that.
To address this issue, Sandia National Laboratories announced a project to develop new types of supercomputers with faster computing speeds at a lower cost and with less energy needs. Technologies being explored include nano-based computing, quantum computing and intelligent computing (computers that learn on their own).
“We think that by combining capabilities in microelectronics and computer architecture, Sandia can help initiate the jump to the next technology curve sooner and with less risk,” said Rob Leland, head of Sandia’s Computing Research Center. The project, Beyond Moore Computing, addresses the plateauing of Moore’s Law, which threatens to make future computers impractical due to their enormous energy consumption.
3. Data storage architectures
Today, most physics data is still stored with custom solutions. Cloud storage architecture, such as Amazon Simple Storage Service (S3), however, may provide scalable and potentially more cost effective alternatives, the authors noted.
The scientific community needs flexibility beyond space and cost in cloud storage options, so it can optimize storage architecture to the application. Likewise, it needs archival and long-term storage solutions.
Reliable, efficient and cost-effective data storage architectures must be designed to accommodate a variety of applications and different needs of the user community.
4. Compute management and provisioning
High-performance computing will require automation and virtualization to manage growing amounts of data, without involving proportionately more people. At the same time, the authors said, access to resources within and across different scientific infrastructures must be made secure and transparent to foster collaboration.
One way the scientific community is addressing this is through distributed systems, which divide a problem into many tasks, each of which is solved by one or more computers that communicate with each other. Grid computing, a type of distributed computing, supports computations across multiple administrative domains and involves virtualization of computing resources.
In the United States, the Open Science Grid (OSG), jointly funded by the Department of Energy and the National Science Foundation, is being used as a high-throughput grid for solving scientific problems by breaking them down into a large number of individual jobs that can run independently. In one example, OSG is being used to plan for a new high-energy electron-ion collider at Brookhaven National Laboratory.
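To make the high-throughput pattern concrete, here is a minimal Python sketch (not drawn from OSG itself) of a large problem broken into many independent jobs that run in parallel on whatever workers are free; the job function and parameter list are placeholders.

    from concurrent.futures import ProcessPoolExecutor

    def run_job(parameters):
        # Placeholder for one independent unit of work, e.g. analyzing
        # a single data file or simulating one collision event.
        return sum(p * p for p in parameters)

    if __name__ == "__main__":
        # A large problem expressed as many small, independent jobs.
        jobs = [[i, i + 1, i + 2] for i in range(1000)]

        # Each job can run on any free worker; on a real grid the same idea
        # spans many machines in many administrative domains.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(run_job, jobs))

        print(f"Completed {len(results)} independent jobs")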
5. Networks and connectivity
Good, reliable networking is crucial to scientific research. Optimization of data transfer requires new software-based approaches to network architecture design. The ability to migrate a public IP address, for example, would allow application services to be moved to other hardware. And adding intelligence to both wired and Wi-Fi networks could help the network optimize its traffic delivery to improve service and contain costs.
6. Data analytics
Finally, as data becomes too vast and diverse for humans to understand at a glance, there must be new ways to separate signal from noise and find emerging patterns, so as to continue making scientific discoveries, the authors said.
Data analytics as a service would consist of near-real-time processing, batch processing and integration of data repositories. An ideal platform would be a standards-based, common framework that could easily transfer data between the layers and the tools, so analyses could be performed with the most appropriate solutions. Besides CERN-specific applications, these analytics would be used for industrial control systems as well as IT and network monitoring.
"In order to get the kind of performance we need to take on new problems in science, not to mention to drive the massive amounts of data that we're all generating and using in our everyday lives, we'll need to have new kinds of technology that are much more efficient and that can be eventually manufactured at an affordable cost," Dan Olds, an analyst with The Gabriel Consulting Group told Computerworld. | <urn:uuid:ff09f716-9e0b-4d6b-abea-0a6bc7c9ad74> | CC-MAIN-2017-04 | https://gcn.com/articles/2014/07/15/6-challenges-it-infrastructure.aspx?admgarea=TC_BigData | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00420-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93461 | 1,282 | 2.546875 | 3 |
In a fiber optic network, whether you are installing new cable or troubleshooting existing cable, testing always plays an important role in the process. The optical power meter, widely used for power measurement and loss testing, is well known to us. Today we are going to talk about this familiar and essential fiber optic tester, the optical power meter, in detail.
As its name suggests, an optical power meter is an instrument used for measuring optical power. So what is optical power, and how is it measured with an optical power meter?
In simple terms, optical power is the brightness or “intensity” of light. In optical networking, optical power is measured in “dBm”, which refers to decibels relative to 1 milliwatt (mW) of power. Thus a source with a power level of 0 dBm has a power of 1 mW; likewise, 3 dBm is roughly 2 mW and -3 dBm is roughly 0.5 mW. One more thing worth knowing is that 0 mW corresponds to negative infinity dBm.
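The dBm scale is just a logarithmic restatement of power in milliwatts, so the conversion takes only a couple of lines. The Python sketch below simply restates the arithmetic described above; the sample values are the ones from the text.

    import math

    def mw_to_dbm(power_mw):
        # dBm = 10 * log10(P / 1 mW); 0 mW has no finite dBm value
        return 10 * math.log10(power_mw) if power_mw > 0 else float("-inf")

    def dbm_to_mw(power_dbm):
        # P (mW) = 10 ** (dBm / 10)
        return 10 ** (power_dbm / 10)

    print(mw_to_dbm(1.0))   # 0.0 dBm
    print(mw_to_dbm(2.0))   # ~3.0 dBm
    print(dbm_to_mw(-3.0))  # ~0.5 mW
    print(mw_to_dbm(0.0))   # -inf, i.e. negative infinity dBm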
Measuring power at the transmitter or receiver requires only an optical power meter, an adapter for the fiber optic connector on the cables used, and the ability to turn on the network electronics.
The optical power meter must be set to the proper range (usually dBm, but sometimes mW) and the proper wavelength when measuring power. When everything is ready, attach the optical power meter to the cable at the receiver to measure receiver power, or to a short test cable attached to the system source to measure transmitter power. Record the value, compare it to the specified power for the system and make sure it falls within the acceptable range.
In addition to measuring optical power, an optical power meter can be used to test optical loss when used together with a light source. What is optical loss, and how does the optical power meter measure it?
When light travels through fiber, some energy is lost: it is absorbed by the glass and converted to heat, or scattered by microscopic imperfections in the fiber. We call this loss of intensity “attenuation”. Attenuation is measured in dB of loss per length of cable, where dB is a ratio of two powers. Even the best connectors and splices aren’t perfect, so every time we connect two fibers together we get loss. This is called insertion loss: the attenuation caused by inserting a device, such as a splice or connection point, into a cable. Actual loss depends on your fiber connector and mating conditions. Insertion loss is also used to describe loss from a Mux, since it is the “penalty you pay just for inserting the fiber”.
The loss of a cable is the difference between the power coupled into the cable at the transmitter end and the power that comes out at the receiver end. Loss testing requires not only an optical power meter but also a light source. In general, multimode fiber is tested at 850 nm, and optionally at 1300 nm, with LED sources; single-mode fiber is tested at 1310 nm, and optionally at 1550 nm, with laser sources. The measured loss is compared to the loss budget, namely the estimated loss calculated for the link. In addition, to measure loss it is necessary to create reproducible test conditions that simulate actual operating conditions. This simulation is created by choosing an appropriate source and mating a launch reference cable with a calibrated launch power that becomes the “0 dB” loss reference.
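Because dB values are relative, the measured loss is simply the difference between the calibrated “0 dB” launch reference and the power read at the far end, which is then compared against the loss budget. The sketch below shows that bookkeeping; the reference value, reading and budget are invented numbers for illustration.

    # Reference power recorded with the launch cable attached directly
    # to the meter (the "0 dB" reference), in dBm.
    reference_dbm = -20.0

    # Power measured at the far end of the link under test, in dBm.
    measured_dbm = -23.4

    # Loss is the difference between what was launched and what arrived.
    loss_db = reference_dbm - measured_dbm
    print(f"Measured loss: {loss_db:.2f} dB")

    # Compare against the loss budget estimated for the link.
    loss_budget_db = 4.0
    if loss_db <= loss_budget_db:
        print("PASS: link is within its loss budget")
    else:
        print(f"FAIL: exceeds the budget by {loss_db - loss_budget_db:.2f} dB")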
There are two methods used to measure loss, called “single-ended loss” and “double-ended loss”. Single-ended loss testing uses only the launch cable, while double-ended loss testing also uses a receive cable attached to the meter. The single-ended method is described in FOTP-171. With it, you test the loss of the connector mated to the launch cable plus the loss of any fiber, splices or other connectors in the cable you are testing; it is therefore the best method for testing patch cords, since it tests each connector individually. The double-ended method is specified in OFSTP-14. With it, you measure the loss of both end connectors plus the loss of all the cable or cables, including any connectors and splices in between. A typical illustration shows the two setups side by side: single-ended loss testing of a patch cord, and double-ended loss testing of an installed cable plant.
As described above, the optical power meter is very useful and necessary for fiber optic testing tasks such as optical power measurement and loss testing, so selecting a suitable optical power meter is very important. Several points should be considered when choosing one, depending on the user’s specific application.
In addition to an optical power meter and a light source, other tools such as launch cables, mating adapters, a visual fault locator or fiber tracer, and cleaning and inspection kits, as well as other testers, are also required for fiber optic testing. Fiberstore offers a comprehensive range of fiber optic testers and tools to help you achieve a reliable and valuable fiber optic system. Contact us via firstname.lastname@example.org for more information.
Oregon high school teacher Mike Brown formed the Coastal Studies and Technology Center in 1992 to provide opportunities for students to study GIS and other technologies while participating in environmental projects conducted by state and local government. Barely two years into the program, the center, its founder and students were chosen by President Clinton and the EPA as the National Educational Model Program for 1994, and won the EPA's Region 10 President's Environmental Youth Award. These honors included White House invitations to give presentations to the president, vice president and the director of the EPA.
THE COASTAL STUDIES AND TECHNOLOGY CENTER
Today, the Coastal Studies and Technology Center, on the small campus (550 students) of Seaside High School, is a nonprofit corporation managed by the Seaside School District, the community, the State University at Portland, and Clatsop Community College. In addition to serving as a research and development center for students, faculty and the larger community, the center provides science programs that emphasize modern technology applications in coastal and watershed studies. Hands-on assignments include GIS modeling, tabular and spatial data collection, GPS mapping and video editing, among others. In addition to encouraging individual GIS projects, the center provides opportunities for students to take part in regional and local environmental projects conducted by government agencies and other public entities.
"Initially, we wanted to have a vehicle that would enable our students to participate in scientific projects that were taking place in our own community," Brown said. "We also wanted the center to be a base for scientists temporarily working in this area so that our students could participate in that work. By forming partnerships and connections, we've been able to establish that."
WHITE HOUSE RECOGNITION
Brown's ideas quickly led to the center's involvement in an environmental study of the lower Columbia River. Through connections established with the National Marine Fisheries Service (NMFS), the Army Corps of Engineers (ACE) and the Columbia River Estuary Study Task Force (CREST), students from the center and other schools helped to determine if removal of a section of the south jetty at the mouth of the lower Columbia River would restore the ecosystem of Trestle Bay.
According to CREST Director Jon Graves, years of sediment had built up around the jetty and formed a 600-acre lagoon in Trestle Bay, separating it from the Columbia River. "Water was going back and forth, but not large fish and crabs. One of the proposals was to open a 500-foot section of the jetty, allowing the tide to flush out the built-up sediment and restore salmon, crab and bird habitats."
Eric Kranzush, now a junior at Seaside, worked on the project in his freshman year. "We took field notes on temperature and salinity in the tidal marsh around Trestle Bay, and classified sedimentation samples and benthic [bottom-dwelling] invertebrates provided by the NMFS. CREST gave us aerial photos and satellite imagery. We entered data collected from field and lab work directly into ArcView, then built layers of different tidal influences over the satellite image. We did a lot of multiple-layer images for the NMFS."
Graves said Seaside students also recorded vegetation plots with GPS, and used aerial photos to do salt-marsh mapping of Trestle Bay. "As a result of that study, the jetty was opened, the section taken out and moved about 200 feet north into the river. In 1997, the students will be back again, looking at benthic invertebrate use in Trestle Bay to see if removal of the jetty has changed the communities out there."
Asked about the quality of data provided by the center, Graves responded, "They do very good work. For example, Oregon's land-use planning requires every estuary to have an inventory done on all the physical, biological and chemical aspects. The inventories that the Coastal Studies and Technology Center did for the Necanicum River Estuary are now the adopted inventory of the Clatsop County land-use plan. The center has a long track record of doing quality work that is adopted by state agencies."
Kranzush pointed out that it was for the Trestle Bay Project that the center received the 1994 President's Environmental Youth Award. "When the award was presented, Vice President Gore was really impressed that freshmen and sophomore high-school students were interested in using computer models to keep salmon alive and improve their habitat."
White House recognition has opened more opportunities for the center, said Brown. "It has given us more credibility to hook into other projects going on out there. For example, we have become a founding partner in the Marine Environmental Research Training Laboratory in Astoria, along with Clatsop Community College, Portland State University and the Oregon Graduate Institute. Getting that lab going enabled us to get a $4 million Navy research grant that our students will be able to take part in. That grant opened up major funding opportunities for us. My old funding level was about $1,000 a year. Just this one grant is about 125 years worth of funding."
Brown explained that the grant is mostly interested in the dynamics of the Columbia River Estuary and the near-shore environment. "Our role will be in tidal and river-flow monitoring; setting up remote monitoring stations, collecting data on tide salinity, temperature, and so forth. We will process some of the data here, then move it along to a research scientist who will use it to develop computer and GIS models of the Columbia River Estuary."
Center students are currently involved with several projects, including a Tsunami Inundation Study with the city of Seaside, the Oregon Graduate Institute (OGI), and the Department of Geological and Mineral Industries. The department is providing the center and OGI with Northwest coastal data from a 1964 Alaska earthquake. According to Kranzush, the students are coordinating their efforts to model damage that would result if a tsunami struck Seaside.
The center is also doing an ongoing project with the Oregon Department of Environmental Quality (DEQ) analyzing ground water pollution in county drinking wells. Another project is a watershed enhancement study with the city and the DEQ. The project calls for modeling the environmental impact of proposed commercial development along the local Neawana River, and creating watershed enhancement models to offset such a possibility. According to Kranzush, the DEQ will provide most of the data. Students will collect some field data for incorporation into an ArcView database that will be sent to the city and the DEQ.
Brown, who has also written a GIS curriculum with help from ESRI, CREST and an EPA education grant, said, "sometimes we have projects at the center level. Other times, students have their own projects that they want to start and keep going. We get a chance to test out a lot of ideas that have impact both in the community and the classroom, and those are really powerful things for us, especially being able to apply GIS to these projects. The process gives students a chance to look at the environment in totally new ways."
Kranzush, who is considering careers in environmental and chemical engineering, said this has been an enlightening experience. "When we worked on the Trestle Bay Project, one of the things you noticed was how much influence tides have on the marshes and the land around them. I wasn't really familiar with how tides worked before, but this project allowed me to see how they clean out the bay and improve the habitat. These studies also help you to see how the environment is affected by human factors -- pollution, chemical processes, deforestation. You can see by doing the modeling what the effects of different industrial activities will be. Some of it is pretty shocking."
CENTER WITH A FUTURE
Commenting on the scope of environmental projects for the center, Brown said, "the kinds and amounts of work that need to be done in this region are immense. I see the role of the center as continuing to be a broker, finding opportunities for students, teachers and community members to be involved in this work. There is a lot to do. We are still analyzing water in the drinking wells, and I just heard that some grant money came through on that."
Kranzush continues to be involved with center projects. "Mr. Brown had a great idea; the district loves it and so do the kids. Hands-on research with government agencies makes students actually feel like they are doing something, instead of just sitting at a desk, doing book work. Like in the Trestle Bay Project, there was a result, they did move the jetty, and kids are able to see that."
In control systems, the communication and work between vendors, asset owners and engineers that take place on a daily basis can be vast, and security may not be the first item on everyone's mind; the mission is to keep the systems running, secure or not.
But the very real possibility of cyber warfare has changed that. The question is what must the control systems community do to adapt to the threat of cyber warfare?
Simply stated, the community must get back to the basics of security, take part in creating better regulations, and band together to face the threat as a community instead of as individuals.
With the media attention given to the Stuxnet worm since June 2010, the world has been forced to realize the possibilities and threats of cyber warfare. Cyber warfare took place long before the release of Stuxnet, but its release caused nation-states, corporations and other groups across the world to realize the benefits of using a domain of warfare with limited entry costs and the possibility of non-attribution, which is the ability to operate without positively being connected to an operation.
The idea of using cyberspace to inflict physical damage, such as damaging nuclear centrifuges, was an unproven theory to most before Stuxnet. With the theory publicly proven true, most vendors and asset owners realized that control systems are valued and legitimate targets.
As the communities behind cybersecurity, hacking and control systems began to overlap, it became obvious that it was not only the large control systems, but also the smaller ones that were targets.
To properly hack into a system one must understand it. Before attacking high-profile targets, it is wise for any hacker—nation-state-backed or not—to compromise smaller control systems, or related systems, for reconnaissance purposes. A hacker can not only understand control systems and network layouts better for future attacks, but may also gain important information, such as firewall and security configurations, trusted network access, operation manuals, design schematics or even password files.
All of this information is important to carrying out an effective attack against larger control systems, such as the electrical power grid, water filtration plants, oil refineries and nuclear reactors. This style of reconnaissance is perfectly demonstrated with the Duqu malware.
In October, Duqu was discovered operating on a number of targets including those in Europe, Sudan and Iran. These targets have not been fully identified, but Symantec has stated that the targets include industrial manufacturers. Duqu is primarily an information-gathering platform with strong ties to Stuxnet.
The kind of information gathered from Duqu is the type that would be required to create a cyber weapon that would target control systems. The Duqu malware seems to target industrial manufacturers, but this may only represent another vector of attack against control systems that rely on the parts these manufacturers create.
With an understanding that all control systems need to be protected, the focus becomes what smaller control system owners and operators can afford to do in terms of security. A limited number of people understand both control systems and cybersecurity well enough to properly defend the networks, which makes these personnel highly sought after and generally unattainable for many in the control systems community.
Because of this and the fact that there is no checklist to supplying complete security, the task of securing networks can seem daunting and nearly impossible. What owners and controllers can do is adopt a security mindset and get back to the basics of cybersecurity.
The basics of cybersecurity begin with evaluating the systems. No one knows the network layout more in depth than the owners and controllers of those networks. Excluding the insider threat, no attacker has this level of knowledge, and this is one of the asset owner's greatest defenses. End users and the companies that employ them must take responsibility for their systems and recognize when hardware and software in their networks are missing or acting in a manner outside of their intended use.
Furthermore, if pieces of hardware or software that are unaccounted for are attached to systems, there should be concern. This network accountability is not an easy task, but is much less cumbersome than surviving a network attack where business secrets are stolen or network operations are halted.
After accepting and properly implementing network accountability, security measures must be put into place. An air gap—the complete isolation of your network—is difficult, if not impossible to achieve. However, air gap best practices are a good step towards network security. Asset owners should ensure that their networks are not connected to outbound connections, and that there are methods of physical and electromagnetic security in place.
Those in charge of network security must then assume this barrier of defense will be compromised. With this assumption, other steps for security must be taken. A defense-in-depth approach is as unique to each situation as is the network it protects, but some security steps are universal.
On a control system network there should be a demilitarized zone (DMZ) that separates internal parts of the network from other less operationally important sections. Firewalls with properly defined rule sets should limit traffic to only what is necessary to continue operations. Networks should use intrusion detection systems (IDS) or intrusion prevention systems (IPS) to look for malicious network activity. Vulnerability assessments using trusted software and reputable red teams should look for vulnerabilities in the network.
Identifying vulnerabilities allows for patching and mediation to occur in areas that hackers would use to compromise a network. User agreements must be established with employees, so that proper use of the network is clearly defined.
No number of security steps will prevent a network compromise if users are allowed to use the network improperly by, for example, connecting personal external hard drives to it. Asset owners must also implement access controls to limit who can gain physical or network access to resources.
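A minimal way to picture the “properly defined rule sets” mentioned above is a default-deny policy: every flow is blocked unless it matches an explicit allow rule. The Python sketch below is purely conceptual; the networks, hosts, ports and rules are invented for illustration and it is not a substitute for a real firewall.

    # Explicit allow rules: (source network prefix, destination host, destination port)
    ALLOW_RULES = [
        ("10.1.0.", "10.2.0.5", 502),    # engineering workstations -> PLC gateway (Modbus/TCP)
        ("10.2.0.", "10.1.0.10", 443),   # DMZ historian -> internal web service (HTTPS)
    ]

    def is_allowed(src_ip, dst_ip, dst_port):
        """Default-deny: traffic passes only if an explicit rule matches."""
        for src_prefix, dst_host, port in ALLOW_RULES:
            if src_ip.startswith(src_prefix) and dst_ip == dst_host and dst_port == port:
                return True
        return False

    print(is_allowed("10.1.0.22", "10.2.0.5", 502))   # True: explicitly allowed
    print(is_allowed("192.0.2.9", "10.2.0.5", 502))   # False: everything else is denied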
One of the most important parts of network security is detection. As Capt. Jeremy Sparks, an instructor at the Air Force's Undergraduate Cyberspace Training school, teaches the Air Force's future network defenders: Prevention is key, but detection is a must. Detection not only mitigates the damage and duration of an attack, but it can also deter and prevent an attacker altogether. One of the most appealing aspects of cyber warfare is limited attribution. Without this aspect, the motivation of nation-states and hackers to conduct operations in cyberspace greatly decreases.
All of what is mentioned above is a broad look at network security for control systems; it is not an all-inclusive list. The security mindset must be used to think about each level of the network and what would be available to prevent or mitigate a compromise there. It is an ongoing process that must be given proper attention and resources even when both are limited.
Control system and software vendors must take responsibility as well and provide better software and hardware that has a focus on security instead of just availability. Better code and hardware testing, as well as longer durations for patching support are all a great start. Asset owners must participate in this process too, and work with vendors to identify issues.
Both vendors and asset owners must then work with the government and regulation committees to identify regulations and standards that must be enforced. The minimum standard is not something that can foster true security, especially with systems that affect national security. However, this is not an issue of pointing blame at any party involved. Instead, this is an issue of getting the community to come together, and bringing different experiences to find solutions.
This community is where the battle over control systems will be won. Both the cyber community and the control systems community have very talented and passionate individuals working together to bring about positive change.
The best advice for those involved in control systems is not based in varying and ever-evolving security practices. Instead, the single greatest piece of advice is to reach out to the community, and share information, practices and lessons learned.
There is a real fight going on in cyberspace involving control systems, but it is not a fight one has to wage alone. With a security mindset, networking and a touch of optimism the community as a whole can enable itself to truly secure control systems.
Author's note: I want to thank the individuals I spoke with at the 11th ACS Control System Cyber Security Conference. The information and inspiration gained from the community involved was invaluable. I would also like to thank the Air Force's Undergraduate Cyberspace Training school at Keesler AFB, Mississippi, especially my mentors, Jeremy Sparks and Paul Brandau, for their continued work and acceptance that cyber security is not solely a military issue, but one that affects us all.
About the Author
Robert M. Lee is an officer in the United States Air Force. However, this article and his views do not constitute an endorsement by or opinion of the Air Force or Department of Defense.
Cross-Posted from Control Global
Over a week after the tragic earthquake and tsunami in Japan, a number of research institutions are beginning to take a look at their centers to begin the process of evaluating damage.
One of the most affected centers is Tohoku University, which was near the epicenter of the quake in Sendai. The institution was one of Japan’s preeminent materials science, engineering and biomedicine centers, but will be shut down until at least the end of April, according to a report from Nature News. Tohoku University is home to the Cyberscience Center, which houses a 31.2 teraflop system, but as of now, there are no updates about the status of the resource. As it stands, the university area is difficult to access due to dangerous aftershocks, and recovery efforts are being hindered by a lack of electricity and water.
The problems extend beyond quake and tsunami damage; rolling blackouts have put an indefinite stop to computationally-driven research. As reported, “Many institutions in the region, including the University of Tokyo and some RIKEN institutes have been forced to drastically reduce electricity use and shut down large facilities such as supercomputers.”
The disaster has also caused something of a short-term “brain drain” at a number of institutions. For instance, as Adrian Moore, who leads a segment of the Brain Science Institute at RIKEN in Wako, noted, five out of six non-Japanese postdocs and students have left the area until the nuclear and other threats are resolved.
While the government is reportedly considering funneling emergency funds to aid in research center and university rebuilding efforts, in some ways it seems that this is an element of infrastructure that might need to be put on the backburner while more urgent human-related matters are handled.
Gauss malware, Apple iPhone show what encryption can do
- By Kevin McCaney
- Aug 16, 2012
Some of the most sound advice for securing sensitive information, whether it be in an e-mail, on a mobile device or at rest in a database, involves encryption. Simply put, encryption can keep data safe, for good or ill, as a couple recent examples illustrate.
After researchers at Kaspersky Labs came across Gauss, the latest in the Stuxnet/Duqu/Flame state-sponsored malware chain, and started examining it, they ran into a problem. The malware contained an encrypted “warhead” that the researchers couldn’t crack.
Gauss has a module called “Godel” (many of the malware’s components are named after famous mathematicians) with a payload of unknown purpose. “Despite our best efforts, we were unable to break the encryption,” researchers said in a blog post.
So Kaspersky offered up all of its information on Gauss and asked “anyone interested in cryptology and mathematics to join us in solving the mystery and extracting the hidden payload.” A long list of people have contributed ideas, but as of this writing the warhead remains a mystery.
Gauss was discovered infiltrating systems in the Middle East, primarily in Lebanon, and is believed to be part of a U.S.-led cyber warfare program that includes Stuxnet and Flame, both of which were found mostly attacking systems in Iran. U.S. officials likely are hoping Gauss’ encryption holds up.
The investigation into Gauss might illustrate how cyberspace differs from traditional battlegrounds. During, say, the Cold War, if scientists found an unfamiliar warhead they probably wouldn’t make a public project out of taking it apart and seeing what’s inside. But the Gauss investigation also shows the power of encryption -- be it in malware, in industrial systems in computers or even phones.
Agencies should take note of how encryption protects data, particularly as they try to manage security for mobile devices that can be lost or stolen.
Law enforcement officials worry that good encryption could hurt their chances of retrieving forensic evidence against suspected criminals, but that same protection could also be applied to devices being carried by government employees.
Apple, for example, has improved the security on the iPhone to the point that it could leave law enforcement at a disadvantage against criminals who carry them, Simson L. Garfinkel writes in Technology Review.
The most significant of Apple’s security steps for the iPhone is the addition of the Advanced Encryption Standard (AES), a U.S. government standard since 2001 that is considered unbreakable, Garfinkel writes. The iPhone’s tightly integrated architecture makes it easy for users to apply the encryption, and of course encryption tools are available for Android and other mobile devices as well.
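As a small illustration of why encrypted data is useless without its key, the sketch below uses the third-party Python cryptography package, whose Fernet recipe is built on AES. It is a generic example, not the scheme Apple or any malware actually uses.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate a random symmetric key and encrypt a message with it.
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(b"case notes: sensitive agency data")

    # Without the key, the ciphertext is just opaque bytes.
    print(token)

    # With the key, the original data comes back intact.
    print(Fernet(key).decrypt(token))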
"I can tell you from the Department of Justice perspective, if that drive is encrypted, you're done,” Ovie Carroll, director of the cyber-crime lab at the Justice Department’s Computer Crime and Intellectual Property Section, said at a recent conference, Garfinkel reports. “When conducting criminal investigations, if you pull the power on a drive that is whole-disk encrypted, you have lost any chance of recovering that data."
Law-abiding users, however, can let law enforcement officials and the courts worry about criminals’ phones. Instead, agencies deploying smart phones and tablets or allowing them as part of BYOD programs could take note of how well a good encryption program works.
Kevin McCaney is a former editor of Defense Systems and GCN.
Made with Code is part of a big movement to get more girls to code, at a time when fewer than one percent of high school girls express an interest in majoring in computer science. This matters not just because some of the best job prospects are in computer science, but because coding--and, more importantly, the logic it teaches--is a skill that's useful in just about every field.
Made with Code was launched by Google last week, but it's also backed by Girls Inc., MIT Media Lab, Girl Scouts, National Center for Women & Information Technology, Seventeen, TechCrunch, notable individuals such as Mindy Kaling and Chelsea Clinton, and others.
The site introduces girls to projects like creating a music track, 3D-printing a bracelet (with free printing via Shapeways), and creating animated GIFs. There are a few resources as well for parents and educators on the site, as well as an events lookup so you can find a coding workshop near you.
Made with Code also promises $50 million in support of programs to get more females into CS.
I think every kid should know coding basics, and the extra encouragement for girls is especially important to overcome negative stereotypes like boys are better at math. My daughter's just eight years old, so I'm definitely bookmarking the site to give her a head start.
Read more of Melanie Pinola’s Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
Senior Systems Engineer Jim Taylor frequently shares “IT Tidbits” with the Green House Data technical staff, both in person and via e-mail dist-lists. This new blog series brings you a closer look at his latest tips.
From time to time, our Global Service Center staff and customers alike must troubleshoot Domain Name System (DNS) errors on their servers. Every public domain name is resolved to an IP address by a DNS server. The ISP has a DNS server that looks up DNS records and IP addresses against the master records, which are held in 13 root name servers maintained by independent organizations around the globe.
DNS errors can stem from many sources, including the configuration of DNS settings. The first step for many network issues is often a DNS lookup to gather more information and see if any of the issues are from a DNS issue. Two methods to accomplish DNS groundwork are nslookup and whois.
While ping does perform a DNS lookup, nslookup delivers more information and can be set to use various DNS servers. The ping command will only resolve the “A” record for a domain. The A record, or Address record, simply maps a domain name (like greenhousedata.com) to its assigned IP address. This mapping is called “resolving,” where a DNS server checks whether a given name has an IP address.
Nslookup is similar in that it asks the DNS server for information on a domain, but it can gather more information about mail servers, IP addresses, and more.
Run the nslookup command from the Windows command prompt and it will return the default DNS server and its IP address. If you include a domain name after nslookup, it will return the DNS server used plus the name and IP address of the domain you queried.
You can set specific query types for nslookup by typing “nslookup” and pressing Enter, then “set type=xx” (where “xx” is the query type) and pressing Enter, then typing the domain you want information about and pressing Enter one final time. Common query goals are listed below; the sketch after the list shows equivalent one-shot commands.
Some query examples are:
Find IP address
Find all DNS information
Find canonical name (the overarching name that defines the subdomain, IP address, etc)
Find the mailbox domain name
Grab more information about an exchange server
Find information about Well Known Services
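Each of those goals maps to a standard DNS record type, and the same lookups can be run non-interactively with nslookup’s -type flag. The Python sketch below wraps that in a loop; example.com is a placeholder domain, and the mapping of record types to the goals above is an assumption based on standard DNS record types.

    import subprocess

    # Assumed mapping of the query goals above to standard DNS record types.
    QUERIES = [
        ("A",     "IP address"),
        ("ANY",   "all DNS information"),
        ("CNAME", "canonical name"),
        ("MB",    "mailbox domain name"),
        ("MX",    "mail exchange server details"),
        ("WKS",   "well known services"),
    ]

    domain = "example.com"  # placeholder domain
    for record_type, purpose in QUERIES:
        # Equivalent to: nslookup, Enter, "set type=<record_type>", Enter, domain, Enter
        result = subprocess.run(["nslookup", f"-type={record_type}", domain],
                                capture_output=True, text=True)
        print(f"--- {purpose} ({record_type}) ---")
        print(result.stdout.strip())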
Whois is another tool that can offer DNS information, but it can also be used on expired domains. On Windows machines, you’ll need an application, but there are also some websites that can run whois queries, like www.whosis.net.
An application will add whois to your command line, so once installed you’ll run it just like nslookup. On a Unix/Linux/Mac OS computer, you can run whois from the command line in Terminal.
Type in “whois URL” to return information on a given domain. The command will display relevant information including the Registrar (the organization who registered the domain with the DNS), the Name Servers (servers in charge of the domain’s DNS), Creation Date, Expiration Date, and any public contact information.
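To script the same check, a thin wrapper around the command-line whois client is enough. The sketch below assumes a whois client is installed (as on most Unix/Linux/Mac systems) and uses a placeholder domain; field names in the output vary by registrar.

    import subprocess

    def whois_lookup(domain):
        """Run the system whois client and return its raw output."""
        result = subprocess.run(["whois", domain], capture_output=True, text=True)
        return result.stdout

    report = whois_lookup("example.com")  # placeholder domain

    # Pull out a few of the fields described above.
    interesting = ("Registrar:", "Name Server:", "Creation Date:", "Expir")
    for line in report.splitlines():
        if any(field in line for field in interesting):
            print(line.strip())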
It is vital to run whois before making configuration changes to your DNS zone files. Whois is also useful when attempting to identify incoming traffic, like when stopping spam or trademark infringement.
Posted by: Systems Engineer Jim Taylor
Most people associate high performance computing with those big multi-rack supercomputers humming away in national labs. But if you’re in the HPC community, you know that the vast majority of systems are much smaller — commodity clusters made up of a handful of nodes, or perhaps dozens, or even hundreds. Now though, HPC technology is making its way into even smaller system, in particular, embedded devices and appliances.
An article penned by Mentor Graphics’ Pete Decher that appeared this week in EE Times describes the trend, noting that “with the introduction of more compact and more powerful embedded processors, embedded systems are becoming HPC capable.”
The beneficiaries of this technology are medical devices (MRI and CT imaging), military and aerospace systems (e.g. radar and navigation), automotive computers (collision avoidance), and even handheld consumer devices (voice recognition). The use of high performance hardware and software is not completely new to the embedded space of course, but recent advances in processor technology are giving the industry access to computational power that used to only be available in HPC clusters. Decher writes:
All of this is possible due to advancements in processing hardware. What we’re seeing now is what the military and aerospace community call commercial off-the-shelf or COTS, which usually connotes commodity-type devices that are capable of high-performance computing. Companies like Intel, Freescale, NVIDIA, Xilinx, and TI are creating an explosion of new devices targeted at HPC applications. Intel recently introduced its new multicore Sandy Bridge class of devices (2nd Generation iCore processors) with Advanced Vector (math) eXtensions called AVX. In the same timeframe, Intel has also introduced its new Many Integrated Cores (MIC) processor architecture. Code named “Knights Corner”, this architecture supports the interconnection of 50 Larrabee class cores. Freescale recently introduced a new generation of high-end multicore Power PC chips called the QorIQ AMP Series, with a re-introduction of an improved AltiVec vector processing accelerator. The new QorIQ architecture can support up to 24 virtual cores per chip.
Then there is the whole GPGPU phenomenon, courtesy of NVIDIA and AMD, that already delivers more than a teraflop of single precision floating point performance in a single chip. Xilinx and Altera are introducing devices that integrate FPGA logic with multicore CPUs. (For example, the Zynq-7000 from Xilinx has a dual-core ARM A9 processor with the Neon vector accelerator, plus an FPGA fabric.). Along those same lines are Texas Instruments’ Integra line, which integrates a C6x DSP with an ARM Cortex A8 CPU.
The downside to all these new architectural wonders is programming complexity. Decher says the difficulty of software development on heterogeneous platforms is high, noting that “typical embedded software development costs are exceeding well over 50% of the entire system cost.” A side-effect of this complexity is software portability, since, for example, programs developed for GPUs typically aren’t interchangeable with those developed for say DSPs.
Ideally, says Decher, you would have access to high-level libraries that were hardware-independent, enabling applications to be easily ported from one platform to another. But in the midst of all this microprocessor diversity, that’s probably not completely attainable.
Software issues aside, Decher sees an expansive future for HPC in the embedded space. As more and more flops become available in these chips, their use will penetrate into every imaginable device with a need for compute-intensive work.
Fiber Optical Identifier is an essential installation and maintenance instrument which can identify the optical fiber by detecting the optical signals transmitted through the cables, during this process the fiber optic identifier do no harm or damage to the fiber cable and it also don’t need opening the fiber at the splice point for identification or interrupting the service.During fiber optic network installation, maintenance, or restoration, it is also often necessary to identify a specific fiber without disrupting live service.
The Fiber optic identifier have a slot on the top. The fiber under test is inserted into the slot, then the fiber identifier performs a macro-bend on the fiber. The macro-bend makes some light leak out from the fiber and the optical sensor detects it. The detector can detect both the presence of the light and the direction of light.
A fiber optic identifier can detect “no signal”, “tone” or “traffic” and it also indicates the traffic direction. The optical signal loss induced by this technique is so small, usually at 1dB level, that it doesn’t cause any trouble on the live traffic.
Fiber optic identifiers can detect 250um bare fibers, 900um tight buffered fibers, 2.0mm fiber cables, 3.0mm fiber cables, bare fiber ribbons and jacketed fiber ribbons. Most fiber identifiers need to change a head fiber optic adapter in order to support all these kinds of fibers and cables. While some other models are cleverly designed and they don’t need to change the head adapter at all. Some models only support single mode fibers and others can support both single mode and multimode fibers.
Most high-end fiber optic identifiers are equipped with an LCD display which can show the optical power detected. However, this power measurement cannot be used as an accurate absolute power measurement of the optical signal due to inconsistencies in fiber optic cables and the impact of user technique on the measurements.
The vision of a networked world - one that changes virtually every aspect of how we live, learn, govern, work and play - is alive and well in Kentucky.
"Just as the interstate highway system spurred economic development in the 20th century, the networked world is fundamental to a thriving 21st century economy," said Linda Johnson, president of the Center for Information Technology Enterprise (CITE). "So it is important for Kentucky to understand its backbone, network infrastructure and connectivity, as well as its availability to high-speed affordable network access and who is using the Internet - business, government and citizens."
CITE recently completed a comprehensive three-year study, called Connect Kentucky, designed to assess and plan for the state's competitive future.
"During my administration, we have built on landmark reforms in post-secondary education by developing a strategic approach to ensure that Kentucky prospers in a changing economy" said Kentucky's Governor Paul E. Patton. "In 2000, we established the Office for the New Economy and the Kentucky Innovations Commission, which for the first time formalizes the linkage between our investment in education and our plans for economic development. The resulting strategic plan identifies key focus areas to ensure the state's competitiveness in the knowledge-based economy."
Practically speaking, Connect Kentucky is a public-private partnership involving 15 business partners and nine public partners that formed a statewide steering committee. The committee ensured national benchmarks were established against which the state measured itself, but it then proposed a very concrete action agenda for the future.
"Using a set of guidelines developed by the Computer Systems Policy Project group, we were looking really at three major things," said Johnson. "Our network infrastructure, the condition of our network access and how our network was being used. To use a well-known metaphor, Kentucky is assessing the condition of its Internet highway, the availability of affordable high-speed on-ramps to the Internet."
To complete this assessment, Johnson said that they conducted at least six pieces of significant research. They mapped Kentucky's private Internet network backbone. They also tested dial-up connection speeds in 26 locations across the state to assess how the dial-up network was performing. "Of course I believe that the Internet and the network of the future will not be built on the dial-up network of the past," Johnson added. "So we have also mapped DSL and cable-modem coverage across the state. And we have assessed businesses, consumers and government online."
The comprehensive survey found, for instance, that 69 percent of Kentucky businesses used computer technology to handle some or all of their business functions. Only 36 percent, however, had connections to the Internet, and slightly more than 20 percent had a business Web site. Moreover, of the 64 percent of Kentucky businesses that were not online in any form, more than half indicated that they have no need to use the Internet. This indicates a lack of awareness of e-commerce strategies in the state's business community, said the Connect Kentucky report.
Part of the Connect Kentucky assessment took a hard look at e-government progress within the state. While the state's wide area network - the Kentucky Information Highway - connects more than 4,000 government facilities and educational entities, the report identified several concerns requiring official attention.
For example, executive-level leaders hold diverse views and understandings of e-government. All state Cabinets have a Web presence, but the level of collaboration among agencies is relatively low. And resource constraints have reduced capacity to deliver advanced e-government solutions.
The report identified several specific weaknesses, including the lack of reusable components such as online payments, Web-page templates or shared customer profiles. No framework existed to manage Internet content, and skilled employees were scarce and hard to retain. These factors worked to dampen the vision of better government through network access and electronic service delivery - which were seen as essential in the coming years.
"Citizens must see themselves as the owners of their government, and electronic government can be used to convey that ownership to the people," said Ken Oilschlager, president of the Kentucky Chamber of Commerce. "This will require citizen-centric design, personalization options, visibility through marketing and access for all."
The biggest gap in e-government exists at the local level, Johnson said. "The steering committee clearly recognized that work was needed here to ensure that all citizens across the state have an opportunity to participate in e-government initiatives, particularly at the local level," she added.
The next phase of Connect Kentucky involves a recently released strategic initiative - available online - that outlines major goals, lists implementable action strategies for achieving those goals and defines standards for determining success over the next three years. For example, one of the standards will measure the sophistication of Kentucky's manufacturers in exploiting computers, the Internet and Web sites against national averages for firms of comparable size. "We collected benchmark data this year that tells us where Kentucky manufacturers are with respect to the U.S. national averages," said Johnson.
One project goal is to create and implement market-driven strategies that increase business, consumer and government Internet use. Specific projects fulfill this objective, including a portal for physicians to submit transactions and educate health care providers and consumers; promotion of online open enrollment for health care benefits; and collaboration with the state Chamber of Commerce to create an online business directory.
Measures of success include increased Internet use by the health care community; improved high-speed Internet access in large and small communities, and a larger number of Kentucky cities and counties using transactional Internet applications. A review of 120 Kentucky city and county Web sites revealed that 55 percent of them contained only informational content.
Public policy initiatives include the deregulation of all broadband services to create regulatory parity among competing providers, implementation of telecommunications tax equity and modernization to create competitive investment and eliminate disincentives, and state government initiatives "designed to lead by example" to promote business and citizen participation in the networked world.
Finally, the plan calls for a broad public campaign to boost awareness of e-commerce, e-government and e-learning benefits. Action strategies include working with community and faith-based organizations to promote citizen use of the Internet, publicizing security and privacy practices throughout the state to encourage more e-commerce, and creating a "media SWAT team" to raise public awareness of the benefits afforded by the Internet.
Success standards here include an increasing number of local governments with applications online, rising enrollment in Kentucky's Virtual University, and a narrowing of the digital divide between income and Internet participation.
Closing the digital divide is particularly important to the state. "While Kentuckiens continue to embrace the Internet and buy goods and services online, there continues to be a disparity among demographic groups based on age, income and education, as well as geographically, between rural and urban regions," said Oilschlager.
Johnson praises the action agenda developed by the steering committee in its entirety. "I think the committee has created a very bold strategy agenda that will not only move, but really will propel Kentucky forward over the next several years."
She said the plan will, in a very concrete way, help to create needed broadband access and infrastructure throughout Kentucky. More importantly, it will encourage Kentucky citizens and businesses to fully utilize the Internet to participate in better government and increased prosperity in the region.
Monitor power to reach higher levels of efficiency
Thursday, Jun 27th 2013
In recent years there has been an increased focus on efficiency in the data center, and this trend is not likely to slow down anytime soon, supporting the idea that IT managers must monitor power in order to operate at an optimal level. ZDNet recently reported that Singapore data centers are considerably power inefficient, and offered up some insight from Gay Chi Sen, director of data center infrastructure management solutions for Schneider Electric Japan and Greater China.
Sen explained that areas like Singapore with warmer climates generally tend to use more power to cool down their data centers. In order to reach higher levels of efficiency, data center managers can utilize temperature monitoring tools to focus efforts on reaching more efficient cooling first.
Patrick Donovan, senior research analyst with Schneider Electric's Data Center Science Center, explored the commonly overlooked problem of dynamic power variations in IT equipment in network and server rooms as well as data centers as a whole.
Donovan explained that the total electrical power consumed by IT equipment in data centers was historically relatively stable, but new server processor designs include energy management capabilities that can result in substantial power consumption fluctuations. Roughly 20 years ago, server power draw was largely independent of computational load, with the few fluctuations caused mainly by disk drive spin-up and fans, Donovan noted. However, today's processing equipment has additional power management capabilities such as the ability to change clock frequency, adjust voltage magnitude and move virtual loads. So while power variation hovered around 5 percent two decades ago, today's servers can have a power variation of anywhere from 45 to 106 percent.
Monitoring to fix inefficiencies
According to Donovan, this results in numerous issues, including overheating, branch circuit overload and loss of redundancy. However, with tools that allow IT managers to monitor power and server temperature, an organization can make significant strides toward reducing fluctuations in consumption and reaching efficiency.
"Typically, servers operate at light computational loads, with actual power draw amounting to less than the server's potential maximum power draw capabilities," Donovan wrote. "However, because many data center and network managers can be unaware of this power use discrepancy, they often plug more servers than are necessary into a single branch circuit. This in turn creates the potential for possible circuit overloads, as the branch circuit rating can be exceeded by the total maximum server power consumption."
Donovan also explained that when servers are simultaneously subjected to heavy loads, circuit overloads will occur. When branch circuits are overloaded, the entire circuit can be tripped and power shut off to computing equipment. Furthermore, since this happens as a result of heavy loads, power outages at this time can have significant negative effects on business.
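As a back-of-the-envelope illustration of that point, the short sketch below compares a branch circuit provisioned on typical per-server draw against the worst case where every server peaks at once. All of the wattage figures are invented for the example; real nameplate ratings, circuit capacities and derating rules vary.

```java
// Illustrative only: shows why provisioning a branch circuit by "typical"
// server draw can overload it when every server peaks at the same time.
public class BranchCircuitCheck {
    public static void main(String[] args) {
        double circuitCapacityWatts = 3680;   // hypothetical 16 A circuit at 230 V
        double typicalDrawWatts     = 250;    // assumed light-load draw per server
        double maxDrawWatts         = 500;    // assumed nameplate/peak draw per server

        // Capacity planning based on typical draw looks safe on paper...
        int serversByTypical = (int) (circuitCapacityWatts / typicalDrawWatts); // 14
        // ...but if all of those servers hit peak load simultaneously:
        double worstCaseWatts = serversByTypical * maxDrawWatts;                // 7000 W

        System.out.printf("Servers provisioned by typical draw: %d%n", serversByTypical);
        System.out.printf("Worst-case combined draw: %.0f W (circuit rated %.0f W)%n",
                worstCaseWatts, circuitCapacityWatts);
        System.out.println(worstCaseWatts > circuitCapacityWatts
                ? "Branch circuit can be overloaded under peak load."
                : "Within rating.");
    }
}
```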
Temperature monitoring to counter overheating
One of the other problems Donovan discussed was overheating. Servers in general consume power and release it as heat, but when there are large variances in consumption because of workloads, the heat released from IT equipment also rises.
"As such, sudden fluctuations in power consumption can cause dangerous increases in heat production, creating heat spots," Donovan wrote. "While data center cooling systems are put in place to regulate overall temperature, they may not be designed to handle specific, localized hot spots caused by increases in power consumption. As temperature rises, equipment is likely to shut down or act abnormally."
A server room monitor is one tool that could prove effective in resolving the problem of overheating. Other than focusing on cooling power consumption, however, businesses can take on tasks like removing unutilized servers from the data center and monitoring to identify available capacity that is not yet being fully utilized.
With the UK set for the worst drought since 1976, the Science and Technology Committee is recommending the Met Office develop a 10-year plan for supercomputing capacity.
In 2009, the Met Office signed a five-year contract to deliver supercomputing capacity to improve weather forecasting. At the time, it was the second fastest supercomputer in the UK, and was expected to attain a 30-fold increase in speed to 1 petaflop during 2011.
In the report, the Met Office said: “Enhanced supercomputing power would probably have allowed more confident warnings, better indications of possible peak rainfall intensities, and longer lead time information on the potential risk to be issued.”
Illustrating the need for greater supercomputer capacity, the report noted that each day the Met Office receives and uses approximately half a million observations. This includes data on temperature, pressure, wind speed, wind direction and humidity. This data is collected and used to build complex mathematical models for forecasting.
“Forecasting involves making billions of mathematical calculations; therefore powerful supercomputers are required to carry out these calculations as quickly as possible,” said the Science and Technology Committee.
The report stated that it would be possible to deliver more accurate forecasts if greater computer capacity were available. "It is of great concern to us that these scientific advances in weather forecasting and the associated public benefits (particularly in regard to severe weather warnings) are ready and waiting, but are being held back by insufficient supercomputing capacity. We consider that a step-change in supercomputing capacity is required in the UK and the government should finalise the business case for further investment in supercomputing capacity soon.”
Drawing evidence from government chief scientific adviser, professor Sir John Beddington, the report stated: “A step-change increase in supercomputing capacity [...] would be required to most effectively meet the government’s key evidence and advice needs.” However, this improvement in supercomputing capacity would require a four-fold increase in cost.
While commercial organisations are turning to grid computing in the cloud to deliver low-cost supercomputing capacity on-demand, the Met Office stated that cloud computing was not sufficient for its needs. However, Beddington stated that “in a limited number of instances, grid or network computing may offer a viable and cost-effective approach, such as for low-resolution ensembles”.
The Science and Technology Committee is recommending that the Met Office work with the research councils and other partners in the UK and abroad to develop a 10-year strategy for supercomputing resources in weather and climate.
“This should include an assessment of which areas in weather and climate research and forecasting might benefit from low-cost options to enhance supercomputing capacity,” the report stated. | <urn:uuid:21580008-80d2-4e9c-b942-fae5d8c2f31d> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240118002/Met-Office-needs-10-year-supercomputer-plan-but-no-clouds | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00162-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951341 | 606 | 3.0625 | 3 |
Google Classroom Launches to Help Teachers Focus on Teaching
Google Classroom is available to teachers to help them create and collect assignments and improve class communications so they can spend more time with students.

Google Classroom, a free, new tool in the Google Apps for Education suite, has been released by Google after a three-month preview period to help teachers streamline many administrative tasks so they can spend more time teaching subject matter to their students. The Classroom feature was announced by Zach Yeskel, Google's Classroom product manager and a former high school math teacher, in an Aug. 12 post on the Google Enterprise Blog. "When we introduced Classroom back in May [in limited preview mode], we asked educators to give it a try," wrote Yeskel. "The response was exciting—more than 100,000 educators from more than 45 countries signed up for a preview. Today, we're starting to open Classroom to all Google Apps for Education users, helping teachers spend more time teaching and less time shuffling papers." The idea of Classroom is to help teachers create and organize assignments quickly, provide feedback efficiently and communicate with their classes easily with Web-based educational tools that are simple to use.
Through Google technology services, including Google Docs, Drive and Gmail, teachers can use Classroom to create and collect assignments from students without using paper, and quickly monitor which students have or have not turned in assignments. They can also provide instant feedback on student work using Classroom. | <urn:uuid:ec2a739d-0040-4fa4-b5a4-f2fe1698f241> | CC-MAIN-2017-04 | http://www.eweek.com/cloud/google-classroom-launches-to-help-teachers-focus-on-teaching.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00190-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967598 | 300 | 2.71875 | 3 |
VANCOUVER, BRITISH COLUMBIA--(Marketwired - April 22, 2014) - As the world celebrates Earth Day, a new study released today by the Fraser Institute, an independent, non-partisan Canadian public policy think-tank, finds that higher levels of economic freedom lead to cleaner air.
"The level of economic freedom in a country affects the ability of citizens to produce and sell in the marketplace, and own private property. It's a simple concept that drives prosperity and ultimately benefits the environment," said Joel Wood, associate director of the Centre for Environmental Studies at the Fraser Institute and co-author of Economic Freedom and Air Quality.
The study examines the relationship between economic freedom and concentrations of fine particulate matter (PM10) air pollution in more than 100 countries (from 2000 to 2010) using the Fraser Institute's Economic Freedom of the World Index, which measures economic freedom worldwide.
In 2010, for example, the 20 highest ranked countries (including Canada) have PM10 levels almost 40 per cent lower than the 20 lowest ranked countries.
And for a one point increase in the economic freedom index (when controlling for other factors such as national income, political institutions, and other country-specific characteristics), the study finds a 7.15 per cent reduction in PM10 concentrations, on average.
"Anyone interested in the environment, be they policy makers, activists or ordinary citizens, should understand that people who live in the world's freest countries generally breathe cleaner air than people in countries with less economic freedom," Wood said.
So how does economic freedom improve air quality?
By ensuring private property rights, rule of law, and limiting the size of government.
"While property rights incentivize people to protect their investments, they also provide protection from polluters. If you own land or a home that's being damaged by pollution, you're better able to negotiate with the polluter, within a contract or in a courtroom, and mitigate or eliminate the effects of that pollution," Wood said.
However, while private property rights can help keep polluters in check, government regulation can alter incentives and hinder the ability of citizens to act.
For example, an overly large government may spawn bureaucratic inefficiency, heavy influence from special interest groups, and a prevalence of state-owned enterprises that are immune to citizen action.
Ukraine, for example, home to a current conflict based in part on individual rights and freedoms, saw the so-called Orange Revolution of 2004 prompt political and economic liberalization. During the four-year post-reform period, Ukraine's PM10 levels dropped by 41 per cent.
Moreover, free trade, a common trait of countries with high levels of economic freedom, allows new cleaner technologies to cross borders and benefit the environment.
"Economic freedom, founded on individual property rights, rule of law, and free markets, is vital to sustainable development in Canada and around the world," Wood said.
The Fraser Institute is an independent Canadian public policy research and educational organization with offices in Vancouver, Calgary, Toronto, and Montreal and ties to a global network of 86 think-tanks. Its mission is to measure, study, and communicate the impact of competitive markets and government intervention on the welfare of individuals. To protect the Institute's independence, it does not accept grants from governments or contracts for research. | <urn:uuid:841ac411-ec49-4078-b6ea-d6da02e6c03a> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/fraser-institute-air-pollution-declines-as-economic-freedom-rises-1901426.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00190-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929717 | 691 | 2.8125 | 3 |
Most people understand that good password security is the first and most effective strategy for protecting sensitive systems and data, yet systems are regularly compromised via breached user accounts.
It is fairly common knowledge that one should use strong passwords that are not easily "guessed" - such as by employing passwords that are 12 to 16 characters in length that use both upper and lower case letters, and which include non-alphanumeric characters.
But sophisticated hackers are not always simply attempting to "guess" passwords based on information lifted from social networks and the like, but instead are using various methods to undermine what most would think to be a secure password choice.
PC Pro's Davey Winder posted a nice little writeup on the top ten methods hackers use to crack passwords.
Winder's top ten, each with a brief excerpt describing the technique, are as follows (a short code sketch of the hash-and-compare idea behind several of these methods appears after the list):
1. Dictionary attack
"This uses a simple file containing words that can, surprise surprise, be found in a dictionary. In other words, if you will excuse the pun, this attack uses exactly the kind of words that many people use as their password..."
2. Brute force attack
"This method is similar to the dictionary attack but with the added bonus, for the hacker, of being able to detect non-dictionary words by working through all possible alpha-numeric combinations from aaa1 to zzz10..."
3. Rainbow table attack
"A rainbow table is a list of pre-computed hashes - the numerical value of an encrypted password, used by most systems today - and that’s the hashes of all possible password combinations for any given hashing algorithm mind. The time it takes to crack a password using a rainbow table is reduced to the time it takes to look it up in the list..."
"There's an easy way to hack: ask the user for his or her password. A phishing email leads the unsuspecting reader to a faked online banking, payment or other site in order to login and put right some terrible problem with their security..."
5. Social engineering
"A favourite of the social engineer is to telephone an office posing as an IT security tech guy and simply ask for the network access password. You’d be amazed how often this works..."
"A key logger or screen scraper can be installed by malware which records everything you type or takes screen shots during a login process, and then forwards a copy of this file to hacker central..."
7. Offline cracking
"Often the target in question has been compromised via an hack on a third party, which then provides access to the system servers and those all-important user password hash files. The password cracker can then take as long as they need to try and crack the code without alerting the target system or individual user..."
8. Shoulder surfing
"The service personnel ‘uniform’ provides a kind of free pass to wander around unhindered, and make note of passwords being entered by genuine members of staff. It also provides an excellent opportunity to eyeball all those post-it notes stuck to the front of LCD screens with logins scribbled upon them..."
"Savvy hackers have realised that many corporate passwords are made up of words that are connected to the business itself. Studying corporate literature, website sales material and even the websites of competitors and listed customers can provide the ammunition to build a custom word list to use in a brute force attack..."
"The password crackers best friend, of course, is the predictability of the user. Unless a truly random password has been created using software dedicated to the task, a user generated ‘random’ password is unlikely to be anything of the sort..."
For the complete description of Winder's top ten password cracking methods, refer to the full article at PC Pro.
Valdes S.A.C.,Federal University of Uberlandia |
Vieira L.G.,Federal University of Uberlandia |
Ferreira C.H.,Federal University of Uberlandia |
Mendonca J.D.S.,Federal University of Uberlandia |
And 3 more authors.
Zoological Science | Year: 2015
Eggshell evaluation may serve as an indicator of the effect of substances released in the environment, which may change eggshell shape, size, structure, and/or chemical composition. Additionally, exposure may interfere with hatching rates in contaminated eggs. The objective of this study was to better understand how exposure to the insecticide methyl parathion interferes with chemical changes in eggshells of Podocnemis expansa throughout their artificial incubation, as well as with egg hatchability. A total of 343 P. expansa eggs were collected in a natural reproduction area for the species. These eggs were transferred to and artificially incubated in the Wild Animal Teaching and Research Laboratory at Universidade Federal de Uberlândia. On the first day of artificial incubation, 0, 35, 350, and 3500 ppb of methyl parathion were incorporated to the substrate. Eggs were collected every three days for chemical analysis of eggshells. Hatchability was evaluated as the number of hatchlings in each treatment, for the eggs that were not used in the chemical analysis. Student's T-test was used for data on eggshell chemical composition, and the Binomial Test for Two Proportions was used in the hatchability analysis, at a 5% significance level. It was observed that the incorporation of methyl parathion to the substrate on the first day of artificial incubation of P. expansa eggs reduced the levels of total fat in the shells throughout their incubation, besides reducing egg hatchability. © 2015 Zoological Society of Japan. Source | <urn:uuid:75dc6e4f-b5c7-4aa4-a7e7-7bc25d2b9455> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/centro-universitario-of-patos-of-minas-279/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00520-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957413 | 395 | 2.671875 | 3 |
Robocode is a Java teaching game which help developers learn the programming language as they create Java 'robots,' which are actually Java objects that battle onscreen.
"The bots are Java classes," explains Dennis McFarlin, a systems supervisor on IBM?s alphaWorks developer Web site, "which appear as if they're little tanks. They look around to see if there are any other Java classes, or bots, in the same space, and when they find another opponent, they shoot bullets at it and try to disable or destroy it."
More than 500 robots have been battling since the competition kicked off at IBM developerWorks Live! Conference in May. IBM divided the competition into three different levels, based on the skill of the programmer. Eight finalists in each level faced off at Linuxworld. Dutch developer Enno Peters' "Yngwie" 'bot claimed victory in the advanced category, while bots from programmers in Germany and Singapore won the intermediate and beginning levels.
The competition was a good way to "further my Java knowledge," says David Karlov, a computer science student at the University of Technology, in Sydney, Australia. It was also "a lot of fun," he added. Karlov's robot, named Joker, finished third in the intermediate category.
Robocode is one of IBM alphaWorks' most successful downloads, with more than 155,000 copies downloaded since it was first posted. Recognizing Robocode's potential as a way to teach Java, IBM has begun distributing academic licenses for the program. | <urn:uuid:e6130045-f0cc-474e-bbd3-b4c4c81dbf5c> | CC-MAIN-2017-04 | http://www.cioupdate.com/news/article.php/1447391/For-Java-Robots-Its-a-Battle-to-the-Death.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00155-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965519 | 313 | 2.734375 | 3 |
Below I have outlined a series of categories that describe how you can increase the security of your computer and help reduce the chance of being infected again in the future.
Do not use P2P programs
Peer-to-peer or file-sharing programs (such as uTorrent, LimeWire and BitTorrent) are probably the primary route of infection nowadays. These programs allow file sharing between users as the names suggest. It is almost impossible to know whether the file you're downloading through P2P programs is safe.
It is therefore possible to be infected by downloading infected files via peer-to-peer programs and so I recommend that you do not use these programs. Should you wish to use them, they must be used with extreme care. Some further reading on this subject, along with included links, are as follows: "File-Sharing, otherwise known as Peer To Peer" and "Risks of File-Sharing Technology."
In addition, P2P programs facilitate cyber crime and help distribute pirated software, movies and other illegal material.
Practice Safe Internet
Another one of the main reasons people get infected in the first place is that they are not practicing Safe Internet. You practice Safe Internet when you educate yourself on how to properly use the Internet through the use of security tools and good practice. Knowing how you can get infected and what types of files and sites to avoid will be the most crucial step in keeping your computer malware free. The reality is that the majority of people who are infected with malware are ones who click on things they shouldn't be clicking on. Whether these things are files or sites it doesn't really matter. If something is out to get you, and you click on it, it most likely will.
Below are a list of simple precautions to take to keep your computer clean and running securely:
- If you receive an attachment from someone you do not know, DO NOT OPEN IT! Simple as that. Opening attachments from people you do not know is a very common method for viruses or worms to infect your computer.
- If you receive an attachment and it ends with a .exe, .com, .bat, or .pif do not open the attachment unless you know for a fact that it is clean. For the casual computer user, you will almost never receive a valid attachment of this type.
- If you receive an attachment from someone you know, and it looks suspicious, then it probably is. The email could be from someone you know who is themselves infected with malware which is trying to infect everyone in their address book. A key thing to look out for here is: does the email sound as though it’s from the person you know? Often, the email may simply have a web link or a “Run this file to make your PC run fast” message in it.
- If you are browsing the Internet and a popup appears saying that you are infected, ignore it!. These are, as far as I am concerned, scams that are being used to scare you into purchasing a piece of software. For an example of these types of pop-ups, or Foistware, you should read this article: Foistware, And how to avoid it.
There are also programs that disguise themselves as Anti-Spyware or security products but are instead scams. Removal instructions for a lot of these "rogues" can be found here.
- Another tactic to fool you on the web is when a site displays a popup that looks like a normal Windows message or alert. When you click on it, though, it instead brings you to another site that is trying to push a product on you, or will download a file to your PC without your knowledge. You can check to see if it's a real alert by right-clicking on the window. If there is a menu that comes up saying Add to Favorites... you know it's a fake. DO NOT click on these windows; instead close them by finding the open window on your Taskbar, then right-click and choose Close.
- Do not visit pornographic websites. I know this may bother some of you, but the fact is that a large amount of malware is pushed through these types of sites. I am not saying all adult sites do this, but a lot do, as this can often form part of their funding.
- When using an Instant Messaging program be cautious about clicking on links people send to you. It is not uncommon for infections to send a message to everyone in the infected person's contact list that contains a link to an infection. Instead when you receive a message that contains a link you should message back to the person asking if it is legit.
- Stay away from Warez and Crack sites! As with Peer-2-Peer programs, in addition to the obvious copyright issues, the downloads from these sites are typically overrun with infections.
- Be careful of what you download off of web sites and Peer-2-Peer networks. Some sites disguise malware as legitimate software to trick you into installing them and Peer-2-Peer networks are crawling with it. If you want to download files from a site, and are not sure if they are legitimate, you can use tools such as BitDefender Traffic Light, Norton Safe Web, or McAfee SiteAdvisor to look up info on the site and stay protected against malicious sites. Please be sure to only choose and install one of those tool bars.
- DO NOT INSTALL any software without first reading the End User License Agreement, otherwise known as the EULA. A tactic that some developers use is to offer their software for free, but have spyware and other programs you do not want bundled with it. This is where they make their money. By reading the agreement there is a good chance you can spot this and not install the software.
Sometimes even legitimate programs will try to bundle extra, unwanted, software with the program you want - this is done to raise money for the program. Be sure to untick any boxes which may indicate that other programs will be downloaded.
Microsoft continually releases security and stability updates for its supported operating systems and you should always apply these to help keep your PC secure.
- Windows XP users
You should visit Windows Update to check for the latest updates to your system. The latest service pack (SP3) can be obtained directly from Microsoft here.
- Windows Vista users
You should run the Windows Update program from your start menu to access the latest updates to your operating system (information can be found here). The latest service pack (SP2) can be obtained directly from Microsoft here.
- Windows 7 users
You should run the Windows Update program from your start menu to access the latest updates to your operating system (information can be found here). The latest service pack (SP1) can be obtained directly from Microsoft here
Most modern browsers have come on in leaps and bounds with their inbuilt, default security. The best way to keep your browser secure nowadays is simply to keep it up-to-date.
The latest versions of the three common browsers can be found below:
It is very important that your computer has an up-to-date anti-virus software on it which has a real-time agent running. This alone can save you a lot of trouble with malware in the future.
See this link for a listing of some online and stand-alone antivirus programs: Virus, Spyware, and Malware Protection and Removal Resources. A couple of free antivirus programs you may be interested in are Microsoft Security Essentials and Avast.
It is imperative that you update your Antivirus software at least once a week (even more if you wish). If you do not update your antivirus software then it will not be able to catch any of the new variants that may come out. If you use a commercial antivirus program you must make sure you keep renewing your subscription. Otherwise, once your subscription runs out, you may not be able to update the program's virus definitions.
Use a Firewall
I cannot stress enough how important it is that you use a Firewall on your computer. Without a firewall your computer is susceptible to being hacked and taken over. Simply using a Firewall in its default configuration can lower your risk greatly.
All versions of Windows starting from XP have an in-built firewall. With Windows XP this firewall will protect you from incoming traffic (i.e. hackers). Starting with Windows Vista, the firewall was beefed up to also protect you against outgoing traffic (i.e. malicious programs installed on your machine should be blocked from sending data, such as your bank details and passwords, out).
In addition, if you connect to the internet via a router, this will normally have a firewall in-built.
Some people will recommend installing a different firewall (instead of the Windows’ built one), this is personal choice, but the message is to definitely have one! For a tutorial on Firewalls and a listing of some available ones see this link: Understanding and Using Firewalls
Install an Anti-Malware program
Recommended, and free, Anti-Malware programs are Malwarebytes Anti-Malware, Emsisoft Anti-Malware, Zemana, and HitmanPro.
You should regularly (perhaps once a week) scan your computer with an Anti-Malware program just as you would with an antivirus software.
Make sure your applications have all of their updates
It is also possible for other programs on your computer to have security vulnerabilities that can allow malware to infect you. Therefore, it is very important to check for the latest versions of commonly installed applications that are regularly patched to fix vulnerabilities (such as Adobe Reader and Java). You can check these by visiting Secunia Software Inspector.
Follow this list and your potential for being infected again will reduce dramatically. | <urn:uuid:1638d73a-0c43-431e-a1ef-138264722f69> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/forums/t/2520/how-did-i-get-infected/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00183-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94013 | 2,027 | 3.171875 | 3 |
CARD NUMBER FORMATS
When a contactless or prox card is presented to a reader, the reader captures the number that is programmed into that card over a radio frequency (RF) interface. The reader then sends that number to the system that grants access to doors, networks, or applications on a PC. The various shapes that the card number might have are called formats.
FACILITY CODE AND CARD NUMBERS
Cards are programmed with 0s and 1s, which are often arranged into sections – the facility code or prefix which is the same for each card; and the ID number which is different for each card. The access control system looks first to see that the facility code is correct for that facility, and then it checks the ID number of the card for the requested permission. Sometimes a format is designed without a facility code, in which case each card has a longer ID number.
The most common card format is the 26-bit open format, with available facility codes between 0 and 255, and ID numbers between 0 and 65,535. Other common formats are 34-bits, 35-bits (often called Corporate 1000) and 37-bits.
UNIQUE CARD NUMBERS
It is very important that every card enrolled in a system be recognized by that system as unique. If a particular format cannot meet the requirements of a large institution, it will be difficult to avoid the collision of ID numbers in the system. In the case of the 26-bit format, for each facility code there are only 65,535 unique ID numbers. Upon exhausting all the ID numbers for one facility code, it is possible to create another facility code and start over at 0 with new ID numbers. However, some systems are configured to only look at the ID numbers, resulting in ID number collisions. Two cards from different facilities that happen to share the same ID number, for example, would be indistinguishable to such a system, as the sketch below illustrates.
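To make this concrete, the following sketch unpacks a raw 26-bit card read into its facility code and ID number and then shows how two different cards can collide when only the ID is compared. It assumes the common layout for the 26-bit open format (a leading and a trailing parity bit around an 8-bit facility code and a 16-bit ID number), skips parity checking for brevity, and uses made-up card values.

```java
public class Wiegand26Decoder {

    // Assumed layout: with the first (parity) bit as the most significant of the
    // 26 bits, the next 8 bits are the facility code, the following 16 bits are
    // the ID number, and the last bit is the trailing parity bit.
    static int facilityCode(long raw26) {
        return (int) ((raw26 >> 17) & 0xFF);    // 8-bit facility code, 0-255
    }

    static int idNumber(long raw26) {
        return (int) ((raw26 >> 1) & 0xFFFF);   // 16-bit ID number, 0-65535
    }

    public static void main(String[] args) {
        // Two hypothetical raw reads: different facility codes, same 16-bit ID.
        long cardA = (10L << 17) | (4321L << 1);   // facility 10, ID 4321
        long cardB = (22L << 17) | (4321L << 1);   // facility 22, ID 4321

        System.out.printf("Card A: facility %d, ID %d%n", facilityCode(cardA), idNumber(cardA));
        System.out.printf("Card B: facility %d, ID %d%n", facilityCode(cardB), idNumber(cardB));

        // A system that ignores the facility code treats these as the same credential.
        System.out.println("Collision if only IDs are compared: "
                + (idNumber(cardA) == idNumber(cardB)));
    }
}
```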
CARDS AND MORE THAN ONE SYSTEM
Many institutions have a local access control system which manages all the prox card numbers locally. However, some institutions use a Single Sign-On application such as Imprivata, which is managed centrally for several institutions. In this configuration, a prox card number which is unique to the local access control system could collide with other prox card numbers in the enterprise SSO application, especially if the latter were only looking at the ID numbers and not the facility codes.
As organizations grow, their card formats must grow with them, in order to provide enough unique ID numbers. Formats such as Corporate 1000, which has over 1,000,000 ID numbers per facility code, and a 32-bit format with 1000 facility codes and over 2 million ID numbers are available for programming into all types of contactless cards. ColorID has helped thousands of institutions select formats and configure their various systems and readers to read those formats.
About ColorID, LLC
Every year, ColorID assists more than 1000 colleges and universities and their project managers personally oversee 700 custom projects each year, including many small and large recarding projects. ColorID offers best-in-class products and solutions, including: contactless, smart and financial cards from every major manufacturer, multiple ID printer platforms; transaction and point-of-sale software and hardware, a variety of handheld devices for identification and tracking applications and biometrics solutions, including fingerprint and iris readers. The company’s manufacturing partners include: Iris ID, HID, Fargo, Datacard, CardSmith, Gemalto, Zebra, NiSCA, Evolis, Allegion, Aptiq, Magicard, Brady People ID, Integrated Biometrics, Oberthur, NBS, Vision Database Systems and many others.
Contact ColorID at 704-987-2238 or toll free in Canada and the US at 888-682-6567. Visit ColorID on the web at: www.colorid.com or email ColorID at firstname.lastname@example.org.
20480-F Chartwell Center Dr.
Cornelius, NC 28031
ColorID provides the highest quality products with superb service at an exceptional value. We want your experience with ColorID to be a positive one - from the ease of ordering products - to the quality of our products - to our follow up and our attention to detail.
Sometimes the buzz around a new technology is SO strong that it’s hard to see the trees or the forest. Take big data for example. Crack open the NY Times, Computer World, or whatever technology magazine you’d like and you’ll often read about the “Next Big Technology Wave” or how there’s a shortage of big data professionals. And in the end you have to ask yourself, what exactly is big data? Here’s the long and short of it from my viewpoint.
Big data is a bit of a buzzword and technology meme that describes a scenario where an organization has SO much data, in a LOT of different formats and it’s being generated so quickly, they can’t keep up with it. Or certainly they can’t derive value out of it. Because while many companies may store the information, most aren’t doing any real analytics on it because there is just TOO much. Enter big data which was made possible due to cheap storage, cheap compute cycles (distributed computing-cloud), and software that can process huge amounts of information.
While big data is not solely focused on unstructured data, that category of data does pose the biggest hurdles and challenge. Relational databases are built from the ground up to store corporate information in neat rows and columns for fast, optimized queries. But as the world evolved into Web 2.0, there is so much user generated data based upon internet activities (Facebook, Yahoo, Google) that it’s been nearly impossible to keep up with, never mind use. If you had someone analyze all of the search queries you’ve made on Google or Bing, what would they know about you? My guess is the answer would be “plenty”!
So you’ve got cheap storage and computing to hold all this wonderful information, but how do you analyze it and use it? Technology companies have tried to uncover this holy grail of information processing, and there are several proprietary offerings that fall into the big data category: EMC Greenplum, Oracle’s Big Data Appliance, SAP Hana, and more. But as you can guess, most vendors are going to be biased towards their technology, and given the complexity of these potential solutions, they are not inexpensive.
Enter Apache Hadoop! An open-source project housed at the Apache Foundation and based upon technical papers written by Google, Hadoop founder Doug Cutting (currently an Architect at Cloudera) leveraged his search background and expertise to create a big data framework focused on storing, processing, and analyzing large streams of data. The two key technologies within Apache Hadoop are the Hadoop Distributed File System (scalable, high availability, distributed data storage) and MapReduce (application framework for parallel processing of data). As you can guess, given its origins as a non-vendor technology, Apache Hadoop had an agnostic advantage in solving a problem that spans multiple technologies and vendors. And given the fact that open source software is freely distributed and used, the only real hurdle to using Apache Hadoop is LEARNING how to use the technology. So Apache Hadoop became a leader in the big data space due to its agnosticism and slowly began to be supported by more traditional, proprietary software vendors. Microsoft, IBM, and EMC all stepped up to work with and integrate Hadoop technology into their offerings.
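The canonical illustration of the MapReduce half of Hadoop is a word count job: the map step turns raw text into (word, 1) pairs, and the reduce step sums the counts for each word across the cluster. Below is a condensed version of that classic example written against the org.apache.hadoop.mapreduce API; the input and output paths are supplied on the command line and are purely illustrative.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map step: split each line of input into words and emit (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce step: sum the counts emitted for each distinct word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory must not exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```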
Now that Hadoop is embraced by many across the technology landscape, where do you, as a business customer, go to learn how to Hadoop it? While the software is available through the Apache Software Foundation, commercial distributions of Apache Hadoop sprang up from the fertile big data earth. Cloudera, Hortonworks, and MapR are three examples of Apache Hadoop vendors that stepped up to act as both contributors and supporters to the open-source project. They also packaged a commercial distribution providing support, tools, and training for Apache Hadoop to make it more usable and stable for corporate environments; a similar situation to Red Hat leading the adoption of enterprise-ready Linux in companies today.
So there you go, the Apache Hadoop tree in the middle of a huge big data forest! In the end, big data is really a super-set of technologies designed to solve a business problem: far too much data and information, too little knowledge. Add in a little bit of distributed storage, a little bit of parallel processing, a robust open-source application and as usual in the IT world, it’s pretty complicated. And big data technology is evolving quickly as we speak. But the reality is that the problems that are being solved by big data won’t be going away any time soon. I can’t imagine any of us spending less time on the internet, moving forward. Just hold on tight because this big data, Apache Hadoop ride is just getting started… | <urn:uuid:52c26684-1164-4766-b61d-a9a704202cad> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2012/08/02/beyond-the-buzz-big-data-and-apache-hadoop/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00513-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94077 | 1,026 | 2.6875 | 3 |
SQL injection is a technique used to take advantage of non-validated input vulnerabilities to pass SQL commands through a Web application for execution by a backend database. Attackers take advantage of the fact that programmers often chain together SQL commands with user-provided parameters, and can therefore embed SQL commands inside these parameters. The result is that the attacker can execute arbitrary SQL queries and/or commands on the backend database server through the Web application.
Databases are fundamental components of Web applications. Databases enable Web applications to store data, preferences and content elements. Using SQL, Web applications interact with databases to dynamically build customized data views for each user. A common example is a Web application that manages products. In one of the Web application's dynamic pages (such as ASP), users are able to enter a product identifier and view the product name and description. The request sent to the database to retrieve the product's name and description is implemented by the following SQL statement.
SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = <product number entered by the user>
Typically, Web applications use string queries, where the string contains both the query itself and its parameters. The string is built using server-side script languages such as ASP, JSP and CGI, and is then sent to the database server as a single SQL statement. The following example demonstrates an ASP code that generates a SQL query.
sql_query= " SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = " & Request.QueryString("ProductID")
The call Request.QueryString("ProductID") extracts the value of the Web form variable ProductID so that it can be appended as the SELECT condition.
When a user enters the following URL:
http://www.mydomain.com/products/products.asp?productid=123
The corresponding SQL query is executed:
SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = 123
An attacker may abuse the fact that the ProductID parameter is passed to the database without sufficient validation. The attacker can manipulate the parameter's value to build malicious SQL statements. For example, setting the value "123 OR 1=1" to the ProductID variable results in the following URL:
http://www.mydomain.com/products/products.asp?productid=123 or 1=1
The corresponding SQL Statement is:
SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = 123 OR 1=1
This condition would always be true and all ProductName and ProductDescription pairs are returned. The attacker can manipulate the application even further by inserting malicious commands. For example, an attacker can request the following URL:
http://www.mydomain.com/products/products.asp?productid=123; DROP TABLE Products
In this example the semicolon is used to pass the database server multiple statements in a single execution. The second statement is "DROP TABLE Products" which causes SQL Server to delete the entire Products table.
An attacker may use SQL injection to retrieve data from other tables as well. This can be done using the SQL UNION SELECT statement. The UNION SELECT statement allows the chaining of two separate SQL SELECT queries that have nothing in common. For example, consider the following SQL query:
SELECT ProductName, ProductDescription FROM Products WHERE ProductID = '123' UNION SELECT Username, Password FROM Users;
The result of this query is a table with two columns, containing the results of the first and second queries, respectively. An attacker may use this type of SQL injection by requesting the following URL:
http://www.mydomain.com/products/products.asp?productid=123 UNION SELECT Username, Password FROM Users
The security model used by many Web applications assumes that an SQL query is a trusted command. This enables attackers to exploit SQL queries to circumvent access controls, authentication and authorization checks. In some instances, SQL queries may allow access to host operating system level commands. This can be done using stored procedures. Stored procedures are SQL procedures usually bundled with the database server. For example, the extended stored procedure xp_cmdshell executes operating system commands in the context of a Microsoft SQL Server. Using the same example, the attacker can set the value of ProductID to be "123;EXEC master..xp_cmdshell dir--", which returns the list of files in the current directory of the SQL Server process.
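The root cause in every example above is that user input is concatenated directly into the SQL string. The standard application-level defence is to use parameterized queries (prepared statements), which keep the query text and the data separate so the database never interprets the input as SQL. Here is a brief sketch of the same product lookup using JDBC; the connection string, credentials and class name are illustrative and not taken from the article.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ProductLookup {
    public static void main(String[] args) throws Exception {
        // Hostile input is treated purely as data, not as SQL.
        String userInput = "123; DROP TABLE Products";

        // Illustrative JDBC URL and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=Shop", "appUser", "secret")) {

            // The '?' placeholder is bound separately from the query text,
            // so the injected text can no longer terminate or extend the statement.
            String sql = "SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, userInput);   // a real application would also validate the value
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("ProductName")
                                + ": " + rs.getString("ProductDescription"));
                    }
                }
            }
        }
    }
}
```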
The most common way of detecting SQL injection attacks is by looking for SQL signatures in the incoming HTTP stream. For example, looking for SQL commands such as UNION, SELECT or xp_. The problem with this approach is the very high rate of false positives. Most SQL commands are legitimate words that could normally appear in the incoming HTTP stream. This will eventually cause the user to either disable or ignore any SQL alert reported. In order to overcome this problem to some extent, the product must learn where it should and shouldn't expect SQL signatures to appear. The ability to discern parameter values from the entire HTTP request and the ability to handle various encoding scenarios are a must in this case.
Imperva SecureSphere does much more than that. It observes the SQL communication and builds a profile consisting of all allowed SQL queries. Whenever an SQL injection attack occurs, SecureSphere can detect the unauthorized query sent to the database. SecureSphere can also correlate anomalies on the SQL stream with anomalies on the HTTP stream to accurately detect SQL injection attacks.
Another important capability that SecureSphere introduces is the ability to monitor a user's activity over time and to correlate various anomalies generated by the same user. For example, the occurrence of a certain SQL signature in a parameter value might not be enough to alert for an SQL injection attack, but the same signature in correlation with error responses, abnormal parameter sizes, or even other signatures may indicate an attempted SQL injection attack.
As technology usage continues to grow in K-12 schools around the world, now is the time for educators and IT to examine ways to provide students with the best educational experience possible. In this white paper, you'll gain a better understanding of the crucial role iPads and apps play in providing personalized learning and how IT can give that experience to each and every student.
- iPads create new opportunities for customized learning
- Mobile device management (MDM) enhances technology initiatives
- One school district transformed their classrooms and IT departments with iPad
Provide the personalized learning experience students crave and educate yourself on the power of iPad, educational apps, and Apple MDM.
Parks M.,Griffith University |
Subramanian S.,Griffith University |
Baroni C.,University of Pisa |
Salvatore M.C.,University of Pisa |
And 4 more authors.
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2015
Recently, the study of ancient DNA (aDNA) has been greatly enhanced by the development of second-generation DNA sequencing technologies and targeted enrichment strategies. These developments have allowed the recovery of several complete ancient genomes, a result that would have been considered virtually impossible only a decade ago. Prior to these developments, aDNA research was largely focused on the recovery of short DNA sequences and their use in the study of phylogenetic relationships, molecular rates, species identification and population structure. However, it is now possible to sequence a large number of modern and ancient complete genomes from a single species and thereby study the genomic patterns of evolutionary change over time. Such a study would herald the beginnings of ancient population genomics and its use in the study of evolution. Species that are amenable to such large-scale studies warrant increased research effort. We report here progress on a population genomic study of the Adélie penguin (Pygoscelis adeliae). This species is ideally suited to ancient population genomic research because both modern and ancient samples are abundant in the permafrost conditions of Antarctica. This species will enable us to directly address many of the fundamental questions in ecology and evolution. © 2014 The Author(s) Published by the Royal Society. All rights reserved. Source
Wang M.-S.,CAS Kunming Institute of Zoology |
Wang M.-S.,University of Chinese Academy of Sciences |
Li Y.,CAS Kunming Institute of Zoology |
Li Y.,University of Chinese Academy of Sciences |
And 24 more authors.
Molecular Biology and Evolution | Year: 2015
Much like other indigenous domesticated animals, Tibetan chickens living at high altitudes (2,200-4,100 m) show specific physiological adaptations to the extreme environmental conditions of the Tibetan Plateau, but the genetic bases of these adaptations are not well characterized. Here, we assembled a de novo genome of a Tibetan chicken and resequenced whole genomes of 32 additional chickens, including Tibetan chickens, village chickens, game fowl, and Red Junglefowl, and found that the Tibetan chickens could broadly be placed into two groups. Further analyses revealed that several candidate genes in the calcium-signaling pathway are possibly involved in adaptation to the hypoxia experienced by these chickens, as these genes appear to have experienced directional selection in the two Tibetan chicken populations, suggesting a potential genetic mechanism underlying high altitude adaptation in Tibetan chickens. The candidate selected genes identified in this study, and their variants, may be useful targets for clarifying our understanding of the domestication of chickens in Tibet, and might be useful in current breeding efforts to develop improved breeds for the highlands. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. Source
Chen L.,University of Chinese Academy of Sciences |
Tang L.,Wuhan University |
Xiang H.,University of Chinese Academy of Sciences |
Jin L.,China National Genebank Shenzhen |
And 5 more authors.
GigaScience | Year: 2014
Genetic modification has long provided an approach for " reverse genetics" , analyzing gene function and linking DNA sequence to phenotype. However, traditional genome editing technologies have not kept pace with the soaring progress of the genome sequencing era, as a result of their inefficiency, time-consuming and labor-intensive methods. Recently, invented genome modification technologies, such as ZFN (Zinc Finger Nuclease), TALEN (Transcription Activator-Like Effector Nuclease), and CRISPR/Cas9 nuclease (Clustered Regularly Interspaced Short Palindromic Repeats/Cas9 nuclease) can initiate genome editing easily, precisely and with no limitations by organism. These new tools have also offered intriguing possibilities for conducting functional large-scale experiments. In this review, we begin with a brief introduction of ZFN, TALEN, and CRISPR/Cas9 technologies, then generate an extensive prediction of effective TALEN and CRISPR/Cas9 target sites in the genomes of a broad range of taxonomic species. Based on the evidence, we highlight the potential and practicalities of TALEN and CRISPR/Cas9 editing in non-model organisms, and also compare the technologies and test interesting issues such as the functions of candidate domesticated, as well as candidate genes in life-environment interactions. When accompanied with a high-throughput sequencing platform, we forecast their potential revolutionary impacts on evolutionary and ecological research, which may offer an exciting prospect for connecting the gap between DNA sequence and phenotype in the near future. © 2014 Chen et al.; licensee BioMed Central Ltd. Source
Yan L.,Jilin University |
Yan L.,Puer Institute Of Pu Er Tea |
Wang X.,CAS Kunming Institute of Zoology |
Liu H.,CAS Kunming Institute of Zoology |
And 20 more authors.
Molecular Plant | Year: 2015
Dendrobium officinale Kimura et Migo is a traditional Chinese orchid herb that has both ornamental value and a broad range of therapeutic effects. Here, we report the first de novo assembled 1.35 Gb genome sequences for D. officinale by combining the second-generation Illumina Hiseq 2000 and third-generation PacBio sequencing technologies. We found that orchids have a complete inflorescence gene set and have some specific inflorescence genes. We observed gene expansion in gene families related to fungus symbiosis and drought resistance. We analyzed biosynthesis pathways of medicinal components of D. officinale and found extensive duplication of SPS and SuSy genes, which are related to polysaccharide generation, and that the pathway of D. officinale alkaloid synthesis could be extended to generate 16-epivellosimine. The D. officinale genome assembly demonstrates a new approach to deciphering large complex genomes and, as an important orchid species and a traditional Chinese medicine, the D. officinale genome will facilitate future research on the evolution of orchid plants, as well as the study of medicinal components and potential genetic breeding of the dendrobe. © 2015 The Author. Source
Liu S.,China National Genebank Shenzhen |
Liu S.,Copenhagen University |
Wang X.,China National Genebank Shenzhen |
Xie L.,BGI Shenzhen |
And 11 more authors.
Molecular Ecology Resources | Year: 2016
Biodiversity analyses based on next-generation sequencing (NGS) platforms have developed by leaps and bounds in recent years. A PCR-free strategy, which can alleviate taxonomic bias, was considered as a promising approach to delivering reliable species compositions of targeted environments. The major impediment of such a method is the lack of appropriate mitochondrial DNA enrichment ways. Because mitochondrial genomes (mitogenomes) make up only a small proportion of total DNA, PCR-free methods will inevitably result in a huge excess of data (>99%). Furthermore, the massive volume of sequence data is highly demanding on computing resources. Here, we present a mitogenome enrichment pipeline via a gene capture chip that was designed by virtue of the mitogenome sequences of the 1000 Insect Transcriptome Evolution project (1KITE, www.1kite.org). A mock sample containing 49 species was used to evaluate the efficiency of the mitogenome capture method. We demonstrate that the proportion of mitochondrial DNA can be increased by approximately 100-fold (from the original 0.47% to 42.52%). Variation in phylogenetic distances of target taxa to the probe set could in principle result in bias in abundance. However, the frequencies of input taxa were largely maintained after capture (R2 = 0.81). We suggest that our mitogenome capture approach coupled with PCR-free shotgun sequencing could provide ecological researchers an efficient NGS method to deliver reliable biodiversity assessment. © 2016 John Wiley & Sons Ltd. Source | <urn:uuid:d8bd3350-a5b6-43dd-9e2f-4f387c659f3b> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/china-national-genebank-313685/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00081-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.895517 | 1,701 | 2.828125 | 3 |
Hao J.F.,Sichuan Agricultural University |
Hao J.F.,Desertification Combating of Sichuan Provincial Colleges and Universities Key Laboratory |
Wang D.Y.,Sichuan Agricultural University |
Wang D.Y.,Desertification Combating of Sichuan Provincial Colleges and Universities Key Laboratory |
And 15 more authors.
Shengtai Xuebao/ Acta Ecologica Sinica | Year: 2014
Over the last century, human disturbance plays an important role in the change of global climate and environment and consequently, the loss of global species diversity. Unsustainable use of forest resources is causing dramatic changes in the forest communities and poses a serious threat to biodiversity worldwide. Plant species diversity is an index reflects the complexity and stability of forest community function. The effect of human disturbance on forest community can be directly reflected by the change of community structure and diversity. We hypothesize that community structure and plant species diversity vary with the different densities of human disturbance. In our study site in Jinfengshan Mountain, a tourism region located in Ya’an district, Sichuan Province, there are two major types of human disturbances: irrational selective cutting and tourism activities. In order to investigate the influences of different disturbances on the characteristics of species composition and diversity of Phoebe zhennan community in Jinfengshan Mountain, a field investigation was conducted. The intensity of human disturbance is divided into 3 levels: severe disturbance (close to the core scenic area within 40 m), medium disturbance (distant from the core scenic area at 40—80 m) and slight disturbance (far from the core scenic area at 80-120 m). The species richness index S, Shannon-Wienner index H, Simpson index D and Pielou index Jsw are adopted to evaluate the level of species diversity in Phoebe zhennan community. The results showed that 155 species, belonging to 36 families and 136 genera were found in 9 plots with a total area of 3600 m2. The following results were also revealed in this investigation: (1) the number of species decreased with the increase of disturbance intensity. (2) In terms of the community structure, the diameter and height class were normally distributed with slight and medium disturbance, significant difference in community structure was observed among communities with three different disturbance intensities. Individuals under slight and medium disturbance were distributed in small and medium diameter class(4 < DBH < 20 cm), and low and medium height class(5 < H < 13 m). Individuals in severe disturbance were mainly distributed in small diameter class(DBH < 8 cm) and big diameter class(DBH≥28 cm), and low height class(3 < H < 7 m) and high height class(H≥15 m). (3) The species diversity indexes generally exhibited slight disturbance > medium disturbance > severe disturbance respectively. The richness index S, Shannon-Wienner index H and Pielou index Jsw were decreased with the increase of human disturbance intensity. This study suggests that human disturbance had negative effects on the diversity and stability of Phoebe zhennancommunity, which calls for urgent actions to solve the conflict between human disturbance and the protection of species diversity. “Close to nature” forestry, a theory about sustainable forest management which integrates consideration of social, economic, and environmental factors in decision making is proposed to be taken into practice in this study area. © 2014, Ecological Society of China. All rights reserved. 
Source | <urn:uuid:b9b2a43f-8c9e-4cd7-a9d5-b447c9f7a0af> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/desertification-combating-of-sichuan-provincial-colleges-and-universities-key-laboratory-1176533/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00137-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923884 | 719 | 2.703125 | 3 |
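The abstract above leans on four standard diversity measures (richness S, Shannon-Wiener H, Simpson D, and Pielou evenness Jsw). As a minimal sketch of how those indices are computed from plot counts; the species counts below are invented for illustration, not taken from the study:

```python
import math

def diversity_indices(counts):
    """Compute richness S, Shannon-Wiener H, Simpson D, and Pielou Jsw
    from a list of per-species individual counts in a plot."""
    counts = [c for c in counts if c > 0]
    n = sum(counts)
    s = len(counts)                                  # richness S
    props = [c / n for c in counts]
    h = -sum(p * math.log(p) for p in props)         # Shannon-Wiener H
    d = 1 - sum(p * p for p in props)                # Simpson D (Gini-Simpson form)
    jsw = h / math.log(s) if s > 1 else 0.0          # Pielou evenness Jsw = H / ln(S)
    return s, h, d, jsw

# Hypothetical plot: number of individuals recorded per species
plot_counts = [34, 12, 9, 5, 3, 2, 1]
S, H, D, Jsw = diversity_indices(plot_counts)
print(f"S={S}  H={H:.3f}  D={D:.3f}  Jsw={Jsw:.3f}")
```

Note that "Simpson index" appears in several forms in the literature; the sketch uses the Gini-Simpson form (1 minus the sum of squared proportions), which may differ from the exact formulation the authors used.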
Asteroids slam into our moon fairly frequently, but the space rock that hit the lunar surface on March 17 was epic. For the past eight years NASA has been monitoring the moon for explosions caused by meteors. They've recorded more than 300. None, however, was as big as the blast from the March 17 impact, which exploded "with a flash 10 times brighter than anything we'd seen before," NASA said in the video below. "Anyone looking at the moon at the moment of impact could have seen the explosion, no telescope required," NASA said. The meteor, which weighed about 40 kilograms, hit the lunar surface at 56,000 miles per hour, NASA said. The explosion was the equivalent of 5 tons of TNT. NASA reported the explosion on Friday and produced the 4-minute video below. If you just want to see the explosion, go to the 00:47 mark.
Samsung also shared which batch of phones are going to get Nougat as the update rolls out to more... | <urn:uuid:0b48b439-5fb7-466f-b374-d0821fef1499> | CC-MAIN-2017-04 | http://www.itworld.com/article/2711056/enterprise-software/nasa-s-great-video-of-an-explosion-on-the-moon.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00439-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957229 | 315 | 3.03125 | 3 |
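As a rough plausibility check on the figures quoted above (a 40 kg rock at 56,000 mph, an explosion equivalent to about 5 tons of TNT), the kinetic energy comes out in the same ballpark, a few tons of TNT. This is only a back-of-the-envelope comparison with standard conversion constants, not NASA's own calculation:

```python
# Back-of-the-envelope check of the reported impact energy.
mass_kg = 40.0
speed_mph = 56_000
speed_ms = speed_mph * 0.44704             # miles per hour -> meters per second

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
TON_TNT_J = 4.184e9                        # energy of one ton of TNT, in joules

print(f"speed: {speed_ms:,.0f} m/s")
print(f"kinetic energy: {kinetic_energy_j:.2e} J "
      f"(~{kinetic_energy_j / TON_TNT_J:.1f} tons of TNT)")
```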
Nearly one-third of Georgia's 1.8 million students enter or leave the state every year, often during the school year. But their academic records don't always follow them in a timely fashion, which leaves their new districts in a bind.
Districts want to place students in a class that's appropriate for their learning level and get them remedial instruction as needed. And parents don't have the information they can use to make accurate decisions -- only the school district does.
Even when parents file a record exchange form, the district they're transferring from may take months to send over a paper file called a cumulative file — if administrators send the file at all. Meanwhile, districts have to do the best they can without student records.
"It's so hard right now when children move across the state boundaries to get their records," said Bob Swiggum, CIO of the Georgia Department of Education. "It really falls back onto the district that they came from to have a good record keeping system."
Georgia solved this problem within the state by putting student records in a longitudinal data system. But now Georgia school districts want to access records outside of their state -- and the Education Department has listened.
To that end, a digital record exchange system is in the pilot phase at the district and state level in Georgia, and at the state level in North Carolina. And its goal is to make the exchange process faster and easier across state lines so students can learn at their level.
North Carolina joined the pilot because, "it seemed to us to be the direction in which we were going, and we just wanted to be on the cutting edge and have input into how this was done," said Karl Pond, enterprise data manager in the North Carolina Department of Public Instruction.
No other group of states is working on a system to exchange records, though a few are creating systems to find out if dropouts showed up in another state. And while some of those groups are using vendors, Georgia used a federal grant to build the record system from scratch for states to use.
Here's how it works.
Because the U.S. Education Department has provided grants for longitudinal data systems, most states have systems that house student data over time -- these systems just need to be able to talk to each other.
That's where a set of record exchange utilities comes in. These utilities allow a database in one state to call a database in another state, but no data is stored during the conversation.
Only certified people in each district can search for student records through this utility. For example, a registrar at every school would have access. If the student's new teacher wanted access, that person would need to go in the counselor's office and look up the record under supervision.
In Georgia, a district administrator can access the record exchange through a series of links in the district student information system and the state's longitudinal data system. A query for a student's name and date of birth will pull up the student record in whatever state database it exists. Then the administrator can download that record and export it into a CSV file.
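The flow described above (a certified registrar queries by name and date of birth, the record is pulled from whatever state database holds it, and the result is exported to a CSV file) could look roughly like the sketch below. The endpoint URL, field names, and token handling are hypothetical, not the actual Georgia or North Carolina interface:

```python
import csv
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint and credentials (not the real Georgia/North Carolina service).
EXCHANGE_URL = "https://exchange.example.org/v1/student-records"
API_TOKEN = "token-issued-to-a-certified-registrar"

def lookup_student(last_name, first_name, date_of_birth):
    """Query the partner state's longitudinal data system; nothing is stored server-side."""
    params = urllib.parse.urlencode({
        "last_name": last_name,
        "first_name": first_name,
        "dob": date_of_birth,          # e.g. "2005-03-17"
    })
    req = urllib.request.Request(
        f"{EXCHANGE_URL}?{params}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)         # a list with one record per matching student

def export_to_csv(records, path):
    """Write downloaded records to a CSV file for the local student information system."""
    if not records:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(records[0].keys()))
        writer.writeheader()
        writer.writerows(records)

if __name__ == "__main__":
    matches = lookup_student("Doe", "Jane", "2005-03-17")
    export_to_csv(matches, "transfer_student.csv")
```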
One of the key components of this project is that school districts requested a way to address the record exchange problem. It wasn't a top-down mandate, and this approach has served Georgia well in other longitudinal data system projects, said Jesse Peavy, technology coordinator of Bleckley County Schools.
"Once the word got out that you could do this, guess what people started doing?" Peavy asked. "'I want that, I want that.'"
Eventually, Georgia would like to turn administration of this record exchange over to an organization such as the Council of Chief State School Officers. Once states go through an authorization process and sign a memorandum of understanding, their school districts are free to search records in other states. And that could solve some of the major problems with the record exchange process. | <urn:uuid:67a4f465-1982-4625-95a2-91806ccc15c7> | CC-MAIN-2017-04 | http://www.govtech.com/data/Student-Record-Exchanges-in-the-Digital-Age.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00347-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968123 | 788 | 2.515625 | 3 |
Citizens Ready, Willing & Able, but Uninformed About How to Assist Security Efforts
WASHINGTON, D.C.; March 31, 2004 – While a majority of Americans describe themselves as “concerned” regarding homeland security and believe that the United States is likely to be the target of another terrorist attack in the months ahead, very few are aware of state and local security preparedness plans, according to a report released today by the nonpartisan Council for Excellence in Government.
The report, “From the Home Front to the Front Lines: America Speaks Out about Homeland Security,” presents findings of a two-part study conducted by Hart-Teeter Research and sponsored by Accenture. The report is based on a national survey of more than 1,600 American citizens as well as a national sample of 250 front-line emergency response personnel.
When asked for ways that government can improve homeland security, more than one-third of citizen respondents said they believe that the two most-effective measures are creating information systems that can share data across law enforcement, health and emergency agencies, and improving border security.
Nearly half (47 percent) of Americans surveyed said that the United States is safer today than it was on Sep. 11, 2001, up from 38 percent one year after the attacks.
Other key findings of the report:
- More than three-quarters (77 percent) of adults said they believe it is very or somewhat likely that the United States will be the target of another major terrorist attack in the next few months. However, nearly half (49 percent) of the adults surveyed said that they are not concerned about an attack in their neighborhoods;
- While 26 percent of Americans describe themselves as “calm,” nearly three-quarters (73 percent) describe themselves as either “anxious” or “concerned;”
- The most-feared types of attacks are bioterrorism and chemical weapons, selected by 48 percent and 37 percent of citizen respondents, respectively;
- Only one in five (19 percent) of Americans said they are aware of or familiar with their communities’ preparedness plans; 18 percent said they are aware of or familiar with their state’s preparedness plans; 36 percent said they are aware of or familiar with their workplace’s preparedness plans; and 27 percent said they are aware of or familiar with their schools’ preparedness plans;
- Citizens view tighter border security and information systems that share data across agencies (interoperability) as the best steps to strengthen homeland security, each selected by 37 percent of respondents.
- More than three in five citizens (62 percent) said they would be willing to volunteer to help homeland security efforts, including planning, training, and practicing drills in their communities. The same percentage supports a new nationwide hotline to report suspicious activity;
- Fifty-six percent of Americans believe that the Patriot Act is good for America. Thirty-three percent believe it is bad for America. Eleven percent of Americans are unsure. Half the public believe that it must be debated thoroughly in Congress before any decisions are made about whether it should be renewed next year;
- A majority (59 percent) of the public said they believe the government should have access to companies’ personal information about their customers if there is any chance that it will help prevent terrorism.
“When it comes to our nation’s safety and security, the American public has very clear and thoughtful suggestions for government leaders, and they see both an important role and serious responsibilities for themselves as well,” said Patricia McGinnis, president and CEO of the Council for Excellence in Government. “The results of this poll make clear that the American public has a front-line position in protecting the home front. But it also shows that government must better engage them, particularly by closing the communications gap between government and citizens. Local emergency plans are not going to be effective if ordinary citizens do not know where to turn or what to do. One key challenge for government at all levels is to get these plans into the hands—and the heads—of the public.”
“The good news is that governments are already working hard to improve in the two key areas that Americans identified as priorities for shoring up our homeland security,” said Stanley J. Gutkowski, managing director of Accenture’s USA Government practice. “Federal, state and local governments have recognized the need to do a better job of sharing information in order to be able to identify potential threats to our society. At the same time, the Department of Homeland Security is taking the necessary steps to protect our physical borders by pushing out virtual borders to stop terrorists before they can enter U.S. soil, water or air space.”
In addition to the national survey of American’s attitudes, the report also provides detailed opinions from a sample of front-line emergency responders across the nation, including fire chiefs, police chiefs and sheriffs. Although a majority (53 percent) of this group said they believe that the country is safer today than it was two and a half years ago, two-thirds (65 percent) of all of these respondents said they believe that their agencies are only somewhat prepared to respond if disaster strikes, and only one-quarter (26 percent) said they believe that their agencies are adequately prepared.
As with citizen respondents, first responders’ most-feared types of attacks are bioterrorism and chemical weapons, selected by 67 percent and 42 percent, respectively. But first responders show considerably more concern about attacks on critical infrastructure than does the public, with nearly two-thirds (62 percent) of first responders saying that they worry “a great deal” or “quite a lot” about attacks on infrastructure.
When asked to prioritize measures to promote homeland security, first responders rated emergency response equipment training first among their priorities, selected by 51 percent, followed by the two areas selected as most important by citizen respondents: interoperability, selected by 34 percent of first responders; and tighter borders, selected by 25 percent of first responders. Two-thirds (66 percent) said they support the establishment of a nationwide homeland security telephone hotline.
The study was conducted by the research firms of Peter D. Hart and Robert M. Teeter and comprised two parts: 1) a telephone survey conducted from Feb. 5 to 8, 2004 of a nationally representative sample of 1,633 randomly selected adults in the United States (margin of error: ±3.1%); and 2) interviews with 250 fire chiefs, police chiefs, sheriffs and other first responders.
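For context on that sample size, the textbook margin of error for a simple random sample of 1,633 respondents is roughly ±2.4 percentage points at 95 percent confidence; published figures such as the ±3.1% quoted here are often somewhat larger because they fold in weighting and design effects. A quick check of the simple-random-sample figure:

```python
import math

n = 1633          # respondents in the citizen survey
p = 0.5           # worst-case proportion
z = 1.96          # 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"simple random sample margin of error: +/-{moe * 100:.1f} percentage points")
# roughly +/-2.4 points; reported margins may be larger once design effects are included
```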
“From the Home Front to the Front Lines: America Speaks Out about Homeland Security” is part of the Council for Excellence in Government’s “Homeland Security from the Citizens’ Perspective” project, designed to engage and connect citizens, businesses, and government nationwide through a series of town hall meetings, expert working groups, and the release of this poll and report. Based on the results of this combined work, the Council will publish a set of major homeland security recommendations later this spring, for action by key government players at the local, state and national levels, as well as business and civic leaders, and citizens.
About The Council for Excellence in Government
Currently celebrating its 20th anniversary, the Council for Excellence in Government (www.excelgov.org) is a national nonpartisan and nonprofit organization based in Washington, D.C. that works to improve the performance of government at all levels, as well as citizen participation, understanding and trust in government.
Accenture is a global management consulting, technology services and outsourcing company. Committed to delivering innovation, Accenture collaborates with its clients to help them become high-performance businesses and governments. With deep industry and business process expertise, broad global resources and a proven track record, Accenture can mobilize the right people, skills, and technologies to help clients improve their performance. With approximately 90,000 people in 48 countries, the company generated net revenues of US$11.8 billion for the fiscal year ended Aug. 31, 2003. Its home page is www.accenture.com. | <urn:uuid:b7049cdb-2cf7-42f6-9c3a-226017944e53> | CC-MAIN-2017-04 | https://newsroom.accenture.com/news/nation-mix-anxiety-concern-and-calm-when-it-comes-to-homeland-security-new-report-finds.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00559-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95836 | 1,680 | 2.515625 | 3 |
College students majoring in computer science and engineering face a better job market upon graduation than their peers in other fields, particularly when it comes to starting salaries, according to a new report by the National Association of Colleges and Employers.
The January 2013 Salary Survey report found that computer science and computer engineering fields were among the 10 highest-paid at the bachelor’s degree level. Computer engineering was the highest paid major in 2012, with an average starting salary of $70,400. Computer science graduates earned average starting salaries of $64,400, placing third among bachelor’s degree graduates in 2012, NACE found.
Engineering careers dominated the top 10 list, capturing six of the top 10 spots for average salaries for bachelor’s degree holders. Other majors in the top 10 were chemical engineering ($66,400), aerospace/aeronautical/astronautical engineering ($64,000), mechanical engineering ($62,900), electrical/electronics and communications engineering ($62,300) and civil engineering ($57,600).
Other top-paying fields were finance ($57,300), construction science/management ($56,600) and information sciences and systems ($56,100), NACE found.
“This is not surprising since the supply of these graduates is low, but the demand for them is so high,” said Marilyn Mackes, executive director for NACE.
While some studies have concluded that federal IT workers earn more than their private sector counterparts, it does not appear to be the case when it comes to young college graduates. For example, the starting base salary for a GS-7 Step 1 is $33,979. Keep in mind that this figure does not include locality pay or other incentives.
Can federal salaries effectively compete with the private sector for IT grads? How did/does the starting salary for your federal IT job stack up? | <urn:uuid:83fd0f36-ce67-484d-922e-d63c8d4920c0> | CC-MAIN-2017-04 | http://www.nextgov.com/cio-briefing/wired-workplace/2013/01/new-tech-grads-earn-lot-more-industry-government/61024/?oref=ng-relatedstories | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00156-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94944 | 385 | 2.625 | 3 |
This course shows you how to design and develop cloud-native applications, ones that aren't just cloud ready or cloud hosted but that take maximum advantage of the cloud. It teaches practices for developing cloud applications, using Java EE as the primary programming language. You also learn how to deploy these applications using Bluemix, with its platform capabilities, PaaS capabilities, and its services.
After you complete this course, you can perform the following tasks:
- Explain in detail the characteristics of a cloud-native application
- Describe the cloud adoption patterns for moving applications to the cloud
- List the twelve factors for applications in the cloud
- Apply best practices to architect a cloud-native application using Java EE
- Design microservices as the building block for your application
- Use various data sources that can be used by your Bluemix application
- Describe and apply security for your cloud-based application
Who Can Benefit
This course is designed for application developers who are responsible for designing and building applications in cloud-based environments, such as IBM Bluemix.
Before taking this course, you should have the following skills:
- Basic Java EE architecture and development skills
- Basic cloud concepts | <urn:uuid:07de4f91-ba76-408f-9e4c-64f2cd35fbe6> | CC-MAIN-2017-04 | https://www.exitcertified.com/training/ibm/blueworks-live/developing-cloud-native-applications-bluemix-47012-detail.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930762 | 243 | 2.5625 | 3 |
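The course outline above refers to twelve-factor practices and microservices as building blocks. The course itself uses Java EE on Bluemix, but the ideas are language-agnostic; here is a minimal, hypothetical sketch in Python of one twelve-factor habit, reading configuration (such as the listening port a PaaS assigns) from the environment, in a tiny self-contained service:

```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Twelve-factor config: take settings from the environment, not from code.
# PORT is a variable many PaaS platforms set; the default is only for local runs.
PORT = int(os.environ.get("PORT", 8080))
GREETING = os.environ.get("GREETING", "hello from a tiny service")

class HealthAndGreeting(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"path": self.path, "message": GREETING}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", PORT), HealthAndGreeting).serve_forever()
```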
The hissing sound of the dot-com bubble as it bursts isn't the only noise on the Internet these days. Listen carefully and you might hear the rip and tear of hackers as they find their way past firewalls to steal information. Lately, the stock market woes of Internet companies have grabbed most of our attention, but it's at the expense of a problem that has grown far more serious and could have major implications for the long-term success of the Internet.
Consider the recent raft of trouble that has occurred online: In March, the FBI announced that hackers from Russia and the Ukraine had stolen more than a million credit card numbers from commercial Web sites in the United States. Before that, there was the hacker ring that stole thousands of long-distance phone card numbers from Sprint and the disgruntled employee who shut down Forbes magazine for two days. A study last year of 186 companies by the Computer Security Institute and the FBI found they lost $377 million to hacker attacks. Overall, the report found that the frequency of computer intrusions and their costs are on the rise.
It's growing increasingly clear that financial transactions on the Internet are not receiving the protection they need. If attacks and theft continue to grow, public confidence in the Internet could be seriously undermined, threatening the long-term success of electronic commerce for the private and public sectors.
The good news is that we have the technology to secure important transactions. Public Key Infrastructure (PKI) allows individuals and businesses to exchange sensitive information securely and safely through the use of electronic keys that lock out intruders. The bad news is that few U.S. companies or federal, state and local governments are using the technology in any significant way.
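To make the "electronic keys" idea concrete, here is a small illustration of the two core public-key operations, encrypting to a public key and signing with a private key. It is a sketch only: it assumes the third-party Python cryptography package is installed, and it leaves out the certificate authorities, directories, and revocation machinery that a real PKI deployment layers on top of bare keys:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a key pair; in a real PKI the public key would be bound to an
# identity by a certificate authority rather than used bare like this.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"account=12345;amount=99.95"

# Confidentiality: anyone may encrypt with the public key,
# only the private key holder can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message

# Integrity and authenticity: the private key signs, anyone can verify.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())  # raises if tampered
print("encrypted, decrypted, signed and verified")
```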
A recent article in The Washington Post reported that American businesses and governments lag behind Europe and Asia when it comes to adopting PKI. The reason for the disparity, according to the Post, is a matter of government involvement: "Wherever PKI has taken off -- Australia, for example, where the government took the lead two years ago in articulating a standard and pressing for its use -- the public sector has driven the change."
Apparently, foreign governments have realized the value of having such an infrastructure for public and private benefit and are not afraid to tackle the problems of standards, complex technology and government-issued identification, something that raises the hackles of privacy advocates here.
As the Post article points out, the business sector has been reluctant to adopt PKI, despite its reliability. As a result, we are at least five years behind in having PKI to protect online transactions. Rather than wait for the private sector to make the first move, government should step up and take the initiative.
In fact, public-sector interest in PKI was initially strong. The General Services Administration launched a program called ACES to jumpstart use of the encryption technology in federal agencies. At the state and local level, government associations, such as NASIRE and Public Technology Inc., set up programs to educate their members about the usefulness of PKI. But as a recent report by the General Accounting Office points out, PKI adoption has been stymied by interoperability problems and the entrenched silo effect of information management in government.
What we need are government leaders who are engaged when it comes to technology and are willing to take the lead in establishing the policies and procedures that will smooth the way for PKI adoption throughout government. The idea of e-government has forced public officials to re-examine the way government operates and serves its constituents. PKI might be the technology that will force government to actually retool and rebuild itself for the future. But it wont happen without effective and engaged leadership. | <urn:uuid:218c5b91-6574-485d-9adb-e746d304e12a> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/100498089.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00118-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960626 | 744 | 2.515625 | 3 |
NASA, Cisco collaborate on global environmental monitoring
- By Dan Campbell, Special to GCN
- Mar 05, 2009
NASA is teaming with Cisco Systems Inc. to deploy a platform that will monitor and evaluate the global environment. The system, dubbed Planetary Skin, will collect and analyze environmental data from a vast array of satellite, sea-based, airborne and land-based sensors and other collection devices deployed worldwide.
“Planetary Skin provides continuous global observations of our home planet using a constellation of spacecraft, as well as airborne and in situ ground observations to monitor the health and well-being of Earth,” NASA officials said in a news release.
NASA will provide the means to collect the data while Cisco will perform data modeling and analysis and provide expertise in constructing a network that can scale to accommodate potentially millions of nodes, data sources and participants.
The developers are motivated by increasing concerns over global climate change and the lack of a coordinated system to facilitate the sharing of data. The project encourages governments, businesses, academic institutions and environmental organizations to share the data they obtain in pursuing their missions. Planetary Skin will be a collaborative platform accessible online by the general public, governments and businesses. The data will be available in near-real time.
Ultimately, the goal of Planetary Skin is to deliver actionable knowledge to decision-makers via a system that correlates data on a variety of environmental conditions and natural resources. The system will enhance the ability of global leaders to detect and mitigate the impact of climate changes in their respective realms, according to information on PlanetarySkin.org. The concept proposes a unifying approach to monitoring, measuring and managing environments in three main areas: rural, rural to urban and urban.
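In practice, "correlating data on a variety of environmental conditions" tends to reduce to joining time-stamped readings by location and flagging regions whose combined signal crosses a threshold. A toy illustration of that idea follows, with invented readings and thresholds rather than anything from the actual Planetary Skin design:

```python
from collections import defaultdict
from statistics import mean

# (region, sensor_type, value) -- invented sample readings
readings = [
    ("amazon-west", "canopy_cover_pct", 62.0),
    ("amazon-west", "co2_ppm", 412.5),
    ("amazon-west", "canopy_cover_pct", 58.0),
    ("congo-basin", "canopy_cover_pct", 81.0),
    ("congo-basin", "co2_ppm", 399.2),
]

by_region = defaultdict(lambda: defaultdict(list))
for region, sensor, value in readings:
    by_region[region][sensor].append(value)

for region, sensors in by_region.items():
    cover = mean(sensors.get("canopy_cover_pct", [float("nan")]))
    co2 = mean(sensors.get("co2_ppm", [float("nan")]))
    flag = "ALERT" if cover < 65 and co2 > 410 else "ok"
    print(f"{region}: canopy {cover:.0f}%, CO2 {co2:.1f} ppm -> {flag}")
```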
The first phase of the project, called Rainforest Skin, will launch next year. It will help evaluate the role of deforestation in the excess buildup of carbon in the atmosphere, which contributes to global warming. The project will be a prototype so developers can learn more about how to properly deploy a vast sensor-based network that unifies a variety of information sources to achieve complex objectives.
Dan Campbell is a freelance writer with Government Computer News and the president of Millennia Systems Inc. | <urn:uuid:d0d1d621-7d43-42ea-8a6c-57a40b140993> | CC-MAIN-2017-04 | https://gcn.com/articles/2009/03/05/planetary-skin.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00422-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.907977 | 442 | 3.046875 | 3 |
TACC, the Texas Advanced Computing Center, knows all about big data. As a leading center of computational excellence in the United States, TACC relies on advanced computing technologies to enable discoveries that advance science and society. Of course, all the data that is generated requires a repository – that’s where Corral comes in. The large-scale data repository was deployed in 2009 to support the storing and sharing of research data at the University of Texas.
A recent article on TACC’s website highlights an important milestone for Corral. The DataDirect Networks storage system recently crossed the one petabyte mark in total data stored, and it now hosts over 100 unique data collections. The diverse assortment of datasets range from measurements of Earth’s gravity field to whale songs to mass spectrometry data, according to the piece by science writer Arron Dubrow.
Usage of the system continues to climb. For the last six months, usage has increased 10 percent per month.
“We’ve seen ever-increasing growth in the number and diversity of collections on Corral over the past several years,” said Chris Jordan, manager of the data management and collections group at TACC. “This shows how important a resource dedicated to data collections is to modern research practices, both for the researchers who are creating data and the worldwide community of researchers who use public data collections to further their own research.”
Corral is not the only storage mechanism at TACC, but it is unique for hosting large collections that are actively serving the community. TACC’s 100-petabyte Ranch tape archive serves as a long-term repository for archived work. The site’s newest petascale supercomputer, Stampede, includes more than 15 petabytes of dedicated storage, and there is also a scalable global file system, which adds another 20 petabytes. These are both used for short-term data retention to support ongoing simulations and analyses.
Corral, which has a current raw capacity of six petabytes, was designed and optimized to support complex large-scale collections and a collaborative research environment. With a high-speed connection to TACC’s other advanced computing systems, scientists can easily share data and results.
According to Niall Gaffney, TACC’s Director of Data Intensive Computing, “Corral is leading the way in the preservation and dissemination of data for researchers who are discovering that global, on-demand access to large quantities of data leads to previously unachievable results.” | <urn:uuid:0d905515-f75b-46e7-8526-2ae541cd637f> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/10/11/tacc_spurs_data-intensive_science_with_corral/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00422-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929373 | 521 | 2.90625 | 3 |
LAS VEGAS -- French company Induct demonstrated for the first time in the U.S. a driverless eight-passenger robotized shuttle, designed for transportation in city centers and campus settings.
The all-electric shuttle, called Navia, looks a bit like an oversized golf cart, but instead of seats, passengers lean against padded inner sides. Today, the vehicle travelled a closed course at the CES conference here, stopping at designated spots to allow riders to exit the vehicle before continuing along its route.
Unlike Google's efforts to build a driverless car, Induct chose a shuttle because it can be placed into immediate use without the danger of interacting with other major roadway traffic.
"We've tested it for the past year and a half in Europe, Asia and the U.S.," said Max Leferve, who co-founded the company with his father Pierre. "The tech in Google's car is very expensive. We used the most affordable sensors ... to create a vehicle we can sell."
Leferve said his company built the vehicle smaller so as to facilitate faster loading and offloading of passengers. He also said it's 40% to 60% less expensive than a typical shuttle bus, which can cost up to $200,000 per year to run, including the pay of a driver.
"This vehicle costs $250,000 for a four year lease," he said.
While Leferve's company built the vehicle, it won't be manufacturing the fleet. The company plans to sell the intellectual property for others to build and sell.
Leferve said the company has adopters in the U.S., but didn't reveal who they are.
The Navia uses technology called SLAM (Simultaneous localization and mapping), which builds a map within an unknown environment and can be updated at will.
"It sees where you're driving and creates a map," Leferve said.
The Navia self-driving, all-electric shuttle
The vehicle is programmed through an onboard touch screen display. When in program mode, the vehicle is taken on a route by a driver, learning it as it goes along. Stops are then preset, at buildings on a campus, for example, and riders can use the touch-screen display to designate a stop for themselves. A set of gates slide closed while the vehicle is in motion, and they open for stops.
The Navia has four laser sensors, one on each corner of the vehicle. The lasers scan up to 25 times per second at distances of up to 200 yards, aligning the vehicle to its pre-set course while remaining wary of any obstacles. If an object suddenly enters the path of the shuttle, such as a pedestrian, it will automatically stop.
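That safety behavior, checking each laser scan for anything inside the corridor ahead and stopping if something is found, reduces to a simple per-scan test. The sketch below is an illustration with made-up geometry and thresholds, not Induct's actual control code:

```python
import math

# Illustrative emergency-stop check for one laser scan.
# Each reading is (bearing_degrees, distance_m) relative to the shuttle's heading.
PATH_HALF_WIDTH_M = 1.2    # half the width of the corridor the shuttle sweeps
STOP_DISTANCE_M = 8.0      # assumed stopping distance at low speed

def must_stop(scan):
    """Return True if any obstacle lies inside the corridor ahead of the shuttle."""
    for bearing_deg, dist_m in scan:
        ahead = dist_m * math.cos(math.radians(bearing_deg))    # along the path
        lateral = dist_m * math.sin(math.radians(bearing_deg))  # across the path
        if 0 < ahead < STOP_DISTANCE_M and abs(lateral) < PATH_HALF_WIDTH_M:
            return True
    return False

# A pedestrian 5 m ahead, slightly off-center: the shuttle should stop.
print(must_stop([(2.0, 5.0), (45.0, 30.0), (-90.0, 3.0)]))  # True
```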
The vehicle runs on a lithium-ion battery that can power the shuttle for up to seven hours.
Leferve said his company has also developed a mobile app that allows pedestrians to call the vehicle to a pre-designated stop. Induct is also working on a website to allow commuters to call for the vehicle at a designated place and time along its preset route.
The Navia currently travels at 15 miles per hour (mph), but it has been tested at up to 25 miles per hour, and Leferve hopes to test the technology on a faster-moving vehicle, perhaps even fast enough to travel on secondary roadways.
This story, "Driverless Shuttle Aimed at Campuses, Inner Cities" was originally published by Computerworld. | <urn:uuid:dc1ceb81-4f68-4192-8bf4-ec44c38936b2> | CC-MAIN-2017-04 | http://www.cio.com/article/2379825/automotive/driverless-shuttle-aimed-at-campuses--inner-cities.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00330-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966038 | 793 | 2.53125 | 3 |
LANDOVER, MD --(Marketwired - January 05, 2017) - The Asthma and Allergy Foundation of America (AAFA) applauds today's release by the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health (NIH), of new Addendum Guidelines to help clinicians introduce peanut-containing foods to infants to prevent the development of peanut allergy. The new Addendum Guidelines for the Prevention of Peanut Allergy in the United States supplement the 2010 Guidelines for the Diagnosis and Management of Food Allergy in the United States, and emphasize that infants should start other solid foods before they are introduced to age-appropriate peanut-containing foods.
The three guidelines for when infants should be introduced to peanut-containing foods are:
- Guideline 1: Infants at high risk of developing peanut allergy due to severe eczema, egg allergy, or both should have peanut-containing foods introduced into their diets as early as 4 to 6 months of age. All infants in this group should have peanut IgE testing performed prior to introduction.
- Guideline 2: Infants with mild or moderate eczema should have peanut-containing foods introduced into their diets around 6 months of age. No testing is necessary in this group prior to introduction.
- Guideline 3: Infants without eczema or any food allergy should have peanut-containing foods freely introduced into their diets, without any testing beforehand.
"This represents a monumental shift in our understanding of peanut allergy and ways to prevent it from developing. Previously, pediatricians and allergists recommended avoidance of peanut until 3 years of age, but now mounting evidence demonstrates that we can prevent a significant number of new peanut allergy cases by introducing it into the diet during infancy," said David R. Stukus, MD, Associate Professor of Pediatrics Section of Allergy/Immunology Nationwide Children's Hospital, member of AAFA's Board of Directors, and part of the NIAID Coordinating Committee, and the Expert Panel that drafted the Addendum Guidelines. "It's important to understand that the majority of infants can have this safely introduced at home without any evaluation but those at highest risk (severe eczema and/or egg allergy) should be evaluated for the presence of peanut allergy antibody prior to introduction."
Peanut allergy is a growing health problem for which no treatment or cure exists. "AAFA is dedicated to keeping infants and children with food allergies safe and healthy until a cure is found. AAFA is proud to have participated in the NIAID Expert Panel and Coordinating Committee to help develop the Addendum Guidelines," said Meryl Bloomrosen, MBA, MBI, AAFA's Senior Vice President of Policy, Advocacy and Research, and member of the NIAID Coordinating Committee. "We commend NIAID for its timely response to the results of the landmark NIAID-funded Learning Early About Peanut Allergy (LEAP) study, published last year. We look forward to educating parents and caregivers of infants so that they are familiar with these guidelines, which is essential to their successful implementation."
About NIH and NIAID
NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. NIAID conducts and supports research -- at NIH, throughout the United States, and worldwide -- to study the causes of infectious and immune-mediated diseases, and to develop better means of preventing, diagnosing and treating these illnesses. News releases, fact sheets and other NIAID-related materials are available on the NIAID website. For more information about NIH, and NIAID, visit www.nih.gov, and www.niaid.nih.gov.
Founded in 1953 and celebrating over 60 years of service, the Asthma and Allergy Foundation of America (AAFA) is the oldest and largest nonprofit patient organization dedicated to improving the quality of life for people with asthma, allergies and related conditions through education, advocacy and research. AAFA provides practical information, community-based services, support and referrals through a national network of chapters and educational support groups. Through its Kids With Food Allergies division, AAFA offers the oldest, most extensive online support community for families raising children with food allergies. In September 2016, AAFA launched its Food Allergy Patient & Family Registry, a program that collects, manages and analyzes data from and about people with food allergies to advance research through patient information. For more information, visit www.aafa.org, https://research.kidswithfoodallergies.org/, and www.kidswithfoodallergies.org. | <urn:uuid:885c1cc3-fa23-4497-9db3-d078c52bb3e6> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/aafa-applauds-niaid-release-addendum-guidelines-prevention-peanut-allergy-2186427.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00054-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93928 | 996 | 2.640625 | 3 |
That's because the Government Accountability Office reports that two of the federal government's most prominent security efforts, the Trusted Internet Connections (TIC) initiative and Einstein (officially known as the National Cybersecurity Protection System), have largely gone unused, keeping the threat of cyber attacks on federal systems very real.
According to the GAO: As of September 2009, none of the 23 federal agencies it looked at had met all of the requirements of the TIC initiative. Although most agencies reported that they have made progress toward reducing their external connections and implementing critical security capabilities, most agencies have also experienced delays in their implementation efforts. TIC is supposed to secure and consolidate federal agencies' external network connections, including Internet connections, set baseline security and improve the government's response to infiltrations. Early this year the Office of Management and Budget is directing agencies to deploy a standard set of security tools and processes on all of their Internet connections, which may explain why many agencies haven't caught up.
In the same time frame, fewer than half of the 23 agencies had implemented Einstein, and Einstein 2 had been deployed to only 6 agencies. Agencies that participated in Einstein 1 improved identification of incidents and mitigation of attacks, but the Department of Homeland Security, which oversees the effort, will continue to be challenged in understanding whether the initiative is meeting all of its objectives because it lacks performance measures that address how agencies respond to alerts. Einstein technology is intended to provide DHS with Internet monitoring capability, including intrusion detection.
While the GAO doesn't specifically link the lack of TIC and Einstein implementations to specific problems, its notes that federal security breaches have potentially allowed sensitive information to be compromised, and systems, operations, and services to be disrupted. For example:
- The Department of State experienced a breach on its unclassified network, which daily processes about 750,000 e-mails and instant messages from more than 40,000 employees and contractors at 100 domestic and 260 overseas locations.
- The Nuclear Regulatory Commission confirmed that in January 2003, the Microsoft SQL Server worm known as "Slammer" infected a private computer network at the idled Davis-Besse nuclear power plant in Oak Harbor, Ohio, disabling a safety monitoring system for nearly 5 hours.
- Officials at the Department of Commerce's Bureau of Industry and Security discovered a security breach in July 2006. In investigating this incident, officials were able to review firewall logs for an 8-month period prior to the initial detection of the incident, but were unable to clearly define the amount of time that perpetrators were inside its computers, or find any evidence to show that data was lost as a result.
With agencies still in the process of implementing TIC and DHS in the early stages of deploying Einstein 2, the success of such large-scale initiatives will be in large part determined by the extent to which DHS, OMB, and other federal agencies work together to address the challenges of these efforts, the GAO stated.
This report comes on the heels of another GAO study that found about 69% of the IRS' previously noted security flaws remain unfixed and continue to jeopardize the confidentiality, integrity, and availability of the tax agency's systems. The problems put the IRS at increased risk of unauthorized disclosure, modification, or destruction of financial and taxpayer information, the GAO concluded.
The GAO recently issued another report stating that disruptive cyber activities are expected to become the norm in future political and military conflicts.
From the GAO: "The growing connectivity between information systems, the Internet, and other infrastructures creates opportunities for attackers to disrupt telecommunications, electrical power, and other critical services. As government, private sector, and personal activities continue to move to networked operations, as digital systems add ever more capabilities, as wireless systems become more ubiquitous, and as the design, manufacture, and service of information technology have moved overseas, the threat will continue to grow."
Check out these other hot stories: | <urn:uuid:3ba4c54a-b042-4f92-8973-f42a1619832d> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2230410/security/report-rips-key-government-security-efforts.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00054-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956988 | 811 | 2.640625 | 3 |
Google has never told the world exactly how many servers are running in its data centers, where the search giant stores digital information.
But that hasn't stopped people from ballparking Google's server count, which "recent guesstimates" have put at more than 1 million, according to Data Center Knowledge.
However, you know what happens when you guess: You make an "ess" out of G and U! Or, in the case of Google's servers, you overestimate the total number.
New information from a data-center energy usage study by a Stanford University professor "suggests that [Google] is probably running about 900,000 servers."
The professor, Jonathan Koomey, didn't just make up a number. He worked with data supplied by Google, Data Center Knowledge reported:
Google’s David Jacobowitz, a program manager on the Green Energy team, told Koomey that the electricity used by the company’s data centers was less than 1% of 198.8 billion kWh – the estimated total global data center energy usage for 2010. That means that Google may be running its entire global data center network in an energy footprint of roughly 220 megawatts of power.
These numbers allowed Koomey to arrive at specific numbers that-- oh, wait. From Koomey's report:
Table 4 makes some educated guesses about Google’s servers to estimate electricity used for that company’s data centers over time. While there is substantial uncertainty in these estimates (because of the lack of data on the installed base and other characteristics of Google’s servers), the calculations show that Google’s data center electricity use is about 0.01% of total worldwide electricity use and less than 1 percent of global data center electricity use in 2010.
"Educated guesses"? "Substantial uncertainty"? What happened to "didn't just make up a number"?
The closest Koomey comes is this: "In summary, the rapid rates of growth in data center electricity use that prevailed from 2000 to 2005 slowed significantly from 2005 to 2010, yielding total electricity use by data centers in 2010 of about 1.3% of all electricity use for the world, and 2% of all electricity use for the U.S." | <urn:uuid:c04509f3-4221-4748-87ea-3ff5f7840e70> | CC-MAIN-2017-04 | http://www.itworld.com/article/2740064/data-center/google-uses-fewer-servers-than-we-thought.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00358-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920785 | 460 | 2.84375 | 3 |
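Hedging aside, the arithmetic behind the headline number is easy to reproduce from the figures quoted above: 1% of 198.8 billion kWh per year corresponds to an average draw of roughly 220 MW, and spreading that over about 900,000 machines implies a couple of hundred watts per server. The per-server wattage below is an implied figure for illustration, not one the report states directly:

```python
WORLD_DATA_CENTER_KWH_2010 = 198.8e9   # global data center electricity use, kWh/year
GOOGLE_SHARE = 0.01                    # "less than 1%"; treat 1% as the upper bound
HOURS_PER_YEAR = 8760
ESTIMATED_SERVERS = 900_000

google_kwh_per_year = WORLD_DATA_CENTER_KWH_2010 * GOOGLE_SHARE
average_power_mw = google_kwh_per_year / HOURS_PER_YEAR / 1000
watts_per_server = average_power_mw * 1e6 / ESTIMATED_SERVERS

print(f"average power: ~{average_power_mw:.0f} MW")          # ~227 MW, in line with 'roughly 220 MW'
print(f"implied draw per server: ~{watts_per_server:.0f} W")  # including facility overhead
```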