More than a dozen years ago, America had a record high 186 million phones that operated over copper wires. Since then, more than 100 million have been disconnected, according to the trade group US Telecom. Today, nearly two in five Americans (38.5 percent) use only wireless phones in their home, according to a 2012 report from the Centers for Disease Control and Prevention. For young Americans ages 25-29, the numbers are even higher: 6 in 10 don’t have a landline and rely only on their cellphones.
Traditional phone companies, known as incumbent local exchange carriers (ILECs), have lost 70 percent of their residential phone business to wireless and cable carriers, reports US Telecom. The days of plain old telecom services are drawing to a close, soon to be replaced entirely by wireless and cable communications. The demise of traditional phone service has been so dramatic that AT&T wants to dismantle its entire network of copper landlines by the end of the decade, according to the Associated Press; and Verizon, the nation’s second-largest landline phone company, has received permission from New York state to replace traditional wire lines with its Voice Link wireless service in areas where phone services were washed out by Hurricane Sandy.
The phasing out of old phone technology is happening with support from state legislatures. So far, 25 states have passed legislation “eliminating or reducing state commission authority over telecommunications by the end of the 2012 legislative session,” according to the National Regulatory Research Institute (NRRI). By the end of 2013, several more states — Indiana, Nevada, Tennessee and Wyoming — are expected to do the same. “Should the legislation pending in the 2013 sessions be enacted, nearly 70 percent of the country will have significantly reduced or eliminated commission jurisdiction over retail communication services, such as VoIP (voice over Internet protocol) and other IP-enabled services,” according to an NRRI report. In other words, the new generation of phone services will be decidedly less regulated than the landlines they're replacing.
Meanwhile, the Federal Communications Commission’s new chairman, Tom Wheeler, has announced his intent to take up the issue of letting phone companies phase out landlines in favor of wireless and Internet-based phone service. Wheeler has set up a transition task force, which will take up the issue in December.
But just because wireless and broadband phone service has been embraced by a wide swath of Americans doesn’t mean everyone is ready to say goodbye to their landlines.
Verizon customers on Fire Island, N.Y., who lost their phone service from Sandy and have had their landlines replaced by the company’s Voice Link technology, have complained about dropped lines and inconsistent connectivity. The Voice Link box doesn’t work with some remote medical monitoring devices, home alarm systems and faxes nor can it accept collect calls or connect callers with an operator. For small businesses, the boxes don’t work with credit card machines.
Old-fashioned phone lines also don’t stop working when power goes out, making them a reliable form of communications in an era when monster storms that can knock out power for days have become the new norm.
“This is old-school, but there are plenty of instances where the cable goes out, the electricity goes out and the phone network is there,” Rob Frieden, a professor of telecommunications and law at Penn State, told National Public Radio.
Rural phone customers complain that phone carriers have raised landline rates while letting maintenance deteriorate as wires and switches reach the end of their life. Along with rural callers, the elderly are also likely to still use landlines and suffer from any drop in service quality.
That’s why AARP has called on Pennsylvania’s lawmakers to reject a bill that would deregulate phone service, remove consumer protections and possibly lead to higher phone charges for existing landlines. The consumer advocacy group cited a study that showed 94 percent of Pennsylvania residents who are 50 or older are satisfied with their landline service and 54 percent are concerned about affording a landline in the next three years. The same survey showed that 84 percent of rural Pennsylvanians over the age of 50 oppose the deregulation bill.
The phone companies have responded by saying that a less-regulated marketplace will give them the leeway to build a more robust, up-to-date service that competes with wireless and cable providers.
The solution to the opposing views on the future of phone service may lie in accepting the shift to new technology, but crafting a viable transition plan that doesn’t alienate segments of the population still wedded to landlines, says Harold Feld, senior vice president of Public Knowledge, a Washington-based public interest telecommunications advocacy group. In a recent blog post, Feld wrote, “Instead of looking at state and federal oversight of the transitions as a negative, people need to recognize that state and federal oversight are what prevent potential disasters like Fire Island from going critical.”
Public Knowledge favors the upgrade currently underway with the nation’s phone system, but says government needs to make sure the new system has the same social values that made the traditional phone system such a success for more than 100 years. Feld is hopeful that FCC Chairman Wheeler will keep in mind the four values that have informed communications law over the decades: public safety, universal access, competition and consumer protection.
With the FCC’s Transition Task Force giving its first status report on Dec. 12, we should soon know for sure.
This story originally appeared on GOVERNING.com.
Setting up networking is the step that we receive the most questions about when customers are new to the vCloud environment. We’re currently working on a video tutorial that will further help clients with the steps of setting up networking. In the meantime, here is an overview of the types of networking available and the tabs that you’ll find when you are configuring your network.
3 Types of Networking:
Routed: This connection provides an address on a local subnet that routes out through a firewall to reach the Internet.
Internal: This connection provides local network access only and is not externally routable. One example of using this network type would be when you have a database server that needs to communicate with a webserver on the same network, but not route out to the Internet.
Direct: Places a public IP address on your server and provides a direct connection to the Internet without a firewall.
Network Configuration Tabs: When you right-click on your network, you’ll see an option to “Configure Services”. This will open a pop-up window with the following tab settings:
DHCP: Enables DHCP to dynamically assign local addresses to machines in your vCloud environment.
Firewall: In this tab, you will add rules to your firewall to allow or deny specific network traffic. By default the firewall is enabled with all outgoing traffic allowed; however, you will need to create rules to allow incoming traffic.
NAT – External IPs: In this section, customers can add additional external IPs to their organization.
NAT Mapping: In this section, customers can configure network address translation (NAT) rules to map internal addresses to external routable addresses.
Site-to-Site VPN: Offers the ability to set up a secure VPN tunnel between clouds.
Static Routing: This tab allows you to create static routes to manually control the path your traffic takes within and out of the network.
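The default-deny-inbound behavior described under the Firewall tab can be sketched in a few lines. This is an illustrative model only, not the vCloud Director API; the rule fields and function names here are invented for the example.

```python
# Toy model of first-match firewall rule evaluation. By default,
# outgoing traffic is allowed and incoming traffic is denied, matching
# the default policy described above; explicit rules override it.

def evaluate(rules, direction, port):
    """Return 'allow' or 'deny' for a packet; the first matching rule wins."""
    for rule in rules:
        if rule["direction"] == direction and rule["port"] == port:
            return rule["action"]
    return "allow" if direction == "out" else "deny"  # default policy

rules = [
    {"direction": "in", "port": 443, "action": "allow"},  # admin-created rule
]

print(evaluate(rules, "out", 80))   # allow: outgoing traffic passes by default
print(evaluate(rules, "in", 443))   # allow: explicit inbound rule
print(evaluate(rules, "in", 22))    # deny: no inbound rule, default applies
```

Real edge firewalls also match on source and destination addresses and protocol, but the first-match-wins evaluation order is the key idea to keep in mind when you add rules in the Firewall tab.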
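The path selection that static routes control comes down to longest-prefix matching: the most specific route that covers a destination wins. The sketch below is a hypothetical illustration using Python's standard ipaddress module; the route entries are made up.

```python
import ipaddress

# Hypothetical static routing table: (destination network, next hop).
static_routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.1"),    # default route out
    (ipaddress.ip_network("10.0.0.0/24"), "10.0.0.254"),   # local subnet
    (ipaddress.ip_network("10.0.1.0/24"), "10.0.0.253"),   # e.g. a database tier
]

def next_hop(dest_ip):
    """Pick the most specific (longest prefix) route covering dest_ip."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, hop) for net, hop in static_routes if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.1.17"))  # 10.0.0.253: the /24 route beats the default
print(next_hop("8.8.8.8"))    # 192.168.1.1: only the default route matches
```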
Stay tuned for our video tutorial on setting up networking! If you have questions in the meantime, feel free to contact us, we’re happy to help.
Everyone is now discussing the Heartbleed Bug, which was detected in early April 2014. If you are an IT expert, you will easily follow the talk about OpenSSL, encryption systems and similar topics. However, if you are an ordinary PC user, you may have a hard time understanding what this bug really is and how to protect yourself against it.
According to security experts, Heartbleed is a recently discovered security vulnerability that puts people’s login data at many popular websites at risk. It is related to a piece of software known as OpenSSL, which is supposed to protect personal data (passwords, logins, credit card information, etc.) while it travels from a user’s computer to a website. Its 1.0.1 version has a bug that allows this data to be recovered from the memory of the web server without leaving a trace. As a result, Heartbleed can expose people’s logins to attackers, who can use them for various crimes. This can happen no matter what operating system or service the victim uses, so people can lose their logins whether they shop on Amazon or blog on Tumblr.
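The mechanics of the bug can be illustrated without any OpenSSL code. The sketch below is a deliberately simplified analogy, not real OpenSSL: the vulnerable heartbeat handler trusted a length field supplied by the client and echoed back that many bytes, even when the real payload was shorter, so adjacent server memory leaked into the reply.

```python
# Simplified analogy of the heartbeat over-read (not real OpenSSL code).
# The "server memory" holds the echo payload followed by unrelated secrets.
server_memory = b"ping" + b"|secret:admin_password|session_tokens"

def heartbeat_reply(claimed_len):
    # Buggy behavior: the claimed length is never checked against the
    # actual payload size (4 bytes here), so an oversized request reads
    # past the payload into neighboring memory.
    return server_memory[:claimed_len]

print(heartbeat_reply(4))    # honest request: echoes b'ping'
print(heartbeat_reply(30))   # over-read: the reply spills secret data
```

A fixed implementation simply rejects any claimed length larger than the payload actually received.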
According to the latest reports, Heartbleed has breached the security of around half a million of the Internet’s web servers. Facebook, Instagram, Pinterest, Tumblr, Google accounts, Etsy, Flickr, SoundCloud, YouTube and other online giants have or might have been attacked. If you have an account on any of these or other popular sites, you may want to change all your passwords and make sure that all your data is fine. However, making these changes might be useless if a site’s operators have still failed to fix the bug. So, WAIT FOR CONFIRMATION from sites or use such tools as LastPass HeartBleed Checker. You can find more information about this bug at The Heartbleed Bug.
It is true that Web design usability has been the subject of much debate in the design, technology and UX arenas over the past couple of years. This debate has been brought about by the push for some form of standardization alongside advancements in technology. The Internet has now become much bigger and better, with almost unlimited capacity. Then there are tools like HTML5 and AJAX, which help provide top-notch elegance on a simple base, and this has led to the emergence of two schools of thought with different ideas about which direction Web design should take from here: standard utilitarianism versus highly expressive, complex design.
Each school of thought appears to have a solid case on its side, depending on your point of view. In truth, however, neither tells the whole story. To gain a clear perspective on this, it helps to go back to the days when the World Wide Web was still in its formative years and the technology designed to support it was still in its early stages.
In the early years of the World Wide Web, there were many limits on what one could do with a Web page. The Web was so simple that designers were limited to bullet points, frames, tables, simple text concepts and the most basic images and pictures. Some of these are still used today, though less and less frequently. When people work in a technically and visually limited environment, their creativity and ingenuity are stimulated. The Web’s formative years, when everything was so constrained, demonstrated this clearly: some of the best ideas were born during that time.
At the time, only a few very basic general design trends existed. There was no standardized format to provide guidelines on how pages should be made, so the design of a Web page depended wholly on the choices of its designer. Each Web page offered an insight into the mind of its designer, since everyone built pages according to how they thought they should look. This brought about many difficulties: navigating Web pages was hard because each page differed from the last, had its own distinct format, and was stressful to use compared with modern pages and designs.
In the 2000s, more and more advanced technologies began to emerge, leading to a major shift from utilitarian design to the standardized form. Many websites with extremely large followings before the shift died almost instantly after moving to the standard form. Most claimed the reason for their decline was that they had changed what had attracted their followers in the first place, and that they were no longer the websites people used to follow. It was during this time that websites became much easier to use, since almost all of them used a similar format. The downside, however, is that they became less engaging and more mechanical.
Around 2007, with the emergence of HTML5 and AJAX, websites became more lively and personal. These technologies allowed more design and customization of websites while maintaining a standardized format. Websites became much easier to use since they were structurally similar: each had its own design and look, but the layout remained consistent. Navigation grew easier, websites became more engaging and personalized, and the confusion of earlier surfing became a thing of the past. What most people fail to grasp is that the growth and usability of the Internet have been brought about by mixing the emotional aspect with the standard layout of Web pages. You can now navigate websites almost entirely by instinct. This goes to show that emotional design and usability are interrelated, and that neither can exist without the other.
Danielle Arad is Director of Marketing and User Experience Specialist of WalkMe.com, the world's first interactive website guidance system. She is also chief writer and editor of UX Motel, a blog for user experience experts. Follow her @uxmotel.
Edited by Alisen Downey
Myths have always been a part of human culture, and can be found in nearly every aspect of life, including the computer. One of the larger computer-based myths revolves around malware, more specifically the virus. Many users are familiar with the concept but have a tough time distinguishing between what is true and what isn’t. Are you one of them?
Here are five common myths about viruses that confuse people, and the truths associated with them. Before we delve deeper it would be a good idea to explain what a virus is.
A virus is a computer program that infects a computer and can generally copy itself and infect other computers. Most viruses aim to cause havoc by either deleting important files or rendering a computer inoperable. Most viruses have to be installed by the user, and usually come hidden as programs, browser plugins, etc.
You may hear the term malware used interchangeably with virus. Malware is short for malicious software and is more of an umbrella term that covers any software that aims to cause harm. A virus is simply a type of malware.
Myth 1: Error messages = virus
A common thought many users have when their computer shows an error message is that they must have a virus. In truth, bugs in software, a faulty hard drive, bad memory or even issues with your virus scanner are more likely causes. The same goes for crashes: if your computer crashes, the cause could well be something other than a virus.
When you do see error messages, or your computer crashes while trying to run a program or open a file, you should scan for viruses, just to rule it out.
Myth 2: Computers can infect themselves
It’s not uncommon to have clients bring their computers to a techie exclaiming that a virus has magically appeared on the system all by itself. Despite what some may believe, viruses cannot infect computers by themselves. Users have to physically open an infected program, or visit a site that hosts the virus and download it.
To minimize the chance of being infected, you should steer clear of adult-oriented sites – they are often loaded with viruses – as well as torrent sites and the like. A good rule of thumb is: if a site hosts illegal or ‘adult’ content, it likely carries viruses that can and will infect your system if you visit it or download files from it.
Myth 3: Only PCs can get viruses
If you read the news, you likely know that many of the big viruses and malware infect mostly systems running Windows. This has led users to believe that other systems like Apple’s OS X are virus free.
The truth of the matter is that any system can be infected by a virus; it’s just that the vast majority of viruses are written to target Windows machines, because most computers run Windows. That being said, there is an increasing number of threats to OS X and Linux, as these systems are becoming more popular. If this trend keeps up, we will see an exponential rise in the number of viruses infecting these systems.
Myth 4: If I reinstall Windows and copy all my old files over, I’ll be ok
Some believe that if their system has been infected, they can simply copy their files onto a hard drive, or backup solution, reinstall Windows and then copy their files back and the virus will be gone.
To be honest, wiping your hard drive and reinstalling Windows will normally get rid of any viruses. However, if the virus is in the files you backed up, your computer will be infected when you move the files back and open them. The key here is that if your system is infected, you need to scan the files and remove the virus before you put them back onto your system.
Myth 5: Firewalls protect networks from viruses
Windows comes with a firewall built into the OS, and many users have been somewhat misled as to what it actually does; in particular, many believe that firewalls can protect against viruses. That’s actually a half-truth. Firewalls deal with network traffic: their main job is to keep networks, and the computers connected to them, secure. They don’t scan for viruses.
Where they could help is if a virus is sending data to a computer outside of your network. In theory, a firewall will pick up this traffic and alert you to it, or stop the flow of data outright. Some of the bigger viruses actually turn off the firewall, rendering your whole network open to malware attacks.
What can I do?
There are many things you can do to minimize the chances of infection. The most important is to install a virus scanner on all of your systems, keep it up to date and run it regularly. But a defensive strategy like this isn’t enough; you need to be proactive as well.
If you are worried about the security of your systems and network, call us today. Our team of security experts can work with you to provide a plan that will meet your needs.
Virus:W32/Ramnit is no stranger to many malware analysts/researchers, as it was in the wild back in 2010.
Other malware researchers have blogged about the technical details of this interesting virus (here and here, for example); however there are still some noteworthy techniques — and an "easter egg" — waiting to be discovered.
One of the interesting techniques is the injection method that Ramnit uses. This differs from the traditional method, in which a virus would create a suspended thread and inject code using a memory writing Windows API function, then resume the suspended thread after the injection is done.
In this case, what makes Ramnit different is that it calls a Windows API function to spawn a new process, either the default web browser process or the Generic Host Process for Win32 Services, also known as svchost.exe. By injecting into this newly spawned process, the code is not easily visible to users and is able to bypass the firewall.
Before this happens though, Ramnit installs an inline hook in an undocumented Windows native system service called Ntdll!ZwWriteVirtualMemory. The picture below depicts how this injection works:
The hooked Windows native system service redirects the code execution flow to the module defined in the caller process to perform the code injection routine. The injected code in the new process includes the capability for file infection (Windows executable and HTML files), as well as backdoor and downloader functionalities.
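The inline-hook idea itself is language-independent: the entry point of a routine is replaced with a detour that runs the malware's logic and then falls through to the original code. The Python sketch below is only an analogy for the Ntdll!ZwWriteVirtualMemory hook described above, with invented names, since the real hook patches x86 machine code rather than rebinding a function.

```python
def zw_write_virtual_memory(process, data):
    # Stand-in for the real system service that writes into a process.
    return f"wrote {len(data)} bytes to {process}"

_original = zw_write_virtual_memory   # saved "trampoline" back to the original

def detour(process, data):
    # Hook logic: tamper with the write, then fall through to the original,
    # mirroring how the redirected service performs the injection routine.
    data = data + b"<injected payload>"
    return _original(process, data)

zw_write_virtual_memory = detour      # "patch" the entry point

# Every caller now unknowingly goes through the detour first.
print(zw_write_virtual_memory("svchost.exe", b"1234"))
```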
Another noteworthy detail in Ramnit is its "easter egg", found in the DLL that it injects to the processes mentioned above. The code snapshot below should explain everything:
Basically, this easter egg navigates to the registry key and looks for "WASAntidot":
When we try to create "WASAntidot" registry key on a test machine, we see this:
Voila! The machine is safe from Ramnit infection now!
Over the past few years, as the miniaturization of technology marches on, manufacturers have been making enormous strides in wearable computing technology. While your smartphone is now the computer in your pocket, smartwatches are becoming the computer on your wrist. With many smartphone manufacturers branching out into smartwatches, these devices are gradually carving out their own niche of consumers. Like smartphones, smartwatches can contain a treasure trove of meaningful data for forensic investigators, but the data within them can be difficult to acquire. Gillware Digital Forensics offers smartwatch forensics services for law enforcement and legal professionals.
The History of Smartwatches
The digital watch has been around since the 1970s. However, it wouldn’t be until the 1990s that the very first precursor to what we now know as the smartwatch would be invented. The Timex Datalink, produced in 1994, could transfer data wirelessly between itself and a PC and store appointments and contact lists created in Microsoft Schedule+.
1998 saw the invention of the first Linux wristwatch by Steve Mann. Mann’s presentation at IEEE ISSCC2000 two years later granted him the title of “the father of wearable computing”. Also in 1998, Seiko launched its Ruputer in Japan (distributed outside of Japan as the Matsucom onHand PC). Since the onHand PC featured its own graphics display and could run third-party apps, it could be considered the world’s first smartwatch.
In 1999, Samsung announced its SPH-WP10, the world’s first watch phone, with an integrated speaker, microphone, and a protruding antenna. It looks just like what you’d expect a late-90s cell phone with a wrist strap to look like.
Throughout the first decade of the twenty-first century, Microsoft, Samsung, IBM, and other companies pushed wearable computers further. But it wasn’t until 2013 that tech startup Omate used a Kickstarter campaign to fund production of the first smartwatch boasting all of the capabilities of a smartphone.
It was at this point that the technology had been suitably miniaturized such that these types of devices could be inexpensive to make and easy to use. Major computer and smartphone manufacturers jumped into the game, prompting consumer device analyst Avi Greengart to suggest that 2013 was the “year of the smartwatch” (funnily enough, various outlets have continued to pronounce the three subsequent years the “year of the smartwatch” as well, perhaps robbing the term of some of its grandiosity). Over the next few years, more and more manufacturers threw their hats into the ring, with Apple announcing its own Apple Watch in 2014 and releasing it the next year.
Smartwatches have yet to prove as ubiquitous as smartphones, with many still skeptical these devices will ever fully catch on the way smartphones have, but their usage continues to increase.
Why Would You Ever Need Smartwatch Forensics Services?
It may seem strange to expect a smartwatch forensics investigation to yield anything of value. After all, it’s a wristwatch, and how much can a wristwatch tell you about your case? Well, as it turns out, these smart watches, along with the phones they’re connected to, can reveal a whole lot. Smartwatches are wearable computers, and today even the most basic models are packed with features that can store highly relevant information for your case.
The personal data you can find on a smartwatch and/or the user’s phone associated with it includes:
GPS and activity tracking data: Many of these smartwatches are marketed as sport watches, and can record data on the user’s whereabouts, such as the path they take on their morning jog. Many smartwatches can also include fitness tracker applications that keep track of its owner’s activities, help them manage and record their workouts, and even monitor their heart rate. These highly-personal tidbits of information can be stored within the watch or the phone it’s been set to sync to.
Calendars, schedulers, personal organizers: Smartwatches can carry a wealth of data relating to the user’s daily life. The user’s smartwatch can store calendar events for the user, such as their appointments, reminders, shopping lists, and search history, which is often synced between the watch and their smartphone and/or PC.
Text messages and phone calls: Many smartwatches fall under the category of “watch phones” and feature full mobile phone capabilities. Users can, on their smartwatch, receive and send text messages and phone calls via a Bluetooth or USB headset. Other smartwatches connect wirelessly to the user’s smartphone, allowing the owner to use their phone’s features and capabilities through the watch.
Smartwatches can also tell time.
If a smartwatch turns up during your investigation, there could be a wealth of valuable forensic data living inside it, but you might not have the resources to acquire the data within. Fortunately, Gillware Digital Forensics is here for you.
Smartwatch Forensics Services
As the prefix “smart-” suggests, smartwatches have a lot in common with smartphones. Many of the same manufacturers who produce smartphones also produce smartwatches of their own, such as Motorola, Samsung, LG, Huawei, Alcatel, and Apple. Other smartwatch manufacturers include Pebble, with its popular Pebble Time line, as well as Sony and Asus.
Many models of smartwatch run on Android operating systems. A special smartwatch-optimized version of the Android O/S, dubbed “Android Wear”, runs on some models of smartwatch such as Sony’s SmartWatch 3 and LG’s Watch Urbane. For these devices, forensic analysis bears many similarities to Android forensics.
Apple uses its own proprietary WatchOS on its Apple Watch, while Samsung typically relies on the Linux-based Tizen operating system. Pebble smartwatches use their manufacturer’s own proprietary Pebble OS as well. Some smartwatches run on Ubuntu Touch, a touchscreen-optimized O/S for mobile devices based on the popular open-source Linux operating system.
Smartwatches can be quite physically similar to smartphones as well. The core of the smartwatch is an internal flash memory chip. Some smartwatches even have SIM cards of their own, which can hold relevant data to forensic investigators. Many smartwatches also hold slots for microSD cards to expand their data storage capabilities.
Smartwatch forensics shares many aspects with smartphone forensics. Like smartphones, these devices can be password-locked to prevent unwanted intrusions, and one of the great difficulties in forensic investigations can be getting past a password lock to acquire the device’s contents.
Acquiring a smartphone’s contents can require a very large bag of tricks, from software tools like Cellebrite to delicate JTAG operations or invasive chip-off forensics. Each brand and model of smartwatch will require a slightly different approach to navigating its hardware and software terrain. Like smartphone forensics, there is no one-size-fits-all approach to smartwatch forensics, and certain techniques and levels of data acquisition can be more useful than others in some circumstances and less useful in others.
Apple Watch Forensics Services
Apple threw its hat into the rapidly-widening smartwatch ring in 2014 with the announcement of its Apple Watch, which it released in 2015. The Apple Watch runs on the WatchOS, which is based on iOS, the mobile O/S used in Apple’s iPhone line. It does not have full smartphone capabilities on its own, and must be paired with an iPhone 5 or later with the Apple Watch app installed on it. For this reason, it’s important to examine both the watch and the paired iPhone. The Apple Watch can receive notifications, messages, and phone calls from the user’s paired iPhone, and can be used along with Apple Pay to pay for goods and services.
In much the same manner as iOS forensics, Apple Watch forensics can turn up important forensic data regarding the user’s app usage, and may be able to reveal information about the user’s physical activity and whereabouts that cannot be found through forensic examination of their iPhone alone. While much of the data accessed by an Apple Watch user merely passes through the device and actually originates from their iPhone itself, these actions can leave traces on the smartwatch, which Apple Watch forensics analysis can uncover.
Forensic examiners can acquire an Apple Watch’s contents using many of the same techniques used to acquire the contents of an iOS device. Like iOS devices, Apple Watches have no removable data storage capacity, only an internal flash memory chip, and its internal storage uses the HFS+ filesystem. As a result, Apple Watches have limited space for storing data—around eight gigabytes of storage space. And yet, important forensic artifacts may live even in such a small space. The digital forensics experts at Gillware can assist you with your Apple Watch forensics needs as well as forensic investigation of other smartwatches of various brands and models.
Gillware Digital Forensics’ Smartwatch Forensics Services
Gillware Digital Forensics features skilled digital forensics experts who can help you in every step of the way through your investigation. From the initial forensic assessment of the smartwatch you need analyzed to the final forensic results, to expert testimony in court to make sure our findings are represented clearly and accurately, the experts at Gillware Digital Forensics have your back.
Gillware Digital Forensics’ president is Cindy Murphy, an industry veteran with decades of experience working in law enforcement as a digital forensic expert and several major industry certifications, including recommendation as an expert from Cellebrite.
When the smartwatch you need analyzed is damaged or broken, Gillware Digital Forensics can leverage the expertise of our secure, GSA-contracted data recovery lab. Our data recovery and forensic experts can acquire as much data as possible, sift through it, and report our findings. With the tools and experience born of years of data recovery and digital forensics work, the experts at Gillware Digital Forensics excel at even the most delicate, sensitive, and complex cases.
If you need assistance with smartwatch or Apple Watch forensics, the experts at Gillware Digital Forensics are here to help you.
To get started on a case, follow the link below to request an initial consultation with Gillware Digital Forensics. | <urn:uuid:fade0769-6ee5-499b-bd07-15a54c9cca4f> | CC-MAIN-2017-04 | https://www.gillware.com/forensics/smartwatch-forensics-apple-watch | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00201-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946468 | 2,185 | 2.6875 | 3 |
Whether or not the D-Wave quantum computer is in actuality a quantum computer is a debate that HPCwire has been following since the project hatched in 2007. Early critics of the system claimed that it wasn’t a “real” quantum computer. Since that time, D-Wave has been winning over supporters, including Google, NASA and Lockheed Martin. But others (MIT’s Scott Aaronson for one) remain skeptical. In the course of the continuing controversy, the Washington Post‘s Timothy B. Lee recently took up the matter with D-Wave’s vice president of processor development Jeremy Hilton.
There is some disagreement over what exactly constitutes a quantum computer. The classic model for building a quantum computer is called Gate Model Quantum Computing, but an alternative model was introduced by MIT researchers in the early 2000s. It’s called adiabatic quantum computing, and it’s what D-Wave’s computer is based on. Some critics of D-Wave’s technology contend that it’s not the real deal.
“What D-Wave built is not universal quantum computing,” Hilton readily admits, but he maintains that it’s been proven in the literature – not by D-Wave – that the adiabatic model is an equivalent model of quantum computation.
D-Wave founders went with the adiabatic approach because they thought it had the best chance of enabling real work in a reasonable time frame without getting into the really difficult NP-Hard class of problem-solving.
Asked to be “concrete” in describing their hardware, Hilton responds:
“D-Wave has focused on the superconducting side of things to benefit from the infrastructural advancement the semiconductor has made. The fabrication of superconductors is all [mature] semiconductor technology. We fabricate [our chips] at Cypress Semiconductor. We don’t have exotic tools to make those devices. That was an important aspect for D-Wave, we want to scale up to a high level. If all those problems have already been solved, we’ll be able to take advantage more quickly. [If we had used] ion trap technology, new technologies would have needed to scale up.”
As for why this system should be considered impressive even though it’s only “comparable or slightly better” than classical computing technology, Hilton affirms that even being “in the ballpark of the conventional algorithms in the field was very exciting.” The company and its backers are focused on their future roadmap and on the large improvements they are seeing between generations. For example, transitioning from a 128-qubit to a 512-qubit processor returned a 300,000x improvement in performance. A 1,000-qubit processor is planned for release some time this year, and a 2,000-qubit processor is on the horizon as well.
“We’re at a point where we see that our current product is matching the performance of state-of-the-art classical computers,” Hilton adds. “Over the next few years, we should surpass them. The ideal is to get into a space that is fundamentally intractable with classical machines. In the short term all we focus on is showing some scaling advantage and being able to pull away from that classical state of the art.”
In the remainder of the Q&A, Hilton uses a hills and valleys metaphor to describe how the D-Wave machine compares with its conventional computing cousins (“entanglement allows those valleys to interact and interfere in a way that allows the system to find its lowest-energy optimization”). He also explains why D-Wave hasn’t focused on Shor’s algorithms (“it’s not an interesting market segment for a business”), and counters claims of secrecy as a historic effect (their early years were focused on building a scalable technology, not publication).
In the final analysis, Mr. Lee asks all the right questions, but the responses, while frank, can come off as frustratingly vague – a paradox befitting the subject matter, perhaps, or something more calculated if you’re a critic.
Richard Feynman has been quoted as saying: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” But he also said: “If you can’t explain it to a six-year-old, you don’t really understand it.”
“When we have a new president, you and your family are going to be deported.”
This upsetting comment was one found in a list of commonly heard bullying phrases compiled by fifth graders in the Midwestern United States.
When the 11- and 12-year-old students were asked what this statement meant, they did not know what “deported” meant — though they did know they had heard the term in connection with the current U.S. presidential election. Students said they hear comments like this in educational environments because politicians have said them on television at home.
What kids hear when they listen to today’s political news
Racial insults are not the only troubling things uttered by political candidates, especially recently. Politicians yell, interrupt, call each other names and model other bullying behaviors on the news and social media. And kids are watching and listening intently the entire time.
All 50 states in the U.S. currently have laws against bullying among young people. Nearly all schools have anti-bullying policies enforced by teaching students what bullying behavior is, how to recognize it, and what to do to prevent it. So students and teachers across the country are expressing concern that the bullying behavior among high-profile politicians is setting a poor example for today’s youth. They believe this type of bullying would not be allowed on a school campus.
In a recent CBS News article, Buffalo, New York School Administrator Will Keresztes stated that much of the political rhetoric would violate not only the district’s code of conduct, but it would also violate the state’s Dignity for all Students Act.
How do young people get their news?
Most students who were polled for the aforementioned list of bullying phrases reported that they learn what politicians say, and what people think of them, through social media platforms such as Facebook, Instagram and Twitter. In fact, about 80 percent of these kids have their own Facebook accounts, even though Facebook’s policies restrict user profiles to those over the age of 13.
With nearly 75 percent of the U.S. population on Facebook, it’s safe to say that most people reading this know that the comments, memes and shared websites on Facebook are not always factual. Yet young people get their news from what they see on social media, and then mimic the less-than-upstanding behaviors of politicians currently in the spotlight.
What do educators think of politicians’ behavior?
A recent survey of approximately 2,000 teachers produced by the Southern Poverty Law Center indicates that the presidential campaign is having a profoundly negative impact on school children across the country. Educator Kelly Ann Carroll, teacher of the year in one of the largest school districts in Texas and parent of six children, is seeing the effects of the campaign firsthand. Here are her thoughts on the situation:
“I am ashamed at the example today’s politicians are giving American children. We see images of smiling government candidates of all stations promoting suicide prevention and anti-bullying policies to our children and school systems, yet they cannot refrain from acting worse than hazing frat boys on national television. The mudslinging occurs so frequently that children can’t help but be exposed.”
What can adults do to help students understand this info?
While the behavior of politicians on the news and social media can’t be controlled, schools and parents can use the examples of bullying as teachable moments with young people. Here are some lessons that can be taken from the presidential race:
In a South Dakota news report, school counselor Laura Meile said she teaches students that actions now have consequences down the road. Even though the Facebook and Twitter attacks seem to gain candidates a lot of attention, the act of posting negative things about someone online can cost both the attacked person and the attacker opportunities in the future. “We remind students this can follow you, and this doesn’t go away,” Meile said. “Colleges may look at it, and employers will look at it, also.”
Additionally, the current political climate gives adults the opportunity to talk to young people about appropriate behavior when the bullied child hasn’t done anything to provoke it. It provides a platform to talk about bullying, fighting fair, being honest and being a good role model.
In a TODAY show article on talking to kids about politics, Dr. Deborah Gilboa says, “Our kids and teens will all see plenty of adults in the spotlight who behave badly, from favored athletes to celebrities. It’s worth talking about these choices so that our children can use those examples to guide their own good choices.”
When discussing inappropriate behavior online and in person, adults have the opportunity to give young people the tools to deal with bullying. Anti-bullying expert and Founder of the Hey U.G.L.Y. organization Betty Hoeffner says that providing students with a standard response to bullying behavior is a key tool for dealing with negative situations.
Hoeffner teaches students that the biggest reason people bully is that they themselves are hurting. Because of this, a good standard response to a bully is: “Who’s treating you so mean that you have to be mean to me?” This takes the power away from the bully by forcing them to look at their own feelings. Imagine if politicians today responded to each other with that remark!
In addition to a standard remark, students should also be taught steps to take to deal with cyberbullying. Hey U.G.L.Y. provides some suggestions for this here.
Research from Hawkins, Pepler & Craig in 2001 shows that 57 percent of bullying situations stop when a peer intervenes on the victim’s behalf. Students who witness bullying, or who are being bullied themselves, need to know how to report the situation anonymously and without fear of retaliation.
Make sure students know who they can talk to at the school about bullying issues. If your school has online reporting tools, such as Impero Education Pro’s Confide function, then make sure students have a clear idea of how to use them. Finally, let students know they can call the tipline at their local police department about bullying, and the call will be completely confidential.
Today’s political race shows student safety is more important than ever
Mudslinging and attention-grabbing behavior have always been, and may always be, part of political campaigns. Regardless, upholding a school’s policies, values and student safety is important. Utilizing real-life situations to provide teachable moments helps students to feel safe, have open communication, be unique and become future leaders.
“Knowledge — that is, education, in its true sense — is our best protection against unreasoning prejudice and panic-making fear, whether engendered by special interest, illiberal minorities or panic-stricken leaders.” – Franklin D. Roosevelt, 32nd President of the United States
Impero Education Pro software helps schools facilitate digital safety by monitoring for issues such as bullying, suicidal behavior, eating disorders, weapons and violence. To talk to our team of education experts, call 877.883.4370, or email Impero now to arrange a call back.
If you are a nonprofit organization that would like to partner with Impero to keep kids safe from weapons and violence, child abuse, bullying or other harmful acts, email us today. | <urn:uuid:70bf2271-b8c2-49cd-b388-0cd390cdf0ac> | CC-MAIN-2017-04 | https://www.imperosoftware.com/finding-teachable-moments-about-bullying-from-the-u-s-presidential-race/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00017-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960671 | 1,547 | 3.359375 | 3 |
6 Ways the IoT Could Impact Data Centers
The Internet of Things (IoT) refers to the network of interconnected physical objects, each uniquely identifiable — typically by an IP address. This Gartner report states that, by 2020, the IoT is expected to grow to 26 billion installed units. This means that all data centers must be prepared for the increase in data, not just in terms of storage but in other respects as well.
Here are 6 challenges that data centers should prepare for as a result of the IoT:
1. Volumes of data storage: The data of the IoT could come from personal consumers, devices and large enterprises. The combination of these sources could lead to an astronomical growth in the quantity of data that must be stored by a data center. While some amount of scalability is always planned for, in order to be proactive, a data center must plan specifically for the IoT.
2. Data security and privacy: With an increase in the amount of data, the security measures in the data center also need to be strengthened accordingly. The multiple devices used to access data add to the concerns about breach of privacy. These devices may vary from the smallest phone or tablet to a smart kitchen appliance or automobile.
3. Network requirements: Most data centers are equipped for medium-level bandwidth requirements for access to the data. With the IoT, the number of connections and the speed of access would both have to undergo significant improvements to satisfy the growing requirements.
4. Scaling of storage architecture: The increase in the storage requirement could also lead to a challenge in the way the storage and servers are configured. It is recommended that a distributed structure be adopted to make storage and access as efficient as possible.
5. Multiple locations: Providing a solution for storage of IoT data that comes from multiple locations could be a challenge for a data center at a single location. The trend might need to move towards a collection of connected centers that are administered from a central location.
6. Cost effectiveness: The type of detailed backup that is possible in the current landscape may no longer be affordable, both in terms of storage and in terms of required network bandwidth. This might encourage a need for selective backup with a well-thought-out frequency for performing the operation.
At Lifeline Data Centers, we believe that identifying future challenges is the first step towards finding solutions. Contact us to learn more. | <urn:uuid:ff941b82-1346-4bb7-9ba9-5d296dba04a5> | CC-MAIN-2017-04 | http://www.lifelinedatacenters.com/data-center/6-ways-the-iot-could-impact-data-centers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00229-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943781 | 487 | 2.734375 | 3 |
One or more of the identified infections is a backdoor Trojan. Backdoor Trojans and IRCBots are very dangerous because they compromise system integrity by making changes that allow the machine to be used by the attacker for malicious purposes. They can disable your anti-virus and security tools to prevent detection and removal. Remote attackers use backdoors as a means of accessing and taking control of a computer in a way that bypasses security mechanisms. This type of exploit allows them to steal sensitive information like passwords and personal and financial data, which is then sent back to the hacker. Read Danger: Remote Access Trojans.

You should disconnect the computer from the Internet and from any networked computers until it is cleaned. If your computer was used for online banking or paying bills, or has credit card information or other sensitive data on it, all passwords should be changed immediately, including those used for taxes, email, eBay, PayPal and any other online activities. You should consider them to be compromised and change passwords from a clean computer, not the infected one. If not, an attacker may get the new passwords and transaction information. Banking and credit card institutions should be notified immediately of the possible security breach. Failure to notify your financial institution and local law enforcement can result in refusal to reimburse funds lost due to fraud or similar criminal activity. If using a router, you need to reset it with a strong logon/password before connecting again.

Although the infection has been identified and may be removed, your machine has likely been compromised and there is no way to be sure the computer can ever be trusted again. It is dangerous and incorrect to assume the computer is secure even if the malware appears to have been removed. In some instances an infection may have caused so much damage to your system that it cannot be successfully cleaned or repaired. The malware may leave so many remnants behind that security tools cannot find them. Many experts in the security community believe that once infected with this type of malware, the best course of action is to wipe the drive clean, reformat and reinstall the OS. Please read Backdoors and What They Mean to You:

"Whenever a system has been compromised by a backdoor payload, it is impossible to know if or how much the backdoor has been used to affect your system... There are only a few ways to return a compromised system to a confident security configuration. These include:
• Reimaging the system
• Restoring the entire system using a full system backup from before the backdoor infection
• Reformatting and reinstalling the system"

This is what Jesper M. Johansson, Security Program Manager at Microsoft TechNet, has to say in Help: I Got Hacked. Now What Do I Do?:

"The only way to clean a compromised system is to flatten and rebuild. That’s right. If you have a system that has been completely compromised, the only thing you can do is to flatten the system (reformat the system disk) and rebuild it from scratch (reinstall Windows and your applications)."

We will do our best to clean the computer of any infections seen on the log. However, because of the nature of this Trojan, I cannot offer a total guarantee that there are no remnants left in the system, or that the computer will be trustworthy. Many security experts believe that once infected with this type of Trojan, the best course of action is to reformat and reinstall the Operating System. This decision should be based on what the computer is used for and what information can be accessed from it. Knowing the above, do you wish to proceed with cleaning the malware from the computer?
I've seen a few headlines about the habitable planet (or planets) orbiting a star called Tau Ceti in which the words "neighboring" and "nearby" were used.
OK, on a universal scale, 12 light years might qualify as "next door." But in terms of our ability to cover vast distances in space ... well, we just can't do it. And we won't be able to in our lifetimes.
The numbers are sobering. A light year measures 5.88 trillion miles. That's trillion, space fans. For some perspective, the moon is 240,000 miles from Earth, while Mars is (on average) 140 million miles from Earth. And it currently takes anywhere from six to nine months or so to travel to Mars.
To cover one light year -- or 5.88 trillion miles -- you would have to make the one-way trip to Mars more than 40,000 times.
And that's just one light year. Our "neighboring" Earth-like planet orbiting Tau Ceti is 12 light years away. I looked around a bit online for how long it would take to travel a light year at current space-travel speeds. The number I kept coming up with was around 35,000 years. Multiply that by 12 and you discover it would take 420,000 years to reach Tau Ceti. How many times will the commander of that flight hear, "Are we there yet"?
Let's say I'm off on my multiplication and division (about a 50-50 shot) by a factor of 100: It still would take 4,200 years to reach Tau Ceti under current travel speeds. Does that really sound more doable?
The truth is, interstellar travel is a pipe dream until we find a radically new source of fuel or way to travel.
It turns out, though, that NASA is working on a faster-than-light "warp-drive" that would reduce a trip to Alpha Centauri -- the nearest star to our sun, about 4.3 light years away -- to a mere two weeks. So figure about six weeks to reach Tau Ceti.
Of course, this is all in the theoretical stage. There's a good article from late November on the science website io9 about NASA's efforts (including an interview with NASA physicist Harold White) that explains the warp concept:
It takes advantage of a quirk in the cosmological code that allows for the expansion and contraction of space-time, and could allow for hyper-fast travel between interstellar destinations. Essentially, the empty space behind a starship would be made to expand rapidly, pushing the craft in a forward direction — passengers would perceive it as movement despite the complete lack of acceleration.
Bottom line: They're working on it, but don't hold your breath. And don't run out and book a flight going beyond our solar system: You'll have a long, long wait. | <urn:uuid:ca11f2bf-7c6b-4a4c-8784-aa2e93a5b313> | CC-MAIN-2017-04 | http://www.itworld.com/article/2717166/hardware/don-t-count-on-visiting-the--nearby--earth-like-planet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00137-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960925 | 605 | 3.3125 | 3 |
Don't Patch and Pray
It could be argued that the ease with which patches can be distributed has fostered an environment that prioritizes features over mature development practices that would inherently promote stability and security. With today's constant stream of patches coming from so many sources, however, and the tandem pressures of security and stability, organizations can no longer afford to patch and pray.
Why Are There So Many Patches?
Despite all of the advances over the years, software development is still immature. There are dozens of well thought out methodologies to assist in the defining of requirements, module interactions, code re-use, testing, etc. But let's think about this for a moment: which parts of a computer have code in them? Let's make a quick list: the CPU, BIOS, storage system, graphics card, network card, other hardware with on-board firmware, the operating system, device drivers, and security applications (including the anti-virus and personal firewalls), not to mention all of the in-house and third-party user applications. If you were to take a typical desktop PC, for example, the list of software/firmware written by various groups can quickly number in the hundreds and yet, they must all co-exist and many of the applications must work together in varying degrees.
The point is this: different teams using different personal styles, methodologies, tools, and assumptions generate all of this code, often with little to no interaction. When you combine the various pieces of software (i.e., all compiled or interpreted code, be it embedded in firmware or run in an OS environment), the results aren't always readily predictable due to the tremendous number of independent variables. As a result, issues arise; and when development groups attempt to fix the issues, they generate software patches with all of the best intentions.
If we return to our basic principle — as software becomes increasingly complex, the number of errors in the code will rise as well — this also means that potential errors with the patches themselves will correspondingly rise as well. Furthermore, patches often contain third-party code, or ancillary libraries that are not directly designed, coded, compiled, and tested by the development team in question. Simply put, there are many variables introduced with patches.
To be explicit, for the purpose of this article, patches are defined as a focused subset of code that is released in a targeted manner as opposed to the release of an entire application through a major or minor version code drop. The patch may fix a bug, improve security, or even update from one version of the application to another in order to address issues and provide new features. These days, of course, the security patch issues really get a lion's share of media attention, but correcting security isn't the only reason patches are released.
Regardless of the intent of a patch, the problem is that the introduction of a patch into an existing system introduces unknown variables that can adversely affect the very systems the patches were, in good faith, supposed to help. Organizations that apply patches in an ad hoc manner (i.e., with little or no planning prior to deployment) are known to "patch and pray." This slang reflects that when patches are applied, IT must hope for the best.
Interestingly, in reaction to the often-unknown impact of patching, there appears to be one school of thought wherein all patches should be applied and another that argues that patches should never be applied. It is unrealistic to view the application of patches as a bipolar issue. What groups need to focus on is the managed introduction of patches to production systems based on sound risk analysis.
It's All About Risk Management
In a perfect world, everyone would have the exact same hardware and software. This way, any new patch would perfectly install without issues. However, this perfect view is nearly impossible to attain on a macro/global scale, but does serve as an interesting thought experiment. The fact is that organizations will almost always have different environments than their vendors, peers, competitors and so on. Thus, any patch applied to existing systems carries a degree of risk.
Likewise, there are risks associated with not patching. What organizations need to do is assess the level of risk of each patch, define mitigation strategies to manage the identified risks, and formally decide whether or not the risk is acceptable. To put this in the proper context, let's define a basic process for patching because risk management is a pervasive concern though the whole process, but risk management by itself does not define a process.
A Basic Software Patch Process
The patching process does not need to be complicated, but it must be effective for the organization and its adoption must be formalized. Furthermore, it is absolutely critical that people be made aware that the process is mandatory. The intent is to codify a process that manages risk while allowing systems to evolve. By creating a standard process that everyone follows, best practices can also be developed over time and the process refined. With all of this in mind, here is a simple high-level process that organizations can use as a starting point in discussions over their own patch management process:
1. Identification

There must be active mechanisms that alert administrators that new patches exist. These methods can range from monitoring e-mails from vendors and talking to support groups, all the way to using automated tools, such as the Microsoft Baseline Security Analyzer, to actively scan systems for missing patches. These patches must be identified and added to a list of potential patches for each system.
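As a toy illustration of the identification mechanisms described above, the per-system list of potential patches can be derived by diffing the advisories that apply to each system against what is already installed. (The advisory IDs, system names, and data shapes here are hypothetical.)

```python
def missing_patches(advisories, inventory):
    """Build the per-system list of pending patches.

    advisories: maps advisory id -> set of systems it applies to
    inventory:  maps system -> set of advisory ids already installed
    """
    pending = {}
    for system, installed in inventory.items():
        applicable = {adv for adv, systems in advisories.items()
                      if system in systems}
        todo = sorted(applicable - installed)
        if todo:
            pending[system] = todo
    return pending

# Hypothetical data: two advisories, two systems
advisories = {"KB-001": {"web1", "db1"}, "KB-002": {"db1"}}
inventory = {"web1": {"KB-001"}, "db1": set()}
print(missing_patches(advisories, inventory))  # {'db1': ['KB-001', 'KB-002']}
```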
2. Planning

Depending on the volume of patches, it may help to sort patches by priority and to identify whether each patch is to proceed, be placed on hold, or be cancelled. Think of it as a form of triage, because IT resources are always limited and decisions must be made early on. This does assume, however, that the people making the decisions will be adequately informed about the risks involved.
Patches should be reviewed by system, priority, and category, and also grouped. As opposed to installing each patch as it comes in, organizations need to strongly consider having a policy of grouping patches and deploying them periodically in batches following a solid testing process. For example, one step would be to only apply patches on a bi-weekly schedule. This grouping and delayed application approach need not apply to all situations.
In the case of high-priority patches, wherein the risks associated demand immediate patching, then there must exist means to handle emergency exceptions in an accelerated fashion while maintaining effective controls. In other words, yes, hot patches will come in and demand immediate installation. However, rather than bypass all review and testing steps, there still needs to be a means to review the hot patches and make informed decision about their expedient installation.
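The triage and fast-track routing described above might be sketched as follows; the `Patch` fields and priority labels are illustrative, not drawn from any particular patch-management tool:

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    EMERGENCY = 0   # e.g. an actively exploited vulnerability
    HIGH = 1
    ROUTINE = 2

class Decision(Enum):
    PROCEED = "proceed"
    HOLD = "hold"
    CANCEL = "cancelled"

@dataclass
class Patch:
    name: str
    system: str
    priority: Priority
    decision: Decision = Decision.PROCEED

def triage(patches):
    """Split patches marked 'proceed' into a fast-track queue (emergency
    fixes that still get expedited review) and a batch queue deployed on
    the regular, e.g. bi-weekly, schedule."""
    fast_track, batch = [], []
    for p in patches:
        if p.decision is not Decision.PROCEED:
            continue  # held/cancelled patches stay on the tracking list
        (fast_track if p.priority is Priority.EMERGENCY else batch).append(p)
    batch.sort(key=lambda p: (p.priority.value, p.system))
    return fast_track, batch
```

The point of the split is procedural, not technical: both queues still pass through testing and formal approval, but on different timelines.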
Part of the planning process should also define how the appropriate stakeholders would be notified about an upcoming series of patches. The communication plan should outline how the stakeholders will be updated of issues, progress, and completion, as well as any post-implementation reviews. The degree of communication depends on what the patch is, the level of risk, and the stakeholders in question.
3. Initial Testing
Ideally, all patches will be reviewed on segregated test systems that mirror the production environment as closely as possible. The intent, of course, is to test and discover problems prior to going into production. This allows time for issues to be investigated. Again, and it is an ideal, production systems would never be patched directly. However, as mentioned earlier, there are situations, such as Code Red, Nimda and MSBlaster, wherein the security risks are so high, that production systems may need to be patched directly. To reiterate, the risks must be identified and reviewed in order for an informed decision to be made.
Note that testing should not be ad hoc. In other words, testing of each system should follow a formal test plan that outlines the main applications, functionality, test process and expected results if the applications are performing as planned. Yes, this does take a while. However, if stable systems are desired, it is time well spent. If a flawed patch is erroneously approved, installed, and causes production systems to fail, the costs can skyrocket. A decision to bypass testing, or have poor testing, is a gamble that can have disastrous results.
4. Approval

The approval step must be formal. The intent is to take the list of patches, the implementation plan, and the test results, and present them to a governing body to gain approval to install. The governing body should have the technical knowledge to make an informed decision about the risks and the adequacy of the planning.
Even emergency patches must have a defined fast-track process that still requires approval to proceed. Never underestimate the value of review to catch potential issues.
5. Deployment

Part of the planning step should be a deployment plan. It may prove beneficial to roll a patch out in phases, starting with the least critical systems, to see if any unknown issues appear in the production environment. In terms of actually installing the patches, there are manual methods and, increasingly often, automated update tools that can expedite the installation process. The key here is that installation should follow an approved plan. The actual installation of the patches in production is a relatively small part of the overall patching process.
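As a rough illustration, the phased rollout described above can be sketched in a few lines. This is a hypothetical sketch, not any real deployment tool's API: `deploy_patch` and `health_ok` stand in for whatever installation and verification mechanisms an organization actually uses.

```python
# Hypothetical sketch of a phased rollout: hosts are grouped by
# criticality tier (least critical first) and patched one tier at a
# time, halting before more critical systems if a tier fails its
# post-patch health check.

def phased_rollout(hosts_by_tier, deploy_patch, health_ok):
    """hosts_by_tier: list of host lists, least critical first."""
    for tier, hosts in enumerate(hosts_by_tier):
        for host in hosts:
            deploy_patch(host)
        failed = [h for h in hosts if not health_ok(h)]
        if failed:
            # Stop before touching more critical systems.
            return {"completed_tiers": tier, "failed_hosts": failed}
    return {"completed_tiers": len(hosts_by_tier), "failed_hosts": []}
```

The gate between tiers is the point of the exercise: a patch that breaks the least critical tier never reaches production-critical systems.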
6. Post Deployment Testing
The military has a saying that few plans survive contact with the enemy. In the context of patches, we must be sure that the deployed patches do not break the production systems. At this point, failures could result from the patches themselves, from issues with the deployment system, from keying errors, and so on. Regardless of the cause, it is important that there be prior coordination with stakeholders to quickly assess systems and ensure that they are still operating as planned.
Once the patches have been deployed, there should be long-term automated monitoring in place to detect anomalies. Again, because so many variables are in play, even involved test plans may fail to identify a combination of events and values that causes a system to fail. Part of the patching process should be a review of any impacts to the monitoring systems. It may be that patches necessitate changes to production monitoring in order for it to continue to be effective.
In the end, a patch process takes time and effort. As a result, some personnel may elect to attempt bypassing the process for one reason or another. To be successful, the process cannot be partially followed. Everyone must follow the formal patch process.
As a side note, there are automated configuration integrity systems, such as Ecora and Tripwire, which should be used to detect changes. Detected changes must tie out to approved change orders; any others must be flagged as unauthorized changes. All unauthorized changes must be investigated as to why they happened, and corrective action must be taken to prevent them from happening again. Bear in mind a simple auditing tenet: there is no such thing as an immaterial control violation. If a control is bypassed, then a weakness exists, and the next breach could be far worse if left uncorrected.
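The core of such configuration integrity checking can be shown in a short sketch. This is a minimal, generic illustration of the hash-baseline approach, not how Ecora or Tripwire are actually implemented; real products add signed baselines, scheduling, and reporting.

```python
# Minimal sketch of file integrity monitoring: record a baseline of
# content hashes, then compare a later scan against it. Any path whose
# hash differs (or is new) is a change to investigate.
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each file path to the SHA-256 digest of its contents."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def detect_changes(baseline, current):
    """Return paths whose hash differs from, or is missing in, the baseline."""
    return sorted(p for p, digest in current.items()
                  if baseline.get(p) != digest)
```

Each path this returns must tie out to an approved change order; anything left over is treated as an unauthorized change.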
Software is complicated and there will continue to be issues that necessitate patching. As a result, organizations must develop processes that assess the risks associated with patching and make determinations about what to do and what not to do. Organizations can no longer afford to have a "patch and pray" mentality. Instead, they must view patching as a formal process that is going to be around for a long time. | <urn:uuid:3a7f6895-61a0-4ab5-bab4-dd2a86e07837> | CC-MAIN-2017-04 | http://www.cioupdate.com/print/trends/article.php/11047_3065821_2/Dont-Patch-and-Pray.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00347-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954746 | 2,331 | 2.578125 | 3 |
Private and public sector water irrigation systems are getting a boost from high-performance computing as they harness custom weather data to create smart watering systems that could save billions of gallons of water across the nation.
According to a recent article in Scientific American, a number of companies and municipalities have relied on irrigation and sprinkler systems that turn on and off at particular times during the day without human involvement. However, during periods of heavy rain, when such systems weren’t needed, sending a maintenance person around to every sprinkler or irrigation location was a lengthy process and largely wasted effort, since the systems would need to be reset again afterward.
The article points to one example in a Silicon Valley school district where, in 2009, “the district installed new smart controllers that automatically adjust daily watering to the weather.” They describe how “each box, fitted with a microprocessor and antenna, receives local real-time weather information by satellite from the WeatherTRAK climate center supercomputer run by Petaluma California-based HydroPoint Data Systems.” This data then regulates the watering and irrigation systems, sometimes instructing them to run once in 11 days versus daily.
The article goes on to point to how this real-time data is being used to regulate and control water output in a way that goes beyond mere timing and watering intervals:
“With most sprinkler systems, property owners set the traditional controller—basically a timer—to irrigate at specific intervals. Often, too much water is lost to evaporation during hot weather or to runoff during cool weather, which can also carry chemicals into the local watershed or ocean. Because outdoor irrigation can suck up 50 percent or more of urban water consumption, smart irrigation services have caught on in drought-prone western states like California, where water prices are relentlessly rising. (Occasional big floods don’t help the long-term problem.) HydroPoint now has more than 8,000 clients using 24,000 of its smart controllers, including Walmart, Coca-Cola, Hilton, Jack in the Box and the University of Arizona as well as the cities of Charleston, S.C., Houston and Santa Barbara.” | <urn:uuid:f91df5e6-8958-40d3-8325-001f455d9fbd> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/07/05/supercomputer_feeds_smart_irrigation_systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00467-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946239 | 447 | 2.953125 | 3 |
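The kind of weather-based adjustment described in the article can be sketched in a few lines. The formula below is purely illustrative: real controllers such as WeatherTRAK use proprietary evapotranspiration (ET) models tuned to plant type, soil and microclimate, none of which are shown here.

```python
# Illustrative sketch of weather-adjusted watering: scale a base
# schedule by net water demand (ET minus rainfall), and skip watering
# entirely when recent rain has already met the landscape's demand.

def watering_minutes(base_minutes, et_inches, rain_inches):
    """Return today's watering time, adjusted for weather."""
    net_demand = et_inches - rain_inches
    if net_demand <= 0:
        return 0  # recent rain already covered the demand
    return round(base_minutes * net_demand / et_inches)
```

This is how a "dumb" fixed timer and a smart controller diverge: the timer always returns `base_minutes`, while the weather-driven version can drop to zero for days at a time after a storm.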
It’s no secret that biometric technology deployments are on the rise. Increasingly, retailers are catching on to the unique benefits and security that biometric technology offers to positively identify an individual by their physiological characteristics instead of through ID cards, personal identification numbers or passwords. The rapid growth of biometric technology seemed to begin shortly after we shifted into a society aggressively focused on safety and security in the wake of the rise in global terrorism. Biometrics was soon recognized as the only technology that could tell with near absolute certainty that someone was who they claimed to be. Governments were the first to actively use biometric identification to secure their intellectual and physical property and then slowly expanded to border control and public safety.
The progression of biometric technology didn’t stop with security deployments, though; it kept on growing and progressing. As price points dropped and the technology became more refined, deployments began to shift to the private sector as companies took notice that biometrics had strong potential to help them with problems like employee time theft, inventory shrink, identity theft, compliance and fraud. Widespread adoption by the private sector fueled the growth of biometric systems designed to positively identify individuals to prevent these problems, and with this growth came increased scrutiny of the technology (specifically, how individual biometric data is stored and what it may be used for other than identification) by privacy advocates and proponents of civil liberty protection. They argue that biometric technology violates individual privacy unless there is a 100% guarantee that templates are safely stored and cannot be stolen, and that governments are not using the data to track citizens who interact with a system and then disseminating the collected information to external bodies.
These arguments are strong but perhaps a closer look at how the technology works would help uncover some answers to these concerns and clear up some misconceptions about biometric technology.
The Privacy Issue – How Does Biometric Technology Actually Work?
Most people believe that when an individual places their finger on a fingerprint reader to register their identity in a biometric system, an image of their fingerprint(s) is stored somewhere on a server or a computer. In actuality this is typically not the case. Instead, the biometric matching software extracts and stores what is known as an identity template. This is a mathematical representation of data points that a biometric algorithm extracts from the scanned fingerprint. The biometric identity template is simply a binary data file, a series of zeros and ones. The algorithm then uses the template to positively identify an individual during subsequent fingerprint scans. No image is ever stored or transmitted across a network. In addition, the algorithm is “one way” which means that it is nearly impossible to recreate the original biometric image from the template. In other words, it is nearly impossible to reverse engineer the data that is sent to positively identify an individual and successfully “steal” their biometric identity.
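A toy sketch can make the template idea concrete. This is not a real biometric algorithm (commercial matching algorithms are proprietary and far more sophisticated); it only illustrates that what gets stored is a reduced, one-way set of data points rather than the fingerprint image itself.

```python
# Toy illustration of a biometric identity template: minutiae points
# (x, y, ridge angle) extracted from a scan are quantized into coarse
# cells, and only that set is stored. The original image cannot be
# reconstructed from the cells, yet two scans of the same finger
# produce overlapping templates.

def make_template(minutiae, grid=8):
    """Reduce (x, y, angle) minutiae to a set of coarse grid cells."""
    return frozenset((x // grid, y // grid, round(angle / 45))
                     for x, y, angle in minutiae)

def match_score(t1, t2):
    """Fraction of overlapping cells; 1.0 is a perfect match."""
    return len(t1 & t2) / max(len(t1 | t2), 1)
```

Note that the quantization is lossy by design: many different images map to the same template, which is the sense in which the stored data is "one way."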
Understanding these processes is central to realizing how the danger of identity theft or a security breach is significantly lessened, if not completely eliminated, through the use of a proprietary algorithm with no stored image and data encryption. Biometric templates are also not linked to anything in a closed system that can positively identify an individual outside of that system.
However, privacy advocates strongly feel that the capture, storage and use of biometric data (specifically by governments, either through mandated deployments for social services and social issues or through requests for data and records from private business) to assemble a comprehensive citizen knowledge base, and thus exercise covert control over society in general, is a violation of individual privacy. This is a valid point.
This brings up an interesting question: if you adopt biometric technology for time and attendance, access control or another deployment within a business, do employees have a right to refuse participation on the grounds that it violates their privacy and/or individual civil liberties? Without irrefutable proof that a biometric database can’t be hacked into and the templates reverse engineered into images, if an employee did decide to decline participation, would they be able to prove their claim that the technology did in fact violate their civil liberties?
There have not been any known cases here in the U.S. of an employee taking their employer to court over a refusal to enroll in a biometric identification system that resulted in wrongful termination or a violation of their equal opportunity rights. However, shouldn’t biometric information be treated like any other personally identifiable data that an employer keeps on file, such as Social Security numbers, pictures, or bank information for direct deposit? Information that, if stolen, could be used to recreate you as a person? Most companies already have policies in place that govern the protection of this data, and biometrics should arguably be included and not treated any differently. It should be treated the same way as the data you have already given up, which is stored simply by virtue of your being an employee of the company.
Most employers also monitor their employees’ activities while they are at work, which can include video, email and telephone monitoring. An employee is then asked to sign that they received and read the employee manual that explicitly states their acknowledgement that they will be monitored throughout their employment tenure. Remember that this is not a request for permission to be monitored; it is an agreement that the employer will be doing it.
It is also important to note that if you have a Twitter or Facebook account, purchase on the Internet, use credit cards at brick-and-mortar establishments, subscribe to publications on the Internet, have any form of insurance or bank account, etc., you no longer have any privacy. If you use one or more credit cards, the credit card company knows where you eat, what you eat, what kind of car you drive, where you live, what insurance you have, where you spend your vacations, what you read, how much you spend on shoes and more. If you use most social media platforms, you have publicly given up every bit of privacy you ever had. Although these are personal preferences, they make it hard to argue that enrollment in a biometric system is any more egregious than most of the other daily online and offline activities that we participate in.
Massive open online courses offer IT professionals the opportunity to learn about some of the tech industry's most in-demand and current topics for free. Available to anyone with a Web connection, MOOCs cover a range of hot tech topics including software defined networking, cloud computing, security, drone development, artificial intelligence and mobile programming.
Popular MOOC platforms include edX, a venture developed by the Massachusetts Institute of Technology and Harvard University, and Coursera, which was founded by two Stanford University professors. Course material mostly comes from academic institutions that adapt the material taught in classrooms for online learning. Cornell University, the University of California, Berkeley and Caltech are just some of the schools that have made content available on MOOC platforms.
Nonacademic institutions are also becoming interested in offering professional development material as a MOOC. EdX recently added the Linux Foundation’s introductory Linux development course to help address the need for programmers who know the open-source operating system.
"Many of our students are looking for courses on topics that enable them to get a better job or bridge skill gaps, and Linux is one example [of that]," said Anant Agarwal, president of edX, at the time of the announcement. "A verified certificate from the Linux Foundation would have a lot of credibility in the marketplace."
MOOC credentials are gaining traction among tech employers. Hiring managers focus on how workers have used their tech skills to help a business, not whether they learned them online or in a classroom.
"We're not theorists here. We're actually building things," said Chad Morris, product lead at Mandrill, the transactional email service from MailChimp. "We're really looking at what it is you've actually done."
The computer science skills Tyler Kresch learned from edX’s cloud computing classes helped him transition from a sales position to a junior developer role at Procore Technologies, a Santa Barbara, Calif., startup that makes cloud-based construction management software.
"I created a small app to help with the really tricky part of the account setup," said Kresch, whose long-term career goal involves starting a tech company. "It used to take an hour of our account manager's time to close every new account. We now use my tool and that saves us that hour."
Kresch’s experience illustrates the caveat to a MOOC education: People need projects that show hiring managers how they've used the tech skills they learned online. Opportunities to gain this experience include contributing code to an open source project or volunteering to work on a nonprofit's tech projects.
"There's this big trend toward people moving away from evaluating a brick-and-mortar education and really valuing the experience," he said. "These days your résumé -- more often than not -- is your online presence. It's your list of projects that you've done. It's not courses that you've taken."
Here are some tech-focused MOOCs, listed by platform, that can help an IT worker professionally.
- Engineering Software as a Service
- Introduction to Linux
- Building Mobile Experiences
- Cyber-Physical Systems
- Autonomous Navigation for Flying Robots
- Software defined networking
- Programming Cloud Services for Android Handheld Systems
- Information Security and Risk Management in Context
- Artificial Intelligence Planning
- Malicious Software and its Underground Economy: Two Sides to Every Story
Fred O'Connor writes about IT careers and health IT for The IDG News Service. Follow Fred on Twitter at @fredjoconnor. Fred's e-mail address is email@example.com | <urn:uuid:e6cd21f9-8aed-41ad-8cf4-108bcae60524> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2175557/software/free-online-courses-abound-to-help-you-bone-up-on-linux--sdn-and-more.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00375-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942433 | 771 | 2.59375 | 3 |
The basic mechanical failure mechanism for optical fibres is the slow to rapid growth of glass imperfections in a fibre that is under stress. This ‘fatigue’ phenomenon can be accelerated by the presence of moisture (H2O) molecules at the glass surface of the fibre. Waterproofing is therefore very important for fibre optic cable. Let us look more closely at waterproof cables.
All manufacturers of fibre optic cables intended for use outdoors must address the issue of protecting the fibre’s glass surface from moisture. Many manufacturers add waterproofing characteristics to solve this problem, because the 250μm primary fibre coating provides only a 62.5μm-thick layer of UV-cured acrylate material as basic protection over the fibre’s glass surface. This UV-cured acrylate material is not chosen by fibre manufacturers for its optimal resistance to water or its minimal porosity. It is in fact chosen primarily for its fast processing speed, since a primary cost driver for fibre manufacturers is the draw speed, which is steadily increasing. The very thin UV-cured acrylate layer is porous to water molecules and will permit concentration of OH- ions at the fibre surface if the fibre is immersed in water.
All plastic materials are porous to varying degrees. The general category of thermoplastic materials commonly used in cable constructions will to some extent absorb water; however, thermoplastic materials certainly do not act as a complete water block. Only materials like metals or glass can provide a true ‘hermetic’ seal. Plastic materials are generally characterised with parameters such as water absorption and absorption of other common solvents such as oils, gasoline, kerosene, etc. This being the case, water molecules cannot be eliminated from the glass surface of any fibres incorporated in a cable having plastic jackets. The issue is to minimise the concentration of water molecules at the glass surface so that stress crack growth effects are minimised.
There are two different designs approaches to water and moisture protection in fibre optic cables.
The loose tube gel-filled cables must prevent water from reaching the 250μm coated fibres. This approach is to ‘waterproof’ the cable by ‘filling’ the empty spaces in the cable with gel, theoretically preventing water from reaching the 250μm coated fibres. To insure that this is accomplished, the ‘filled’ cables are generally subjected to a hosing test to show that water will not flow through a short section (one meter) of cables. The fact that gels can move, flow, and settle, leaves an uncertainty of the filled level of any particular point of a loose-tube gel-filled cable. This uncertainty of the filling is highlighted by the routine practice of water-blocking the loose-tube gel-filled cables at the entrance to splice housings to keep water from migrating from the cable into the splice housing.
The tight-buffered, tight bound indoor/outdoor cables utilise an entirely different design approach to deal with the moisture issue. Rather than attempting to be ‘waterproof’, they are designed to be water tolerant.
Recognising the porosity of plastic materials and the inherent problems of waterproofing a cable, the moisture protection is concentrated at the fibre surface where it is most needed.
Correctly designed harsh environment tight-buffer systems consist of extremely low moisture absorption coefficient materials at the fibre coating. This provides a buffer system thickness of 387μm over the glass which is more than six times as thick as the 62.5μm coating found in the loose-tube cables.
Buffer materials are low-porosity plastics with excellent moisture resistance. This construction very effectively minimises the water molecule and OH-ion concentration level at the glass surface and virtually eliminates the stress corrosion phenomenon. The tight-buffered design also has the great advantage of being a solid, non-flowing, non-moving structure.
The same level of protection remains in place all along the fibre, regardless of installation conditions, environment, or time.
The balance of the tight-buffered, tight bound cable designs is such that it minimises the open spaces available in the cable structure in which water can reside. Even if an outer cable jacket is cut, or water otherwise enters the cable structure, only a very small percentage of the cross-sectional area is open to water.
1. Water penetration refers to the effectiveness of a cable in restricting the longitudinal movement of water or moisture along the core. This requirement is primarily intended to localise any water penetration, minimising the adverse effect on cable performance and preventing water or moisture from leaking into joints and terminations, where it may cause corrosion problems.
2. Additionally, cable installed underground should have a high-density compound sheath material (such as polyethylene) that provides an adequate barrier to moisture entry into the cable core. The addition of a lapped metal tape (‘moisture barrier’) and/or grease or gel within the core (‘filled’ or ‘flooded’ cable) provides even higher protection against moisture entry.
The above considerations are very important and should always be kept in mind. Always refer to the manufacturer’s specification sheet and follow the installation instructions.
To meet the diverse requirements of our customers, we offer a wide assortment of waterproof cables at very economical rates. These cables are widely used and highly demanded in the market due to their waterproof nature. In addition, we offer these cables in various fiber optic cable specifications as per the requirements of our clients. FiberStore’s fiber optic cable products (such as duplex fiber cable and simplex fiber optic cable) are high quality and low price.
Robotics is anything but a static field, with a continuous stream of advancements adding to both the complexity and the possibility behind each new development. A new subfield, “cloud robotics,” is emerging as a hot topic in research. As one might imagine, instead of relying on “in-house” resources, robots can potentially leverage the cloud to deliver instant information and to handle computationally intensive tasks that would otherwise consume a great deal of a robot’s on-board resources.
Erico Guizzo of IEEE Spectrum pointed out that while we are unable to upload information directly into our “meat brains” for instant access to information that helps us perform tasks (à la The Matrix), robots have that advantage.
Recent research projects are leveraging information stored in the great vast cloud to enable robots to quickly acquire the skills and knowledge they need in the blink of an eye. Furthermore, the notion of cloud-driven robots means that there is the possibility for a robot to cast off heavy-duty computation to the cloud so that it can free up resources for other tasks, thus providing the opportunity for added sophistication due to more resources becoming available.
As Guizzo reported, there are a number of research groups that are exploring the idea of “robots that rely on cloud computing infrastructure to access vast amounts of processing power and data. This approach, which some are calling ‘cloud robotics’ would allow robots to off-load compute-intensive tasks like image processing and voice recognition and even download new skills instantly, Matrix-style.”
One of the more promising aspects of this idea goes beyond the “cool” factor of offloading and instant skill “level-up” via cloud-stored information. This also means that robots can decrease in size since so many are forced to carry extensive on-board computation—a serious task considering the computationally-intensive tasks that most robots perform.
In addition to the computers they need to schlep around are some seriously heavy-duty power sources, most often in the form of batteries to keep the computation and movement humming along. Reduced need for extensive on-board computation means less power usage—which combined means the possibility for much smaller robots. | <urn:uuid:e514d7fd-19d7-4741-9df2-2a700461bcb8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/03/03/cloud_robotics_research_aims_to_create_smaller_smarter_machines/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00403-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938014 | 457 | 3.109375 | 3 |
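The trade-off described above, on-board computation versus cloud offload, can be sketched as a simple decision rule. The function and its inputs are hypothetical, not drawn from any robotics framework; a real robot would also weigh battery drain, bandwidth cost and reliability.

```python
# Illustrative sketch of the cloud-robotics offloading decision:
# run a task on board only when that is faster, or when the network
# is unavailable and the cloud cannot be reached at all.

def choose_executor(onboard_secs, cloud_secs, round_trip_secs,
                    network_up=True):
    """Return 'onboard' or 'cloud' for a compute-heavy task."""
    if not network_up:
        return "onboard"  # no choice without connectivity
    if cloud_secs + round_trip_secs < onboard_secs:
        return "cloud"
    return "onboard"
```

The interesting design consequence is the one the article draws: if most heavy tasks route to the cloud, the on-board computer, and the battery that feeds it, can shrink.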
SSH (Secure Shell) without password using Putty
SSH (Secure Shell) is a network protocol that provides secure access to a computer (mostly Unix based). When you want to connect to a remote Unix server, SSH is one way of accessing the server. SSH is very powerful, combining security of the data transmitted over the network with accessibility to the remote system. The SSH protocol works between two computers in a client-server architecture. When a client computer connects to the server, the server requires the client to authenticate itself. There are different ways a client can authenticate itself to the server. A typical authentication mode is to enter a password when logging into a remote system. In this howto we explore another mode of authentication, in which the server doesn’t require a password to be entered by the user. This mode is very useful if you connect to a remote system frequently and don’t want to enter the password every time.
Before we see the steps, just to give a background on the components involved:
When you need to connect to a remote computer via SSH, that computer should have a SSH server running on it. All Unix based distributions ( Linux, Mac OSX etc.,) includes a ssh server. For Windows based systems Cygwin can be used as an SSH server.
Assuming your remote computer has an SSH server running on it, to connect to that computer you would need a SSH client on the local computer. On Unix based systems, SSH clients are available as command line utilities. For Windows based systems, putty is an excellent client. Check here for more information about putty.
- We start the configuration at the client windows computer. Download the latest version of Putty.exe and Puttygen.exe from here. Using the Puttygen tool we have to generate an authentication key. This key will serve as a substitute for the password that will be entered during login.
- Start puttygen.exe by double clicking on the executable. The following window opens up.
- Leave the default ‘SSH-2 RSA’ selection and click on the ‘Generate’ button. The following window opens. Move mouse randomly over the empty space below the progress bar to create some randomness in the generated key.
- Don’t enter any key phrase. Click on ‘Save private Key’ button. Click ‘Yes’ on the window asking for confirmation for saving the key without a password.
- Save the key file to a safe location (Let us assume you will be saving it as C:\Personal\SSHKey\Laptop.ppk).
- Now you can close the Puttygen window.
- Open the Laptop.ppk file in a notepad. Copy the four lines under ‘Public-Lines’ section to windows clipboard.
- Now open Putty and connect to the remote system using the user id you want to use for future no-password connections. (Let us assume you will connect to the remote machine using user name ‘ubu’.) This time when you log in, you have to provide the password at the prompt. Future logins won’t require this password.
- Under the logged-in user’s home directory there will be a .ssh directory; under that, create a new file called authorized_keys using a text editor such as vi. (In our case the file will be created as /home/ubu/.ssh/authorized_keys.)
- Type the word ” ssh-rsa ” (including the spaces on both ends of the word) and paste the 4 lines copied in step 7. Remove the carriage return at the end of each line, merging the four lines into one single line. Be careful not to delete any characters while doing this. The final output should look like the following window.
- Save the file and quit the text editor. Assign rw permissions only for the owner. $ chmod 600 ~/.ssh/authorized_keys.
- Now that we have configured the SSH server, it’s time to test our setup.
- On the local system, open Putty, enter the ip address details of the remote system.
- Now from the left navigation, select Connection -> Data. Enter ‘ubu’ as ‘Auto-login username’ on the right panel.
- Again from the left navigation menu, scroll down and select Connection -> SSH -> Auth. Enter the path of the saved private key file ( In our case C:\Personal\SSHKey\Laptop.ppk ). Leave other defaults as such and press open button.
- Now Putty connects to the remote SSH server, and from here on there won’t be any password prompt :-).
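For reference, the single merged line in ~/.ssh/authorized_keys should have this general shape. The key material here is abbreviated and the trailing comment is illustrative (PuTTYgen generates its own); in a real file the long base64 string must be one unbroken line:

```
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA...rest-of-key-material...= rsa-key-comment
```

If the line is split across lines or characters are missing, the server will typically fall back silently to password authentication, so this format is the first thing to check when key-based login fails.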
SSH is a powerful tool, and it relies on passwords for security. We just bypassed that security for the sake of convenience. If a hacker gets hold of the private key we generated, it allows free access to your systems. So use this technique with care.
As keynote speaker at the International Association of Emergency Managers conference in Reno, Nev., on Oct. 29, Dennis Mileti sounded an ominous tone in his call to reduce the consequences of natural hazards, citing Hurricane Katrina as an example of how badly things can go wrong even when no surprises occur. Mileti, director emeritus of the Natural Hazards Center at the University of Colorado at Boulder, pointed out lessons learned from a similar hurricane in the 1920s.
SOME OF THE DAMAGE FROM KATRINA — Structural damage losses reached $21 billion; food and water supplies were lost; 124,000 jobs were lost; every hospital in New Orleans was crippled, and six months later just one-third of the beds were available; and 1 million people were displaced, making it the largest permanent migration since the Civil War. Half of it was a failure of the levees — a human failure.
CONSEQUENCES ARE CHANGING — The consequences are changing, upping the ante for political leaders and emergency managers because of a variety of factors, including climate change. Warming trends make for more intense storms as well as more heat, drought and floods. In addition, the population is becoming more vulnerable: it is aging, moving into hazardous areas more often and becoming poorer. Another factor is crumbling infrastructure and “pasted together” utilities.
THERE ARE GAPS IN EMERGENCY MANAGEMENT PLANNING — Few people really understand risk, and most development decisions are made locally without regard to the consequences of a disaster, but rather for economic growth and prosperity. Future risk in terms of loss is rarely considered.
THE SEVEN TOOLS OF EMERGENCY MANAGEMENT — Mileti listed seven tools that should be used to reduce the consequences of the factors above:
1. Land Use Management (regulations, etc.)
2. Control and Protection Works (protect the public based on real consequences)
3. Building Codes and Practices
4. Public Education
5. Prediction Forecast Warning
6. Insurance (redistribute losses)
7. Preparedness, Planning and Response | <urn:uuid:da37604c-ae40-45c7-805f-99f20ae36279> | CC-MAIN-2017-04 | http://www.govtech.com/em/disaster/Lessons-Learned-IAEM-Conference.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00339-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944202 | 428 | 3.359375 | 3 |
Black Box Explains...V.35, the Faster Serial Interface
V.35 is the ITU (formerly CCITT) standard termed “Data Transmission at 48 kbps Using 60–108 kHz Group-Band Circuits.”
Basically, V.35 is a high-speed serial interface designed to support both higher data rates and connectivity between DTEs (data-terminal equipment) or DCEs (data-communication equipment) over digital lines.
Recognizable by its blocky, 34-pin connector, V.35 combines the bandwidth of several telephone circuits to provide the high-speed interface between a DTE or DCE and a CSU/DSU (Channel Service Unit/Data Service Unit).
Although it’s commonly used to support speeds ranging anywhere from 48 to 64 kbps, much higher rates are possible. For instance, maximum V.35 cable distances can theoretically range up to 4000 feet (1200 m) at speeds up to 100 kbps. Actual distances will depend on your equipment and cable.
To achieve such high speeds and great distances, V.35 combines both balanced and unbalanced voltage signals on the same interface. | <urn:uuid:4afaeac8-72b9-4712-9e49-da27638940f5> | CC-MAIN-2017-04 | https://www.blackbox.com/en-pr/products/black-box-explains/black-box-explains-v-35-the-faster-serial-interface | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.891123 | 239 | 2.546875 | 3 |
Update September 2012
The recent discovery that junk DNA is not actually junk reinforces my long-standing thesis, espoused below, that we don't know enough about how genes work to be able to validate genetic engineering artifacts by empirical testing alone. I point out that computer programs are validated only by a coordinated mixture of testing, code inspection and theory, all of which are based on knowing how the code works at the instruction level. But we don't have a terribly complete picture of how genes interact. We always knew they were 'massively parallel', and now it turns out that junk DNA has some sort of role in gene expression across the whole of the genome, raising the combinatorial complexity even further. This tells me we have little idea how modifications at one point in the genome can affect the functioning of any number of other points (but it hints at an explanation as to why human beings are so much more complex than nematodes despite having only a modestly larger genome).
And now there is news that a cow in New Zealand, genetically engineered in respect of one allergenic protein, was born with no tail. It's too early to blame the genetic modification for this oddity, but equally, the junk DNA finding surely undermines any genetic engineer's confidence in predicting that their changes will have no unexpected, truly unpredictable side effects.
Original post, 15 Jan 2011
As a software engineer, years ago I developed a deep unease about genetic engineering and genetically modified organisms (GM). The software experience suggests to me that GM products cannot be verified given the state of our knowledge about how genes work. I’d like to share my thoughts.
Genetic engineering proponents seem to believe the entire proof of a GM pudding is in the eating. That is, if trials show that GM food is not toxic, then it must be safe, and there isn't anything else to worry about. The lesson I want others to draw from the still-new discipline of software engineering is that there is more to verifying correctness in complex programs than testing the end product.
Recently I’ve come across an Australian government-sponsored FAQ Arguments for and against gene technology (May 2010) that supposedly provides a balanced view of both sides of the GM debate. Yet it sweeps important questions under the rug. [At one point the paper invites readers to think about whether agriculture is natural. It’s a highly loaded question grounded in the soothing proposition that GM is simply an extension of the age old artificial selection that gave us wheat, Merinos and all those different potatoes. The question glosses over the fact that when genes recombine under normal sexual reproduction, cellular mechanisms constrain where each gene can end up, and most mutations are still-born. GM is not constrained; it jumps levels. It is quite unlike any breeding that has gone before.]
Genes are very frequently compared with computer software, for good reason. I urge that the comparison be examined more closely, so that lessons can be drawn from the long standing “Software Crisis”.
Each gene codes for a specific protein. That much we know. Less clear is how relatively few genes -- 20,000 for a nematode; 25,000 for a human being -- can specify an entire complex organism. Science is a long way from properly understanding how genes specify bodies, but it is clear that each genome is an immensely intricate ensemble of interconnected biochemical short stories. We know that genes interact with each other, turning each other on and off, and more subtly influencing how each is expressed. In software parlance, genetic codes are executed in a massively parallel manner. This combinatorial complexity is probably why I can share fully half of my genes with a turnip, and have an “executable file” in DNA that is only 20% longer than that of a worm, and yet I can be so incredibly different from those organisms.
If genomes are like programs then let’s remember they have been written achingly slowly over eons, to suit the circumstances of a species. Genomes are revised in a real world laboratory over billions of iterations and test cases, to a level of confidence that software engineers can’t even dream of. Brassica napus.exe (i.e. canola) is at v1000000000.1. Tinkering with isolated parts of this machinery, as if it were merely some sort of wiki with articles open to anyone to edit, could have consequences we are utterly unable to predict.
In software engineering, it is received wisdom that most bugs result from imprudent changes made to existing programs. Furthermore, editing one part of a program can have unpredictable and unbounded impacts on any other part of the code. Above all else, all but the very simplest software in practice is untestable. So mission critical software (like the implantable defibrillator code I used to work on) is always verified by a combination of methods, including unit testing, system testing, design review and painstaking code inspection. Because most problems come from human error, software excellence demands formal design and development processes, and high level programming languages, to preclude subtle errors that no amount of testing could ever hope to find.
How many of these software quality mechanisms are available to genetic engineers? Code inspection is moot when we don’t even know how genes normally interact with one another; how can we possibly tell by inspection if an artificial gene will interfere with the “legacy” code?
What about the engineering process? It seems to me that GM is akin to assembly programming circa 1960s. The state-of-the-art in genetic engineering is nowhere near even Fortran, let alone modern object oriented languages.
Can today’s genetic engineers demonstrate a rigorous verification regime, given the reality that complex software programs are inherently untestable?
We should pay much closer attention to the genes-as-software analogy. Some fear GM products because they are unnatural; others because they are dominated by big business and a mad rush to market. I simply say let’s slow down until we’re sure we know what we're doing.
Maybe genetic engineering experts (I admit I am not one) could comment on ways that GM products might be made safe-by-design, to provide deeper assurance of safety than mere testing provides.
I read about GM bananas in the New Yorker recently. Researchers assure us that because bananas are sterile, even if engineering does introduce a fault into the genome, it won't be able to leave the plant and get into any progeny. But that argument assumes that the entire plant is still behaving normally. Because every gene potentially touches every other gene, who's to say that the GM organism is still perfectly predictable?
Again, I am not being paranoid here; I'm just saying that the job of verifying software is really tough, and the software profession has learned the hard way to be very careful with the assumptions it makes about program correctness and its verification.
Point taken about the lack of assurance that GMO are safe since we still don't very fully understand how any genome works.
But what is "so incredibly different" between you and a worm??!!!?? You both have social lives, sex lives, private lives, toileting requirements, health considerations, dietary needs, sexual reproduction, jobs, emotional responses, energy levels. I suppose worms don't write blogs, drive Subarus, shop at Woolworths, pray to Jesus, Speak English with an accent, or listen to One Direction, but those are all recent developments in evolutionary terms. None of them mean anything more than a peacock's feathers mean to a peacock, in the context of the genome.
And the turnip... 50% similar? That sounds about right. You share a common ancestor. You share millions of years of grandparents. Turnips don't dance or write with ballpoint pens, but they have families and they have aspirations. They have the will to power. Compared to a piece of granite, a turnip is an amazing thing.
Certainly One Direction is all peacock feathers; their appeal is a case study in sexual selection.
But from a software developer's perspective, are you not surprised that with just 20% extra lines of code, the nematode was upgraded to be able to speak, write, drive, pray, shop and bop?
If the explanation lies in the newly discovered switching role of junk DNA, and if genes are switched on and off by bits of code spread across the genome, then I don't know how genetic engineers are able to predict the effects of gene splicing. And predict they must. My thesis is that, like software, black box testing of GMOs cannot be sufficient; we need to also perform the equivalent of code inspection, yet the fundamentals of the programming language are not yet understood.
Who's to say that arbitrary changes to a turnip's genome might not change its will to power into something more? Turnip the volume! | <urn:uuid:579993e7-04b3-4e84-a357-209e7d4f72d9> | CC-MAIN-2017-04 | http://lockstep.com.au/blog/2011/01/15/not-ready-for-gm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00577-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949468 | 1,835 | 2.6875 | 3 |
Techniques for Unit Testing Embedded Systems Software
Developers know that early testing gives the team more time to find and repair defects. But when you're developing an embedded system, your team's access to the target hardware is almost always limited or nonexistent, preventing early testing.
Applying unit testing— or, more generally, API testing— in the host environment or on a simulator largely decouples the testing task from the availability of target hardware, allowing you to start testing while you develop an embedded system.
Read this white paper to learn how to implement unit testing for embedded systems development and tackle the challenges that can potentially get in your way. | <urn:uuid:c9472605-37cd-4a74-b2df-91faa8825e45> | CC-MAIN-2017-04 | http://www.bitpipe.com/detail/RES/1359495746_839.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00027-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.884253 | 138 | 2.515625 | 3 |
Using a special polymer, researchers have found a way to use rain water to cool buildings, Gizmag.com reported. Researchers from ETH Zurich used sponge-like mats made from a material known as Poly N-isopropylacrylamide (PNIPAM) to test their idea.
In a test using model hobby railroad houses, researchers found that a house covered with their mat could cut energy use by 60 percent, suggesting similar savings for full-size houses.
In practice, the mat, just 5 millimeters thick, would be placed on rooftops to collect rainwater. The sponge-like material absorbs rainwater as long as the temperature stays below 89.6 degrees Fahrenheit; above that point, it becomes hydrophobic and begins sweating out the collected water, cooling the structure.
Researchers suggested the technology, which is inexpensive to produce, could be well suited to tropical climates, where it is both warm and rainy. They also said there are problems to work out, such as the fact that the mats are not frost-resistant.
Malicious advertising (Malvertising) is a malware attack that uses online ads to spread malicious code.
You visit a website with an infected banner or popup ad. No site is safe, no matter how legitimate it appears to be. Even mainstream sites such as NYTimes.com, Gizmodo, and Dailymotion have unknowingly carried infected ads.
Cyber criminals are able to utilize malvertising by submitting booby-trapped advertisements to ad networks for a real-time bidding process.
Malicious ads rotate in with normal ads, so when you visit an infected site you might not be attacked every time.
Using software like pop-up/ad blockers offers some protection against malvertising, but employing anti-exploit software in conjunction with an anti-malware is your best bet. | <urn:uuid:91ce68a3-732d-442e-a9b4-3d78866c40e4> | CC-MAIN-2017-04 | https://www.malwarebytes.com/whatismalvertising/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00421-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.907173 | 163 | 2.53125 | 3 |
Two Cornell University students, inspired by one of the pair's experience of Obstructive Sleep Apnea disorder and his struggle to get access to his sleep lab data, decided to take matters into their own hands, so they built an EEG machine for their microcontroller lab final project.
Then, being geeks, they decided to use it to play Pong. Because really, who doesn't want to play games with the power of their mind?
In one of their original plans, the pair set out to detect the color a user was thinking of from a set, after training, based on reading the user's brainwaves. Although they were unable to achieve better than 64% accuracy on the color classification tests, they were able to achieve sufficient accuracy at distinguishing different magnitudes of mu brain wave suppression (characteristic of users thinking about moving, but not moving, their legs) and different magnitudes of alpha brain wave rhythms (characteristic of users' level of relaxation) that users could reliably control the Pong paddle after a bit of practice using either method.
Enough with the technical details: Here's video of the brainwave-controlled Pong in action:
The device, which uses an old baseball cap as an EEG helmet and an ATmega644 microcontroller to convert the raw, amplified analog signal from the cap into a digital signal which is then passed to a computer over a USB serial connection, cost the students less than $75 to build. They've also posted the source on Github.
This story, "This EEG-controlled pong game puts idle minds to good use" was originally published by PCWorld. | <urn:uuid:018bc764-47a4-48ef-bd4e-3957da9c6e9a> | CC-MAIN-2017-04 | http://www.itworld.com/article/2726104/enterprise-software/this-eeg-controlled-pong-game-puts-idle-minds-to-good-use.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00145-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952092 | 412 | 2.90625 | 3 |
The EU’s 27 member states have pledged to collectively get to the point of using 20% renewable energy, reduce CO2 emissions by 20% and increase energy efficiency by 20% by, you guessed it, 2020. However, to get there requires the migration of the aging legacy utility grid to an IP-based, connected “smart grid” system that can optimize energy production and distribution according to actual consumption requirements. The problem, of course, is the potential for opening the doors to critical infrastructure systems to hackers.
The European Network and Information Security Agency (ENISA) is rushing to be proactive with an exhaustive study entitled, “Appropriate security measures for smart grids: Guidelines to assess the sophistication of security measures implementation.”
“The development of an efficient, reliable and sustainable environment for the production and distribution of energy in the future is linked to the use of smart grids,” ENISA noted in the report. “Various market drivers, regulatory or standardization initiatives have appeared or gained importance as tools to help involved stakeholders to be prepared against smart grids security vulnerabilities and attacks.”
The perception and the approach taken on this topic differ among stakeholders, ENISA noted, which is prompting it to tackle the creation of a common approach to addressing smart grid cybersecurity measures.
The ENISA propositions fall into 10 research areas: Security governance and risk management, management of third parties, secure lifecycle process for smart grid components/systems and operating procedures, personnel security, awareness and training, incident response and information knowledge sharing, audit and accountability, continuity of operations, physical security, information systems security and network security.
ENISA also noted that advanced ICT systems are at the core of an effective smart grid implementation. Industrial control systems (ICS) and related operational technology (OT) also need to be taken into account, since all processes across the whole value chain are heavily based on these infrastructures and technologies.
“Smart grids give clear advantages and benefits to the whole society, but the dependency on ICT components (e.g. computer networks, intelligent devices, etc.), ICS (e.g. supervisory control and data acquisition systems, distributed control system, etc.), OT (e.g. firmware, operating systems, etc.) and the internet makes our society more vulnerable to malicious attacks with potentially devastating results on smart grids,” said ENISA. “This can happen in particular because vulnerabilities in smart grid related communication networks and information systems may be exploited for financial or political motivation to shut off power to large areas or directing cyber-attacks against power generation plants.”
Some say that the US could take a page from the EU’s approach, particularly in the wake of the discovered SCADA vulnerabilities that make industrial info-infrastructure a startlingly vulnerable area.
“It's pretty much common knowledge among IT professionals that the state of security within US critical infrastructure systems is laughable,” noted Threatpost blogger Brian Donahue. “So the EU's intention to implement security into its smart grid as it expands is praiseworthy. For our part though, the Federal Energy Regulatory Commission (FERC), America's energy watchdog, announced the creation of a new office in September, the Office of Energy Infrastructure Security (OEIS), tasked with identifying, communicating and advising on risks to FERC facilities stemming from cyber attacks and physical attacks.”
We believe that anything that can benefit from a connection should have one – even a tree.
Our “Twittering Tree” senses changes in the electromagnetic field around it as people pass, and sends Tweets that reflect its mood directly to its Twitter account, ConnectedTree. This tree also reacts to people’s presence and movements by playing music, speaking and turning on and off lights.

The tree’s responses aren’t random – they are based on the activity around it. When someone moves away from it the tree will express its “loneliness” with a particular tune and a tweet. When several visitors are competing for its attention, it will comment on how busy it is. A special response is generated when someone touches the tree and an SMS is sent to the passerby’s mobile phone.
So how does it work and what is the technology behind it?
When someone walks by or approaches the Twittering Tree, its sensor transmits information about that movement and the changes it causes in its electromagnetic field to a processor in a nearby laptop, which then activates a number of responses.
About the Connected Tree
World Read Aloud Day
How to read aloud to the tree and share with others
On talking trees
Use your Twitter account and start your message with #ectree to comment on our Connected Tree’s tweets.
On 23rd August, a number of dedicated hacker sites published information about an upcoming massive hacker attack on the Internet. This attack is allegedly to be initiated by terrorist groups on 26th August. It seems likely that such an attack would be a DoS attack, possibly combined with a virus attack.
Sadly, the threat of terrorism is a topic on everyone's minds these days. Although an Internet attack by terrorist groups may or may not take place today, such attacks will undoubtedly take place in the future. Kaspersky Lab is now publishing recommendations on what actions users can take to reduce the impact of an Internet attack on home computers and corporate networks.
Hackers more often than not use home computers to carry out attacks, with users unaware that their machines have been taken over. Control is gained by infecting the victim machine with a virus, or by hacking into an unprotected system. Once hackers control a victim machine, it can be used either to conduct Internet attacks or to mass-mail spam (which often contains viruses), all without the owner noticing anything suspicious.
Users whose computers are permanently connected to the Internet, or who spend a large amount of time on the Internet, should minimize their connection time as far as possible when the possibility of an attack is heightened. Antivirus products must be updated regularly and users are recommended to enable antivirus monitoring, so that the computer is constantly scanned for new malicious programs. It is extremely important to install antivirus database updates promptly. This is because virus attacks may be conducted using new viruses which cannot be detected by old databases. By updating databases regularly, as soon as updates are released, it is possible to completely deflect an attack.
In the case of ADSL connections (when the computer is permanently connected to the Internet), a firewall is an essential piece of security software. Permanently connected computers are the machines most likely to fall victim to hacker attacks; a dial-up connection is therefore more secure from a security point of view. Users with high-speed connections should check that their security settings are correctly configured.
Corporate users, governmental organisations, providers
A home computer is simply a tool in the hands of cybercriminals; corporate Internet users are the main target for electronic terrorists. We can assume that a mass Internet attack will be primarily directed at web sites which are of political significance, belonging either to governmental bodies or to commercial organisations.
Antivirus and network protection are the two main ways to secure a corporate information infrastructure. System administrators should ensure that their network has no vulnerable points; if loopholes are found, IT staff should take rapid action. Administrators should also be active in tracking network activity; when attack threatens, the corporate network should be monitored 24 hours a day. Companies which take the threat of electronic terrorism seriously will understand that user education is also a key factor in establishing secure networks. In this case, IT personnel and IT security specialists should conduct additional sessions for company employees, ensuring that all users understand the basics of information security and how to protect against electronic threats.
All of the above measures will help home and corporate users maintain the integrity of their information. And antivirus protection, regularly updated, remains the cornerstone of computer security. By regularly updating antivirus databases, users are working with the manufacturer of their chosen antivirus product to protect machines against potentially devastating attacks. | <urn:uuid:8e3c036f-25de-4341-9c59-6988aaa0b145> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2004/Recommendations_for_deflecting_virus_and_hacker_attacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00294-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94619 | 683 | 2.953125 | 3 |
Apollo is the god of the sun, so why Cable and Wireless has chosen to name its new submarine transatlantic cable network after him is a mystery.
Cable and Wireless is working with Alcatel to build the new transatlantic system to meet increasing IP and data demands. When complete, the network will have four fiber pairs in each of two submarine legs, capable of 3.2 terabits per second of traffic transmission on each leg. The system will run for approximately 8,000 miles under the Atlantic Ocean, linking Long Island and New Jersey with Cornwall in the UK and Brittany in France.
According to Cable and Wireless, the cable system will be the first 80 wavelength transatlantic system, with greater resilience to damage, and a system where customers can choose their own level of protection for voice, data, and IP data transfers.
The system is expected to be operating by summer 2002. Cable and Wireless' London office was unavailable for comment. | <urn:uuid:7f547555-fbbb-434b-92ef-b97d6b6dddee> | CC-MAIN-2017-04 | https://www.cedmagazine.com/news/2001/01/new-transatlantic-cable-network-no-myth | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00320-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932173 | 192 | 2.5625 | 3 |
GCN LAB IMPRESSIONS
A data center without hard drives: That's Stanford's RAM plan.
- By Greg Crowe
- Oct 21, 2011
Computer researchers at Stanford University have announced that they would like to eliminate hard drive storage for their computer system and store everything in computer memory. They are calling it the “RAMCloud.” It just goes to prove that you can spice up anything with the word “cloud,” whether it has anything to do with cloud networks or not.
Their argument essentially is that, even though the capacity of hard drive storage has increased dramatically over the last four decades, its performance hasn't kept pace and has fallen behind the needs of large-scale Web applications. Although many have proposed various solutions to this, including the use of solid-state drives, the folks at Stanford have proposed a whole new class of storage that uses dynamic random-access memory exclusively.
DRAM offers much lower latency than any other type of storage, along with other benefits such as quick recovery. In another paper, the Stanford researchers claim that an average-sized server using RAMCloud could recover from a crash in 1.6 seconds.
Of course, there are two major potential problems with using DRAM. One is the cost. DRAM has a much higher cost per data unit than pretty much any other type of storage available. However, since this storage class would primarily be used in huge Web-based retail applications and so forth, they could probably afford to splurge a bit.
The second issue is really the clincher. DRAM is designed to work when it has power supplied to it. Whatever is stored on it will quickly degrade without power. So, to use it in this capacity, they will need a huge amount of uninterruptible power. And of course, if the worst actually comes to pass and the data center is left powerless for an extended period of time, the center's employees will need a copy of their data on some more conventional media so that they can restore it to the DRAM systems. But network centers have gotten pretty good at maintaining constant power level for a while now, so this wouldn’t be anything new.
All in all, it’s a pretty neat idea that could dramatically improve the performance of scalable systems.
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:2db024aa-5455-455f-8b29-5f85a4fece57> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/10/21/stanford-no-hard-drives-dram-data-center.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00046-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96785 | 498 | 2.8125 | 3 |
Bahrain has negligible production of cereals and other agricultural products. Water constraints and the unavailability of agricultural land are the major obstacles to crop production in Bahrain. Consumption of all crops has increased and is projected to grow at a CAGR of XX.XX% through 2020. Imports into Bahrain have also increased.
Cereal crops are not produced in Bahrain because of the lack of water resources, which affects other crops as well. Hydroponic systems are getting attention in Bahrain, and companies have started using them for vegetable and fruit production, as hydroponics requires far less water and no land or soil.
Population growth and future prospects in the food industry are the reasons for the increase in consumption. The food crisis of 2007 forced Bahrain to import wheat at very high prices, as major exporters such as Russia, Ukraine and India halted food exports, leaving Bahrain to buy on the international market at a premium. Bahrain is a net importer of cereals such as wheat, barley, millet, maize and sorghum.
Imports of these cereal crops are set to increase, as they are not produced domestically and consumption continues to rise. Prices of all the crops increased from 2009 to 2010.
Bahrain Agriculture Market is developing due to:
Bahrain uses desalination to provide potable drinking water, which has helped the country increase agricultural production. Horticulture is a new trend in agricultural production that has given a boost to output. Most of the agricultural land belongs to the King of Bahrain. The government provides farmers with free land, pesticides, seeds, water, machinery and other inputs for production.
Bahrain is a net importer of food products. Low groundwater and surface water availability, limited rainfall, limited arable land and low yields are the biggest restraints on agricultural production in Bahrain.
What the report offers: | <urn:uuid:abce40b8-dbb9-4e34-b777-3a3003e94bfb> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/agriculture-in-bahrain-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00348-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95458 | 375 | 2.75 | 3 |
California is rolling out a new law to reduce greenhouse gas emissions, primarily from electric generating plants, and the cost of the effort is expected to be passed along to data centers, which are among the biggest consumers of electric power in the state.
This means data center operators in California will need to step up their energy efficiency efforts in order to avoid the higher costs. And the handwriting is on the wall for data centers in the rest of the U.S., as President Obama has directed the EPA to develop greenhouse gas controls nationwide.
The law that took effect on Jan. 1 requires California to reduce greenhouse gas emissions to 1990 levels by 2020. The plan is to try to reduce emissions statewide by 2 percent to 3 percent a year. According to the California Air Resources Board, the lead enforcement agency, the law requires power plants to obtain permits, also called "allowances," for every metric ton of greenhouse gases they emit.
As they reduce their emissions, they can sell their allowances to power plants that have not been able to cut back. The allowances will be bought and sold in a cap-and-trade market in which carbon emitters who keep their emissions below the cap of 25,000 metric tons per year can sell their allowances to other power plant operators. The value of the allowances, initially set at $13 per metric ton, will fluctuate in an open market based on demand, just like any other commodity.
Entities that are over the cap can use a combination of reduced use of fossil fuels, other energy efficiencies, or the purchase of allowances in the cap-and-trade market.
The law directly affects power companies, but it's expected that they will pass on any higher costs to customers and the biggest customers are data centers.
How much money are we talking about? Nobody can say for sure, but the Green Grid Association, a global organization of technology and energy companies that support green energy, is about to publish a white paper that calculates the impact on data center electricity rates of utilities converting from carbon to green tech.
Part of the study was shared at a Green Grid Association conference in Santa Clara, Calif., in March and the expectation is that going green will increase the cost of electricity.
The following estimates are based on a model of a hypothetical power plant operating in the Midwest, says James Grice, an attorney who deals with data center siting issues for clients.
If 25 percent of the energy generated is green, the "blended rate" of carbon and green power is $0.051 per kilowatt hour (kw-hr) and carbon emissions are reduced by 25 percent.
If green accounts for 50 percent of the electricity generated, the price rises to $0.063 per kw-hr and carbon reduction rises to 46 percent.
If the power is 100 percent green, the rate hits $0.093 per kw-hr and carbon reduction hits 95 percent.
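The blended rate is simply a weighted average of the two generation costs. The sketch below uses hypothetical per-kilowatt-hour rates chosen for illustration; they are not the inputs behind the Green Grid model's figures above (which are not exactly linear in the green fraction).

```python
def blended_rate(carbon_rate, green_rate, green_fraction):
    """Weighted-average $/kWh for a mix of carbon and green generation."""
    return (1 - green_fraction) * carbon_rate + green_fraction * green_rate

# Hypothetical rates: carbon power at $0.04/kWh, green power at $0.10/kWh.
for frac in (0.25, 0.50, 1.00):
    print(f"{int(frac * 100)}% green -> ${blended_rate(0.04, 0.10, frac):.3f}/kWh")
```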
There is no doubt that greenhouse gas emission controls on power plants under the new EPA regulations will make electricity more expensive for data centers, which creates an incentive for energy efficiency, according to a recent article in The Data Center Journal.
"For data centers, this policy clearly translates into higher operating expenses, although on the upside, it could make certain energy-efficiency improvements seem more economical," the article, published shortly after Obama's directive, stated.
The article goes on to say, "In some cases, investments in efficiency -- particularly in data centers with high PUEs [power usage effectiveness] -- could offset or even reverse increases in energy costs resulting from new regulations."
(PUE, or power usage effectiveness, is a metric established by the Green Grid that rates the energy efficiency of a data center; the lower the PUE, the more energy-efficient the facility.)
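The metric itself is simple arithmetic: total facility power divided by the power delivered to IT equipment. A minimal sketch, with made-up power figures:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT power.
    1.0 is the theoretical ideal; lower is better."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1800 kW overall to deliver 1200 kW of IT load:
print(pue(1800, 1200))  # → 1.5
```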
Are data centers ready?
The attention to energy efficiency in data centers varies, says Nicole Peill-Moelter, the director of environmental sustainability at Akamai Technologies, a provider of application and content delivery technology and services.
Akamai operates data centers globally, including in California, and contracts with third-party data centers there, too, says Peill-Moelter. While Akamai constantly works to improve energy efficiency at its data centers, some of the third party data centers Akamai uses are behind the times.
"I've gone to some of the data centers where we have equipment hosted and a lot of these data centers don't have implemented even the basic energy efficiency measures," Peill-Moelter says. "They pass the costs directly onto their clients."
There are a number of steps data center operators can take to improve energy efficiency and the IT industry overall is already moving on many of them.
- Energy-efficient processors. Each generation of processors from Intel, AMD and the rest ushers in improved measures of performance per watt with multicore and multi-threading designs.
- Servers. Software can reduce the total energy draw of data center servers by pushing more data through older servers to improve their efficiency, says Akamai's Peill-Moelter. In addition, new network software can identify "zombie servers" in the data center that aren't operating, but still drawing power, because the application or business unit assigned to that server doesn't use it anymore.
- Virtualization. Many servers operate at as little as 10 percent to 15 percent utilization. But running multiple virtual servers on one physical server can increase utilization to the range of 40 percent to 50 percent, requiring fewer physical servers.
- Cloud computing. Contracting with a cloud service provider relieves a company of some of the energy expense of running its own data center, but because the compute cycles have to be created somewhere, the cloud provider's costs will in some way be passed onto the user.
- Data center cooling. Adopting a hot aisle-cold aisle strategy, physically separating the hot aisle (the backs of servers where heat is generated) from the cold aisle (where people work) keeps hot and cool air from mixing, Peill-Moelter says, and reduces the strain on air conditioning systems.
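One of the bullets above notes that virtualization raises utilization from the 10-15 percent range to 40-50 percent. The consolidation arithmetic behind that claim can be sketched as follows; the server counts and utilization targets are illustrative only.

```python
import math

def hosts_after_consolidation(n_servers, current_util_pct, target_util_pct):
    """How many physical hosts carry the same load once VMs are
    packed to a higher target utilization (integer percents)."""
    total_load = n_servers * current_util_pct  # load in percent-units
    return math.ceil(total_load / target_util_pct)

# 100 physical servers idling at 12% utilization, repacked to 48%:
print(hosts_after_consolidation(100, 12, 48))  # → 25
```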
Data center operators can also invest in their own renewable energy systems to reduce their dependence on the utility grid for electricity.
While wind and solar are the most typical forms of renewable energy, San Jose-based eBay is in the process of installing fuel cells as an alternative to the utility grid's power at a data center the auction site operates in Utah.
"Our Utah facility is our most carbon-intensive location, it's something like 94 percent or higher coal-based," says Jeremy Rodriguez, a distinguished engineer in the Global Foundation Services unit of eBay. "Onsite generation is new to us. We're just getting our first fuel cells landed at the location and soon we'll tell the world how they are going to work."
It's too soon to tell precisely how California's new law will affect data center power costs because of many variables, says Grice. It remains to be seen how many plants will exceed the carbon emissions caps and what their costs will be in the cap-and-trade market. But it's something data center operators need to watch closely.
The impact of the law on data centers will also vary based on their location in the state and which utility serves them, the utility's cost structure and how much of its electricity is generated by renewable energy instead of carbon-based fuels, says Akamai's Peill-Moelter.
Finally, data center operators and utilities outside California shouldn't consider themselves off the hook. Instead of waiting for the U.S. Congress to enact climate change legislation nationwide, President Obama announced a directive in a speech on June 25 for the U.S. Environmental Protection Agency to work with states, industry and other stakeholders to regulate carbon emissions, first from new coal-fired power plants, and then from existing plants.
"Now that the Obama administration has directed EPA to put greenhouse gas regulations on all power plants, then all states will face similar costs before too long,'' says California Air Resources Board spokesman Dave Clegern.
Mullins is a technology reporter who has covered Silicon Valley since 2000. He can be reached at email@example.com.
This story, "New Global Warming Rules Put the Heat on Data Centers" was originally published by Network World. | <urn:uuid:645ea3d1-eab7-4374-85c8-db2236c0c511> | CC-MAIN-2017-04 | http://www.cio.com/article/2383028/data-center/new-global-warming-rules-put-the-heat-on-data-centers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948734 | 1,706 | 2.78125 | 3 |
The fiber optic amplifier plays a key role in enhancing the capacity of a communication system to deliver information. Light signals are transmitted using optical transmitters, optical receivers and optical fiber.
An optical amplifier is a device that amplifies an optical signal directly, without first converting it to an electrical signal. Its most important parameters are gain, bandwidth and noise performance. The amplifier compensates for the weakening of the signal during transmission caused by fiber attenuation, and its gain depends on the wavelength and power of the input signal.
Fiber amplifier sensors offering high resolution and a simple amplifier-and-sensor setup can bring enhanced stability to previously difficult detection applications. More importantly for communications, a fiber amplifier can deliver very high output power with diffraction-limited beam quality. Its saturation characteristics help prevent intersymbol interference, which is vital for optical fiber communications, and the fact that fiber amplifiers can be operated in the strongly saturated regime enables the highest output power. Amplified spontaneous emission (ASE) limits the achievable gain. It is also important to protect a high-gain amplifier from parasitic reflections, which can cause parasitic laser oscillation or even damage the fiber.
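Gain and saturation can be illustrated with a crude numerical model. This is a deliberately simplified hard-clamp model, not how a real amplifier's gain compression actually behaves, and the power levels are invented for the example.

```python
import math

def gain_db(p_in_mw, p_out_mw):
    """Amplifier gain in decibels, from input and output powers."""
    return 10 * math.log10(p_out_mw / p_in_mw)

def amplify(p_in_mw, small_signal_gain, p_sat_mw):
    """Crude saturation model: linear gain until the output
    clamps at the saturated output power."""
    return min(p_in_mw * small_signal_gain, p_sat_mw)

# A 30 dB amplifier (x1000 linear gain) with 10 mW saturated output:
print(amplify(0.001, 1000, 10.0))  # 1 µW in → 1.0 mW out (unsaturated)
print(amplify(0.1, 1000, 10.0))    # 100 µW in → clamped at 10.0 mW
print(round(gain_db(0.001, 1.0)))  # → 30
```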
Optical amplifiers can be pumped in the forward direction, the backward direction, or bidirectionally. The direction of the pump wave does not affect the small-signal gain, though it can matter for the power efficiency of a saturated amplifier and for the noise characteristics. An amplifier can also boost a weak signal impulse in a nonlinear medium. As the technology has advanced, the quality of these amplifiers has improved greatly, making them popular with many companies. There are now all sorts of products on the market, so buyers have more options to pick one that suits their needs.
When choosing a fiber optic amplifier, the best approach is to find providers that specialize in this type of product. The components are complex, and most buyers are unfamiliar with the relevant details. Professional providers can apply their expertise and years of experience to offer sound advice and help you make the right decision. Some providers also offer a warranty, so the unit can be returned for repair if it breaks down.
The CATV EDFA (erbium-doped fiber amplifier) is one type of fiber optic amplifier. It is used to increase the output power of the transmitter and extend the signal transmission distance, and it is widely applied to long-haul transmission of TV signals, video, telephone and data. FiberStore provides high-output-power, low-noise CATV EDFAs, with output powers ranging from 14dBm to 27dBm, to meet the requirements of high-density, large-scale distribution of broadband CATV video and data signals to video overlay receivers in FTTH/FTTP or PON systems.
Cookie Poisoning attacks involve the modification of the contents of a cookie (personal information stored in a Web user's computer) in order to bypass security mechanisms. Using cookie poisoning attacks, attackers can gain unauthorized information about another user and steal their identity.
Cookie poisoning is in fact a Parameter Tampering attack, where the parameters are stored in a cookie. In many cases cookie poisoning is more useful than other Parameter Tampering attacks because programmers store sensitive information in the allegedly invisible cookie. For example, consider the following request:
GET /store/buy.asp?checkout=yes HTTP/1.0
Host: www.onlineshop.com
Accept: */*
Referrer: http://www.onlineshop.com/showprods.asp
Cookie: SESSIONID=570321ASDD23SA2321; BasketSize=3; Item1=2892; Item2=3210; Item3=9942; TotalPrice=16044;
In this example, the dynamic page requested by the browser is called buy.asp and the browser sends the parameter checkout to the Web server with a yes value, indicating that the user wants to finalize his purchase. The request includes a cookie that contains the following parameters: SESSIONID, which is a unique identification string that associates the user with the site, BasketSize (how many items are in the purchase), the price of each item and the TotalPrice. When executed by the Web server, buy.asp retrieves the cookie from the user, analyzes the cookie's parameters and charges the user account according to the TotalPrice parameter. An attacker can change, for example, the TotalPrice parameter in order to get a "special discount".
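One common server-side defense against exactly this tampering (not described in the text above, but a standard countermeasure) is to sign cookie values so any modification is detectable. A minimal sketch using an HMAC; the secret key and cookie contents are hypothetical.

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # hypothetical key, never sent to the client

def sign(value: str) -> str:
    """Append an HMAC so the server can detect client-side edits."""
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify(signed: str):
    """Return the value if the MAC checks out, else None."""
    value, _, mac = signed.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

cookie = sign("TotalPrice=16044")
print(verify(cookie))                    # → TotalPrice=16044
tampered = cookie.replace("16044", "1")  # the "special discount" attempt
print(verify(tampered))                  # → None (poisoning detected)
```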
Since programmers rely on cookies as a location for storing parameters, all parameter attacks including SQL Injection, Cross-Site Scripting, and Buffer Overflow can be executed using cookie poisoning.
The Imperva SecureSphere Web Application Firewall (WAF) can block cookie poisoning attacks while solutions such as network firewalls, intrusion prevention, and intrusion detection systems are not effective.
Detection of cookie poisoning attacks involves compound HTTP statefulness. The intrusion prevention product must trace down cookies "set" commands issued by the Web server. For each set command the product should store important information such as the cookie name, the cookie value, the IP address and the session to which that cookie was assigned as well as the time it was assigned. Next the product needs to intercept each HTTP request sent to the Web server, retrieve the cookie information out of it and check it against all stored cookies. If the attacker changes the content of a cookie the product should be able to identify that using the information it stores on the specific user. The product must trace application-level sessions and not just IP addresses in order to provide accurate results.
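The stateful check described above amounts to remembering what the server set for each session and comparing it with what comes back. A minimal sketch, ignoring expiry, paths and other real-world bookkeeping:

```python
# Map of (session_id, cookie_name) -> value the server actually issued.
issued = {}

def record_set_cookie(session_id, name, value):
    """Call when intercepting a Set-Cookie from the Web server."""
    issued[(session_id, name)] = value

def check_request(session_id, cookies):
    """Return names of cookies whose values differ from what was issued.
    Cookies the server never set (e.g. client-side ones) are not flagged."""
    return [name for name, value in cookies.items()
            if issued.get((session_id, name)) not in (None, value)]

record_set_cookie("570321ASDD23SA2321", "TotalPrice", "16044")
print(check_request("570321ASDD23SA2321", {"TotalPrice": "16044"}))  # → []
print(check_request("570321ASDD23SA2321", {"TotalPrice": "1"}))      # → ['TotalPrice']
```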
Intrusion Detection and Prevention Systems which are not Web application oriented simply do not provide this functionality. These products are unable to trace users by the application session and are unable to store information on each specific user currently logged into the Web application. | <urn:uuid:21424ca4-648b-4578-b9c5-b4e534276142> | CC-MAIN-2017-04 | https://www.imperva.com/Resources/Glossary?term=cookie_poisoning | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00100-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.870514 | 628 | 3.234375 | 3 |
Over the past six years or so, I've watched my three children go from babies to real-life human beings that can form words and sentences (my youngest, about-to-turn-3, can just about form sentences that sort of make sense). So I've watched them as they go from babbling to nonsense words to actual words.
Why do I bring this up? No reason, other than there are now robots that can do the same. Check out this video from New Scientist, in which we see a not-that-creepy-looking robot learning words based on visual cues from a box.
More details on the robot are at this New Scientist article.
Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+. | <urn:uuid:632e8139-4fc3-46cf-8a35-776589451b92> | CC-MAIN-2017-04 | http://www.itworld.com/article/2722128/virtualization/baby-robot-learns-new-words--prepares-for-world-domination.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00404-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942642 | 193 | 2.546875 | 3 |
What did the rocket scientist say to the Free/Open Source Software developer? Let's do launch! It's only natural that they'd want to work together. Both communities are focused on the cutting edge: creating tools and capabilities that did not previously exist. Both dedicate their work to expanding humanity's pool of information and want that information to float freely through society.
I am a software developer currently working on the NASA/JPL MSL (Mars Science Laboratory) rover, which launches in 2009. These are personal observations of how I encounter Free/Open Source Software (FOSS), and what I think about it.
Free floating information feeds a cycle of knowledge. Where the FOSS community donates code, algorithms and products, NASA and other organizations reciprocate with knowledge about weather systems, climate and basic science. Everyone contributes what they're best at, and tightly chartered organizations can stay focused on deeper penetration of hard problems, confident that others are doing the same. Space exploration is necessarily a cooperative venture; it's much too hard for anything less than all of humanity.
Look at these statements side by side, and you'll see the philosophical similarities:
NASA codifies its dedication in Congress' Space Act Charter:
[NASA shall] ... provide for the widest practicable and appropriate dissemination of information concerning its activities and the results thereof...
The Open Source Initiative criteria for "Open Source" includes:
- Allow free redistribution
- Provide access to source code
- Allow modifications and the creation of "derived works"
FOSS developers codify that dedication in copyrights, copy-lefts, and license agreements like the GPL (GNU Public License), which says in part:
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.
Open Source in Space
Need a few examples?
FOSS explores our Solar System. We send robots to the moon, Mars and beyond to flyby, orbit or land. FOSS goes with them, pervasive in the real-time operating systems, math libraries and file systems. Consider the robotic decisions of where to rove, and realize the power given the human race by the Free Software Foundation's (FSF) compilers, libraries, build scripts and so on.
"Electra" is NASA/JPL's Software Defined Radio (SDR) product created to support the Mars Network, and the InterPlanetary Internet. Electra provides UHF radio links in compliance with Consultative Committee for Space Data Systems (CCSDS) protocols Proximity-1 (data link) and CFDP (file delivery).
The neat thing about SDRs is that we can still reconfigure the protocol and signal processing functions after launch. For example, on MRO some other hardware started "leaking" electro-magnetically, which interfered with the Electra radio. We sent up a software fix to Electra to reduce the impact. We already know MRO will act as a radio relay for future Mars probes not yet built, such as the Mars Science Laboratory (MSL), which will arrive in 2010. MSL and others will use protocols not yet invented, and MRO will have to be updated to learn them.
Please bear with me a moment; I'd like to make a point about how much FOSS enters into this. Each of the following is a FOSS project technology used in the program. The flight software is mostly C, with some assembly for trap handling. It lives on file servers running kerberized OpenAFS, in a CVS repository, and is cross-compiled on Linux for RTEMS on a SPARCv7 target chip. The code is built with gcc, make and libtool, and linked with newlib. (There, that wasn't so bad. I'll mention it again later.)
[Image: Hubble "Pillars of Creation"]
FOSS observes our Universe. All the striking Mars rover navigation images, all the roiling clouds of Jupiter, each new view of Saturn's spectacular rings, every azure picture of deep-blue Neptune; the distant stars, far-away galaxies and stupendous galaxy clusters; all of these come to us touched in some way by FOSS. Think about the JPEG image formats themselves, the X11 workstation software to process them, and the MySQL databases to hold them.
[Image: Deep Space Network antenna]
FOSS moves and analyzes ground data. When we prepare, check and double-check command sequences for uplink, the inputs and outputs travel variously across multiple sendmail e-mail systems, Linux platforms, and of course Internet Protocol stacks. The downlinked data, after its tiring journey across the solar system, bounces those last miles in mostly the same way the rest of our Web-connected world works. In the 1960s, NASA engineers had to invent lots of ways to move data around, and it became expensive to maintain.
FOSS methodologies develop space software. Some FOSS project efforts are very widely distributed, driving development methodologies to leverage all those eyes while integrating all that expertise and smoothing together all those styles. That problem is much tougher than, but similar to, our allocation and integration of functions across organizations, teams and contractors. Those methodologies are very attractive to us. There are several NASA examples of using an "agile software lifecycle," and we look to open communities to show us that it can be done, and how to do it best.
The only way for the public to participate in seeing fresh images in near-real time is through open architectures for public outreach.
FOSS develops the next-generation cutting-edge technology. Why can we put a man on the moon, but we still don't have robot cars? Some key challenges are:
- Robots have different physical characteristics.
- Robots have different hardware architectures.
- Contributions made by multiple institutions.
- Advanced research requires a flexible framework.
- Software must support various platforms.
- Lack of common low-cost robotic platforms.
- Software must be unrestricted and accessible (ITAR and IP).
- Software must integrate legacy code bases.
The Coupled Layer Architecture for Robotic Autonomy (CLARAty) project brings together folks from many institutions. They develop unified and reusable software that provides robotic functionality and simplifies the integration of new technologies on robotic platforms. (They also have some funky movies of robots doing funny things.)
Our dirty little secret is that space agencies are companies just like everybody else. We too (am I shocking you?) use e-mail, Web servers, and all the usual non-space-qualified suspects. Here are some examples:
- Operating Systems, Systems Management: Rocks (cluster Linux), Ganglia, amanda
- Software Management: Depot, Subversion, Trac, Bugzilla
- Communications: OpenSSH, Apache, Jabber, Firefox/Mozilla, Sendmail, Mailman, Procmail, CUPS, OpenOffice, wikis (various)
- Data Visualization: ImageMagick, GMT, MatPlotLib
- Compilers, languages, code checkers: SunStudio, splint, Doxygen, valgrind, Java, Perl (some JPL history there), Python, Ruby
- Databases: MySQL
The Open Advantage
OK, so given that our cultures are similar, how does that translate into our bottom line? Why does FOSS have such a large role in space exploration? Here's the top-10 list of what I see.
1. Schedule Margin
Planets move; launch windows don't. The Spirit and Opportunity Mars Rovers had to go in the summer of 2003 or never. They are simply too massive to throw that far, for that budget, unless the planets aligned just so. (Mars and Earth line up every 26 months or so, but in 2003 they were unusually close together.) Procurement cycles for spending lots of government money can be months long, and they can dominate critical paths.
Quickly obtainable FOSS relieves that pressure and gives us some elbow room. Bug fix turnaround times can be critical. If we can fix the source code ourselves, we can keep a whole team moving forward. If the fix is accepted by the open-source community, we avoid long-term maintenance costs and have it for the next project. Feature additions ("Gee, if it only did this, too...") have the same advantage but take longer to give back. Oddly, we can contract for new features but cannot easily give them away. The FOSS spirit hasn't yet pervaded government contracting rules.
2. Risk Mitigation
Full system visibility is key to risk identification, characterization and resolution. The Mars robots are sent to encounter unfamiliar situations. Think how much information system engineers need to mitigate those risks. This is no place for a closed system.
All flight software goes through rigorous review, including the software (compilers) that builds the software (command sequence generators) that builds the software (commands). We do code walk-throughs which perforce means having the source code. We design white-box test plans by analysis of software decision paths, which is easier to do with the source code in hand. Our review process requires "outside experts, not working on the project" to review the code; well, that's exactly what a FOSS community is all about, isn't it? In essence, the open-source community is the world's largest Review Board, only we don't have to buy the doughnuts.
When you leave Earth's orbit, you also leave "push the reset button" and "reload from CD" far behind. We tend to find bugs that don't bother other customers. We live at or beyond the border cases, and we push frontiers in all senses of the word. So all the critical bugs have to be found and squashed before we go.
The best way to shake out software bugs is to have lots of testers independent of development try it out in unfamiliar environments and in ways unforeseen—which pretty much describes the FOSS user community. By the time something's on its 2.1 release, it's usually been beaten up pretty thoroughly. And the beauty is, you have full disclosure about what broke in the 1.0 release, under what conditions, how it was fixed and what tests prove it's gone.
3. Interoperability

Space exploration takes a lot of brain power—more brains than any one company or nation commands. Our industry, academic and international partners each have specialized expertise vital to the effort. And each partner, it seems, uses a different platform, language or protocol from the rest that's optimized for that particular piece. Each builds a sub-assembly, and the thing has to work when you bolt it all together.
This is interoperability by definition. Software must be designed from the start to "play well with others" beyond your organizational control. Attempts to dictate uniform development platforms are not infrequent, and always fail. At worst, they represent a willful desire to ignore strict interface control, and interfaces are precisely what call for the greatest care.
As we build the space shuttle replacement and new spacecraft for the moon, interoperability is a top-level requirement. Under the current "Vision for Space Exploration," the "Constellation Program's Communications, Command, Control and Information (C3I) Interoperability Specification" was drafted early. Not surprisingly, the specification calls on open standards.
4. Portability

The Pioneer and Voyager spacecraft are older than disco, though younger than the Beatles. They are further from Silicon Valley than anything made by human hands, and getting further. Data from them continues to puzzle us. So, software to analyze that data has been ported to myriad computers. There's never enough money to upgrade routinely, so we stick with a platform until it dies and/or its manufacturer goes out of business. This is only barely tenable through strict portability conventions.
Spacecraft parts are hideously expensive, what with radiation toleration, quality screening and so on. Software usually has to be developed on simulators and ported to a number of similar but not identical units. For every flight article, there may be a "qualification unit" (for testing to failure), two or three "form/fit" units for functional testing, and some "engineering units" for development. A simple radar algorithm has seen development on Mac OS X, Microsoft Windows and Linux (that I know of)—none of which is the final environment. It's been coded in python, perl and C. The work would take far longer had we been locked into one vendor.
The Electra platform and code described earlier have been ported/inherited/reused for:
- a landing radar on MSL
- a spectrometer interface on ISRO's (Indian Space Research Organisation) Chandrayaan-1 Moon Mineralogy Mapper
- a lunar radio architecture prototype C3I (Command, Control, Communications, and Information) Communications Adaptor (CCA)
- Radio Atmospheric Sounding and Scattering Instrument (RASSI)
—all in the space of a few years, each time by a different team.
I've seen technical information retrieved from hard copies of presentations because people were unable to open files that were only a few years old. The (closed) format had changed when a desktop computer was upgraded. That just won't fly.
Sensors, wireless tech protect police dogs from heat stroke
Police and military dogs face many of the same dangers as their human partners. Many of these dogs, also known as K9s, fall victim to heat-related conditions such as heat stroke, which can result in death.
To combat K9 casualties, Massachusetts, Arizona and Texas law enforcement units have invested in a wireless monitoring system to convey the dog’s internal body temperature to its human partner. Data Sciences International and Blueforce Development Corp. have partnered to develop the new system.
The system continuously measures the K9's body temperature using a small surgically implanted sensor. The sensor relays the temperature to a receiver attached to the dog's protective gear, where it can be monitored by the human partners. The receiver relays the information to the K9 officer's smartphone and will instantly alert the officer if the K9's body temperature exceeds safe health limits.
"Our active involvement in public safety revealed that officers have serious K9 safety needs," said Blueforce CEO Mike Helfrich. "We expect this solution to help save K9 lives by communicating real-time temperature."
The telemetry is communicated to anyone subscribed to the animal through the Blueforce Tactical mobile application for Android or iOS, according to a Blueforce blog post. Subscribers receive a notification when the dog's body temperature exceeds or falls below prescribed values.
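The threshold-alert logic described above can be sketched in a few lines; the temperature limits and names here are illustrative placeholders, not DSI's or Blueforce's actual values:

```python
# Illustrative safe range for canine core body temperature (degrees F);
# real clinical thresholds would come from a veterinarian.
SAFE_LOW_F = 99.5
SAFE_HIGH_F = 104.0

def check_temperature(temp_f):
    """Return an alert string if a reading falls outside the safe range,
    or None when no notification is needed."""
    if temp_f > SAFE_HIGH_F:
        return f"ALERT: K9 temperature {temp_f}F exceeds safe limit"
    if temp_f < SAFE_LOW_F:
        return f"ALERT: K9 temperature {temp_f}F below safe limit"
    return None

# A stream of readings relayed from the implanted sensor (mock data).
readings = [101.2, 102.8, 104.6]
alerts = [msg for msg in map(check_temperature, readings) if msg]
```

In the real system this check would run against live telemetry and push a notification to every subscriber's phone.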
Posted by Mike Cipriano on Mar 26, 2014 at 11:17 AM | <urn:uuid:3f46b9c5-eb03-4d82-a32b-18894cd8a033> | CC-MAIN-2017-04 | https://gcn.com/blogs/pulse/2014/03/k9-heat-monitor.aspx?admgarea=TC_STATELOCAL | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00522-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935134 | 308 | 2.75 | 3 |
Fiber optic communication uses the optical wave as its carrier. Multiplexing techniques in fiber optic communication fall into three main categories: optical wave multiplexing, optical signal multiplexing, and subcarrier multiplexing (SCM). Optical wave multiplexing includes wavelength division multiplexing (WDM) and space division multiplexing (SDM), while optical signal multiplexing comprises time division multiplexing (TDM) and frequency division multiplexing (FDM).
1. Wavelength Division Multiplexing
At the transmitting end, an optical multiplexer combines two or more optical carrier signals of different wavelengths and couples them into the same fiber for transmission over the optical line. At the receiving end, an optical splitter separates the carriers of the various wavelengths, which are then processed by optical receivers to recover the original signals. This is wavelength division multiplexing. It is suitable for multimode and single-mode systems, for one-way and two-way transmission, and for both point-to-point and loop transmission. Operating wavelengths range from 0.8 μm to 1.7 μm, covering the low-attenuation, low-dispersion windows of the optical fiber. The multiplexer requires low insertion loss (1.0-2.5 dB), sufficient bandwidth, and good isolation. WDM technology allows the communication capacity of a fiber optic system to increase exponentially, and it is used with optical amplifiers along long-distance trunk lines and in undersea fiber optic cable systems.
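As a rough numerical illustration of carrier spacing, the sketch below lays optical carriers on an evenly spaced frequency grid and converts each to its wavelength. The 193.1 THz anchor and 100 GHz spacing follow the common ITU-T DWDM convention; the code itself is only a sketch:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def dwdm_channels(n_channels, anchor_thz=193.1, spacing_ghz=100.0):
    """Return (frequency_THz, wavelength_nm) pairs for evenly spaced
    optical carriers, the arrangement used in dense WDM systems."""
    channels = []
    for i in range(n_channels):
        f_thz = anchor_thz + i * spacing_ghz / 1000.0
        wavelength_nm = C / (f_thz * 1e12) * 1e9  # lambda = c / f
        channels.append((round(f_thz, 3), round(wavelength_nm, 3)))
    return channels

grid = dwdm_channels(4)  # four carriers, 100 GHz apart, near 1550 nm
```

Each additional carrier is another independent channel on the same fiber, which is where the exponential capacity gain comes from.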
2. Space Division Multiplexing
Space division multiplexing includes two aspects. The first is fiber multiplexing, combining multiple fibers into a bundle; the second is splitting beams spatially within a single optical fiber for multidimensional communication. Multidimensional modulation and demodulation with varying degrees of coherence can be used to realize space division multiplexed communication. Image bundles are a special case of space division multiplexing: transmitting images over space-division-multiplexed fiber can improve transmission speed by orders of magnitude. Multi-core image transmission fiber with hundreds of thousands of pixels is a mature technology, with quite good color retention and transparency characteristics.
3. Optical Frequency Division Multiplexing
In essence, there is no difference between frequency division multiplexing and wavelength division multiplexing. If a fiber carries only a few optical carriers with relatively large spacing between them, the scheme is referred to as WDM; if the wavelength spacing is smaller and the optical carriers are denser, it is frequency division multiplexing. Frequency division multiplexing can improve communication capacity by dozens or even hundreds of times. At densely spaced frequencies, conventional optical multiplexers and demultiplexers cannot be used; instead the system relies on tuning devices, optical power couplers, or optical filters. At the receiving end there are two different tuning methods for achieving dense frequency division multiplexing: the first is coherent heterodyne detection with a tunable local oscillator laser, and the second is direct detection with a conventional optical receiver and a tunable fiber filter. It is mainly used in fiber optic subscriber networks and fiber optic LANs, and is particularly suitable for frequency division multiple access applications.
4. Optical Time Division Multiplexing
Optical time division multiplexing (OTDM) is an efficient multiplexing method in optical digital communication. The communication time is divided into equal intervals, with each interval transmitting only one fixed channel, and each channel transmitted in a fixed time sequence. Both frame synchronization and bit synchronization are generally used. Because electronic devices limit very high digital rates, and because optical time division multiplexing requires drop-and-insert access and optical tap technology that were difficult to realize, progress was slow in the past. In recent years, however, a number of key technological breakthroughs, such as optical time division multiplexing/demultiplexing, transform-limited ultrashort optical pulse generation, all-optical clock extraction, all-optical regeneration, optical modulators, optical amplification, and linear and nonlinear optical transmission technology, have made all-optical information processing systems possible.
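The time-slot idea can be illustrated with a toy interleaver: each channel owns a fixed slot in every frame, and a channel is recovered by sampling only its slot (purely schematic: real OTDM interleaves picosecond optical pulses, not Python lists):

```python
def tdm_multiplex(channels):
    """Interleave equal-length channels into one stream: each frame
    carries one sample from every channel, in fixed slot order."""
    return [sample for frame in zip(*channels) for sample in frame]

def tdm_demultiplex(stream, n_channels):
    """Recover channel i by taking every n_channels-th sample, offset i."""
    return [list(stream[i::n_channels]) for i in range(n_channels)]

channels = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
stream = tdm_multiplex(channels)  # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2']
```

Demultiplexing simply reverses the interleaving, which is why frame and bit synchronization are essential: a receiver that loses slot alignment recovers the wrong channel.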
5. Subcarrier Multiplexing
In subcarrier multiplexing, the signal to be transmitted is first used to modulate a radio frequency (RF) wave, and that RF wave then modulates the optical source. At the receiving end, photoelectric conversion recovers the RF wave, and an RF detector then recovers the original signal. Subcarrier fiber optic transmission thus involves two stages of modulation and two of demodulation; the two carriers are the optical wave and the RF wave, the latter also known as the subcarrier. A subcarrier multiplexed transmission system achieves multi-channel transmission by increasing bandwidth: bandwidth grows as the carrier frequency and the number of channels increase. Its advantage is that it can use mature microwave technology, places modest demands on optical devices, and is technically easy to implement.
The House Science and Technology Committee held a hearing to review the current management and overall challenges of waste electronic equipment (E-waste) in the United States. Chairman Bart Gordon and Committee Members questioned witnesses on industry practices for recycling, refurbishment, resale and disposal of electronic products.
"It's important to bear in mind that a computer is not a soda can and a TV is not newspaper. These are products that contain complex parts and are made up of dozens of materials, some of which like lead and mercury, are toxic. Separating these materials takes time and energy, potentially exposing the environment and workers to hazardous substances, and in the case of something like leaded glass from a computer monitor or TV, leaves us with material that there isn't much demand for," stated Gordon.
E-waste includes electronic products such as computers, TVs, VCRs, stereos, printers, cell phones, and copiers at the end of their useful life. The volume of E-waste has grown substantially due to increased demand for more advanced technology or as a result of non-salvageable products. Today, the lifespan of many electronic products can be as short as 18 months. According to the Government Accountability Office (GAO), roughly 100 million TVs, computers, and monitors become obsolete every year. Although many producers have made progress in product durability and efficiency, the E-waste problem continues to grow, both in the U.S. and globally.
"Fortunately, there is a growing awareness of E-waste recycling," said Gordon. "E-waste is hardly trash; while some materials in electronic waste are potentially hazardous, others are quite valuable. It doesn't make a whole lot of sense to put gold in a dump."
E-waste has a significantly higher concentration of metals like gold and copper compared to an equivalent weight of typical ore. Some states and electronics producers have begun to address this issue, mandating product take-back or providing a mechanism to recycle these goods. There is a national and international conversation taking place right now about how to make sure more E-waste is captured by recyclers.
Currently, thirteen states have laws regarding E-waste and many retailers offer various types of product take-back incentives. Despite these efforts, the Environmental Protection Agency (EPA) estimates that less than 15 percent of end of life products reach a recycling or re-use program. For instance, the EPA estimated that in 2005, 2 million tons of unwanted electronics ended up in landfills or incinerators compared to only 345,000 tons that reached recyclers. In an effort to make recycling easier and more effective while also decreasing the amount of toxic materials used to produce electronics, Members and witnesses discussed the potential for research and development and green design advancements.
"In addition to increasing the amount of E-waste that is recycled, we should also look at designing products in smarter ways. Why not design a computer or a cell phone using all of the same screws and no mercury?" said Gordon. "Focused R&D initiatives will be essential to help manufacturers of emerging technologies produce more environmentally friendly products while still meeting consumers' needs."
For more information on this hearing or to access witness testimony, visit the Committee's Web site. | <urn:uuid:8d6731fd-023a-400a-9c1d-36fd1ea59f24> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/House-Committee-Reviews-National-Management-of-E-Waste.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00184-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955493 | 671 | 3.1875 | 3 |
When you run apps from the Windows 8 Start Screen and switch to another one, the original app that you were using is not actually closed. Instead this App is left running in the background so that you can easily switch between them. When you leave apps running in the background they use resources such as memory and CPU power that could be better used by other programs on your computer. Therefore it is better to close an app when you are done using them rather than leaving them running.
Unfortunately, Windows 8 does not provide a readily apparent method of terminating an app like a normal Windows application. There is no X button like we have become accustomed to in Windows desktop applications, and most apps do not provide a way of quitting them. In this tutorial we will walk you through three methods that you can use to actually close a Windows app rather than leaving it running in the background. This will allow you to terminate the app so that the resources it was using can be used by your other applications.
Method 1: Alt+F4
The Alt+F4 keyboard combination is by far the easiest method you can use to close a Windows 8 app, but it can only be used when you are actually using the particular app. To close an app with this keyboard combination, simply switch to the app, press and hold the Alt key on your keyboard, and then press the F4 key as well. This will immediately and forcefully close the app in the middle of whatever it was doing.
Method 2: Windows 8 Task Manager
The Windows 8 Task Manager has been updated to not only include running processes but to also list any apps that may be running. This allows you to quickly see all the apps that are running easily close them. To access the Task Manager, type Task Manager from the Windows 8 Start Screen and then click on the Task Manager option when it appears in the search results. This will open the basic Task Manager as shown in the screenshot below.
To close the app, simply left-click once on the app name and then click on the End Task button. Task Manager will immediately terminate the app for you.
If you wish to see more details on how much memory or CPU a particular app is using before you terminate it, you can click on the More Details option. This will open a more detailed interface for the Task Manager as shown below.
This new interface allows you to see how much resources a particular app is using so that you can better decide if you wish to or even need to close it. If you do wish to close the app, simply left-click on it to select it and then click on the End Task button.
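The select-then-End-Task workflow amounts to picking an entry out of the running-app list and terminating it. The sketch below mimics that logic on a mock task list (illustrative only: actually killing a process on Windows goes through tools such as taskkill or the TerminateProcess API):

```python
# Mock of Task Manager's app list: (name, memory_mb, cpu_percent).
tasks = [
    ("Mail", 120, 1.5),
    ("Weather", 45, 0.2),
    ("Music", 210, 3.8),
]

def end_task(task_list, name):
    """Return the list with the named app removed, mirroring 'End Task'."""
    return [t for t in task_list if t[0] != name]

def heaviest(task_list):
    """The 'More Details' view sorts by resource use; find the top app."""
    return max(task_list, key=lambda t: t[1])[0]

remaining = end_task(tasks, heaviest(tasks))  # ends "Music" (210 MB)
```

This mirrors the decision the More Details view supports: check which app is consuming the most resources, then end that one.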
Method 3: Win+Tab
The last method is to use the Win+Tab keyboard combination, which opens a panel that displays all of the open apps currently running in Windows 8. This keyboard combination is different than Alt+Tab, as it will only list open apps and will not display any open desktop applications. From any screen on your computer, press and hold the Windows key, then press the TAB key; a vertical panel will appear that shows individual tiles for each running app on your computer. Please note, if there are no apps currently running, this keyboard combination will not display anything. This list of open apps is indicated by the red arrow in the image below.
To close an app on this list, right-click on an app and then click on the Close option that appears. This will immediately terminate and close the app that you selected.
If you have questions regarding this tutorial or just want to chat with others about Windows 8, please feel free to post in our Windows 8 Forum.
Vulkan is a new low-level API that developers can use to access the GPU, and it can be used instead of OpenGL or Direct3D. It is essentially the successor to OpenGL: the standard is created by the Khronos Group, a standards organization, and Khronos designed Vulkan to be an open, royalty-free standard.
Developers are able to take advantage of Vulkan’s reduced CPU overhead and efficient performance in games, applications, and mobile. Version 1.0 of the specification was released today, and the first Vulkan SDK, from LunarG, was also released for Windows and Linux.
Vulkan is available on multiple versions of Microsoft Windows from Windows 7 to Windows 10, and has been adopted as a native rendering and compute API by platforms including Linux, SteamOS, Tizen and Android.
AMD, ARM, Intel, NVIDIA, and other industry pillars have been quick to adopt the standard. NVIDIA offers beta support for Vulkan in Windows driver version 356.39 and Linux driver version 355.00.26. AMD similarly offers beta support for Vulkan with beta drivers for Windows 7 – Windows 10. | <urn:uuid:b5baac24-8146-4907-9a38-3877cfcfc4c4> | CC-MAIN-2017-04 | https://www.404techsupport.com/2016/02/khronos-vulkan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00450-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931248 | 224 | 2.640625 | 3 |
Solid state drives (SSDs) have been around for decades, with the first SSDs being confined to military and research use. Hard disk drive (HDD) technology sprang up during this time, dominating the flourishing computer market and going from nearly unheard-of technology at the beginning of the 1980s to a need-to-have storage component by the end of the decade.
Fast forward to today. Solid-state storage technology has continued to evolve and develop from its experimental beginnings. While HDDs still hold an important place in the world of data storage, SSD drives have characteristics that make them better for some enterprise pursuits.
The following four benefits of SSD demonstrate the places where it will most effectively meet a customer’s needs, and why.
SSDs are More Durable and Smaller—Excellent For Mobile Use
In a world where so much business is conducted remotely, laptops are a common fixture of business enterprises. Being carried back and forth from home to office, laptops are often at risk for being dropped, and are subjected to numerous other forms of daily abuse. This makes it easy to see the downside of using an HDD in a laptop. HDDs consist of spinning platters with a read head, and these mechanical parts can be damaged in transport. This can create an unsafe environment for your stored data. Because SSDs have no moving parts, they are quite shock resistant. They can survive better during your daily commute and travel, withstanding up to 1500g during operation.
Plus, an average solid state drive comes in a sleek 1.8” format, as opposed to the typical 3.5” mechanical drive, which allows for slimmer and lighter portable laptops in the market. In the enterprise market, with rack space at a premium, smaller SSD drives are becoming more essential and practical for business owners.
SSDs’ Speeds are Great for High Performance Needs
SSDs contain solid state flash memory using integrated circuits (ICs) rather than magnetic media to access your stored files. Unhindered by the mechanical elements of HDDs, which cause friction and slower drive speeds, SSD drives can hit markedly higher speeds than HDDs. This is very useful for enterprise needs such as:
Graphics rendering: In enterprises that use computers for graphic manipulation and video processing, speed is a must. High-end graphic and video programs are memory hungry and frequently read from the hard drive, making higher speed SSD drives a smart choice.
Bioinformatics: Cutting-edge scientific pursuits require the analysis of a massive amount of data. The almost unfathomable number of data points that constitute a genome, and the processes scientists use to manipulate that data to better understand how human life works on the most basic level, demand hyper-fast hard drive speeds. SSDs are a good choice for such an enterprise.
The stock market: Another space where lightning-fast data processing is key is the stock market. Systems that underpin the trading world and run the algorithms that keep the market moving need to be fast, and SSDs can give them the speed they need.
SSDs Serve Up Video-on-Demand
Another benefit of the uptick in hard drive speed that SSDs offer is that they can better facilitate streaming video. A huge chunk of internet commerce, news, and entertainment outlets now include streaming video, and serving it without crashes and lag requires high-performance drives. Because of their high speeds, SSDs are a great option for such high-traffic, resource-gobbling processes.
SSDs are Quieter than HDDs
If you have ever walked into an office that is conspicuously free from the sound of whirring and revving drives, that is due to the advent of solid state technology. Because SSDs do not contain spinning plates and write/read heads, they are much quieter and run cooler than an HDD.
For more information on SSD solutions, contact your Ingram Micro sales associate at 1(800) 456-8000. | <urn:uuid:f2a7f58a-6942-43b4-afaa-73ac66f147b6> | CC-MAIN-2017-04 | http://www.ingrammicroadvisor.com/components/4-benefits-of-ssd-vs.-hdd-whats-the-difference | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00082-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93449 | 812 | 2.78125 | 3 |
A new security tool developed by Department of Energy engineers is designed to give security and IT administrators the ability to more quickly identify and respond to an issue on the network.
Hone is the brainchild of Glenn Fink, a senior research scientist with the Secure Cyber Systems Group at the DOE's Pacific Northwest National Laboratory (PNNL) in Richland, Wash. Hone is what Fink calls a cyber-sensor: it discovers and monitors the relationship between network activity on a computer and the applications--such as Microsoft's Internet Explorer--and processes running on it.
By gaining greater visibility into those relationships, IT professionals will be able to more quickly understand and respond to cyber-attacks. In addition, IT administrators can use the tool for a host of network- and security-related tasks, according to Fink.
In developing Hone, he said he wanted to help people see what's on their networks.
"I want people to understand what's really happening on these very complex machines," Fink said in an interview with eWEEK.
He initially created the framework of what would become Hone as a postdoctoral researcher at Virginia Tech. Fink said he saw what visualization technology was doing elsewhere and asked why people didn't use it in security. Such deep visualization into the system and the network would be hugely beneficial to security administrators, he said.
"This was the hammer to hit their nail," he said.
Fink took his ideas with him when he went to work for PNNL, where he was able to secure the internal funding and collaboration needed to get going on what eventually became Hone. "It's really easy to get people to say, 'Yeah, that's cool,'" he said. "It's another thing to get people to say, 'And here's the money.'"
The problem is what he sees as an inefficient way of dealing with security issues. Right now, security and system administrators spend much of their time searching for unusual patterns in communications between computer systems and the network, Fink said. The problem is that once such a pattern is found, there's nothing to say which program is doing the communicating, so the administrators closely watch the system hoping to see the program work again and allowing them to get a better read on the situation.
However, Fink said, they may never see the dangerous program again. Hone, by contrast, creates an ongoing record of the communication, showing not only the communications between systems on a network, but also which specific programs--including Web browsers, system updates and malicious programs--are involved in the communication.
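Hone's core idea, joining network flows to the processes that own them, can be sketched as a simple lookup over mock data (field names and values here are invented; the real sensor instruments the running system rather than joining static tables):

```python
# Mock process table: pid -> program name.
processes = {312: "iexplore.exe", 845: "svchost.exe", 990: "dropper.bin"}

# Mock network flow records observed on the wire: (pid, remote_ip, port).
flows = [
    (312, "93.184.216.34", 443),
    (990, "203.0.113.7", 8080),
]

def attribute_flows(flows, processes):
    """Tag each flow with the program responsible for it, so the analyst
    sees which application is communicating, not just that traffic exists."""
    return [
        (processes.get(pid, "<unknown>"), addr, port)
        for pid, addr, port in flows
    ]

report = attribute_flows(flows, processes)
```

With attribution in place, an unusual flow immediately points at the offending program instead of leaving the administrator to wait and watch for it to reappear.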
In a brief aside during a Senate testimony on overall national security this week, U.S. director of national intelligence James Clapper justified the privacy and security advocates who have warned of the implications of the Internet of Things (IoT) since before it was a buzzword.
"In the future, intelligence services might use the [Internet of Things] for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials," Clapper said, according to The Guardian.
Of course, the IoT could be a great tool for almost all of these purposes. The simple fact that even seemingly benign smart home devices, like the commonly referenced Nest thermostat, send data to the cloud means they can at least reflect patterns on the user. If IoT device data can be intercepted, it can give investigators an idea of when a suspect or target is home and the kinds of activity they engage in. That same opportunity is available to criminals who would use that data to identify a potential target.
The only claim in Clapper's comment that might be questionable is that of using IoT devices to gain access to a network. That depends on the kind of network the device uses to connect to the internet, which, at this relatively early stage in the IoT's development, can be any of a whole bunch of options.
Many smart home devices currently connect to the same Wi-Fi network as the smartphone or laptop, but more and more companies are looking at alternative networks. I recently wrote about one option, called Sigfox, which uses very low-bandwidth networking technology to preserve battery life on internet-connected devices. Gaining access to a device operating on this network, which would be difficult in and of itself, wouldn't grant access to the broadband network that a surveillance target would use for anything of value, like communications, because it can't even handle any of that data. The privacy-minded IoT user (which, admittedly, seems like a paradox) likely wouldn't connect their smart home devices to their home Wi-Fi network, as long as alternatives are available.
However, many smart home devices still connect to the Wi-Fi network, making them a gateway to more valuable data for the time being.
Beyond that, surveillance and identification might become easier as internet-connected cameras become more prevalent. Savvy spying targets may not be dumb enough to install an internet-connected camera without changing the easily available default login credentials, but a whole lot of other people do. Even small businesses like coffee shops are prone to this mistake, leaving their video feeds available for accessing and sharing by everyday hackers, let alone those working in intelligence for the federal government. Simply stopping for a cup of coffee could leave you vulnerable to surveillance, theoretically.
Anyone who has followed the IoT already knew what Clapper said. He just confirmed it – as the IoT grows, it may become impossible to avoid exposure to technology that can be used for surveillance. | <urn:uuid:4c562371-8654-40c0-8250-cdffcea2053a> | CC-MAIN-2017-04 | http://www.networkworld.com/article/3032215/internet-of-things/u-s-intelligence-chief-touts-iot-as-a-spying-opportunity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00203-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956867 | 597 | 2.671875 | 3 |
4 Ways to be More Cyber-Aware and Reduce Digital Risk
Security | Aug 30, 2016
Every October, we celebrate National Cyber Security Awareness Month (NCSAM), a collaborative effort between government and industry to educate Americans on how to stay safe online. This year, Guidance Software joined the National Cyber Security Alliance’s Champions Program to promote a safer, more secure, and more trusted Internet.
The human element is often the weakest link in the cyber security chain. Hackers target consumers, employees, and increasingly CEOs, in attempts to steal sensitive information or gain access to secure networks.
The best security software in the world can’t stop a user from clicking a malicious link or setting their password to ‘password.’ We’ve listed four simple tips below that can help anyone be more cyber aware.
- Think before you click or download – Hackers often use social engineering attacks, like phishing emails, that include malicious attachments or links in email, texts, social networks, and more. Clicking one of these links, or downloading an infected attachment generally starts a download of malware that can steal information and infect systems. It seems obvious, but make sure you trust the source before clicking a link or downloading an attachment. Phishing schemes have come a long way since the days of a Nigerian prince leaving you their massive fortune if you only provide them a bank account number, social security number and address. Modern phishing attacks are often orchestrated by organized crime (about 90% of attacks) and designed to mimic a trustworthy source (i.e. your bank or the HR department). The most sophisticated attacks can execute hidden code if the email is opened, no click or download required.
- Keep clean machines – When Ben Franklin said, “an ounce of prevention is worth a pound of cure,” he was actually talking about fire safety. But the maxim applies equally well to cybersecurity today. Taking basic precautions like having the latest versions of security software, web browsers, and operating systems installed goes a long way to protect against online threats. For corporate machines, companies should have a policy to ensure user machines are updated, automatically when possible.
- Make passwords strong AND unique – We (hopefully) all know some basic best practices for passwords – think long and strong. But did you know that using unique passwords for every account helps to thwart cybercriminals? Gartner says more than two-thirds of consumers reuse their passwords. Managing unique passwords can be a pain, but will help you avoid something like Mark Zuckerberg’s Twitter and Pinterest accounts being hacked because he allegedly re-used the password “dadada.” According to the 2016 Verizon DBIR, 63 percent of confirmed breaches leveraged weak, default, or stolen passwords.
- Be smart with mobile tech – Treat your mobile device like your home or work computer. Keep operating system software and apps updated. When you travel, consider disabling remote connectivity and Bluetooth. And finally, be wary of unsecured wireless networks (like hotel networks) and use a VPN for extra security.
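The "strong AND unique" advice above can be made concrete in a few lines of Python. This is a minimal sketch of what a password manager does under the hood, using the standard library's `secrets` module; the account names are invented for illustration.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=20):
    """Draw each character from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique password per account: reuse is what turns a single
# breached site into many compromised accounts.
vault = {site: generate_password() for site in ("bank", "email", "social")}

assert len(set(vault.values())) == len(vault)  # no two accounts share a password
```

The key point is `secrets` rather than `random`: the former is designed for security-sensitive randomness, so each generated password is independent and unguessable.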
Cybersecurity is not something that can be solved with technology or software alone. All organizations need cybersecurity strategies that include IT, HR and training, risk management, and buy-in from the most senior levels. As we move toward a day when all commerce will be digital, hackers will continue to look for ways to exploit systems and find valuable information.
These risks can never be eliminated, so organizations need to invest in a comprehensive digital risk management and security strategy to mitigate them. Guidance offers the best forensic security solutions in the business to help organizations protect their most critical information.
Charles Choe joined Guidance Software in 2015 as a Product Marketing Manager for both EnCase eDiscovery and EnForce Risk Manager. Charles earned his JD/MBA from the University at Buffalo in 2007. Since graduating, he worked as an Online Product Manager across a number of different industries, including Banking, Publishing, and Technology. | <urn:uuid:c085b8f6-eebb-4560-ac7e-50bcb17d5dd0> | CC-MAIN-2017-04 | https://www.guidancesoftware.com/blog/security/2016/08/30/4-ways-to-be-more-cyber-aware-and-reduce-digital-risk | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00203-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923725 | 811 | 2.515625 | 3 |
Despite years of considerable economic growth, India continues to suffer from massive corruption. Furthermore, while the country has a plethora of social and infrastructure projects to help redistribute some of this wealth to the sizable impoverished population, funds and materials meant for these programs often tend to dissipate as they move across the country’s vast geography. ‘Favors’ and ‘fees’ inappropriately levied by individuals wielding power keep what is already a limited supply of resources from reaching the populations it is meant for. At the ESRI user conference last month, Sam Pitroda, an advisor to the Indian Prime Minister, described the government’s effort to eliminate the country’s widespread corruption with a data-driven approach: an integrated GIS system. Mr. Pitroda believes that GIS databases combined with a national identification number given to every citizen and bound to every relevant record pertaining to them can fix this system that has so long resisted remediation. Only time will tell, but the transparency of geospatially tracking populations and funds holds definite promise of solving this seemingly intractable problem.
GIS holds a unique power to create accountability and transparency, combating corruption and mismanagement in much-needed ways. Storing information in geocoded databases makes that information concrete and bound to reality. Statistics and tabular data often remain abstract, less easily communicated and understood. With such agency, GIS can create change and alter relationships between governments, between governments and businesses, and between governments and their citizens. Maps and geodatabases directly align with the geographies that communities inhabit, gaining credibility. Information becomes power. Maps are innately understandable and usable - tools to create fairness and to right wrongs.
On a smaller scale, the town of Battle Creek, Mich., used GIS to resolve a backlog of complaints about non-working streetlights. Even though the streetlights were managed by an extra-governmental partner, flickering and blown-out bulbs created the impression of an incapable government and an area slowly creeping towards disorder. Not good for morale, civic spirit, attracting investment, or safety. In response, the city embarked on a multi-phase project to create a GIS database of the city’s streetlights and their functionality. This information allowed the City to hold the operator accountable and to track repairs. Since then, the number of non-operational lights has waxed and waned but the city has been able to secure refunds for non-operating lights they’ve paid for, ultimately incentivizing better performance from their operating partner.
GIS can provide the analytical tools to understand impacts of development on different groups and meet the requirements of laws created to secure equity and hold governments accountable to their entire citizenry when undertaking projects. For example, when the Southern California Association of Governments was developing its regional transit plan, it turned to GIS to run the analysis to meet a federal requirement to minimize impacts on minority and low-income groups. Without GIS, this analysis would have been incredibly difficult - and probably would have involved quite a bit of conjecture and guesswork, reducing the quality of the data and its analysis. Instead, regression models using current demographics and health risks along with growth models and future transit plans were used in conjunction to understand the impact of planning on future populations. GIS provided the tools to allow local governments to respond to laws meant to hold them accountable for equity with due diligence rather than as an ineffective formality.
GIS isn’t just about maps - it’s about data. Importantly, it can provide a tool to ensure that the reality of what’s happening with government money and initiatives on the ground matches the intentions of policy and the requirements of laws put in place. There are still many more opportunities for the strengths of GIS to build in accountability and transparency where it’s lacking. Budgeting departments could have their activities enhanced through logging the spending of funds geospatially, both for the sake of guaranteeing equal investment across different areas and for finding opportunities for departments’ money to go farther together. Reporting requirements for funds distributed under the American Recovery and Reinvestment Act (ARRA) required this kind of logging, with maps of projects and funds available online. Such a system should be the norm.
When combined with demographic data, issues of equity can become concrete, and shortcomings can be revealed and remediated - if the will is there. The huge amounts of open data now available make this analysis possible, but it is in the layering and analysis of this data that we can gain real accountability. For instance, campaign finance data has been available online for years in searchable databases, but what if that data were routinely layered with other government datasets to ensure government allocations of resources aren’t being skewed by implicit or explicit promises made through politics and campaigns? However, this isn’t just about technology and data - both of which are available for this task of enhancing accountability and equity. There also must be the mindset and the desire to pursue these lofty goals, as well as the technical talent and the leadership to apply human resources to these goals, translate the results to be actionable by less tech-savvy officials, and maintain the intentionality to follow through.
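The layering idea in the paragraph above can be sketched in a few lines. All of the districts and figures below are invented; the point is only that joining two datasets on a shared geography turns an abstract suspicion into a concrete, checkable question.

```python
# Hypothetical per-district data: campaign contributions received by
# incumbents, and discretionary public-works spending allocated there.
data = {
    "district_a": {"contributions": 250_000, "spending": 9_800_000, "population": 40_000},
    "district_b": {"contributions":  15_000, "spending": 2_100_000, "population": 42_000},
    "district_c": {"contributions":  12_000, "spending": 1_900_000, "population": 41_000},
}

total_spend = sum(d["spending"] for d in data.values())
total_pop = sum(d["population"] for d in data.values())

def spending_skew(district):
    """Ratio of a district's spending share to its population share.
    ~1.0 means spending tracks population; well above 1.0 is a flag."""
    share_spend = data[district]["spending"] / total_spend
    share_pop = data[district]["population"] / total_pop
    return share_spend / share_pop

# district_a draws ~71% of spending with ~33% of population.
flagged = [d for d in data if spending_skew(d) > 1.5]
```

A flag like this is not proof of wrongdoing; it is a starting question that only becomes answerable once the datasets are layered on the same geography.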
This story was originally published by Data-Smart City Solutions. | <urn:uuid:2f0906c7-741c-4a01-b184-31caa8a55a2a> | CC-MAIN-2017-04 | http://www.govtech.com/data/GIS-for-Accountability.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00139-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950989 | 1,071 | 2.609375 | 3 |
Wave Glider: How it works
- By John Breeden II
- Nov 06, 2012
Each Wave Glider consists of a surfboard-like float that contains all the instrumentation needed for scientific experiments, plus the intelligence needed to keep the vessel on course or to maneuver it in a different direction. Two solar panels keep seven lithium-ion batteries charged in sequence, so that the topside instruments can be provided with six amps of continuous, 12-volt power.
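A quick back-of-envelope check on that power budget. The article gives only the voltage and current; the overnight window and per-pack capacity below are assumptions for illustration.

```python
# Continuous load the float supplies to its instruments (from the article).
volts, amps = 12.0, 6.0
load_watts = volts * amps            # 72 W continuous

# Energy the batteries must cover overnight (assume ~12 h without sun).
night_hours = 12
night_wh = load_watts * night_hours  # 864 Wh drawn while the panels are dark

# If each of the 7 lithium-ion packs stores ~150 Wh (an assumption;
# the article gives no capacity), the bank holds enough with margin.
pack_wh = 150
bank_wh = 7 * pack_wh                # 1050 Wh total
```

Under these assumptions the battery bank rides out a sunless night, which is consistent with the article's point that the batteries are charged "in sequence" to keep power continuous.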
But the craft doesn’t need power to move. Seven meters below the surface, a submersible craft is towed beneath the float. As waves lift the craft up, fins on the submarine direct the water behind the vessel, somewhat the way fins on an airplane provide lift. When the Wave Glider comes back down a wave, the fins are pushed to rotate in the opposite direction as the sub sinks. So the robot gets propulsion thrust almost constantly with no outside power needed, other than to move the rudder for steering. In the open ocean with more wave action, a Wave Glider can make about 1.5 knots. Closer to shore it slows to about 1 knot, but can almost always maintain speed, 24 hours per day.
Two payload bays store electronic gear for each mission in dry boxes designed to keep out corrosive sea water. Each box can hold about 25 pounds of gear. Onboard navigation is handled by a simple 8-bit processor that knows about 20 commands, mostly for turning or otherwise navigating the ship remotely. Commands can be sent to a Wave Glider through an Iridium Communications satellite modem. And the vessel keeps track of its position with a GPS receiver, which can track and navigate through up to 255 preprogrammed waypoints for long journeys, self-correcting to keep a true course.
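Waypoint-following of this kind is straightforward to sketch. This is not Liquid Robotics' actual navigation code, just a minimal illustration of the idea: compute the bearing to the current waypoint, and drop waypoints once the vessel gets close enough. The route coordinates are invented.

```python
import math

def distance_km(a, b):
    """Haversine distance between (lat, lon) pairs, in kilometers."""
    (lat1, lon1), (lat2, lon2) = a, b
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def next_waypoint(position, waypoints, reached_radius_km=0.5):
    """Drop waypoints already reached; return the current target, or None."""
    while waypoints and distance_km(position, waypoints[0]) < reached_radius_km:
        waypoints.pop(0)
    return waypoints[0] if waypoints else None

route = [(21.30, -157.85), (21.40, -157.80)]   # up to 255 such waypoints
position = (21.299, -157.851)                  # already within 0.5 km of the first
target = next_waypoint(position, route)        # so the glider advances to the second
heading = bearing(*position, *target)          # rudder steers toward this bearing
```

Repeating this loop on each GPS fix is what "self-correcting to keep a true course" amounts to: any drift from wind or current changes the computed bearing, and the rudder compensates.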
The mission sensors are driven by an ARM processor running an embedded version of Linux. Most of the time, the processor payloads are given their own Iridium Satellite modem so that some people on land can manage the sensors while others can concentrate on driving the boat, and the two systems don’t compete for limited bandwidth.
John Breeden II is a freelance technology writer for GCN. | <urn:uuid:b406e4b6-5f80-4d15-bc02-3ad2fa2cd8d0> | CC-MAIN-2017-04 | https://gcn.com/articles/2012/11/06/wave-glider-how-it-works.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00441-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915835 | 446 | 3.125 | 3 |
The petascale age is here. After years of predicting the scientific advancements they would be able to make with petaflop supercomputers capable of a thousand trillion calculations each second, researchers now have an opportunity to prove their point. Oak Ridge National Laboratory (ORNL) recently unveiled the first petascale system dedicated to scientific research, a Cray XT machine with a theoretical peak performance of 1.64 petaflops.
This behemoth — an upgrade to ORNL’s Jaguar system — comprises more than 45,000 quad-core AMD Opteron processors. It boasts an unprecedented 362 terabytes of memory, which is three times more than any other system, a 10-petabyte file system, 578 terabytes per second memory bandwidth, and input/output bandwidth of 284 gigabytes per second. We talked with Doug Kothe, director of science at ORNL’s National Center for Computational Sciences [NCCS], about the challenges of and potential breakthroughs in science now possible with this built-for-science petascale system.
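The headline figure can be sanity-checked from the part count. The clock rate and flops-per-cycle below are assumptions typical of that Opteron generation, not numbers from the article.

```python
sockets = 45_000                  # quad-core AMD Opterons (from the article)
cores = sockets * 4               # 180,000 cores
clock_ghz = 2.3                   # assumption: typical clock for that generation
flops_per_cycle = 4               # assumption: 128-bit SSE, 2 adds + 2 muls per cycle

peak_gflops = cores * clock_ghz * flops_per_cycle
peak_pflops = peak_gflops / 1e6   # ~1.66 PF, close to the quoted 1.64 PF peak

mem_per_core_gb = 362_000 / cores # 362 TB of memory works out to ~2 GB per core
```

Two gigabytes per core is the figure that mattered to applications of the era: it set how large a local subdomain each MPI rank could hold.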
HPCwire: ORNL’s upgraded Jaguar will be the first petascale supercomputer designed for and dedicated to open scientific research. What are the immediate plans for putting this system to use?
Doug Kothe: The current plan is for the system to be used during much of the coming year for specific high-impact projects of national importance. In addition, it will continue to support the INCITE program [Innovative and Novel Computational Impact on Theory and Experiment program, sponsored by the Office of Advanced Scientific Computing Research (ASCR) in the Department of Energy’s (DOE’s) Office of Science]. This first group is known as Transition to Operations, or T2O, projects. They are tackling science problems — both applied and fundamental — that cannot be solved without Jaguar’s speed, memory, and infrastructure. We have been working closely with ASCR and with members of the computational science community to identify projects that have application software that can effectively utilize a large fraction of the system.
We expect these projects to deliver important results. Since they will be led by the community’s most sophisticated users and prominent scientists, early simulations on Jaguar will also help us harden the system for a broader collection of projects later in the year.
The selection of science problems for early access to the petascale system is by no means finalized. Computational researchers who believe they can fully exploit this system to deliver far-reaching results should contact us via the Web. We have three principal goals during the system’s early phase: deliver important, high-impact science results and advancements; harden the system for production; and embrace a broad user community capable of and prepared for using the system.
HPCwire: Will specific science domains have precedence?
Kothe: We are looking at all research areas that are important to DOE’s mission, from energy assurance to climate-change science to more basic fundamental and applied science. The breadth and depth of critical science potentially solvable on this system are daunting, with domains including fusion, biology, atomic physics, chemistry, nuclear energy, materials and nanoscience, climate and geosciences, astrophysics, high-energy physics, turbulence, and combustion. And this is not an exhaustive list.
HPCwire: Can you give us some idea of the kind of results we can expect?
Kothe: Sure. Looking at climate studies, we hope to be able to say with increased confidence just how good global models will be at predicting regional climate change on the scale of decades. We should also be able to better predict the likelihood of abrupt climate change — change taking place over decades rather than centuries — and the potential for increasingly destructive storms around the world as the climate gets warmer.
As I mentioned before, energy assurance is extremely important to us as a DOE lab, and we will be looking at energy production, storage, and transmission from a variety of angles. We expect to see new insights into the physical properties of biomass that will help us overcome the technological impediments to mass cellulosic bioethanol production. We expect to make significant progress in understanding and controlling the core plasma turbulence that will exist in the ITER fusion reactor. And we expect to dramatically improve our understanding of what happens inside the core of a nuclear fission reactor by removing many of the simplifying assumptions and estimates that had previously been unavoidable in modeling neutron transport.
Other areas being investigated could ultimately affect how we as a society produce and use energy. We will be looking for significant new insights into electrical energy storage involving, for instance, the storage and flow of energy in carbon nanostructured supercapacitor systems. Advances in this area are important both to mobile devices and to the viability of renewable energy resources — such as solar and wind power — that must be stored and transported. We are working to embrace the energy storage community, and we currently have an exciting project committed to going after this challenge on the Jaguar petascale system.
We will also be seeing first-principles studies of strongly correlated materials such as those often found in magnets and superconductors. If we can understand with confidence the effect of disorder on superconducting transition temperatures, we can revolutionize energy transmission, transportation, and a number of other areas. High-temperature superconducting cables, for instance, will be able to carry electricity indefinitely without any loss.
HPCwire: What other areas are being targeted during the early phases of the Jaguar petascale system?
Kothe: There are many other areas. In biology we hope to see the first accurate microscopic structural description of the dynamics of water. This will be indispensable as we move forward to atomic-scale biological simulations. And we will continue to play a major role in computational astrophysics research. For example, simulations of binary black holes and the gravitational radiation they emit will support both current and future projects aimed at detecting gravitational waves. And we will be looking at the first realistic model of the closest supernova in nearly 400 years — SN1987A. These simulations will make quantitative predictions of key observables associated with core-collapse supernovas, including element synthesis.
HPCwire: Does industry fit into your plans?
Kothe: Yes, very much so. The INCITE program has been very successful in attracting companies to perform large-scale simulation science on ASCR systems such as Jaguar. At ORNL, for example, I have worked closely with industry projects involving Boeing and General Motors. What I’ve seen is that these companies bring very talented researchers to the table with very challenging, compelling problems. Their problems are not easily simulated, and for the most part they demand scalable application tools just like DOE and academic projects do. To borrow from the Council on Competitiveness, industry must out-compute to out-compete. I firmly believe that statement is right; hence, our role with U.S. industry is to work with them in delivering science results that help them become more competitive. Given today’s economy, it is imperative that we focus all the more on helping these companies gain a stronger foothold.
HPCwire: The computational science community has been anticipating computers capable of a petaflop or greater for some time. How will the research performed on these systems differ from that done on earlier systems?
Kothe: A decade ago the game for people who wanted to do scientific research aided by computer simulation was simplify, simplify, simplify. We weren’t able to easily solve coupled nonlinear systems, so we would uncouple and linearize them to give us something we could solve. In those days you had to argue that these simplified models described reality, but more often than not they really didn’t at the level needed for predictive accuracy.
In contrast, the mindset today is very different; researchers no longer see the computer as a constraint. Young scientists don’t realize how great they have it. In fact, there is almost nothing out there that we can’t at least think about modeling, if not on current systems then one or two generations down the line.
HPCwire: So is this system going to be too difficult to use for scientists and engineers who have never been engaged in “big computing”?
Kothe: We don’t think so. We have already run at least a half-dozen simulation tools at scale on this system, and it’s still in its infant stage, just seven weeks after the last cabinet arrived. The performance of these applications, measured by raw sustained compute speed and parallel efficiency, is impressive.
This early evidence and our optimism are based on two simple facts. First, Jaguar’s hardware and software environments use the same programming model as before for users and developers. For them there are no drastic changes. The operating system is Linux-based, and the integrated development environment of compilers, debuggers, performance tools, and the like are unchanged. Existing scientific application software doesn’t have to be redesigned, refactored, or rewritten just to execute on the system. In retrospect, the seamlessness of the transition from that perspective was frankly surprising.
Second, Jaguar is a well-balanced system, designed for the targeted science applications and well matched to them. The AMD Opteron processor, for example, is a great chip for science: It is fast, has great memory and intersocket bandwidth, and is easy to program, since it uses the same x86 instruction set we have used for years. Similar examples exist in the interconnect and I/O infrastructure. The total memory on the system is incredible — more than three times any other system.
HPCwire: Why is memory so important?
Kothe: Without sufficient memory, scientists must oversimplify assumptions or run at resolutions so low they miss important characteristics. For example, global climate simulations do not produce hurricanes if the resolution is too low. More memory in systems such as Jaguar means more space for additional information about the simulation model, such as more model equations and more complicated model equations. Generally the ability of a simulation to match reality is directly correlated with that simulation having adequately complex models. And the list goes on. We’ve gone out of our way to ensure that Jaguar adequately addresses application requirements. In fact, we’ve documented our requirements collection process, data, and analysis in a number of recent reports, available here.
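The arithmetic behind the memory argument is simple: in a 3D model, doubling resolution in every direction multiplies memory eight-fold. A sketch with illustrative grid sizes and variable counts (not from the article):

```python
def grid_memory_gb(nx, ny, nz, variables=10, bytes_per_value=8):
    """Memory for one time level of a structured 3D grid, double precision."""
    return nx * ny * nz * variables * bytes_per_value / 1e9

coarse = grid_memory_gb(1000, 1000, 100)   # 8 GB: fits almost anywhere
fine   = grid_memory_gb(2000, 2000, 200)   # halving the cell size in 3D -> 64 GB
```

This is why a climate model that resolves hurricanes needs not a little more memory but multiples of it, and why adding more model equations (more `variables` per cell) compounds the requirement further.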
HPCwire: How have the challenges to using these systems grown?
Kothe: As I said, the programming architecture for the petascale Jaguar is very similar to earlier versions of the system. Current Jaguar users will have to optimize their codes for the new system, but they won’t necessarily have to redesign their software and algorithms.
That having been said, world-class supercomputers have always been a challenge to use, and new systems require far more parallelism from the codes running on them than we’ve ever seen before. I can remember when a simulation on 512 processors was considered massively parallel, but then again I’m not an “early-career” researcher. But now we’re working with systems that have hundreds of thousands of processing cores, and that number is only climbing.
Of course, we realize that not all of our users will be supercomputer experts. That’s why we have a comprehensive, multilevel support system with a proven track record that others are emulating. Each major project at the NCCS has a scientific liaison assigned from our Scientific Computing Group. This is a group of mostly PhD-level computer scientists and domain scientists who are experts at taking important scientific questions and translating them into effective supercomputing applications. These folks also have productive research accomplishments and careers in their own right; in short, they are on top of their game, which makes them especially adept at being useful members of the NCCS project teams.
The challenge as these systems grow is to exploit the memory and processor hierarchy that we’re seeing in current and next-generation computing nodes. For the foreseeable future, what we see in computing nodes is a hybrid architecture. They’ll have two or three different types and levels of memory accessible in different ways. Heterogeneous architectures with floating-point acceleration, like the Los Alamos RoadRunner system, are also likely to stay. The challenge will be to have your application easily know that a particular processor or memory is different from another and respond accordingly.
We’re also going to have to build more robustness and fault tolerance into applications. The more processors you have, the more likely it is that one or more used by your application will go down during the course of a run. Currently, almost all applications need to halt and restart from the last saved state if a node or collection of nodes falls out. We need to program applications so that they are able to keep going.
It won’t be easy. We don’t have more fault tolerance now because it’s hard to program. It’s like having to change a flat tire while the vehicle is still moving.
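The halt-and-restart behavior Kothe describes is the classic checkpoint/restart pattern. Here is a minimal, file-based sketch; real simulation codes checkpoint distributed state to a parallel file system, not a single JSON file.

```python
import json
import os
import tempfile

STATE_FILE = os.path.join(tempfile.gettempdir(), "sim_checkpoint.json")
if os.path.exists(STATE_FILE):
    os.remove(STATE_FILE)  # start the demo from a clean slate

def save_checkpoint(step, state):
    """Write atomically so a crash mid-write cannot corrupt the checkpoint."""
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, STATE_FILE)

def load_checkpoint():
    """Resume from the last saved state, or start fresh."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            ckpt = json.load(f)
        return ckpt["step"], ckpt["state"]
    return 0, {"value": 0.0}

def run(total_steps=100, checkpoint_every=10):
    step, state = load_checkpoint()
    while step < total_steps:
        state["value"] += 1.0          # stand-in for one timestep of physics
        step += 1
        if step % checkpoint_every == 0:
            save_checkpoint(step, state)
    return state

final = run()  # if a node dies, rerunning resumes from the last checkpoint
```

The trade-off Kothe alludes to is visible even here: at most `checkpoint_every` steps of work are lost on a failure, but every checkpoint costs I/O time, and at hundreds of thousands of cores both the failure rate and the checkpoint cost climb together.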
HPCwire: Are you going to be able to handle all that data?
Kothe: This is a major consideration. We believe we’re well prepared for the input, output, processing (analytics, knowledge discovery, and visualization), and transfer of data our scientific applications require. We expect to generate over 5 petabytes of new data just during this early science period, which is purported to be more than double the data embodied within all U.S. academic research libraries. That’s a lot, and it will be created over a period of just several months. Similar requirements are coming, for example, from the climate community in supporting their IPCC AR5 simulations [for the Fifth Assessment Report of the Intergovernmental Panel on Climate Change]. Standing up the I/O infrastructure to accommodate these data requirements is an incredible accomplishment, and we believe we have people with the talent and experience to actually pull this off. Without this data infrastructure, Jaguar and the scientific applications running on it would be effectively useless. Simulation-based science is data-intensive and data-driven.
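Five petabytes in a few months implies a steep sustained data rate. A rough check against the 284 GB/s I/O bandwidth quoted earlier; the four-month window is an assumption standing in for "several months."

```python
new_data_pb = 5
months = 4                                   # assumption for "several months"
seconds = months * 30 * 24 * 3600            # ~10.4 million seconds

avg_gb_per_s = new_data_pb * 1e6 / seconds   # ~0.48 GB/s sustained on average

io_peak_gb_per_s = 284
headroom = io_peak_gb_per_s / avg_gb_per_s   # peak is roughly 590x the average
```

The large headroom is not waste: simulation I/O arrives in bursts (a whole checkpoint or output dump at once), so the file system must absorb short writes at rates far above the long-run average without stalling the compute nodes.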
HPCwire: What do you see for computational science in the longer-term future?
Kothe: I think we’re going to see large-scale computer simulation in areas that may seem strange today. We’ll see simulations of human behavior and social networks. We’ll see more sophisticated and more valuable simulations of biological systems; so instead of a chain of molecules, we’ll be able to simulate full cells, organs, and even individuals. We’ll see systems of systems; for example, instead of one nuclear reactor, we’ll see an entire nuclear fuel cycle. We’ll see first-principles-based simulations at larger and larger length scales and over longer and longer time. We’ll see such rapid turnaround on simulations that complex nonlinear optimizations will become commonplace. We’ll see materials and chemical catalysts by design. We’ll better understand the complex biogeochemical cycles that underpin global ecosystems and control the sustainability of life on Earth. We’ll see the deciphering and comprehending of the core laws governing the universe. Potentially, this will all happen in our lifetimes. | <urn:uuid:6d6ebe27-fa32-4071-a17a-f3e6343bd1ea> | CC-MAIN-2017-04 | https://www.hpcwire.com/2008/11/19/oak_ridge_dives_into_science_at_the_petascale/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00377-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94085 | 3,172 | 3.078125 | 3 |
As seen from the previous blog entries we have received second-hand reports of Cabir being spotted in the Philippines.
So we decided to go into a high-security, RF-shielded area and conduct an extensive study of how Cabir replicates. What we found is interesting and changes predictions about how Cabir would spread if it were in the wild.
The Cabir worm operates fully independently of the GSM side of phones based on Symbian Series 60. The worm actually starts spreading as soon as the phone is powered on, even before the user has entered a PIN code.
However, the Cabir worm is capable of sending infected SIS files to only one phone per activation. So when Cabir is installed for the first time, or the phone is restarted, the worm will look for the first Bluetooth device it can find and keep sending repeated messages to it, effectively locking onto that phone.
When Cabir infects another Series 60 phone, the newly infected phone will start sending messages back to the phone that sent it the SIS file, even when that phone is no longer in range. The two phones thus form a "tar pit": neither will look for new targets before it is rebooted.
This means that the only scenario in which Cabir can spread is when the phone that sent the infected SIS file moves out of Bluetooth range before the user of the new target activates Cabir (answers "Yes" to the installation query). This would happen, for example, on a busy street where people walk past and are out of range before the user of the phone that received Cabir activates it.
Cabir will also try to replicate to a new host every time the phone is rebooted. So SymbOS/Cabir is capable of spreading - but not very quickly.
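The lock-on and tar-pit dynamics described above can be captured in a toy simulation; every parameter here is invented for illustration, and this models the spread mechanics only, nothing about the worm's code.

```python
import random

def simulate(phones=50, hours=48, reboot_chance=0.05, escape_chance=0.1, seed=1):
    """Toy model of Cabir-style spread among a fixed pool of phones."""
    random.seed(seed)
    infected = {0}   # phone 0 carries the worm at the start
    locked = {}      # infector -> the single target it has locked onto
    for _ in range(hours):
        for p in list(infected):
            if p not in locked:
                target = random.randrange(phones)
                if target == p:
                    continue
                locked[p] = target
                # Infection sticks only if the infector leaves Bluetooth
                # range before the victim activates the worm...
                if random.random() < escape_chance:
                    infected.add(target)
                # ...otherwise the pair sits in a mutual "tar pit".
            if random.random() < reboot_chance:
                locked.pop(p, None)   # a reboot releases the lock
    return len(infected)

no_escape = simulate(escape_chance=0.0)  # nobody ever leaves range: no spread
```

With the escape probability at zero every infected phone stays permanently tar-pitted and the infection never grows, which matches the article's conclusion that Cabir spreads only when phones move apart at the right moment - capable of spreading, but slowly.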
Cabir can infect only phones that are in discoverable mode, so setting your phone into hidden mode in Bluetooth settings will protect you from Cabir worm. | <urn:uuid:b1920c0b-1d1c-47ec-aae0-eb3fd9bb41b3> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00000273.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00341-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953315 | 395 | 2.59375 | 3 |
While education hasn’t gotten much air time in the 2016 election cycle, one common refrain from both parties is that all children deserve a high-quality education. The Data Quality Campaign (DQC), a nonprofit policy and advocacy organization designed to bring the education community together to empower educators, families, and policymakers, shares the goal of a high-quality education for every student.
To make that dream a reality, the DQC released District Actions to Make Data Work for Students, a report highlighting what policymakers can do to help students excel inside and outside of the classroom. The recommendations expand on the DQC’s four policy priorities from another recent report, Time to Act: Making Data Work for Students, which shows how districts can utilize the four priorities to improve student achievement.
The DQC believes that when administration, teachers, parents, and students have access to accurate, real-time information to make decisions, students excel. In its Time to Act report, the DQC laid out four key policy priorities to make data work for students:
- Measure what matters: Be clear about what students must achieve and have the data to ensure that all students are on track to succeed.
- Make data use possible: Provide teachers and leaders the flexibility, training, and support they need to answer their questions and take action.
- Be transparent and earn trust: Ensure that every community understands how its schools and students are doing, why data is valuable, and how it is protected and used.
- Guarantee access and protect privacy: Provide teachers and parents timely information on their students and make sure it is kept safe.
By adhering to the policy priorities, the DQC believes that students will be able to know that they are on track for success, or be able to identify if they have fallen off track and know how to rectify the situation. Similarly, parents will be able to hold the school accountable for meeting the student’s needs, and identify weak spots in need of additional enrichment activities. In the classroom, teachers will have a complete picture of each student’s progress. Plus, with increased analytics, teachers will be able to tailor lesson plans on an individual student level. School administration can use the data to provide better coaching and professional development for its staff and use the information to plan future investments and changes to the curriculum.
In its District Actions to Make Data Work for Students report, DQC lays out five principles that should guide a school district’s action to achieve the policy priorities:
- Students are central. Data must be used to support student learning and to ensure that each student’s individual needs are met.
- Data systems are not enough. States must shift their focus from building systems to empowering people.
- Data needs to be tailored to the user. All stakeholders in education require quality information, but the type and grain size of the data they need depend on the needs of the individual.
- Data is used for different purposes, including transparency, continuous improvement, and accountability. Not all data collected needs to be used for all three of these purposes.
- Stakeholder engagement is critical. People who need the data–including teachers, principals, and parents–must be involved in the creation of policies for access and use.
When data is used properly in schools, it can help everyone do their job better. The DQC believes that by focusing on students and improving analytic capabilities, school districts enable success at all levels. To learn more about the DQC’s recommendations for school districts, view its infographic. | <urn:uuid:af7229ce-af72-4eab-9397-b6621d90806d> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/campaign-pushes-data-to-improve-education/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00065-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934419 | 733 | 2.84375 | 3 |
Surely the viruses and Trojans of today are much smarter and, in most cases, much more devious than those of 20 years ago. More importantly, today's worms and viruses are mostly focused on criminal pursuits and theft, so they threaten an organization's reputation, consumer trust and viability in the marketplace.
But, based on research, it's quite fascinating to see how these destructive pieces of code have evolved into the threats we know, and fear, today.
Beginning in 1986, the first generation of malicious code consisted of DOS viruses, which infected the operating system and programs of a PC.
You might remember Brain, Lehigh and Form, which infected floppy disks and hard drives, spreading via "sneaker net" (hand-carried disks between non-networked computers). As boot viruses matured, they were able to infect the boot sector of data disks, spreading slowly; it took several years for their infection numbers to peak. Boot viruses soon evolved to infect widely used program files, such as WordPerfect.
Between 1986 and 1995, virus writers were more focused on obfuscation, with viruses becoming polymorphic (encrypted so as to require new virus-scanner strings or better algorithms); hardened to avoid being destroyed by anti-virus solutions; stealth-like in their movement; and bipartite, spreading by both boot and file means.
By the end of this first generation, more than 12,000 unique DOS viruses had been written, with about 150 accounting for 95% of infections among PCs all over the world.
The Second Generation: Macro Viruses (1995 - 2000)
The first DOS virus generation ended with the advent of Windows 95 in 1995, and its stricter requirements for application code and segregation of code that ran at boot time.
Virus writers were not able to write Win32 assembler code, so they turned their attention to the macro language in the widely used Microsoft Office applications, and Word documents themselves began spreading the viruses. This evolution of code led to the second generation of malicious attacks via macro viruses.
Between 1995 and 2000, thousands upon thousands of macro viruses were written. However, fewer than 100 unique viruses actually infected PCs and systems.
The most notorious virus was Concept, which appeared in July 1995 and took nine months to reach peak infection, a growth rate three to four times faster than that of the most prolific DOS viruses at the time. But due to several layers of protection built into Microsoft Office applications and the presence of reliable heuristics in almost all anti-virus programs, the macro virus generation was cut relatively short.
The Third Generation: Big Impact Worms (1999 - 2005)
The introduction of high-impact, high-profile mass-mailer worms marked the beginning of the third generation of malicious code: Melissa (1999), I Love You (2000), Anna Kournikova (2001), SoBig (2003) and Mydoom (2004).
The highly prolific network worms, such as Code Red (2001), SQL Slammer (2003), Blaster (2003) and Sasser (2004), are also indicative of this generation.
This third generation of worms is responsible for much of the destruction that has paralyzed organizations recently. Each caused major or moderate impact to 20 to 60 percent of corporations. The average third-generation worm doubled its number of victims every one to two hours, rapidly reaching peak activity within 12 to 18 hours of being born. SQL Slammer, by far the fastest-spreading worm to date, infected a full 90% of everything it was ever going to infect in just ten minutes.
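As a rough sanity check on those doubling figures (my arithmetic, not the article's), a constant doubling time T implies exponential growth:

```latex
N(t) = N_0 \cdot 2^{t/T}
\qquad\Longrightarrow\qquad
\frac{N(12\,\mathrm{h})}{N_0} =
\begin{cases}
2^{12} \approx 4096 & T = 1\ \mathrm{h} \\
2^{6} = 64          & T = 2\ \mathrm{h}
\end{cases}
```

A population doubling every hour grows by a factor of about four thousand in 12 hours, which is consistent with the 12-to-18-hour peaks described above.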
Mass-mailer worms work almost exclusively through social engineering, tricking the user into double-clicking on an attachment. Thankfully, many organizations now block the three primary attachment types (EXE, PIF and SCR), which has proven successful at blocking repeat occurrences of these third-generation attacks.
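A gateway block list of the kind described above can be as simple as an extension check. The sketch below is my own illustration (only the three extensions come from the article), not any particular product's filter:

```python
# Block list for the three primary attachment types named above.
BLOCKED_EXTENSIONS = (".exe", ".pif", ".scr")

def is_blocked(filename: str) -> bool:
    """Return True if an attachment name ends in a blocked extension.

    Trailing dots and spaces are stripped first, since attackers sometimes
    append them to dodge naive string comparisons.
    """
    name = filename.lower().rstrip(". ")
    return name.endswith(BLOCKED_EXTENSIONS)

print(is_blocked("invoice.pdf.scr"))  # True  (double extension still caught)
print(is_blocked("report.pdf"))       # False
```

Real gateways also inspect file content (magic bytes), since renaming an EXE to .txt defeats a name-only check.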
Many companies have also implemented standard configurations, mini-hardening, default-deny ingress and egress rules on routers, network segmentation, and policies and education programs. With such broad, holistic education, standards, and other protections in place, many of the big-impact worms' attempts to destroy a PC or network have been thwarted.
The Fourth Generation: Malcode for Profit (2004 to present)
The last three generations of malicious code authors wrote and distributed malicious code primarily to receive praise from peers and to gain notoriety. However, as we've entered the fourth generation, it has become clear that code authors are not looking for bragging rights, but rather cash, and lots of it.
Malicious code authors have found a variety of ways to make a profit, ranging from click-ad revenue to the direct heist of monetary vehicles such as credit card numbers, blackmail, and the resale of malicious code resources by technical masters to criminals.
The threat of identity fraud and information theft has become increasingly real over the last two years, with major security breaches at CardSystems, DSW and ChoicePoint, among more than 100 others.
This generation is in many ways increasingly insidious, with its criminal code authors working to stay under the radar. Bot-herds driving millions of zombie (infected) computers to perform numerous different malicious tasks have become the norm. For example, more than 300 different variants of just the Mytob virus were released during 2005, each trying not for massive infection, but instead to gain an incremental one or two percent of victims.
More than half of file attachments are in .ZIP files, including encrypted .ZIP files, which are much harder to inspect at our borders. Once infected, these machines are used for almost all types of secondary attacks: phishing, pharming, further distribution of malcode, launching exploits, scanning for vulnerable computers, sending spam, proxying other attacks, sales of technology and services to organized crime, and more.
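Why are encrypted .ZIP attachments "much harder to inspect"? A scanner can still read the archive's directory, but any member with the encryption flag set cannot be opened for content scanning without the password. A minimal standard-library sketch (my illustration, not a product feature):

```python
import io
import zipfile

def encrypted_members(zip_bytes: bytes) -> list[str]:
    """Names of archive members whose general-purpose 'encrypted' bit is set.

    A mail gateway can list these names, but cannot scan their contents.
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [i.filename for i in zf.infolist() if i.flag_bits & 0x1]

# Demo with an unencrypted archive built in memory:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("readme.txt", "hello")
print(encrypted_members(buf.getvalue()))  # [] -- nothing encrypted here
```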
The last year has seen many consumers harmed by phishing, in which constantly evolving messages are used to trick consumers into giving up login credentials, typically via a fraudulent website or email. While recent efforts to warn companies and consumers about the threat of information theft are commendable, hackers and authors grow smarter and more sophisticated with each attack.
Over the last twenty years, worms have used all types of replication vectors, which of course increase with each advance in technology. Authors have worked diligently to have their worms and Trojans avoid detection and reach more victims with every iteration. For instance, during this fourth generation, we've witnessed backdoors, Trojans and rootkits that enable the free reuse of the infected computer, and bots that create zombies out of a network of computers, allowing the malcode perpetrator to orchestrate responses among tens of thousands, or even millions, of victims at a time.
With each generation of malware growing more complex and devastating, it's become increasingly important for CIOs to know not only who is on their network, but who is accessing their network.
While there isn't an end-all-be-all solution to wiping malicious code authors off the face of the Earth, having the best security policies and procedures in place will help enterprises avoid a crippling network attack that not only puts information at risk, but impedes productivity and ultimately damages the bottom line.
To do this, CIOs and CSOs must work together to achieve a security strategy that aligns with the organization's business goals to best protect the network from today's threats, and proactively tackle the threats of tomorrow.
Peter Tippett is CTO of security vendor Cybertrust and chief scientist for ICSA Labs, a division of Cybertrust. He specializes in the utilization of large-scale risk models and research to create pragmatic, corporate-wide security programs.
While Wikipedia's parent organization is taking great pains to emphasize the small sample size used in research it sponsored to spur others into more thoroughly assessing the online encyclopedia's quality and accuracy, there's no hiding its overall view of the results: We aced this test.
The Wikimedia Foundation last November enlisted the e-learning company Epic and researchers from Oxford University to conduct what would be the first organized look at Wikipedia's accuracy since a 2005 report by Nature that showed Wikipedia averaging four mistakes per article, compared with three for Encyclopaedia Britannica. Companies and organizations sponsor self-serving research all the time, but rarely are they so open with the methodology and underlying data used, or so enthusiastic about encouraging more thorough reviews.
From a Wikimedia Foundation blog post:
The small size of the sample does not allow us to generalize the results to Wikipedia as a whole. However, as a pilot primarily focused on methodology, the study offers new insights into the design of a protocol for expert assessment of encyclopedic contents. ...
The results suggest that Wikipedia articles in this sample scored higher altogether in each of the three languages, and fared particularly well in categories of accuracy and references. As the report notes, the English Wikipedia fared well in this sample against Encyclopaedia Britannica in terms of accuracy, references and overall judgment, with little difference between the two on style and overall quality score. Similar results were found when comparing Wikipedia articles in Spanish to Enciclonet. In Arabic, Mawsoah and Arab Encyclopaedia articles scored higher on style than Wikipedia, but no significant differences were found on accuracy, references, overall judgment and overall quality score. None of the encyclopedias considered in this study were rated highly by the academics in terms of suitability for citation in academic publications.
The e-learning company Epic was equally thorough in describing the limitations of the research, but states in its own press release:
Nonetheless, Wikipedia articles in general emerge commendably in a number of respects, and it was possible to identify a pattern of qualities: Wikipedia articles were generally seen as being more up to date than other articles and were generally considered to be better referenced. Furthermore, they appeared to be at least as strong as other sources in terms of comprehensiveness, lack of bias and even readability.
"We're very encouraged by the results for this small sample of Wikipedia articles," said Dario Taraborelli, Senior Research Analyst at the Wikimedia Foundation. "It affirms the quality of the editing community's collaborative work and it provides valuable methodological insight for future studies."
Encyclopaedia Britannica did not agree with the findings of that 2005 Nature report. I have contacted its press office to see if it will have anything to say about this one.
(Update: A Britannica spokesman tells me they received no advance notice or copy of the research. I assume they'll have something to say once someone there has had a chance to look at it.)
- Purpose and Scope
- Deductive Reasoning
- A Priori
- A Posteriori
- Deontological Ethics
- Categorical Imperative
I am not a philosopher. I enjoy philosophy, and I frequently wish to know more about it, so this primer will help me do that by giving me immediate and distilled access to key concepts. But this resource will be a blunt tool. If you ever hear any tinge of authority in its words, it will be due to poor writing on my part. Once again, I am not a philosopher. Really.
I intend to be extraordinarily broad and practical here (a euphemism for simplistic and/or sloppy). The alternative is never adding content. So if you are a professional (or even just serious) philosopher, I apologize in advance for the violence committed below and ask for your kind assistance in making what I have less horrible.
Hopefully that’s ample throat-clearing. Let’s proceed.
I’m not sure how many I’ll include here, but if you notice any serious omissions, please let me know.
Metaphysics deals with the nature of reality, such as existence, time, mind and body, objects, and their properties and relationships.
Epistemology deals with the nature, sources, and limits of knowledge.
Teleology is the idea that nature has a purpose, or, more precisely, that nature tends towards outcomes and goals the way human actions often have a purpose. Put another way, it is the idea that nature has a design.
Logic is the study of the principles of correct reasoning.
Deductive reasoning is a type of argument in which, given certain statements called premises, other statements, called conclusions, are necessarily true if the premises are true.
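To make that notion concrete (my illustration, not the author's): the simplest valid deductive pattern, modus ponens, is so mechanical that it can be machine-checked in a proof assistant such as Lean:

```lean
-- Modus ponens: from "P implies Q" and "P", the conclusion "Q" follows necessarily.
example (P Q : Prop) (h : P → Q) (hp : P) : Q := h hp
```

The proof is just applying the implication to the premise; if the premises held, the conclusion could not fail to hold.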
A priori knowledge is knowledge that can be attained prior to experience or evidence.
A posteriori knowledge is knowledge that is dependent on experience or evidence.
Ontology is the study of the nature of being, existence, etc., with a focus on categories of being and their relations. It is traditionally part of metaphysics.
Deontological ethics are ethics based on following rules, e.g. duties, obligations, etc.
The Categorical Imperative is Kant's universal rule that you should only act in a way that would be good if everyone did it.
The belief that being good comes from selecting and adhering to good maxims, such as the Categorical Imperative.
Rationalism is the part of epistemology that regards reason as the primary source and test of knowledge in the world.
Empiricism is the part of epistemology that, in contrast with rationalism, regards experience as the primary source of knowledge in the world.
Positivism is the rejection of theology and metaphysics as flawed ways of learning about the world, and the belief that the superior method is empirical verification of the natural properties of the world.
Pragmatism is a way of evaluating the truth of claims based on how their implementation works in reality, e.g. communism not being valid because it didn't work in the USSR.
To be a Hobbesian is to believe that people must submit to a guiding central authority in order to maintain order, as people left to their own will regress into a natural state of war, dominance, and suffering.
Existentialism is the idea that 1) we make our own fates through our actions, and 2) we do so within a universe that is devoid of its own intrinsic meaning. In short, you make your own meaning, if any, through your own actions that you are responsible for.
Here I’ll succinctly cover a number of key philosophers using the following format: lifespan, major concepts, major works, and notes/analysis. Each entry will be extremely short, with only 1-5 sentences in any section.
Friedrich Nietzsche (1844 - 1900)
Major Concepts:
- Rejected Christian morality and structure.
- Believed in a better human potential, called the Übermensch, that could express itself creatively and artistically rather than suppressing itself.
- Believed strongly in the human desires of achievement, ambition, and striving, which he called “The Will to Power”.
Major Works:
- The Birth of Tragedy
- On Truth and Lies in a Nonmoral Sense
- Untimely Meditations
- Human, All Too Human
- The Gay Science
- Also Sprach Zarathustra
- Beyond Good and Evil
- On the Genealogy of Morality
- The Case of Wagner
- Twilight of the Idols
- The Anti-Christ
- Ecce Homo
- Nietzsche Contra Wagner
- The Will to Power (Post Mortem Collection)
Notes and Analysis:
- Died fairly early after an extreme mental and physical breakdown.
- Was highly sexist, likely due to a lack of success with women.
- Used an aphoristic style in much of his writing.
- Got his push into philosophy from Schopenhauer.
Military Drones Present And Future: Visual Tour

The Pentagon's growing fleet of unmanned aerial vehicles ranges from hand-launched machines to the Air Force's experimental X-37B space plane.
The Navy awarded Boeing several multimillion-dollar contracts in 2005 to supply the ScanEagle for use in the Persian Gulf. The Marines used the drones in Iraq to compile real-time images of the battlefield. They can be used individually or in groups to "loiter" over trouble spots and provide intelligence, surveillance and reconnaissance. Equipped with an infrared camera, the ScanEagle is capable of flying at an altitude of 16,000 feet.
Image credit: Boeing
Cambridge, UK, January 19, 2001 - Kaspersky Lab, an international data-security software-development company, reports the discovery of a new Internet-worm that attacks computers with the Red Hat Linux operating system installed.
As was emphasized in the latest virus advisory regarding the "Davinia" worm, dated January 16th, one of the modern trends in malicious code development is virus writers exploiting known breaches in the security systems of different platforms and applications. The recently detected "Ramen" worm is yet another confirmation of this trend; however, this time the victim is the Linux operating system, which is considered to be one of the most protected platforms available today.
To penetrate computers with Red Hat Linux 6.2 or 7.0 installed, "Ramen" exploits three security breaches named "in.ftpd", "rpc.statd" and "LPRng", which were detected and closed in June-September 2000. All of these breaches are from the "Buffer Overflow" category, and allow a malefactor to send executable code to a remote system and run it without the user's permission. The way the worm works is rather sophisticated: firstly, a target computer receives data that overflows the system's internal buffer so that the worm's code obtains root privileges and starts the command processor, which executes the worm's instructions. Then "Ramen" creates the "/usr/src/.poop" folder, launches the "lynx" Internet browser, and downloads the worm's archive "RAMEN.TGZ" there from a remote computer. After this, "Ramen" opens the archive and executes its main file, "START.SH". The worm has no additional payload except for changing the content of "INDEX.HTML" files found on the system; when the affected pages are opened, they display the worm's defacement message.
"It is important to emphasize that the breaches exploited by the 'Ramen' worm are also found in other Linux distributions, such as Caldera OpenLinux, Conectiva Linux, Debian Linux, HP-UX, Slackware Linux and others. This particular worm is triggered to activate only on systems running Red Hat Linux. However, it is possible that we shall see future modifications of 'Ramen' that will successfully operate on other Linux platforms," said Denis Zenkin, Head of Corporate Communications for Kaspersky Lab. "Therefore, we recommend immediately installing patches for these breaches regardless of the Linux distribution you use."
More details about the "Ramen" Internet-worm can be found in Kaspersky's Virus Encyclopedia (www.viruslist.com).
Although Kaspersky Lab has received no reports of this worm to be found "in-the-wild" to date, we recommend users download the daily update for the Kaspersky Anti-Virus (AVP) database containing protection against the "Ramen" worm.
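Since the worm leaves a predictable artifact, the "/usr/src/.poop" working directory described above, an administrator could script a crude indicator check alongside patching. This is my own illustrative sketch, not a Kaspersky tool:

```python
from pathlib import Path

# Filesystem artifact named in the advisory above.
INDICATOR_PATHS = ["/usr/src/.poop"]

def ramen_indicators() -> list:
    """Return any known Ramen artifacts present on this host."""
    return [p for p in INDICATOR_PATHS if Path(p).exists()]

hits = ramen_indicators()
print("possible Ramen infection:" if hits else "no known artifacts found", hits)
```

An absence of indicators is not proof of a clean system; only patching the three underlying holes removes the exposure.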
Kaspersky Anti-Virus can be purchased in the Kaspersky Lab online store.
A group of researchers from Carnegie Mellon University’s School of Computer Science believe they might have solved the problem of choosing and, above all, remembering complex and diverse passwords that are simultaneously difficult to crack by attackers.
The researchers have developed Shared Cues, a password management scheme that takes advantage of human memory’s tendency towards association and introduces mnemonic techniques to trigger it.
Shared Cues uses a generator and mandates a rehearsal schedule dependent on different types of internet users. “The key idea is to strategically share cues to make sure that each cue is rehearsed frequently while preserving strong security goals,” they shared.
The cues are used to create random person-action-object (PAO) stories that will form the basis for the password.
For example, in the image displayed above the story can be “Bill Gates swallows a bike”. But, depending on the creativity of the user, it can be another unexpected “version” of the story.
The public clues (delivered via an app) allow users to remember the story and the chosen password combination. An initial rehearsal schedule has to be implemented, and will depend on the memory capabilities of the user and how often he or she uses a particular password.
But the best thing is that even if an attacker knows the clues, chances are good that it would take him forever to guess the right “story” and the way in which it was used to create the password.
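To make the scheme concrete, here is a toy sketch of its two halves: random story selection (done by the system) and password derivation (done, in the real scheme, in the user's head). The word lists and the "first three letters" rule are my inventions for illustration; the paper's actual cue sets and security analysis are far more careful:

```python
import secrets

# Hypothetical stand-ins for the scheme's person/action/object cue sets.
PEOPLE  = ["Bill Gates", "Frida Kahlo", "Elvis Presley", "Marie Curie"]
ACTIONS = ["swallows", "juggles", "paints", "bribes"]
OBJECTS = ["a bike", "a cactus", "an anvil", "a trombone"]

def random_pao_story():
    """The system's job: pick a random person-action-object story to memorize."""
    return (secrets.choice(PEOPLE), secrets.choice(ACTIONS), secrets.choice(OBJECTS))

def derive_password(story):
    """The user's job: turn the memorized story into a password.

    Toy rule: drop articles, keep the first three letters of each word.
    """
    words = [w for w in " ".join(story).split() if w not in ("a", "an", "the")]
    return "".join(w[:3].capitalize() for w in words)

print(derive_password(("Bill Gates", "swallows", "a bike")))  # BilGatSwaBik
```

The point of the design is that only the cues are stored anywhere; the story, and therefore the password, lives in the user's memory.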
According to Jeremiah Blocki, a Ph.D. student in Carnegie Mellon's Computer Science Department and one of the authors of the research, users could use as few as nine "stories" to create complex passwords for over 100 accounts, but he personally uses 43 to improve his password security.
"The most annoying thing about using the system isn't remembering the stories, but the password restrictions of some sites," Blocki pointed out, referring to the often required use of numbers, figures or capital letters in passwords.
"In those cases, I just make a note to, for instance, add a '1' to the password," he says. Writing down things like this would usually affect the security of a password, but in this case it can't, as the basis (the story) is not written down, and is still difficult to guess.
One of the most high-profile threats to information integrity is the computer virus. Surprisingly, PC viruses have been around for two-thirds of the IBM PC’s lifetime, appearing in 1986. With global computing on the rise, computer viruses have had more visibility in the past two years.
Note that computer viruses are also found on Macintoshes and other platforms, but in this book, we will focus on PC viruses.
The topics we will cover are:
– what a virus is
– the evolution of the virus problem
– viruses on different operating systems
– solutions to the virus problem
– ow Norman Virus Control products help
Download the paper in PDF format here. | <urn:uuid:64263483-e258-4a16-b5ac-7f69b32f0ef3> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2002/10/23/the-norman-book-on-computer-viruses/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00232-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938931 | 141 | 2.828125 | 3 |
Security fears are driving more than half of all internet users to routinely delete cookies on their computer, which is making it difficult for legitimate websites to monitor visitor behaviour.
JupiterResearch carried out research among thousands of website consumers and found that 58% have deleted cookies from their machine and 39% are deleting all cookies from their PCs on a monthly basis.
Cookies are small files used by commercial sites to track the behaviour of visitors, to enable them to offer what they feel are the most appropriate products or services the next time a visitor logs onto the site. They are also used to recognise registered website users.
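Mechanically, a cookie is just a name-value pair that a site asks the browser to store and send back on later visits, carried in HTTP headers. A small standard-library sketch of the server side, with illustrative values of my own:

```python
from http.cookies import SimpleCookie

# Build the Set-Cookie header a site would send to tag a visitor.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # remember for one year

print(cookie.output())
# Set-Cookie: visitor_id=abc123; Max-Age=31536000
```

Deleting cookies, as the survey's respondents do, simply removes these stored pairs, so the next visit looks to the site like a first visit.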
But privacy and security concerns are encouraging consumers to disable cookies on their browsers or delete them after they have been downloaded, said JupiterResearch.
"Given the number of sites and applications that depend heavily on cookies for accuracy and functionality, the lack of this data represents significant risk for many companies," said Eric Peterson, analyst at JupiterResearch.
"Because personalisation, tracking and targeting solutions require cookies to identify web visitors over multiple sessions, the accuracy of these solutions has become highly suspect, especially over longer periods of time," he added.
Whether a computer-powered chatbot recently passed the Turing Test for artificial intelligence is debatable, but there's little doubt that the growing sophistication of the conversation programs could one day make them a threat to corporate security, an expert says.
The better chatbots get at imitating human conversation through text or audio conversations, the more useful they will become to criminals looking to bilk victims of their savings or steal the credentials of a well-placed employee of an organization, said Kyle Adams, chief software architect for Juniper Networks' intrusion prevention system WebApp Secure.
"Everyday someone is making an improvement to these things and they are getting better," Adams said Thursday.
The closer chatbots get to convincingly imitate the average middle-class American in casual conversation, the greater threat they will become to U.S. companies and ordinary people.
Some of the techniques criminals use today to snare victims, such as phishing and social engineering, could become much more effective through the use of the programs, Adams said.
For example, scammers could use them to strike up an automated conversation with an office worker via email to build trust before the criminals send a message with a malicious link.
"They could dramatically increase the number of people they get to actually click the link at the end of that trail," Adams said.
Chatbots could also be useful to spammers sending the familiar email that purports to come from a wealthy foreigner seeking assistance to move millions of dollars from his homeland.
The swindlers often have to correspond with respondents through several emails before they find the super gullible willing to send dollars for a share of the fake money transfer. A specially designed chatbot could help in vetting respondents.
"The phishers could cast a much wider net and narrow that down to a very small list of good targets within a couple days with very little effort," Adams said.
Other nefarious techniques made more efficient could include convincing support staff at a company to divulge an employee's credentials for a service or corporate network.
The problem the attacker faces with this type of telephone-based scam is having the call traced, Adams said. With a chatbot, the call could be made from a device left anywhere, such as a coffee shop, with the conversation sent to a remote server over the public Wi-Fi.
For the chatbot to be effective, it would have to be programmed with knowledge of the person it is pretending to be. Such information is typically gathered today on social networks and other online communities and sources.
Finally, extortionists could use chatbots in a denial of service attack against a call center. Imagine, a chatbot that's good enough to keep a customer rep on the phone for just a few minutes. If enough, bogus calls are made, they could prevent legitimate customers from reaching service reps.
"I actually think this could probably be one of the most devastating uses of chatbots," Adams said.
None of these scenarios would be possible using today's programs, but the technology is advancing.
In the two-day Turing Test last week, a program dubbed Eugene Goostman posed as a 13-year-old Ukrainian and supposedly tricked a third of the human testers into believing it was human.
The test at the Royal Society in London drew lots of critics who challenged the way it was conducted. Nevertheless, despite the controversy, chatbots are improving and they are likely to prove useful to the good guys and the bad guys.
Share any ideas you might have on how criminals could use chatbots in the comment section below.
Library of Congress aids geospatial data preservation
Columbia University takes part in project
- By Kathleen Hickey
- Jun 01, 2010
The Library of Congress and Columbia University are creating a Web-based information hub to provide best practices, tools, methods and services to assist organizations in preserving geospatial data.
Geospatial data, which includes maps and satellite images, identifies the geographic location and characteristics of natural or constructed features and boundaries on the earth. The data is important for responding to disasters, urban planning, navigation, protecting the environment and a host of other uses.
However, much of this information is in danger of being lost, because of evolving technology and other threats.
“The geospatial community has told us that a clearinghouse to communicate preservation best practices is essential for keeping these information resources available around the nation,” said Laura Campbell, associate librarian for strategic initiatives at the Library of Congress.
The Library’s National Digital Information Infrastructure and Preservation Program will fund development of the clearinghouse. Columbia’s Earth Institute will house it, at its Center for International Earth Science Information Network. CIESIN will launch a beta version of the clearinghouse later this year.
“These electronic resources are essential to research, education, and sustainable development and only grow more valuable over time,” said Robert Chen, director of CIESIN.
Kathleen Hickey is a freelance writer for GCN. | <urn:uuid:21f1a98a-2cd4-4d64-9c7c-4ed092ee71b0> | CC-MAIN-2017-04 | https://gcn.com/articles/2010/06/01/library-of-congress-columbia-geospatial.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00350-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900501 | 318 | 3.109375 | 3 |
By Art Reisman
Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.
Article Updated March 2012
As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.
Exactly what is deep packet inspection?
All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.
The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.
When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).
Products that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used to describe techniques that examine Internet data are packet shapers, layer-7 traffic shaping, etc.
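The envelope/payload split described above can be seen directly in the bytes of an IP packet. The Python sketch below uses a hand-built packet purely for illustration; it shows the difference between shallow inspection, which reads only the addressing header (all a router needs), and deep inspection, which also reads the payload.

```python
import socket
import struct

def parse_ipv4(packet: bytes):
    """Split a raw IPv4 packet into its 'envelope' (addresses) and 'payload'.

    A shallow inspector reads only the header fields; a deep packet
    inspector also examines the payload bytes that follow the header.
    """
    version_ihl = packet[0]
    ihl = (version_ihl & 0x0F) * 4          # header length in bytes
    proto = packet[9]                        # 6 = TCP, 17 = UDP
    src = socket.inet_ntoa(packet[12:16])    # the "address on the outside"
    dst = socket.inet_ntoa(packet[16:20])
    payload = packet[ihl:]                   # the "freight" a DPI device reads
    return src, dst, proto, payload

# A hand-built 20-byte IPv4 header followed by an application payload.
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20 + 11, 0, 0, 64, 6, 0,
    socket.inet_aton("10.0.0.1"), socket.inet_aton("192.0.2.7"),
)
pkt = header + b"hello world"

src, dst, proto, payload = parse_ipv4(pkt)
print(src, "->", dst, "proto", proto)   # routing needs only this
print("payload:", payload)              # DPI reads this too
```

Real DPI appliances of course go much further, reassembling TCP streams and matching application signatures, but the basic asymmetry is the same: the addresses are needed to deliver the packet, while reading the payload is a choice.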
How is deep packet inspection related to net neutrality?
Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.
Why do some Internet providers use deep packet inspection devices?
There are several reasons:
1) Targeted advertising — If a provider knows what you are reading, they can display content advertising on the pages they control, such as your login screen or e-mail account.
2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem less desirable, such as BitTorrent and other forms of peer-to-peer. BitTorrent traffic can overwhelm a network with volume. By detecting and redirecting the BitTorrent traffic, or slowing it down, a provider can alleviate congestion.
3) Block offensive material — Many companies or institutions that perform content filtering are looking inside packets to find, and possibly block, offensive material or web sites.
When is it appropriate to use deep packet inspection?
1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.
2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.
3) Intrusion detection and prevention — It is one thing to be acting as an ISP and to eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. In a private home, it is within your rights to look through your peephole and not let shady characters in. Likewise, in a private business it is a good idea to use deep packet inspection to block unwanted intruders from your network. Blocking bad guys before they break into and damage your network is perfectly acceptable.
4) Spam filtering — Most consumers are very happy to have their ISP or email provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most consumers may not realize that Google is reading their mail (humans don't read it, but computer scanners do), its motives are understood. What consumers may not realize is that their email provider is also reading everything they do in order to serve targeted advertising.
Does content filtering use deep packet inspection?
For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as routers need to look them up anyway. We have only encountered content filters at private institutions, which are within their rights to use them.
What about spam filtering, does that use Deep Packet Inspection?
Yes, many spam filters will look at content, and most people could not live without their spam filter. However, with spam filtering most people have opted in at one point or another, so it is generally done with permission.
What is all the fuss about?
It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also the many issues that have been raised with the practice.
For example, this is an excerpt from a recent PC world article:
Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.
Recently, Comcast had their hand slapped for re-directing Bittorrent traffic:
Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.
— Digital Daily, March 10, 2008. Read the full article here.
Later in 2008, the FCC came down hard on Comcast.
In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.
By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.
— Wired.com, August 1, 2008. Read the full article here.
To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.
University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.
— Wired.com, May 22, 2008. Read the full article here.
However, it looks like things are going the other way in the U.K., as Britain’s Virgin Media has announced they are dumping net neutrality in favor of targeting BitTorrent.
The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.
— The Register, December 16, 2008. Read the full article here.
Canadian ISPs confess en masse to deep packet inspection in January 2009.
With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.
— Tech Spot, January 21, 2009. Read the full article here.
In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.
In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.
— Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.
The controversy continues in the U.S. as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.
7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.
— Kyle Brady, July 27, 2009. Read the full article here.
[February 2011 Update] The Egyptian government uses DPI to filter elements of its Internet traffic, and this act in itself has become the news story. In this news piece, Al Jazeera takes the opportunity to put out an unflattering piece on Narus, the company that makes the DPI technology and sold it to the Egyptians.
While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.
Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list. | <urn:uuid:e51d3d55-8a65-4aba-8708-9c3d8fa31f7a> | CC-MAIN-2017-04 | https://netequalizernews.com/2011/02/08/what-is-deep-packet-inspection-and-why-the-controversy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00378-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951885 | 2,172 | 2.65625 | 3 |
A team of researchers at the University of Edinburgh, Scotland has embarked on a palatable study. Using a supercomputer, scientists are simulating ice cream in hopes to improve the popular dessert’s texture and shelf life. NVIDIA reported on the project in a recent blog post.
One of the lead scientists noted that ice cream is a very complex substance. The multiple ingredients found in popular recipes react with each other in a variety of ways over time. To get a better understanding of how they interact, the group decided to perform a computer simulation.
The processing power required to study ice cream at the molecular level would take years for consumer-grade computers to accomplish. To get faster results, the team turned to the Edinburgh Parallel Computing Center (EPCC), where they researched the frozen treat on a 200K-core Cray supercomputer.
During the project, the team realized they were able to perform the same computations on a far smaller, GPU-accelerated cluster. The simulations were migrated from a 200-cabinet system to a 10-cabinet, GPU-accelerated Cray XK6. The smaller cluster had 936 Tesla GPUs, which helped the team complete their simulation two and a half times faster than with CPUs alone.
While ice cream is a tasty and unique substance to learn more about, the same simulations could apply to other soft materials. University of Edinburgh’s Alan Gray explained that paint, ketchup, yogurt, and hair products are just a few examples of items applicable to this research. | <urn:uuid:59f42b4b-7c17-4a33-ac03-3f3c2b723f32> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/07/27/soft_serve_supercomputing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00286-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947194 | 313 | 3.59375 | 4 |
Del Valle ISD, situated outside of Austin, Texas, is a school district with relatively modest means and yet, it is enabling its students to visit some of the most remote areas of the world including Iran, Taiwan, India and Bosnia, all through the power of video conferencing.
Check out this article in Wired Magazine to learn how Del Valle ISD embraces technology in the classroom and provides a world-class learning experience to its students.
A Learning Revolution in Texas: Bringing the World to the Classroom
On the outskirts of Austin, Texas, a small high school of relatively modest means is fundamentally changing how education is being brought to students — with video technology that lets students debate their peers in the most remote areas of this country, or even around the world in places like Iran, Taiwan, India and Bosnia.
School districts often fail to see the significance of technology in education, according to Michael Cunningham, the principal of Del Valle High School. He is embracing technology as the future for learning because it allows for low cost solutions to the increasingly expensive conventional approaches to education.
“What the PDF has done for books, video conferencing can do for teaching,” Cunningham explained. “Right now, a printed textbook may cost $100 or $200 per copy, whereas a PDF version of the same book can be available for almost nothing.” With video conferencing, he continued, the best lecturers from around the world can be viewed in a classroom for a fraction of the cost of hosting them in person.
World-Class Experience Made Affordable
Cunningham’s quest to provide a world-class school experience to his students has been an abiding passion for over a decade. “Del Valle is essentially a lower socio-economic school district, and students don’t have many of the advantages available to their counterparts in other schools,” Cunningham said. “In the 2001 school year, we learned our school district had underutilized video equipment. We put it to work very quickly, in debates with schools first from Alaska and Canada. That has morphed since then into all kinds of other video conferencing applications.”
Just this past December, Del Valle students had a video debate with students at Kherad High School in Iran. Del Valle has been conducting debates with Iranian students for the past four years – at a time when the US government was not even having formal discussions with the nation.
In a “what if” debate, the students considered the issue of what might have happened in world history if Cyrus the Great of Persia had taken over the Greek city states. “It yielded some interesting discussion,” Cunningham said. “Arguably, there might not have been Christianity, the Crusades – even the Muslim religion might not have come to be. Our whole way of government may have been different. Just one or two changes in world history could have resulted in a wholesale change in the way we understand the world.”
In most cases, the video conferencing tool being used by Cunningham and his fellow educators around the world is LifeSize ClearSea, an open standard, software-based system that requires no dedicated equipment. Proprietary systems from other providers would make the process considerably more complicated. The technology enables high quality video communications over very low bandwidth, so even students in countries with limited Internet resources can participate.
For Cunningham, video technology enhances educational opportunities for students who otherwise might not have access to such opportunities.
“In Bosnia-Herzegovina, local students walked to the nearby American consulate to take part in an online video conferencing debate with our students,” Cunningham said. “Most of our partner schools from around the world are now using this technology to speak with us. Without that technology, it would be not much more than a one-sided conversation.”
Later in 2014, Cunningham is planning a moot court trial of the Warren Commission, the report from which will see its 50th anniversary in September. Also in the works, Cunningham is interested in planning an event paying tribute to the 100th anniversary of the Armenian genocide.
Underlying Cunningham’s use of video technology is his belief that current approaches to high school education are really a thing of the past….
One of 2016’s key events in the tech world was the massive distributed denial of service (DDoS) attack in October that brought many of the internet’s most heavily trafficked sites to their knees.
There were two main takeaways from the event. Firstly, DNS infrastructure is highly vulnerable. And secondly, the growing proliferation of cheap, connected Internet of Things (IoT) devices – webcams, Wi-Fi speakers, wearables etc. – is making it far easier for cybercriminals to launch massive DDoS attacks.
Why? Because many of these devices are shipped with default usernames and passwords, which are never changed by the end user, and so are easily taken over. Earlier in October, the source code for the Mirai botnet malware was made public, and it evidently played a role in the attack.
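The default-credential weakness is easy to illustrate from the defender's side. The Python sketch below is a hypothetical audit helper, not a real tool; the credential pairs are illustrative examples in the style of the factory defaults Mirai abused, not a complete or authoritative list.

```python
# Hypothetical device-fleet audit: flag any device still configured with a
# well-known factory credential pair. The pairs below are illustrative.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
}

def is_factory_default(username: str, password: str) -> bool:
    """True if the device is still using a well-known default credential pair."""
    return (username, password) in DEFAULT_CREDENTIALS

# A toy inventory: (device name, username, password).
inventory = [
    ("cam-lobby", "admin", "admin"),         # never reconfigured -> vulnerable
    ("cam-dock", "admin", "x9!kQ2_longpw"),  # rotated at install -> fine
]

at_risk = [name for name, user, pw in inventory if is_factory_default(user, pw)]
print("devices still on factory credentials:", at_risk)
```

Mirai's scanner did essentially the inverse of this check, trying a short list of default pairs against devices found on the open internet, which is why shipping devices with unchangeable or unchanged defaults is so dangerous.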
In 2017 businesses are sure to suffer more DDoS attacks and internet shutdowns powered by cheap, insecure IoT devices. But while these attacks could become more common, they’re also likely to become less lethal as backbone providers harden their defenses and device manufacturers adopt identity-based security to close vulnerabilities.
However, the sheer number of cheap AND insecure IoT devices deployed globally will ensure DDoS attacks continue sporadically through 2017.
Catastrophic DDoS attacks might dominate tech media coverage, but the failure of IoT device, service and infrastructure providers to adopt and scale robust security and privacy practices will play out in several ways.
Here are four sectors that will face the brunt of this as digital transformation takes hold in 2017.
1. Healthcare
In 2017, the distinction between in-home and clinical healthcare devices will continue to erode.
To date, smart wearables and exercise devices like Fitbits and Apple Watches have been perceived as a means to track exercise in order to further fitness goals – distinct from clinical medical devices like heart monitors, blood pressure cuffs or insulin pumps.
At the same time, it’s become common for patients with high blood pressure to monitor their levels at home by capturing them on a mobile app on their phone – exactly how fitness trackers work.
The wealth of data available to clinicians flowing from such devices is leading to expectations that individuals can and perhaps should play much more active roles in preventative care.
But the ease with which personal health data can now be gathered and shared will increase pressure on healthcare IT decision-makers to turn to identity management and authentication as the technology most effective for achieving security objectives.
The proliferation of digital systems and devices in healthcare settings creates more vulnerabilities where personal data can get exposed or stolen.
Adding contextual authentication and authorisation through strong digital identity, such as checks based on presence, geo-location or persistent authentication, makes hacking these systems more difficult.
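As a rough illustration of what contextual authorisation means in practice, here is a toy Python policy. The factors, weights and thresholds are assumptions made for the sake of the example, not any vendor's actual scoring model.

```python
# Toy contextual-authorisation check: combine identity with request context
# (geo-location, device familiarity, session freshness). Weights and
# thresholds are illustrative assumptions only.
def risk_score(ctx: dict) -> int:
    score = 0
    if ctx.get("country") not in ctx.get("usual_countries", []):
        score += 2          # unfamiliar geo-location
    if not ctx.get("known_device", False):
        score += 2          # first time this device has been seen
    if ctx.get("minutes_since_auth", 0) > 30:
        score += 1          # stale session: persistent re-authentication
    return score

def authorise(ctx: dict) -> str:
    s = risk_score(ctx)
    if s == 0:
        return "allow"
    if s <= 2:
        return "step-up"    # ask for a second factor
    return "deny"

print(authorise({"country": "US", "usual_countries": ["US"],
                 "known_device": True, "minutes_since_auth": 5}))   # allow
print(authorise({"country": "RO", "usual_countries": ["US"],
                 "known_device": False, "minutes_since_auth": 45})) # deny
```

The point is that the same username and password produce different outcomes depending on context, which is exactly what makes stolen credentials less useful to an attacker.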
2. Financial services
In 2017, commercial banks and investment houses will continue the race to avoid having their business models disrupted by fintech innovation such as Bitcoin and emerging artificial intelligence technologies.
Banks are already co-opting these disruptive technologies and incorporating them into their own IT mix.
Somewhat ironically, having established relationships with their customers, many legacy banks could be very well positioned to not just weather the digital transformation storm, but emerge even more stable and profitable in the years ahead.
This is especially true for those that embrace omnichannel techniques and technologies to create seamless experiences that delight customers across devices.
Banks in 2017 will work on allaying customer privacy concerns as they cope with regulations regarding data protection and sharing. There will be a continued effort to eliminate internal data silos that create impersonal customer experiences across channels, and fragmented systems that can’t support digital customer demands and business requirements.
3. Retail
The race toward omnichannel will accelerate in 2017 as many retailers and B2C organisations find themselves doing more business via mobile than they’re doing on the conventional laptop and online channel.
Delivering convenience and seamless experiences will depend heavily on providing customers with experiences that are not just secure but also personalised to their needs and tastes.
In order to do this, they must securely connect the digital identities of people, devices and things. This requires solving complex identity challenges and creating solutions that enhance and improve customer experiences and at the same time maximise revenue opportunities.
4. Communications and media
AT&T’s proposed acquisition of Time Warner at the end of 2016 highlights exactly how vulnerable legacy media and telecommunications firms perceive themselves to be to disruptive forces like cord cutting.
‘Digital pipe’ companies feel like they need to lock in content providers in order to lock in audiences and preserve value. However, regulators may frown on such industry consolidation, and independent players like Netflix and semi-independent players like Hulu and independent cable TV producers continue to find ways to directly insert successful content into the entertainment bloodstream.
Here again, making content easily accessible through the full array of channels is key to locking in loyalty and preserving lifetime value (LTV).
Sourced from Simon Moffatt, ForgeRock | <urn:uuid:84936f45-5553-46e3-9ea3-6e6a956a1b93> | CC-MAIN-2017-04 | http://www.information-age.com/4-sectors-vulnerable-iot-attacks-2017-123463488/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00314-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948263 | 1,013 | 2.53125 | 3 |
Edward Seidel, the former director of the National Science Foundation Office of Cyberinfrastructure, told attendees at TeraGrid ’11, held July 18-21 in Salt Lake City, Utah, that after more than four centuries of science being conducted at a painstakingly slow pace, today’s communications technologies and scientific advances are forcing a dramatic change–and acceleration–in all areas of science. At the heart of this change will be software.
The challenge for the NSF and the larger US science community is to come up with a cyberinfrastructure (CI) model that effectively brings together these advancing technologies. The XSEDE (Extreme Science and Engineering Discovery Environment) program, now succeeding the TeraGrid project after 10 years, has the potential to play a vital role in shaping a blueprint for the nation’s CI initiative, said Seidel, currently the assistant director for Mathematical and Physical Sciences at the NSF.
The result of a five-year, $121 million NSF award, XSEDE is designed to be the most powerful collection of advanced digital resources and services in the world. It is the follow-on to the NSF-funded TeraGrid, which began in 2002. CI refers to an accessible and integrated network of computer-based resources and expertise that’s focused on enabling and accelerating scientific inquiry and discovery.
“We now have very small periods in time that are leading to very large changes in the amount of data, the amount of computation, and the amount of knowledge that is needed in order to carry out this kind of work,” said Seidel, also a professor with Louisiana State University’s departments of Physics and Astronomy and Computer Science.
Citing astrophysics as a prime example of one discipline undergoing this unprecedented pace of change, Seidel said that going forward, an “explosion” in data-driven science is going to lead to an even more dramatic rate of change. Multiple approaches to observation, experimentation, computation, and data analysis need to be integrated to understand a single event, such as a gamma-ray burst.
“I think XSEDE probably marks the beginning of a national architecture with the capability of actually putting some order into all of this,” he said, noting that “we have the critical elements in place” but that “we need to think how to integrate all these different science activities in a multi-scale way.”
Still, Seidel noted that such radical changes in conducting research, collaborating, and archiving scientific results cannot be adequately addressed with the current incremental approach.
“The good news is that we have the beginnings of an architecture but the language differences are pretty severe,” he said, referring to differing terms and software used by researchers from one field to another. In calling for the creation of a common software community, Seidel noted that “XSEDE can’t do all this alone, so we need to think about how to aggregate multiple resources coherently to do the kind of work we want.”
As technological advances fuel dramatic changes, Seidel said we now have a “cyber crisis” at many levels. One challenge, he said, is how to manage the exponentially increasing amounts of data generated from a myriad of digital resources.
“Every year we generate more data, not just more than we did last year, but more than in all previous years combined,” he said. He urged that we initiate a national discussion on how to communicate, collaborate, and integrate a wide range of research activities, even in real-time, to better analyze and respond to events such as natural or man-made disasters to generate significant benefits to society at large.
At the same time, this “data deluge” provides the opportunity for potentially very powerful collaborations on a national and even global scale. “We need to be thinking about developing cyberinfrastructure, software engineering, and capabilities to mix and match components, as well as data sharing policies, that really enable scenarios such as coupled hurricane and storm surge prediction, as well as the human response to such events,” he said.
In framing the various elements required to create an effective national cyberinfrastucture, Seidel said another challenge is how to leverage new technologies, especially within the realm of social networking, to develop and promote new ways of sharing scientific results via campus collaborations as well as partnerships at the state, federal, and international levels.
“We are thinking about ways to encourage the publication of more modern forms of scientific output,” he said. He suggested in organizing scientific data for multiple communities, new approaches that merge databases with wikis, in addition to using social networking media tools such as Flickr and Twitter, will be very powerful. He noted that there are even new programs that create openly writable information storage and search platforms, such as those discussed in posters at the conference.
“We need to make the world writable,” Seidel told TeraGrid ’11 participants, adding that “software is the modern language of science these days.” | <urn:uuid:a2d83bc2-ff00-434f-8df3-2a698d5bd2e5> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/08/09/nsf_s_seidel_software_is_the_modern_language_of_science_/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00342-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947999 | 1,055 | 2.703125 | 3 |
The need to transmit secret or sensitive information has been around for a long time, and cryptography, in one shape or form, has been around for almost as long. The Spartans, for example, around 400 BC, used a system involving a long strip of papyrus wrapped around a cylindrical rod. Words were then written on the paper lengthwise along the rod. When the strip was unrolled, one could see only an arrangement of apparently meaningless letters. To decrypt the message, the papyrus had to be placed around a rod of the same diameter in exactly the same way as when it was encrypted.
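The scytale, as this Spartan device is known, is a simple transposition cipher, and its wrap-and-read mechanics are easy to simulate. In the Python sketch below, the rod's circumference is modelled as the number of letters per wrap; the padding for short messages is an implementation convenience, not part of the historical device.

```python
import math

def scytale_encrypt(message: str, circumference: int, pad: str = "_") -> str:
    """Write the message along the wrapped strip, then unroll it.

    With `circumference` letters per wrap, reading the unrolled strip
    interleaves the rows, scrambling the message.
    """
    rows = math.ceil(len(message) / circumference)
    message = message.ljust(rows * circumference, pad)
    return "".join(message[i::circumference] for i in range(circumference))

def scytale_decrypt(strip: str, circumference: int, pad: str = "_") -> str:
    """Re-wrap the strip around a rod of the same size to recover the text."""
    rows = len(strip) // circumference
    return "".join(strip[i::rows] for i in range(rows)).rstrip(pad)

cipher = scytale_encrypt("ATTACKATDAWN", 4)
print(cipher)                        # ACDTKATAWATN
print(scytale_decrypt(cipher, 4))    # ATTACKATDAWN
```

As the article notes, the "key" here is just the rod's diameter, which is why such ciphers fall instantly to modern analysis, and why computing power changed cryptography so completely.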
Since the advent of PCs, encryption technology has evolved at a spectacular rate. Whereas before, the real problem in deciphering an encrypted message lay in the need to make large-scale calculations, computers have made it possible to carry out such calculations at amazing speeds. These same machines have also enabled encryption systems to be made both more accurate and complex.
Nowadays, it is unthinkable that data of any consequence should be transmitted without having previously been encrypted to prevent exposure to the prying eye. Unfortunately though, the common perception of data transmission is limited to the Internet and therefore many other systems that can and should be protected with encryption are generally ignored.
Laptop computers have now become a standard accessory for today’s highly mobile business people, meaning also that all information stored on these computers is highly mobile and needs to be adequately protected. Although passwords can be used to impede unauthorized access, they are little obstacle to a skilled and determined intruder.
With these kinds of problems in mind, Microsoft has incorporated an encryption system in its operating systems to prevent unauthorized access to information on disk. This system, the ‘Encryption File System’ (EFS), allows files and folders to be encrypted so that if a laptop or disk were to fall into the wrong hands, it would be impossible to decipher the information it contained.
To further heighten security, EFS includes various layers of encryption. Each file has its own unique encryption key, which is essential in order to be able to work with the file. This key, which is also encrypted, is only available to users authorized to access each file. EFS is actually integrated into the file system, thus reinforcing security against unauthorized access and at the same time making administration easier for users. The encryption and decryption of data is completely transparent and requires no user interaction other than selecting the file to encrypt.
One of the biggest potential problems presented by any file encryption system concerns access to these files after encryption, not just by the users who encrypted them, but also by others, such as network administrators or company bosses. If the password holder is unavailable at any time, even the IT staff will not be able to access the encrypted files. To prevent such situations from occurring, the EFS in Windows XP and Windows Server 2003 allows the administrator to recover encrypted files using ‘recovery agents’ that can access all users’ passwords.
Even though EFS, or any other encryption system, can offer great security advantages, there are also negative implications for protection against viruses. When a file is encrypted, its content becomes unintelligible, not just to people but also to any processes that don’t know either the file’s password or the generic administrator password.
The process potentially most affected by this limitation is the one that searches for malicious code in the system: the antivirus, which scans all files that could contain viruses and stops them from running if they’re infected. To this end, it handles “EXE”, “COM” or “DLL” files, as well as data files that could contain executable code, such as “DOC” or “XLS” files and their macros. And it is precisely these Word or Excel files that are most likely to need to be encrypted, as they are the typical vehicles for storing important data: budgets, forecasts, etc. If the antivirus is incapable of scanning these encrypted files, they could remain infected, with the obvious dangers that this entails.
When installing an antivirus on a Windows Server 2003 system with EFS, first check whether the antivirus is capable of scanning encrypted files. If not, encrypting a file would leave the antivirus disarmed in the face of malicious code.
In theory, in order to scan encrypted files, the antivirus must be able to access each and every file encryption key stored on the system. To do this, the antivirus would have to operate as a recovery agent, with access to all encryption keys. Because of the security implications, the indiscriminate creation of recovery agents is not good practice, and antiviruses therefore ought to work in other ways: for example, by intercepting and scanning the file when it is opened by the authorized user, with the antivirus acting as that user. In this way, only files accessed with correct authentication (i.e., by a user with the correct encryption key) will be scanned, minimizing system resource use without jeopardizing security.
The process of scanning and disinfecting is as follows:
1. The file is stored on the hard disk.
2. The user makes a request to the server to recover the file.
3. The server makes a request to the disk with the credentials of the user making the request.
4. The antivirus, intercepting activity on the hard disk, receives the request. As the file is encrypted, it makes a call to the system to decrypt it, using the credentials of the user making the request. Once the file is decrypted, it is scanned and, if necessary, disinfected.
5. The system returns the clean file to the server, which in turn passes it on to the user making the request.
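The steps above can be sketched in a few lines of Python. This is a hypothetical, heavily simplified illustration (XOR stands in for real encryption, and the signature check is a toy), not a real antivirus interface:

```python
# Toy sketch of steps 3-5: decrypt with the requesting user's
# credentials, scan, disinfect, and return the clean data.

SIGNATURE = b"<malware>"   # stand-in malware signature the scanner looks for

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def serve_file(ciphertext: bytes, user_key: bytes) -> bytes:
    # Step 4: decrypt using the requesting user's credentials...
    plaintext = xor_crypt(ciphertext, user_key)
    # ...scan and, if necessary, "disinfect" by stripping the payload...
    clean = plaintext.replace(SIGNATURE, b"")
    # Step 5: return the clean file to the requester.
    return clean

key = b"secret"
stored = xor_crypt(b"report <malware> data", key)   # encrypted at rest
assert serve_file(stored, key) == b"report  data"
```

Note that the file is never decrypted with anything but the requester's own key, which is why this approach avoids creating blanket recovery agents.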
Apparently, iPhone passwords may not be as secure as one might believe. According to German security researchers from the Fraunhofer Institute for Secure Information Technology (Fraunhofer SIT), passwords can be recovered from a locked Apple iPhone in six minutes by anyone with physical access to the phone.
But how is this possible? According to documentation on Fraunhofer’s site:
When an iOS device with hardware encryption capabilities is lost or stolen, many users believe that there is no way for a new owner to access the stored data — at least if a strong passcode is in place. This estimation is comprehensible, since in theory the cryptographic strength of the AES256 algorithm used for iOS device encryption should prevent even well-equipped attackers. However, it was already shown that it is possible to access great portions of the stored data without knowing the passcode.
Tools are available for this task that require only small effort. This is done by tricking the operating system into decrypting the file system on behalf of the attacker. This decryption is possible because, on current iOS devices, the required cryptographic key does not depend on the user’s secret passcode. Instead, the required key material is completely created from data available within the device and therefore is also in the possession of a possible attacker.
From the video (HERE) you can see the jailbreaking tool and script that Fraunhofer uses in action to access the secrets stored on the iPhone.
Big deal, one might say, they can read my text messages. Well, with smart phones becoming a standard enterprise network client, theoretically one could retrieve the passwords used to access corporate networks with this utility.
According to the researchers site, all current iPhones and iPads are vulnerable to this attack.
It would seem that the dangers of leaving your laptop lying around now pertain to your smart phone too.
Cross-posted from Cyber Arms | <urn:uuid:2137304a-4492-4694-aba0-527f15ffbd60> | CC-MAIN-2017-04 | http://www.infosecisland.com/blogview/11817-iPhone-Hacked-and-Passwords-Stolen-in-Six-Minutes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00158-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933647 | 387 | 2.578125 | 3 |
If you’re in information security you’ve probably heard a lot about serialization bugs. They are becoming increasingly common, and I wanted to give a basic overview of how they work and why they’re an issue.
The parsing problem
So much of security comes down to parsing. It’s the primary reason we need input validation, and the reason that software like antivirus engines and network protocol analyzers can have so many security issues.
The job of a parser is to take input from somewhere else and run it through your own software. That should frighten you. It’s like a CDC employee using the ‘open and lick’ method to test petri dish samples.
Bottom line: If you’re going to parse something, you have to get intimate with it.
And that brings us to serialization.
Serialization is the process of capturing a data structure or an object’s state into a (serial) format that can be efficiently stored or transmitted for later consumption.
So you can take an object, capture its state, and then put it in memory, write it to disk, or send it over the network. Then at some point the object can be retrieved and consumed, restoring the object’s state.
A basic example of serialization might be to take the following array:
$array = array("a" => 1, "b" => 2, "c" => array("a" => 1, "b" => 2));
And to serialize it into this:

a:3:{s:1:"a";i:1;s:1:"b";i:2;s:1:"c";a:2:{s:1:"a";i:1;s:1:"b";i:2;}}
At its core, serialization is a type of encoding.
So this brings us to the core issue: deserialization requires parsing.
In order to go from that serialized format to usable data, some software package needs to unpack that content, figure it out, and then consume it.
Unfortunately, this is precisely what parsers are so bad at. And doing it wrong can lead to all manner of security flaws, up to and including arbitrary code execution.
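This failure mode is easy to demonstrate in Python (a hypothetical illustration; the article's own example above is PHP). With `pickle`, the serialized bytes can name an arbitrary callable that the deserializer will invoke:

```python
import pickle

class Malicious:
    # __reduce__ tells pickle which callable to invoke on
    # deserialization. A real attacker would return something like
    # (os.system, ("...",)); we use a harmless stand-in to show the
    # mechanism.
    def __reduce__(self):
        return (sorted, ([3, 1, 2],))

payload = pickle.dumps(Malicious())

# The "victim" merely deserializes data, yet the attacker-chosen
# callable runs during the load:
result = pickle.loads(payload)
print(result)  # [1, 2, 3] -- sorted() was executed by the deserializer
```

Swap `sorted` for `os.system` and the same mechanism yields arbitrary command execution, which is why Python's own documentation warns against unpickling untrusted data.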
- Parsing untrusted input is hard
- Serialization takes data and encodes it into opaque formats for transfer and storage
- To make use of that content, parsers must unpack and consume it
- It’s extremely hard to do this correctly, and if you do it wrong it could mean code execution
- Don’t deserialize untrusted data if you can avoid it
- If you can’t avoid it, just realize you’re asking your parsing software to lick some petri dishes labeled “SAMPLE UNKNOWN”, and explore your options for making it so you don’t have to do this anymore
This overall concept applies to most any language that uses serialization, but some languages (like Java) are in worse shape than others. | <urn:uuid:bf3a22c7-e67d-4b4e-a7df-0a7fbf54113f> | CC-MAIN-2017-04 | https://danielmiessler.com/study/serialization-security-bugs-explained/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00186-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.891772 | 587 | 2.890625 | 3 |
Just when you thought storage couldn't get bigger along comes YAT (Yet Another Technology) that could change everything.
Yep, multi-layered nanostructured quartz glass has been used by researchers at the University of Southampton in England to store data at a density of 360TB per disk (presumably a disk the size of a DVD) and because the substrate is glass it is temperature stable up to 1000°C with a "practically unlimited lifetime."
The system is termed "5D" recording because the data is encoded in three spatial dimensions plus polarization and intensity.
Dubbed the ‘Superman memory crystal’, as the glass memory has been compared to the “memory crystals” used in the Superman films, the data is recorded via self-assembled nanostructures created in fused quartz, which is able to store vast quantities of data for over a million years. The information encoding is realised in five dimensions: the size and orientation, in addition to the three-dimensional position, of these nanostructures.
The only problem with the system is that writing data onto a disk requires a very sophisticated femtosecond laser system that costs thousands of dollars. On the other hand, a system to read the disks could be produced, say the researchers, for just hundreds of dollars.
You can read the paper, "5D Data Storage by Ultrafast Laser Nanostructuring in Glass", describing the technique as well as the more digestible press release. | <urn:uuid:6edef7ec-aa5f-4b9f-9eed-a3cbc1f69d0c> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2224936/data-center/storage--you-want-storage--how-about-360tb-per-disk-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941142 | 304 | 2.9375 | 3 |
NASA pursues horizontal launch systems
Next generation of satellites could ride the rails or a sled
- By Michael Hardy
- Sep 16, 2010
NASA is trying to use existing technologies in new ways to design an entirely novel launch technology. If it works, the new launch system would propel a space vehicle down a track or on a sled until it's moving fast enough to launch and escape Earth's atmosphere into space, according to Michael Cooney, writing on Network World's Layer 8 blog.
Stan Starr, NASA's branch chief of the Applied Physics Laboratory at the Kennedy facility, said in the blog entry that nothing in the rail launcher -- formally called the Advanced Space Launch System -- requires any new technologies. However, he said, it may require some advances in existing technologies.
NASA engineers have proposed a 10-year plan that starts with launching drones. If that works, the scientists will move on to more advanced models, with the goal of eventually being able to put small satellites into orbit, Cooney reported.
According to a Fox News report, the system is intended for "scramjets": high-altitude jets that take in air, compress it and mix it with hydrogen to create a burst of propulsion. The jets could carry out their missions and then land on a runway alongside the launch sled.
However, the system -- if it succeeds -- could also be used for manned space flights, Starr told Fox.
Technology journalist Michael Hardy is a former FCW editor. | <urn:uuid:acad3aa5-0089-43ec-a821-f314f72877a8> | CC-MAIN-2017-04 | https://gcn.com/Articles/2010/09/16/NASA-pursues-horizontal-launch.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935932 | 302 | 2.984375 | 3 |
In a picturesque spot overlooking San Francisco Bay, the U.S. Department of Energy's Berkeley Lab has begun building a new computing center that will one day house exascale systems.
The DOE doesn't know what an exascale system will look like. The types of chips, the storage, the networking and programming methods that will go into these systems are all works in progress.
DOE is expected to deliver to Congress by the end of this week a report outlining a plan for reaching exascale computing by 2019-2020 and its expected cost.
But what the DOE does have an idea about it is how to cool these systems.
The Computational Research and Theory (CRT) Facility at Berkeley will use outside air cooling. It can rely on the Bay area's cool temperatures to meet its needs about 95% of the time, said Katherine Yelick, associate lab director for computing sciences at the lab. If computer makers raise the temperature standards of systems, "we can use outside cooling all year round," she said.
The 140,000-square-foot building will be nestled in a hillside with an expansive and unobstructed view of the Bay. It will allow Berkeley Lab to combine offices that are split between two sites. It will also be large enough to house two supercomputers, including exascale-sized systems. "We think we can actually house two exaflop systems in it," said Yelick. The building will be completed in 2014.
Supercomputers use liquid cooling, but this building will also use evaporative cooling. Under this process, hot water goes up into a cooling tower where evaporation helps to cool it. The lowest level of the Berkeley building is a mechanical area that will be covered by a gradient that is used to pull in outside air, said Yelick.
An exascale system will be able to reach 1 quintillion (or 1 million trillion) floating point operations per second, roughly 1,000 times more powerful than a petaflop system. The government has already told vendors that an exascale system won't be able to use more than 20 megawatts of power. To put that in perspective, a 20-petaflop system today is expected to use somewhere in the range of 7 MW. There are large commercial data centers, with multiple tenants, now being built to support 100 MW and more.
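A back-of-envelope calculation shows how aggressive the 20 MW target is in efficiency terms:

```python
# Implied efficiency of the exascale power target: 1 exaflop within a
# 20 MW budget works out to 50 gigaflops per watt.
exaflop = 1e18            # floating point operations per second
power_budget_w = 20e6     # 20 megawatts, in watts
gflops_per_watt = exaflop / power_budget_w / 1e9
print(gflops_per_watt)    # 50.0
```

For comparison, a 20-petaflop system drawing about 7 MW delivers roughly 2.9 gigaflops per watt, so the target demands more than an order-of-magnitude efficiency gain.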
A rendering of the Berkeley computational research center planned for the San Francisco Bay area.
The idea of using climate, or what is often called free cooling, is a major trend in data center design.
Google built a data center in Hamina, Finland, using Baltic Sea water to cool systems instead of chillers. Last October, Facebook announced that it had begun construction of a data center in Lulea, Sweden, near the Arctic Circle, to take advantage of the cool air. Hewlett-Packard built a facility that relies on cold sea air just off the North Sea coast in the U.K.
One project that is carbon free is a data center built by Verne Global in Keflavik, Iceland. The power supply comes from a combination of hydro and geothermal sources.
The cool temperatures in Keflavik allow the data center to make use of outside air for cooling. The company has two modes of operation; one is direct free cooling, which means air is taken directly from the outside and put into the data center. The company can "remix" the returning hot air to have "tight temperature controls," said Tate Cantrell, the chief technology officer. The air is also filtered.
The data center also has the ability to switch to a recirculation mode where no outside air goes into the data center. Instead, a heat exchanger with a cold coil and a hot coil is used. The cold coil cools the air in the data center air stream, and the hot coil is cooled by the direct outside air, Cantrell said.
The Keflavik data center will use the heat exchanger in two situations. The first is to conserve moisture in the air when the dew point is low, meaning there is a low percentage of water in the airstream. The data center also has humidifiers. Below a certain level of humidity there is a possibility of introducing static into an environment. The other reason for switching to a heat exchanger is to protect the filters in the event that a strong storm kicks up a lot of dust, said Cantrell.
The groundbreaking of the Berkeley facility last week included Steve Chu, the U.S. energy secretary and a former Berkeley Lab director. He said the computational facility, "is very representative of what we have that's best in the United States in research, in innovation." Computation will be "a key element in helping further the innovation and the industrial competitiveness of the United States," he said.
This story, "Bay Area climate to help cool exascale systems" was originally published by Computerworld. | <urn:uuid:0da7afe6-1347-401b-846b-dd607bf75ec8> | CC-MAIN-2017-04 | http://www.itworld.com/article/2732320/data-center/bay-area-climate-to-help-cool-exascale-systems.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00332-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951253 | 1,039 | 3.328125 | 3 |
Who chooses data archiving, and why
Data archiving is essential for organizations that accumulate new information but still need to retain older information. The trends of corporate and agency policy, legal precedent, and government law and regulation are for longer retention, more information, and faster retrieval. Automated data archiving helps organizations to achieve these capabilities at lower costs.
How data archiving works
Organizations set their own policies for qualifying data to be moved into archives. These policy settings are used to automate the process of identifying and moving the appropriate data into the archive system. Once in the archive system, information remains online and accessible. Original content is preserved to ensure complete, reliable integrity for the life of the archived information.
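As a rough illustration of this policy-driven process, here is a hypothetical Python sketch (the retention window and directory layout are invented for the example, not drawn from any specific archive product):

```python
import os
import shutil
import time

# Hypothetical policy: any file not modified within the retention
# window qualifies for the archive tier.
RETENTION_DAYS = 365

def archive_old_files(production_dir: str, archive_dir: str) -> list:
    """Move files older than the policy cutoff into the archive tier."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    os.makedirs(archive_dir, exist_ok=True)
    moved = []
    for name in os.listdir(production_dir):
        src = os.path.join(production_dir, name)
        # Qualify files whose last modification predates the cutoff
        if os.path.isfile(src) and os.path.getmtime(src) < cutoff:
            shutil.move(src, os.path.join(archive_dir, name))
            moved.append(name)
    return moved
```

Real archive systems add indexing and integrity checks so the moved data remains online, searchable, and tamper-evident, but the qualify-then-move loop is the core idea.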
Benefits of data archiving
Automating the data archive process and using purpose-built archive systems make production systems run better, use less resources, and reduce overall storage costs. Production performance is unaffected by information growth. Backup and recovery runs faster, disaster recovery is less costly, and systems are easier to manage. Data moved into archives is stored at much lower cost. | <urn:uuid:79fd4b5a-181c-4765-8aac-336fd3bbb807> | CC-MAIN-2017-04 | https://www.emc.com/corporate/glossary/data-archiving.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00148-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91574 | 219 | 3.109375 | 3 |
SophosLabs experts have released a detailed report on the most prolific spamming countries in the world. The report confirms that India has overtaken the USA at spamming and is now the No. 1 contributor to the junk email problem. Probably one in ten of the spam messages you receive comes from an Indian computer.
The biggest amount of spam comes from computers that are infected with malware, which connects them to botnets and makes them spam robots. In addition, hackers are able to steal information or to add more malware to compromised computers.
The overall count of spam messages is decreasing compared to Q1 2011 because of better work by ISPs. However, that also suggests that cybercriminals might be choosing a new path to send spam. Traditional email spam is ineffective, so spammers are attacking social networks to spread spam marketing campaigns.
It’s no secret that established social networks are targets for spamming campaigns, but spammers use new social networks too. Pinterest, for example, has been used to link to web pages that sell goods or earn commissions for spammers. As the amount of spam increases, so does the spread of malware. Social networks are targeted to phish usernames, passwords and other personal information.
Looking at the statistics, we see that first-time Internet users aren’t taking the right measures to protect themselves from malware infections, so their computers are turned into spam bots. Every internet user should see that as a serious problem. Don’t allow cybercriminals to use your computer for illegal purposes: use up-to-date anti-virus software, pay attention to the links you click, and be careful about the software you install on your PC.
Primer: Ajax
By Baselinemag | Posted 2005-11-08
Ajax, a collection of programming technologies, delivers online content to users without reloading an entire page.
What is it? Ajax is a buzzword that describes a collection of Web-oriented programming technologies, all of them several years old, for creating Web applications that behave more like traditional computer programs.
What does it do? A Web application built with Ajax efficiently delivers additional information to a browser when someone clicks on a button or moves the mouse cursor over a part of the page, without refreshing the entire page. That, ideally, results in Web pages that respond almost as if they were a locally installed program. For example, video rental company Netflix uses Ajax to automatically pop up a movie's synopsis, complete with a thumbnail image of the movie poster, when a customer moves a cursor over titles in a list of search results. Previously, the site required loading to a brand-new page if a user wanted to find out more about a movie.
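The Netflix-style behavior can be sketched in a few lines of JavaScript. This is a minimal, hypothetical illustration (the endpoint and element id are invented): the browser fetches a fragment in the background and patches just one part of the page, with no full reload.

```javascript
// Fetch a movie synopsis in the background and update only the
// "synopsis" element -- the rest of the page never reloads.
function loadSynopsis(movieId) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/synopsis?id=" + movieId);  // hypothetical endpoint
  xhr.onload = function () {
    // Only this one element changes; everything else stays put.
    document.getElementById("synopsis").innerHTML = xhr.responseText;
  };
  xhr.send();
}
```

The "X" in Ajax originally stood for the XMLHttpRequest object doing the background fetch here; the server's response can be XML, HTML, or plain text.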
Why is this stuff getting attention now? Because some big Web sites have recently provided examples of useful (and fun) Ajax-based applications. Google Maps (maps.google.com), introduced in February, shows street addresses on a map and then lets you scroll in different directions without having to wait for the page to reload. "Sites built using the Ajax approach are easy to use and very cool," says Brian Goldfarb, a product manager in Microsoft's Web Platform and Tools group. "It gets you emotionally connected."
Why else is it interesting? Ajax works with most standard Web browsers and any Web server, unlike proprietary technologies for creating interactive Web applications that require additional software (such as Macromedia's Flash). Although technically the Microsoft-developed XML code that is part of Ajax isn't an industry standard, major browsers, including Microsoft's Internet Explorer and the open-source Firefox, work with Ajax-based pages.
What's the downside? It's very hard to do. Creating an Ajax application from scratch is like having to build a brick wall but first having to figure out how to create the bricks. "Sexy Web pages are great," says Forrester Research analyst Mike Gilpin, "but the dark side to Ajax is that it's really, really labor intensive." That's why Ajax-like applications haven't achieved widespread popularity.
Will it get easier? Yes. Web development tools vendors are delivering better building blocks for Ajax. In September, Microsoft demonstrated Atlas, a set of prebuilt programming "libraries" that wrap Ajax technologies into discrete, functional pieces of code. Tibco, an application-integration software company, last year bought General Interface, a six-person startup that developed a tool for creating Web interfaces with Ajax. For Tibco, Ajax is no mere decorative trifle: "It's for people who want to create rich applications," says Kevin Hakman, a marketing director at the company, "and eliminate the installation of software on the desktop." | <urn:uuid:c6a78b6d-11cd-4b23-80ab-cc869520f6a1> | CC-MAIN-2017-04 | http://www.baselinemag.com/it-management/Primer-Ajax | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00360-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904734 | 611 | 2.734375 | 3 |
The Lunar Atmosphere and Dust Environment Explorer, also known as LADEE, moved into orbit around the moon on Wednesday, according to NASA. The probe launched Sept. 6 from NASA's Wallops Flight Facility on Wallops Island, Va.
Using three instruments to collect data about the chemical makeup of the lunar atmosphere and variations in its composition, the probe also will capture and analyze dust particles it finds in the moon's atmosphere.
Orbiting the moon's equator, LADEE is in a unique position to frequently move from lunar day to lunar night, enabling it to better collect data on the "changes and processes occurring within the moon's tenuous atmosphere," NASA noted.
The spacecraft, which is about the size of a small car, is orbiting the moon about every two hours, eight miles above the lunar surface.
LADEE is scheduled to spend 100 days collecting data on the moon's atmosphere, giving scientists information they hope will help them better understand the planet Mercury, asteroids and the moons orbiting other planets.
Studying the moon's atmosphere is LADEE's primary mission, but it already has completed another task. About a month after launch, the spacecraft began a test of a high-data-rate laser communication system.
Don Cornwell, Lunar Laser Communications Mission Manager at NASA's Goddard Space Flight Center, said last month that the test exceeded their expectations. NASA engineers are encouraged that a laser communications system could be the building blocks of an outer space Internet.
NASA hopes to use similar systems to speed up future satellite communications, as well as deep space communications with robots and human exploration crews.
Using laser communications instead of radio systems would enable robots, such as the Mars rovers Curiosity and Opportunity, and astronauts to send and receive far greater data loads from space, whether in orbit around Earth, on the moon or on a distant asteroid.
This article, NASA's lunar probe gets to work studying atmosphere, was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld.
This story, "NASA's Lunar Probe Gets to Work Studying Moon's Atmosphere" was originally published by Computerworld. | <urn:uuid:433f2da9-e4b0-458e-ae5f-6ac0fb9cd1c2> | CC-MAIN-2017-04 | http://www.cio.com/article/2380704/government/nasa-s-lunar-probe-gets-to-work-studying-moon-s-atmosphere.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00360-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913702 | 505 | 3.921875 | 4 |
By Bruce Schneier.
The human brain is a fascinating organ, but it’s an absolute mess. Because it has evolved over millions of years, there are all sorts of processes jumbled together rather than logically organized. Some of the processes are optimized for only certain kinds of situations, while others don’t work as well as they could. There’s some duplication of effort, and even some conflicting brain processes.
Assessing and reacting to risk is one of the most important things a living creature has to deal with, and there’s a very primitive part of the brain that has that job. It’s the amygdala, and it sits right above the brainstem, in what’s called the medial temporal lobe. The amygdala is responsible for processing base emotions that come from sensory inputs, like anger, avoidance, defensiveness and fear. It’s an old part of the brain, and seems to have originated in early fishes.
When an animal — lizard, bird, mammal, even you — sees, hears or feels something that’s a potential danger, the amygdala is what reacts immediately. It’s what causes adrenaline and other hormones to be pumped into your bloodstream, triggering the fight-or-flight response, causing increased heart rate and beat force, increased muscle tension and sweaty palms.
This kind of thing works great if you’re a lizard or a lion. Fast reaction is what you’re looking for; the faster you can notice threats and either run away from them or fight back, the more likely you are to live to reproduce.
But the world is actually more complicated than that. Some scary things are not really as risky as they seem, and others are better handled by staying in the scary situation to set up a more advantageous future response. This means there’s an evolutionary advantage to being able to hold off the reflexive fight-or-flight response while you work out a more sophisticated analysis of the situation and your options for handling it.
We humans have a completely different pathway to cope with analyzing risk. It’s the neocortex, a more advanced part of the brain that developed very recently, evolutionarily speaking, and only appears in mammals. It’s intelligent and analytic. It can reason. It can make more nuanced trade-offs. It’s also much slower.
So here’s the first fundamental problem: We have two systems for reacting to risk — a primitive intuitive system and a more advanced analytic system — and they’re operating in parallel. It’s hard for the neocortex to contradict the amygdala.
In his book Mind Wide Open, Steven Johnson relates an incident when he and his wife lived in an apartment where a large window blew in during a storm. He was standing right beside it at the time and heard the whistling of the wind just before the window blew. He was lucky — a foot to the side and he would have been dead — but the sound has never left him:
Ever since that June storm, a new fear has entered the mix for me: the sound of wind whistling through a window. I know now that our window blew in because it had been installed improperly…. I am entirely convinced that the window we have now is installed correctly, and I trust our superintendent when he says that it is designed to withstand hurricane-force winds. In the five years since that June, we have weathered dozens of storms that produced gusts comparable to the one that blew it in, and the window has performed flawlessly.
I know all these facts — and yet when the wind kicks up, and I hear that whistling sound, I can feel my adrenaline levels rise…. Part of my brain — the part that feels most me-like, the part that has opinions about the world and decides how to act on those opinions in a rational way — knows that the windows are safe…. But another part of my brain wants to barricade myself in the bathroom all over again.
There’s a good reason evolution has wired our brains this way. If you’re a higher-order primate living in the jungle and you’re attacked by a lion, it makes sense that you develop a lifelong fear of lions, or at least fear lions more than another animal you haven’t personally been attacked by. From a risk/reward perspective, it’s a good trade-off for the brain to make, and — if you think about it — it’s really no different than your body developing antibodies against, say, chicken pox based on a single exposure.
In both cases, your body is saying: “This happened once, and therefore it’s likely to happen again. And when it does, I’ll be ready.” In a world where the threats are limited — where there are only a few diseases and predators that happen to affect the small patch of earth occupied by your particular tribe — it works.
Unfortunately, the brain’s fear system doesn’t scale the same way the body’s immune system does. While the body can develop antibodies for hundreds of diseases, and those antibodies can float around in the bloodstream waiting for a second attack by the same disease, it’s harder for the brain to deal with a multitude of lifelong fears.
All this is about the amygdala. The second fundamental problem is that because the analytic system in the neocortex is so new, it still has a lot of rough edges evolutionarily speaking. Psychologist Daniel Gilbert wrote a great comment that explains this:
The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years — and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.
Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.
A lot of the current research into the psychology of risk are examples of these newer parts of the brain getting things wrong.
And it’s not just risks. People are not computers. We don’t evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, we have shortcuts, rules of thumb, stereotypes and biases — generally known as “heuristics.” These heuristics affect how we think about risks, how we evaluate the probability of future events, how we consider costs, and how we make trade-offs. We have ways of generating close-to-optimal answers quickly with limited cognitive capabilities. Don Norman’s wonderful essay, Being Analog, provides a great background for all this.
Daniel Kahneman, who won a Nobel Prize in Economics for some of this work, talks (.pdf) about humans having two separate cognitive systems, one that intuits and one that reasons:
The operations of System 1 are typically fast, automatic, effortless, associative, implicit (not available to introspection) and often emotionally charged; they are also governed by habit and therefore difficult to control or modify. The operations of System 2 are slower, serial, effortful, more likely to be consciously monitored and deliberately controlled; they are also relatively flexible and potentially rule governed.
When you examine the brain heuristics about risk, security and trade-offs, you can find evolutionary reasons for why they exist. And most of them are still very useful. The problem is that they can fail us, especially in the context of a modern society. Our social and technological evolution has vastly outpaced our evolution as a species, and our brains are stuck with heuristics that are better suited to living in primitive and small family groups.
And when those heuristics fail, our feeling of security diverges from the reality of security.
Internationally renowned security expert, Bruce Schneier, is the CTO of BT Counterpane. He has authored eight books including Beyond Fear and Secrets and Lies and hundreds of articles and academic papers. Mr. Schneier has regularly appeared on television and radio, has testified before Congress and is a frequent writer and lecturer onissues surrounding security and privacy.
This article first appeared in Wired News in March 22, 2007 and appears on Mr. Schneier’s website, http://www.schneier.com/essays.html | <urn:uuid:ee631ea0-880b-40d6-aac2-e8db47df111f> | CC-MAIN-2017-04 | http://www.securitysolutionsmagazine.biz/why-the-human-brain-is-a-poor-judge-of-risk/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00176-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951418 | 1,822 | 3.0625 | 3 |
You've got all the bells and whistles when it comes to network firewalls and your building's security has a state-of-the-art access system. You've invested in the technology. But a social engineering attack could bypass all those defenses.
Say two fire inspectors show up at your office, show their badges and ask for a walkthrough—you're legally required to give them access to do their job. They ask a lot of questions, they take electrical readings at various wall outlets, they examine wiring under desks. Thorough, aren't they? Problem is, in this case they're really security consultants doing a social engineering 'penetration test' and grabbing access cards, installing keystroke loggers, and generally getting away with as much of your business's private information as they can get their hands on. (See How to rob a bank for details from this real-world example.)
Social engineers, or criminals who take advantage of human behavior to pull off a scam, aren't worried about a badge system. They will just walk right in and confidently ask someone to help them get inside. And that firewall? It won't mean much if your users are tricked into clicking on a malicious link they think came from a Facebook friend.
In this article, we outline the common tactics social engineers often use, and give you tips on how to ensure your staff is on guard.
Last updated September 27, 2012. See below for the following topics:
- What is social engineering?
- How is my company at risk?
- Sneaky stuff. Give me some specific examples of what social engineers say or do.
- Why do people fall for social engineering techniques?
- How can I educate our employees to prevent social engineering?
- Are there any tools that can help?
- Looks like this is an important security issue. Tell me more!
What is social engineering?
Social engineering is essentially the art of gaining access to buildings, systems or data by exploiting human psychology, rather than by breaking in or using technical hacking techniques. For example, instead of trying to find a software vulnerability, a social engineer might call an employee and pose as an IT support person, trying to trick the employee into divulging his password.
Famous hacker Kevin Mitnick helped popularize the term 'social engineering' in the '90s, although the idea and many of the techniques have been around as long as there have been scam artists of any sort. (Watch the video to see social-engineering expert Chris Nickerson size up one building's perimeter security)
How is my company at risk?
Social engineering has proven to be a very successful way for a criminal to "get inside" your organization. In the example given above, once a social engineer has a trusted employee's password, he can simply log in and snoop around for sensitive data. Another try might be to scam someone out of an access card or code in order to physically get inside a facility, whether to access data, steal assets, or even to harm people.
Chris Nickerson, founder of Lares, a Colorado-based security consultancy, conducts 'red team testing' for clients using social engineering techniques to see where a company is vulnerable. Nickerson detailed for CSO how easy it is to get inside a building without question.
In one penetration test, Nickerson used current events, public information available on social network sites, and a $4 Cisco shirt he purchased at a thrift store to prepare for his illegal entry. The shirt helped him convince building reception and other employees that he was a Cisco employee on a technical support visit. Once inside, he was able to give his other team members illegal entry as well. He also managed to drop several malware-laden USBs and hack into the company's network, all within sight of other employees. Read Anatomy of a Hack to follow Nickerson through this exercise.
In What it's like to steal someone's identity professional pen tester Chris Roberts, founder of One World Labs, says he too often meets people who assume they have nothing worth stealing.
"So many people look at themselves or the companies they work for and think, 'Why would somebody want something from me? I don't have any money or anything anyone would want,'?" he said. "While you may not, if I can assume your identity, you can pay my bills. Or I can commit crimes in your name. I always try to get people to understand that no matter who the heck you are, or who you represent, you have a value to a criminal."
Sneaky stuff. Give me some specific examples of what social engineers say or do.
Criminals will often take weeks and months getting to know a place before even coming in the door or making a phone call. Their preparation might include finding a company phone list or org chart and researching employees on social networking sites like LinkedIn or Facebook.
In the case of Roberts, he was asked to conduct a pen test for a client who was a high-net-worth individual to see how easy it would be to steal from him. He used a basic internet search to find an email address for the individual. From there, it snowballed.
Useful Books on Social Engineering!
By Hadnagy and Wilson (Wiley, Dec 2010)
"This book covers, in detail, the world's first framework for social engineering."
By Johnny Long et al (Syngress 2008)
"Whether breaking into buildings or slipping past industrial-grade firewalls, my goal has always been the same: extract the informational secrets using any means necessary."
"We searched for the e-mail address online were able to find a telephone number because he had posted in a public forum using both," said Roberts. "On this forum, he was looking for concert tickets and had posted his telephone number on there to be contacted about buying tickets from a potential seller."
The phone number turned out to be an office number and Roberts called pretending to be a publicist. From there he was able to obtain a personal cell phone number, a home address, and, eventually, mortage information. The point being from one small bit of information, a social engineering can compile an enitre profile on a target and seem convincing. By the time Roberts was done with his pen test, he knew where the person's kids went to school and even was able to pull a Bluetooth signal from his residence.
Once a social engineer is ready to strike, knowing the right thing to say, knowing whom to ask for, and having confidence are often all it takes for an unauthorized person to gain access to a facility or sensitive data, according to Nickerson.
The goal is always to gain the trust of one or more of your employees. In Mind Games: How Social Engineers Win Your Confidence Brian Bushwood, host of the Internet video series Scam School, describes some of the tricks scam artists use to gain that trust, which can vary depending on the communication medium:
-- On the phone:
A social engineer might call and pretend to be a fellow employee or a trusted outside authority (such as law enforcement or an auditor).
According to Sal Lifrieri, a 20-year veteran of the New York City Police Department who now educates companies on social engineering tactics through an organization called Protective Operations, the criminal tries to make the person feel comfortable with familiarity. They might learn the corporate lingo so the person on the other end thinks they are an insider. Another successful technique involves recording the "hold" music a company uses when callers are left waiting on the phone. See more such tricks in Social Engineering: Eight Common Tactics.
-- In the office:
"Can you hold the door for me? I don't have my key/access card on me." How often have you heard that in your building? While the person asking may not seem suspicious, this is a very common tactic used by social engineers.
In the same exercise where Nickerson used his thrift-shop shirt to get into a building, he had a team member wait outside near the smoking area where employees often went for breaks. Assuming this person was simply a fellow-office-smoking mate, real employees let him in the back door with out question. "A cigarette is a social engineer's best friend," said Nickerson. He also points out other places where social engineers can get in easily in 5 Security Holes at the Office.
This kind of thing goes on all the time, according to Nickerson. The tactic is als o known as tailgating. Many people just don't ask others to prove they have permission to be there. But even in places where badges or other proof is required to roam the halls, fakery is easy, he said.
"I usually use some high-end photography to print up badges to really look like I am supposed to be in that environment. But they often don't even get checked. I've even worn a badge that said right on it 'Kick me out' and I still was not questioned."
Social networking sites have opened a whole new door for social engineering scams, according to Graham Cluley, senior technology consultant with U.K.-based security firm Sophos. One of the latest involves the criminal posing as a Facebook "friend." But one can never be certain the person they are talking to on Facebook is actually the real person, he noted. Criminals are stealing passwords, hacking accounts and posing as friends for financial gain.
One popular tactic used recently involved scammers hacking into Facebook accounts and sending a message on Facebook claiming to be stuck in a foreign city and they say they need money.
"The claim is often that they were robbed while traveling and the person asks the Facebook friend to wire money so everything can be fixed," said Cluley.
"If a person has chosen a bad password, or had it stolen through malware, it is easy for a con to wear that cloak of trustability," he said. "Once you have access to a person's account, you can see who their spouse is, where they went on holiday the last time. It is easy to pretend to be someone you are not."
See 9 Dirty Tricks: Social Engineers Favorite Pick-up Lines for more examples.
Social engineers also take advantage of current events and holidays to lure victims. In Cyber Monday: 3 online shopping scams and 7 Scroogeworthy scams for the holidays security experts warn that social engineers often take advantage of holiday shopping trends by posioning search results and planting bad links in sites. They might also go as far as to set up a fake charity in the hope of gaining some cash from a Christmas donation.
Why do people fall for social engineering techniques?
People are fooled every day by these cons because they haven't been adequately warned about social engineers. As CSO blogger Tom Olzak points out, human behavior is always the weakest link in any security program. And who can blame them? Without the proper education, most people won't recognize a social engineer's tricks because they are often very sophisticated.
Social engineers use a number of psychological tactics on unsuspecting victims. As Bushwood outlines in Mind Games, successful social engineers are confident and in control of the conversation. They simply act like they belong in a facility, even if they should not be, and their confidence and body posture puts others at ease.
This is your brain on social engineering
Brian Brushwood is really good at tricking people. So good he founded a website called "Scam School".
Brushwood understands how social engineers mislead people. Four basic principles:
- They project confidence. Instead of sneaking around, they proactively approach people and draw attention to themselves.
- They give you something. Even a small favor creates trust and a perception of indebtedness.
- They use humor. It's endearing and disarming.
- They make a request and offer a reason. Psych 101 research shows people are likely to respond to any reasoned request.
Read the details in Mind games: How social engineers win your confidence
"People running concert security often aren't even looking for badges," said Brushwood. "They are looking for posture. They can always tell who is a fan trying to sneak back and catch a glimpse of the star and who is working the event because they seem like they belong there."
Social engineers will also use humor and compliments in a conversation. They may even give a small gift to a gate-keeping employee, like a receptionist, to curry favor for the future. These are often successful ways to gain a person's trust, said Bushwood, because 'liking' and 'feeling the need to reciprocate' are both fixed-action patterns that humans naturally employ under the right circumstances.
Online, many social engineering scams are taking advantage of both human fear and curiosity. Links that ask "Have you seen this video of you?' are impossible to resist if you aren't aware it is simply a social engineer looking to trap you into clicking on a bad link.
Successful phishing attacks often warn that "Your bank account has been breached! Click here to log in and verify your account." Or "You have not paid for the item you recently won on eBay. Please click here to pay." This ploy plays to a person's concerns about negative impact on their eBay score. | <urn:uuid:0ab3abfc-d15f-4ee7-8450-cbaa2fbc3ba9> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2124681/leadership-management/security-awareness-social-engineering-the-basics.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00323-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968542 | 2,733 | 2.75 | 3 |
The EU cookie law is a piece of privacy legislation that was originally adopted by all EU countries on May 26th 2011. The UK was given one year to comply with the EU directive after it updated its Privacy and Electronic Communications Regulations, which brought the EU directive in to UK law.
The cookie law requires websites to gain consent from visitors to store or receive any information on a computer or any other web connected devices (e.g. smartphone or tablet). The cookie law has been designed to protect online privacy of customers by making them aware, and giving them a choice, about the amount of information collected by websites. Each EU member has its own approach to the law; however the basic requirements of the directive remain the same.
The Information Commissioners Office (ICO) is responsible for ensuring that organisations comply with the cookie law. The ICO has issue two sets of guidelines, so far, with the most recent reminding those concerned how the law ‘will not go away.’
After May 26th if a business is not compliant, or is not visibly working towards compliance, it will run the risk of enforcement action and a possible fine of up to half a million pounds.
Read news and tips on how to comply with the EU cookie law.
Table of contents:
What is a cookie?
A cookie is a type of information that a website puts on your hard disk, in order to remember something about you at a later time. Cookies tends to record your preferences when using a particular site and are commonly used to rotate banner adverts, so the user receives different adverts based on previous website activity.
News and guidance on EU cookie law and legislation
PECR amendments made: Tighter rules on cookies laws
Starting 26 May organisations will need to request permission from website visitors, before issuing cookies.
ICO issues guidance and warnings to nudge website owners over UK cookie law
The ICO has had to issue yet more guidance and warnings to resistant website owners, as it continues to stands by its UK cookie legislation decision.
ICO cookie law guidelines: A cautious welcome from industry experts
Guidelines on how to comply with the new EU cookie laws have been met with caution by industry and legal experts.
UK law aligned with new EU cookie legislation
From mid-May organisations will need user permission before they plant cookies on their machines. The EU law is being enforced by the UK.
UK digital economy threatened by EU cookie privacy law
According to the Internet Advertising Bureau (IAB) the new EU e-privacy cookie legislation could be damaging for the UK’s digital economy.
Online businesses could leave Holland if EU cookies law comes in to play, warns web publishers
Online businesses could be forced to exit Holland if the country’s parliament chooses to adopt the strict new EU cookie directive.
Tips and advice on EU cookie law and legislation
PECR regulations: How to audit cookies on your site
Learn how to audit the cookies on your website, if you are concerned about the new PECR regulations.
Information Commissioner’s Office released practical guidance on EU cookie law
To support companies in complying with the new EU cookie law, the ICO has released a set of practical guides.
Law advice for UK website owners on EU cookies directive
Read advice from the ICO on how to comply with the new EU cookies directive.
How to cope with the new EU cookies law
Henrietta Neate looks at what the new law says and what you will need to do to, in regards to the new EU cookies law. | <urn:uuid:4baa2d8c-276c-4703-96ad-ebfa8a34862b> | CC-MAIN-2017-04 | http://www.computerweekly.com/guides/How-to-comply-with-the-EU-cookie-law | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00259-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947029 | 721 | 3.15625 | 3 |
Peer-to-Peer Botnets for Beginners
With all the hype about the ZeroAccess take-down, i decided it might be a nice idea to explain how peer to peer botnets work and how the are usually taken down.
|A basic example of a tradition botnet|
With tradition botnets (Be it HTTP, IRC or some other protocol), the structure remains the same. The bots all connect to one or more servers through one or more domains. Although the network structure can differ from very basic to very complex, the botnet can easily be disabled with enough cooperation.
seizing control of an active domain or sever associated with the botnet can usually be all that is needed in order to disband it, however if for some reason it isn’t possible to send commands to the bots, a different approach is needed. Attempting to shut down the control sever is usually a waste of time as it would not take long for the botmaster to set up another and redirect the domain to it, which only leaves the option of going after the domain.
A botnet that isn’t run by beginners will likely use multiple domains, if a single domain is shut down, the bots will connect to the next. To take down such a botnet: it would be required for researchers to either suspend all domains associated with the botnet (in a time frame that doesn’t allow the botmaster to update the bots with new domains), or to seize the domain the botnet is currently using and point it to a functional server (known as a sinkhole), designed to keep the bots away from the legitimate control server and out of the botmaster’s reach.
Peer to Peer
Peer to Peer (P2P) botnets try to solve the problem of security researchers and authorities targeting domains or servers, by creating a decentralized network. The idea of P2P is that all the bots connect & communicate with to each-other in order to remove the need for a centralized server, however it’s not as straight forward as that.
CommandsIf bots are communicating with each-other, then the botmaster needs to make sure only he can command the bots, this is usually done using digital signing. Signing is performed by using asymmetric encryption, a special type of encryption that required two keys (public and private). if one key is used to encrypt a message, it can only be decrypted with the other key. If the botmaster keep one key secret (private key) and embed the other key in the bot (public key), he can use his key to encrypt commands and then the bots can decrypt them using the public key: without the botmaster’s private key, no one can encrypt the commands.
Most people’s idea of a peer to peer botnet is similar to Figure 1, the bots all connect to each-other via IP address, forwarding commands to each-other, removing the need for a central server or domain, this representation however is incorrect.
Computers that are behind NAT, Firewalls, or use a proxy server to access the internet: cannot accept incoming connection, but can make outgoing connection. This is a bit of a problem as it would prevent the majority of bots being connected to by other bots. In traditional botnets, this obviously isn’t a problem as the bots connect to the server, so a peer to peer network still requires servers in a way.
Bots that are capable of accepting incoming connections (not behind Proxy / NAT / Firewall) act as servers (usually referred to as nodes or peers), the bots that are not capable of accepting connections (usually referred to as workers) will then connect to one or more nodes in order to receive commands (Figure 2). Although the nodes are technically servers, they are used in a way that prevents take down, that is: the workers are distributed between many nodes, allowing them to shift to another node if one is taken down. P2P botnets only work if there are enough nodes that it is impractical to take them all down, the bad news is because the nodes are legitimate computers, they can’t simply be seized like a server would be.
Each node maintains a list of IP addresses of other nodes which it shares with the workers, the workers then store the lists, allowing them to switch nodes if the current one were to die. At this stage the botnet would just be many small groups of bots connected to many different nodes, which would be impossible to command. For commands to circulate the entire network, either: The bots will connect to multiple nodes and pass any commands received to the other nodes; The nodes connect to other nodes and pass commands between themselves; or a combination of the two.
In order for a bot to join the network: it would need the IP address of at least one node, this is where bootstrapping comes in. The bot is hard-coded with a list of bootstrap servers, which it connects to when it is first run on the infected computer. the job of a bootstrap server is to maintain a huge list of node IP addresses, providing new bots with a smaller list of node IPs (introducing it to the network). Generally bootstrap servers provide some sort of signing, which prevents them from being hijacked by security researchers and used to give new bots invalid node IPs.
Obviously the bootstrap servers are a central points, like with traditional botnets, they could be taken down, however this isn’t a huge issue. If all of the bootstrap servers were to be seized at once, it would not effect the bots that are already on the botnet, however it would prevent new infections from joining. The botmaster can simply cease infecting new systems until they can set up new bootstrap servers,this would be only a temporary hold back so it is fairly pointless to attack the bootstrap system.
Dismantling the botnet
Attacking the bootstrap system only temporarily prevents new bots joining the network, digitally signed commands prevent anyone other than the botmaster commanding the bots, and there are far too many nodes to take down at once, so what can be done?
Nearly all peer to peer botnets is existence have a vulnerability in the peer sharing mechanism. As explained earlier, the nodes are required to maintain and share a list of other nodes with the workers, to distribute the workers among the vast number of nodes. It would be incredibly time consuming or even impossible for the botmaster to manually provide each node with a list of other nodes, so the nodes do it automatically. When a new bot is identified as being capable of accepting connections, the node it is connected to adds it to the node list and shares it with the other nodes.
So what if you were to introduce a malicious computer to the network, one that would be identified as capable of becoming a node, the from there you provided the workers and other nodes with a list of invalid ips? Probably not a lot. It’s likely that the nodes would verify each new node IP address to make sure it’s real, but with that in mind there’s another way!
Security researchers could introduce many malicious nodes to the network, but instead of providing workers and other nodes with false IP addresses, they would only share a list of other malicious nodes. With enough resources, the malicious nodes can become a significant part of the botnet and separate the workers from the legitimate nodes. By only issuing the IP addresses of other malicious nodes, it increases the chance that the workers will only be aware of malicious nodes and significantly decreases the chance of them rejoining the network. At a given date the malicious nodes can stop forwarding commands from legitimate nodes: leaving all the workers which are connected to malicious nodes unable to receive commands, and leaving the botmaster no time to react. Such an attack would be unlikely to separate all the workers from the network, but could cripple a significant part of the botnet. Malicious nodes can be left running to commandeer more bots and attempt to keep hold of any bots which may have stored IP addresses of legitimate nodes. | <urn:uuid:bc919f9a-fe7b-4208-adbe-cf19bc65e04c> | CC-MAIN-2017-04 | https://www.malwaretech.com/2013/12/peer-to-peer-botnets-for-beginners.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00379-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947135 | 1,669 | 2.890625 | 3 |
Overview of secure network administration and its principles
In this article, an overview of secure network administration and its principles is given from an informative point of view. To start, one must be familiar with the fundamental terms of the topic, beginning with the word 'network', around which the whole subject revolves. A network here means a computer network: a telecommunications network that enables computers to exchange data over data connections. To ensure the smooth flow of that data, the system has to be administered to keep the network under control — keeping track of the resources in the network and the way they are allotted. Certain principles are maintained and practiced to get this job done, and those principles are analysed below.
Rule-based management
A rule-based management system comprises a collection of 'if-then' statements: rules that state which action to take when a given condition is asserted. In the field of software development, a rule-based system can be applied to design software that takes the place of human experts in rendering a solution to a problem; such a system is also known as an expert system. Rule-based systems are also practiced in artificial intelligence (AI) programs and systems.
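As a toy illustration of the 'if-then' idea (the rule names and thresholds here are invented for the example, not taken from any real product), such an engine can be sketched in a few lines:

```python
# Minimal rule-based engine: each rule pairs a condition (the "if")
# with an action (the "then"); rules are evaluated against known facts.
def evaluate(rules, facts):
    """Return the actions of every rule whose condition holds for the facts."""
    fired = []
    for condition, action in rules:
        if condition(facts):          # the "if" part
            fired.append(action)      # the "then" part
    return fired

# Two hypothetical security-policy rules.
rules = [
    (lambda f: f.get("failed_logins", 0) > 3, "lock_account"),
    (lambda f: f.get("port") == 23,           "block_telnet"),
]

actions = evaluate(rules, {"failed_logins": 5, "port": 22})
# only the first rule fires -> ["lock_account"]
```

Because each rule is just a condition/action pair, new policies can be added without changing the engine itself — which is what makes rule-based systems attractive for administration.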
Firewall
A firewall strengthens a computer's defences by erecting a barrier against network attacks, filtering traffic according to a set of rules. It is arguably the single most important protection to have in place on any networked computer.
VLAN management
In the world of switching, a VLAN's logical interface is known as a Switched Virtual Interface (SVI). While configuring a switch, these interfaces appear as VLAN interfaces. Just like a Fast Ethernet interface, a VLAN interface can be assigned an IP address, a bridge group, an interface description, and even a quality-of-service policy. A VLAN interface allows layer 2 devices to communicate with layer 3 devices. Multi-layer switches use VLAN interfaces to provide multi-layer routing functions on a single switch — essentially, the switch acts as its own 'router on a stick'. In a multi-layer switched network, many switches use VLAN interfaces as default gateways, which personal computers and other host machines on the network use to communicate with other IP networks.
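A minimal sketch of the default-gateway role described above, using Python's standard `ipaddress` module (the VLAN numbers and subnets are purely illustrative, not taken from any real configuration):

```python
import ipaddress

# Hypothetical SVIs on a multi-layer switch: each VLAN interface carries
# an IP address that hosts in that VLAN use as their default gateway.
svis = {
    10: ipaddress.ip_interface("192.168.10.1/24"),  # interface VLAN 10
    20: ipaddress.ip_interface("192.168.20.1/24"),  # interface VLAN 20
}

def gateway_for(host_ip):
    """Find which SVI subnet a host belongs to, i.e. its default gateway."""
    addr = ipaddress.ip_address(host_ip)
    for vlan, iface in svis.items():
        if addr in iface.network:
            return vlan, str(iface.ip)
    return None  # host is not in any locally routed VLAN

# A host in 192.168.20.0/24 uses the VLAN 20 SVI as its gateway.
vlan, gw = gateway_for("192.168.20.37")
```

This mirrors how a multi-layer switch routes between VLANs: each SVI is simultaneously an interface on the switch and the gateway address for its subnet.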
Secure router configuration
There are some simple steps to follow to secure a router's configuration:
- Change the default username and password on the router manufacturer's administration pages, to prevent attackers logging in with well-known credentials.
- Change the default SSID (the network name used by access points and routers) when configuring the wireless security of the network.
- Enable physical address (MAC address) filtering.
- Disable the SSID broadcast feature of the network.
- Do not enable the option to auto-connect to open Wi-Fi networks.
- Assign a static IP address to each device connected to the network.
- Enable the firewall on the router itself and on every computer connected to it.
- Position the router or access point carefully, keeping its range of reachability in mind.
- Consider turning off the network devices during long periods of non-use.
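The hardening steps above can be expressed as a simple configuration audit. The field names and default values below are hypothetical — real routers expose these settings in vendor-specific ways — but the check logic follows the list directly:

```python
# Well-known factory defaults that should never survive into production
# (illustrative values only).
DEFAULTS = {"username": "admin", "password": "admin", "ssid": "linksys"}

def audit(config):
    """Return the hardening steps a (hypothetical) router config still needs."""
    findings = []
    if (config.get("username") == DEFAULTS["username"]
            or config.get("password") == DEFAULTS["password"]):
        findings.append("change default credentials")
    if config.get("ssid") == DEFAULTS["ssid"]:
        findings.append("change default SSID")
    if config.get("ssid_broadcast", True):          # broadcast is on by default
        findings.append("disable SSID broadcast")
    if not config.get("mac_filtering", False):
        findings.append("enable MAC address filtering")
    if not config.get("firewall", False):
        findings.append("enable the firewall")
    return findings

issues = audit({"username": "netops", "password": "S3cure!",
                "ssid": "office-5g", "ssid_broadcast": False,
                "mac_filtering": True, "firewall": True})
# a fully hardened config yields no findings -> []
```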
Access control lists
An access control list (ACL) is a list of access control entries (ACEs). Each ACE in an ACL identifies a trustee and specifies the access rights allowed, denied or audited for that trustee. The security descriptor for a securable object can contain two types of ACLs: a DACL and a SACL. A discretionary access control list (DACL) identifies the trustees that are allowed or denied access to a securable object. When a process attempts to access a securable object, the system scans the DACL's ACEs in order to determine whether to grant access. A system access control list (SACL) enables administrators to log attempts to access a secured object; each ACE in a SACL specifies which access attempts should generate a record in the security event log.
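The scan described above can be modeled in a few lines. This is a deliberately simplified sketch of DACL evaluation — the real Windows algorithm also handles group SIDs, inheritance and generic-right mapping, none of which appear here:

```python
from dataclasses import dataclass

@dataclass
class ACE:
    trustee: str
    ace_type: str        # "deny" or "allow"
    rights: frozenset    # e.g. frozenset({"read", "write"})

def check_access(dacl, trustee, requested):
    # ACEs are scanned in order: a deny ACE covering any still-needed
    # right blocks the request; allow ACEs grant rights until every
    # requested right is satisfied. No match at all means no access.
    remaining = set(requested)
    for ace in dacl:
        if ace.trustee != trustee:
            continue
        if ace.ace_type == "deny" and remaining & ace.rights:
            return False
        if ace.ace_type == "allow":
            remaining -= ace.rights
        if not remaining:
            return True
    return not remaining
```

Because the first matching deny wins, ACE ordering matters: placing a deny ACE ahead of an allow ACE for the same trustee blocks the right even though a later ACE would grant it.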
This refers to the security service at ports for maritime traffic. Guards are usually posted at ports, both domestic and international, and keep watch on the coastline along both sides of a port. They also inspect passengers to screen out terrorist suspects and protect the nation from attack; cargo is inspected at the coastline as well. These services reduce a nation's vulnerability to terrorist attacks and so contribute a great deal to national security. This is a vital and essential aspect of security for the many nations that have a coastal border, some of which are major players with a large share of world trade.
This is a mechanism for network port security. It governs Wi-Fi or WLAN access for systems such as those used by the coast guard, which can then reach the entire network of mobile and computer peripherals. Three parties take part in this technology: a supplicant, an authentication server and an authenticator. The authentication server is a network device that decides whether credentials are valid, and the authenticator acts as the security guard at the port. A supplicant is not allowed through the gate until valid credentials are presented to the authentication server — much as a traveler is not admitted without a valid visa. This technology is used extensively at airports and marine ports to support the security systems there. It not only helps verify a user's identity; by its nature it can also check credentials precisely — any alteration, or a credential that is out of date or expired, is easily detected. That makes it very useful for protecting a nation from domestic or international threats.
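The three-party model can be sketched conceptually. This is not an implementation of the real EAP/RADIUS exchange used by 802.1X-style port security — just a toy model of the roles, with invented names:

```python
class AuthServer:
    """Holds the set of credentials it will accept."""
    def __init__(self, valid_credentials):
        self.valid = set(valid_credentials)

    def validate(self, credential):
        # Expired, altered, or unknown credentials simply fail the lookup
        return credential in self.valid

class Authenticator:
    """The 'security guard' at the port: it never validates credentials
    itself, it only relays them to the authentication server and opens
    the port (or keeps it closed) based on the verdict."""
    def __init__(self, auth_server):
        self.auth_server = auth_server
        self.port_open = False

    def supplicant_connects(self, credential):
        self.port_open = self.auth_server.validate(credential)
        return self.port_open
```

The separation is the point of the design: the device at the edge enforces access, but the decision is centralized on the authentication server.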
Sometimes data traffic at security networks and checkpoints reaches such unusually high volume that it signals a malfunction at some other end. This unusual surge of traffic is referred to as a flood, and the technology used to control it is known as a flood guard. Several flood guard vendors around the globe provide information technology support for national security. Flood guards are generally used to monitor network traffic and identify overflow conditions: SYN, ping or port floods are symptoms of such overwhelming conditions on a network. By reducing the possibility of unusual entry into the system, denial-of-service (DoS) attacks can be mitigated. These attacks generally target heavily used servers, whether the traffic is inbound or outbound. Protection against this threat reduces illegal entry by hackers into the network and thus the chance of national data loss. Confidential data in particular remains a target of terrorists, and this kind of system helps secure a nation against such attacks.
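A flood guard's core job — flagging a source that exceeds a sane packet rate — can be sketched with a sliding window. This is a toy model of the idea, not how any particular vendor implements it:

```python
import time
from collections import defaultdict, deque

class FloodGuard:
    """Toy rate-based flood detector: flags a source IP once it sends
    more than `threshold` packets within `window` seconds."""
    def __init__(self, threshold=100, window=1.0):
        self.threshold = threshold
        self.window = window
        self.hits = defaultdict(deque)   # src_ip -> recent timestamps

    def packet_seen(self, src_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[src_ip]
        q.append(now)
        # Drop timestamps that fell out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold   # True => looks like a flood
```

A real SYN flood guard would additionally track half-open connections per source rather than raw packet counts, but the sliding-window threshold is the same basic shape.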
A loop is fundamentally a network design problem. When switch ports are interconnected so that the same node is traversed again and again, the network forms a loop. Loop formation jams the ports, so the server either slows down, crashes, or produces multiple copies of a single command. A football analogy helps: when the ball is at the feet of an opponent, a group of players forms a loop around it to win the ball back. If that group grows to half the team, the game practically stops, or ends in a free kick or penalty for the opponents, and the ball is ultimately lost — the game requires proper coordination among the players. There are two main technologies to prevent loop formation in a network. The first is the spanning tree protocol, which works on a per-VLAN rather than a per-port basis; the second is Cisco's proprietary loop guard feature. Strictly speaking, loop protection is not only about preventing loops from forming — it is also the process of managing the network and containing the damage a loop has caused.
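What a spanning tree ultimately achieves — a loop-free subset of links — can be illustrated with a union-find sketch. Real STP works very differently (root-bridge election, BPDU exchange, port blocking); this only shows the loop-avoidance idea:

```python
def find(parent, x):
    # Follow parent pointers to the set's root, compressing the path.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def loop_free_links(switches, links):
    # Keep a link only if it does not close a loop between switches
    # that are already connected; "block" it otherwise.
    parent = {s: s for s in switches}
    kept, blocked = [], []
    for a, b in links:
        ra, rb = find(parent, a), find(parent, b)
        if ra == rb:
            blocked.append((a, b))   # this link would form a loop
        else:
            parent[ra] = rb
            kept.append((a, b))
    return kept, blocked
```

For three switches wired in a triangle, one of the three links must be blocked to break the loop — exactly the situation STP resolves by putting a port into the blocking state.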
Implicit deny is a technique that serves as a second phase of a security check. After the preliminary screening at a security checkpoint, suspects are denied access and turned back at that checkpoint, while everyone else is sent forward to a secondary checkpoint. The secondary checkpoint is where the implicit deny is applied: anything suspect is set aside, and only what is known to be acceptable is allowed to pass. Once the process is complete, the system can be temporarily reconfigured to allow some or all of the held items through after manual verification.
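The same principle governs firewall rule sets and ACLs: anything not explicitly permitted falls through to a final, unwritten deny. A minimal sketch (the rules here are invented for illustration):

```python
def evaluate(rules, packet):
    # Rules are checked top to bottom; the first match decides.
    # If no rule matches, the packet hits the implicit deny at the end.
    for predicate, action in rules:
        if predicate(packet):
            return action
    return "deny"

rules = [
    (lambda p: p["port"] == 443, "allow"),   # permit HTTPS
    (lambda p: p["port"] == 22,  "allow"),   # permit SSH
]
```

Note that no rule ever says "deny everything else" — the fall-through return is the implicit deny, which is why a rule set that forgets to permit something fails closed rather than open.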
Network segmentation means that not all users are given access to the main server. This security tool is used by security services to protect the nation, and it is also used in almost all large corporate environments. The network is divided up by user, viewer or staff role. Segmentation is sometimes used to split an international service among national sectors by channeling users into their country's own portal. This greatly reduces the load on the main server, which helps keep it stable. Common segmentation criteria include age, location, gender, price range and product category; e-commerce sites also use this technology to protect their servers from sudden crashes or damage. National security teams use it too — for example, to split inspection queues based on the size of luggage or cargo.
This is an authentication and auditing tool used in networks to identify which tools a specific user has run, or how the user has used the server. It is used extensively for security purposes at satellite facilities, on telephone and mobile networks, and even in forensic labs; recently it has also been adopted by national banks and military installations. It provides a detailed record of each user's actions on a server. It can also be used to lock a user out of a particular tool, or to lock down operations that user has performed, so the user can neither damage them nor regain access to them. Militaries have used such systems to deter the use of special weapons without the permission of the government or national security boards.
Unified Threat Management
Unified threat management (UTM) has become a popular threat management approach in networking. It is a multipurpose tool that protects a system against many types of threats: a single device that secures all the networks, and all the segments of a network, that it serves. It acts as an antivirus, an anti-malware engine and a multipurpose firewall, and it also secures VPN and even APN settings. It is therefore a solid, high-end technology for fighting hackers and keeping systems clean.
This is a short list of the popular tools that provide security for computer networks, servers and even satellite services. Information technology has thus extended its reach to protect the nation from almost all external as well as internal threats. Above all, the teams that operate these technologies are well trained in the functioning and control of the devices, and they play vital roles in the national security service. This article has surveyed the security features and technologies that are used extensively for national security.
The Computer History Museum in Mountain View, Calif., this week said it had created a Cisco Archive that promises to document and preserve the networking giant’s impact on the industry and Internet.
+More on Network World: What network technology is going to shake up your WAN?+
In a blog post, Paula Jabloner, the first director of the newly established Cisco Archive, wrote about one of the more significant events the archive will preserve: “It was 1989. Kirk Lougheed of Cisco and Yakov Rekhter of IBM were having lunch in a meeting hall cafeteria at an Internet Engineering Task Force (IETF) conference. They wrote a new routing protocol that became RFC (Request for Comment) 1105, the Border Gateway Protocol (BGP), known to many as the “Two Napkin Protocol” — in reference to the napkins they used to capture their thoughts.”
“BGP is still integral to an Internet that has grown from 80 thousand hosts in 1989 to over one billion hosts today. BGP and the World Wide Web share a 25th birthday thanks to Tim Berners-Lee writing the original Web proposal in 1989.”
The museum naturally encourages “Cisco buffs and hoarders of historical materials to contact the archive team. We depend on your contributions to preserve Cisco's tangible legacy.”
Here’s a little more history if you are interested:
Check out these other hot stories: | <urn:uuid:d15d6293-300f-412a-aee4-5d76d77e87df> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2893796/cisco-subnet/cisco-gets-computer-history-museum-haven.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00407-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946644 | 298 | 2.6875 | 3 |
For a large and diverse state like California, it's remarkable that efforts to coordinate and develop an integrated statewide GIS still fall mainly to a volunteer-staffed entity that operates with no state mandate: the California GIS Council.
California lags behind many other states in what is now considered the effective state model for GIS coordination issued by the National States Geographic Information Council (NSGIC).
In a survey published in May 2004 by the NSGIC, California ranked behind 32 states in meeting the nine facets of effective GIS management, and probably would have ranked even lower without the voluntary GIS Council.
California's current budget problems and reorganization efforts have moved other priorities to the fore. As the NSGIC argued in its survey report, however, states must establish strong coordination efforts to minimize costs and leverage GIS efforts so all levels of government benefit.
"We know a coordinated effort on GIS saves money," said Donna Hansen, deputy city manager of Modesto, Calif., and co-chair of the California GIS Council. "It will save local governments a lot of money, and it will save the state a lot of money."
Fits and Starts
Efforts to develop statewide GIS policies and infrastructure date back more than 10 years. The council's genesis lies in the Governor's Geographic Information Task Force, convened by former Gov. Pete Wilson's Administration in 1993 as a result of the growing awareness that GIS could be a powerful analytic tool for all government levels.
The task force recommended forming a council to coordinate GIS efforts in California, as well as the creation of a Geographic Data Catalog and the posting of a state GIS officer to facilitate continuous coordination efforts.
Subsequently the state established the California Environmental Resources Evaluation System (CERES) within the California Resources Agency in 1995. CERES fulfilled one task force recommendation by developing and making available over the Internet the California Environmental Information Catalog (CEIC) -- a catalog of environmental data including spatial holdings.
Meanwhile, task force participants established the California Geographic Information Association in 1994 -- an organization instrumental in helping sustain existing GIS coordination throughout California. It received a grant from the Federal Geographic Data Committee in 1995 and partnered with the Resources Agency to enhance the CEIC.
However, it was not until 2000, through a memorandum of understanding (MOU) among the Resources Agency, California Environmental Protection Agency (CalEPA) and the now-disbanded Department of Information Technology (DOIT), that a council was formed to coordinate and engineer cost sharing for GIS data development and maintenance.
"The idea of a council was first developed when there was a Department of Information Technology and Elias Cortez was in charge there, and Gary Darling was in the position I'm in now," said John Ellison, agency information technology officer at the Resources Agency. "They said we should do this to bring the right parties to the table because GIS is a multigovernmental issue. It has to involve government representatives all the way from the local to the federal level, with the state playing a key role. It also needs to engage the private sector so we can start talking about common data sets and common needs, and an infrastructure for developing, maintaining and sharing these data across all those levels."
Unfortunately executive sponsorship of the council waned after the demise of DOIT, which followed California's costly IT failures.
"There was a lot of political fallout from that, and some of the collateral damage in that whole fiasco was a stumbling of the California GIS Council," Ellison said. "The council seemed to lose momentum."
After reassessing interest, the Resources Agency set out to revive the council and through a revised MOU brought in several new entities -- the California Department of Health Services; the California Business, Transportation and Housing Agency; the Governor's Office of Planning and Research; the Governor's Office of Emergency Services and the state CIO.
In March 2001, a workshop was convened to brainstorm and set the vision for the new GIS Council. State sponsors of the council, other members of the former council and various interested parties attended the workshop. The outcome was a revised charter and a restructured membership that was of a more manageable size and better suited to focus on policy.
The reconstituted California GIS Council (CGC) met for the first time on Aug. 13, 2003.
"On the council, we have representation from 17 regional coordinating groups around California," said Joe Concannon, senior planner at the Sacramento Area Council of Governments and region representative. "They are all arranged a little differently. Some are very formal, like the San Francisco Bay area. We are probably in the middle. Some are very informal, like the North Coast Users Group."
So far, the resurrected council has only met twice, and priorities for the council are still being hammered out. Ellison said the top priority, however, is to push the state to appoint a GIS officer to coordinate GIS efforts on an ongoing basis, just as many other states have. In fact, this is one point now included in the NSGIC's state model for GIS coordination.
"That was probably the single most important recommendation made by the original Geographic Information Task Force back in 1993 -- that the state should form the office of the GIS officer," said Ellison. "In other words, just as the state has a CIO, the state should have a GIO as well. Obviously that recommendation was never acted on, and that continues to be the primary objective and hope of California's GIS community."
Because imagery is so important and expensive, the council requested federal funding for imagery acquisition.
"The council is seeking a $4 million one-time cost, followed with an ongoing grant of approximately $200,000 a year until the program can be self-maintaining," said Ellison. "The idea is to bring enough money to the table so other folks who are already investing in imagery could do so in a collaborative fashion. They could take their existing investments and leverage them into a more communal acquisition process. Basically we are looking for a reasonably high-resolution satellite image for the entire state -- a one meter or better, multispectral, multicolor image."
Another priority for the council is the development of better GIS metadata -- data about data.
"To a lay person, it would be like a library's card catalog," said Richard Mader of the Southern California GIS Government Users Group, who is also on the council's metadata work group. "It gives you enough information about the data so you know where and what it is, and spatial data has certain other requirements because of the spatial accuracy and so forth."
The council is also seeking to promote better exchange of information on what local jurisdictions are planning, such as acquiring aerial photography and exploring the formation of partnerships to save money.
Through coordinated efforts, redundancies can be eliminated. Data can be collected once and used many times by different agencies and jurisdictions, and that is only the beginning of benefits and cost-savings, said the California GIS Council's Hansen.
"We also know GIS is absolutely essential to keeping our communities safe, and to more effectively manage large disasters, water issues, policing and fire issues," she said. "A lot of these maps are used by government to plan and improve a host of services in a community. Better management also can save money.
"But there is only so much a council with no funding can do. Some money is needed to do the job right, and we are not even talking about a lot of money compared to so many other appropriations." | <urn:uuid:339e9f09-553e-4aec-a99b-c024939f853a> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Piecing-It-Together.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00315-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959112 | 1,555 | 2.65625 | 3 |
In many malware cases, the infection method can be far more elaborate than the actual malware being installed. This is the case with Newscaster, a campaign believed to have Iranian origins and targeting US defense contractors, high-ranking military officials and government officials. With an infection vector this involved, and malware this simple, the campaign managed to avoid detection from the time it began in 2011.
Spreading the news
A report from iSIGHT Partners on the recently exposed attack dubbed “Newscaster” (documented here) highlights how social networks, combined with social engineering efforts, continue to be a highly successful attack vector. The level of effort, time and detail expended, combined with the profile of the victims, was significant.
Fake news site that was used in the attacks
The report details how senior targets in the military, diplomats, defense contractors and journalists all became victims of a well-engineered social network attack that leveraged a fictitious news website “NewsOnAir.org” utilizing fictitious reporting personas that interacted with the victims over LinkedIn, Facebook, Google+, and Twitter. It is believed that the core group of attackers are Iranian.
Potential Facebook site used in the attacks
The usual approach to social engineering attacks on social networks is to lure users into providing credentials or opening files/links intended to compromise their computers.
The attacks in this campaign came from attackers posing as legitimate personas who shared common interests or business goals. The convinced victim "connects" with the attacker, believing they know the person or share interests or goals, which then leads to a dialog with the attacker masquerading as a peer.
Once the connection is made, the attacker will send “spear phishing” emails or direct messages containing links to the victim through the chosen platform (Facebook, Google services, LinkedIn, Twitter).
Fake Google+ account with Typo
The Google+ account has 142 people and organizations in its circles, including several government agencies and politicians, which contributes to its 'clout'.
The victim is far more likely to click any links that have been directly shared with them especially if blended with other legitimate dialog related to the common interest.
The payload of these messages ultimately results in data loss or theft – typically delivered as spear phishing emails or instant message communication to deliver data stealing malware or user supplied credentials.
The website was made to look legitimate by providing news feeds from legitimate sites, but with the bylines of the articles posted changed to the fake personas used by the attackers to increase the legitimacy of any interaction with the victim.
The Cylance research team discovered 90 samples related to this campaign, 12 of those still without any detection after an extended period of time. Many of these samples leveraged Botnet / IRC techniques to control the victim PC.
The sample would likely have been delivered via a phishing email. The screenshot below shows the “Flash player” executing.
Let's take a deeper look into this sample so we can better understand the leverage gained from these infections. The samples fall into two primary groups, installer and bot.
The installer portion is the result of a file binder, which executes the bot and the bait at the same time. The file binder being used is named "SetupEx". The bait is typically some form of product installer or funny video. When we first execute this sample installer, we can see the bait application running.
The bait in this installer is a Flash video
The bot component is the core of the payload. Its operations are simple but effective. It does have methods implemented to avoid detection, but they are not advanced.
We can observe some of these methods statically. For instance, there are a large number of encoded strings in the binary. The encoding method is inverting the alphabet (for instance 'a' becomes 'z'). To save time, I developed a script to automate this decoding.
You will see strings such as the remote domain of the IRC server, the channel being used on the server, its password, and so on. This information is not stored in a rigid format, so you must still extract it manually from the script's output. Here is some sample output when decoding the mutex name.
bwall@research:~$ echo "zuLmvXlkbNfgvc" | python alphareverse.py
Example usage of decoding script
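The original alphareverse.py script was not published with this post; the following is an assumed reimplementation of the described scheme — invert the alphabet, preserve case, leave everything else alone — that reproduces the output shown above. (The original read from stdin via a pipe; this sketch takes its input as a command-line argument instead.)

```python
import sys

def alpha_reverse(text: str) -> str:
    """Invert the alphabet ('a' <-> 'z', 'b' <-> 'y', ...) preserving
    case; non-letters pass through unchanged. Applying it twice is the
    identity, so the same routine both encodes and decodes."""
    out = []
    for c in text:
        if "a" <= c <= "z":
            out.append(chr(ord("a") + ord("z") - ord(c)))
        elif "A" <= c <= "Z":
            out.append(chr(ord("A") + ord("Z") - ord(c)))
        else:
            out.append(c)
    return "".join(out)

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g.  python alphareverse.py zuLmvXlkbNfgvc  ->  afOneCopyMutex
    print(alpha_reverse(" ".join(sys.argv[1:])))
```

Because the cipher is its own inverse, the same function also shows what the malware author ran over the plaintext strings before embedding them.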
We can use some of the encoded strings to create an effective YARA rule (detects 85 of 90 samples).
strings:
    $needed01 = "PON\\HLUGDZIV\\Nrxilhlug\\Drmwldh\\XfiivmgEvihrlm\\Klorxrvh\\Vckolivi\\Ifm"
    $needed02 = "zuLmvXlkbNfgvc"
    $needed03 = "f:\\filebuildernew version\\builder c++\\setupex\\debug\\setupex.pdb" nocase
condition:
    1 of them
YARA rule for detecting Newscaster binaries
This bot stores its configuration in the current user's TEMP directory, creating a subdirectory named "System". It stores a mutated instance of itself in the ProgramData directory, creating a subdirectory named "Windows Update". It names this instance "isass.exe" but with a capital "I" to appear as if it were 'lsass.exe', a critical part of the Windows operating system.
All instances appear to use the same mutex, "afOneCopyMutex", which also happens to be part of the above YARA rule (albeit in its encoded state).
Disassembly showing mutex name being decrypted and used by CreateMutexA
For more information on the malware's non-interactive behavior, see the Malwr analysis here.
Command and Control
This bot uses IRC for its command and control protocol. It also appears that a custom IRC daemon is being used, as it supplies very little information back to connecting bots.
I set up an IRC server and used the name "AF" for my client. The sample being used only accepts certain commands from other users with names starting with "AF" or "AS_". The IRC channel is configurable, and may be different for different samples. The connection to the IRC server is delayed after the start of the bot.
Disassembly of the code section checking the name of a command's sender
The connection to the IRC server happens after attempting to disable the UAC in Windows, which would allow for easier privilege escalation.
From reverse engineering, we can see the accepted commands, although a few do not perform any operation. The "VER" command obtains version information from the bot.
The "HI" command invokes a polite response.
The "!CMD" command executes commands.
The "EXEC" command gets the path of the executing binary.
The "!DSF" command will upload a selected file to a select IP and port.
An instance of netcat was able to capture the uploaded data. Here is a truncated hexdump from the receiving server.
A similar command, "!UP", uploads the selected file to an HTTP server.
I also used a netcat server for this. Here is the truncated hexdump of the information uploaded.
The "KILL" command shuts the bot down.
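Pulling the observed behavior together, the bot's command loop amounts to a small dispatcher gated on the sender's nick. The responses below are paraphrased from the reversing notes, not byte-accurate captures, and the file-transfer commands are omitted:

```python
def handle_command(nick, command, state):
    # Commands are honored only when the sender's nick starts with
    # one of the controller prefixes observed in the disassembly.
    if not (nick.startswith("AF") or nick.startswith("AS_")):
        return None
    if command == "VER":
        return "version " + state["version"]
    if command == "HI":
        return "hi!"                      # the 'polite response'
    if command == "EXEC":
        return state["binary_path"]       # path of the executing binary
    if command == "KILL":
        state["running"] = False          # shut the bot down
        return None
    return None                           # !CMD, !DSF, !UP not modeled here
```

The nick-prefix gate is the bot's only authentication, which is typical of simple IRC botnets: anyone who can join the channel with the right nick prefix and password can drive the infected host.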
CylancePROTECT is able to run in both a blocking mode and a monitor mode. In this example we are running in monitor mode so we can gain more visibility into the threat. Running the application causes CylancePROTECT to discover two threats.
The detection of two files is due to the second dropped file. In the PROTECT console we are able to see two active threats on the client, and both files detected. CylancePROTECT does not rely on detonation analysis, but it is able to provide this data in the console to assist in forensic analysis.
Drilling down via the console, we can see that after initial execution, the dropped file is running.
Clicking on the “endend.exe” allows us to drill in further on the specific sample.
By clicking the Detailed Threat Data button, the CylancePROTECT Administrative console allows us to get further details on the threat: in this case, dropped files.
Detailed Data -> Network provides us a view into the hosts that the malware is attempting to communicate with.
The PROTECT console also allows us to see the File, Mutex and Registry keys that the sample attempts to generate or interact with during detonation, aligning to the research earlier in the blog post.
Using Cylance V, our detection and forensics tool, we can see that all 90 samples we were able to source are detected, and of that set, at least 12 samples still do not have solid AV industry detection.
The Newscaster campaign, while not technically advanced, was elaborate and extensive. It managed to go undetected from at least 2011, and some of the malware it produced continues to go undetected by signature-based detection. The mathematical detection provided by Cylance, even with no prior knowledge of this malware, had no issue in detecting 100% of these samples.
Sample distribution with relations computed by ssdeep | <urn:uuid:93042c37-b66a-45da-9110-d49765e9febb> | CC-MAIN-2017-04 | https://blog.cylance.com/a-study-in-bots-newscaster | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00131-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.754347 | 1,917 | 2.515625 | 3 |
Do Computers Really Get Tired?
Q: I’ve always heard that computers get slower when they’re left on for long periods of time, but I wonder if that’s technically true. Often in claiming this, people refer to “electron buildup.” Does this actually exist? Do computers really get fatigued in any dimension?
A: Electronic devices experience some sort of fatigue, but it usually happens after a lot longer than several days. In the IT world, this is called mean time between failures (MTBF) and it is used as a measure of how reliable a product is.
MTBF is usually given in units of hours; the higher the MTBF, the more reliable the product. Typical MTBF values for computer components vary between vendors but, on average, a CD-ROM drive is rated at about 15,000 hours, while hard drives are rated at about 500,000 hours (that's 57 years!). MTBF is a calculated average and should be treated as a prediction rather than a proven guarantee.
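The 57-year figure is just the hour rating converted, and it also shows why MTBF is a fleet statistic rather than a promise about any one unit — a quick sketch of both calculations:

```python
MTBF_HOURS = 500_000          # typical hard drive rating
HOURS_PER_YEAR = 24 * 365     # 8760

# A single drive's rating expressed in years
print(round(MTBF_HOURS / HOURS_PER_YEAR, 1))          # ~57.1

# Across a fleet, failures are still routine: the expected number of
# failures per year for N drives running continuously is
# N * hours_per_year / MTBF.
fleet = 1000
print(round(fleet * HOURS_PER_YEAR / MTBF_HOURS, 1))  # ~17.5
```

So a data center with 1,000 such drives should still plan on replacing a drive every few weeks, even though each individual drive is "rated for 57 years."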
If electronic devices were actually subject to fatigue, would a satellite be able to operate for several years? Probably not. The satellite manufacturing company knows that no one will be able to reset it every two weeks, so it designs and tests it to last.
But this is definitely not the issue here, since the concern is about several days, not months. The problem with personal computers is more related to misbehaving applications, heat and related issues.
One common contributing factor is memory consumption. If an application does not release the memory space that was allocated to it and keeps doing so over and over again, in a given span of time the computer’s RAM will be fully occupied and additional applications will be sent to the swap space or fail. This is commonly referred to as a memory leak. Another kind of misbehaving application can start a process every few minutes and not terminate the previous process it was using.
The framework that applications run in — the operating system — can also generate the same kind of CPU and memory issues, and Windows support teams know that a common solution or preventative measure is to restart a computer every week or so.
Another major factor is related to environmental issues, or one particular issue, which is temperature. Electronic devices are designed to operate in an optimal way by residing in a cold environment (which is usually too cold for humans), and if they aren’t kept cold enough, then things start to get weird. Computers that work for several days might accumulate excessive heat and might have problems getting rid of it without being turned off.
So what solutions exist for dealing with such problems? Reset your computer every three or four days. Don’t wait until it starts to act weird, because this will happen in the middle of a presentation or another inconvenient moment. Put a reminder in your calendar or just make it a habit to reset every now and then. By reset, you should not use the standard restart from Windows, you should shut down, wait five to 10 minutes and turn your PC back on (lunchtime might be a good opportunity for that).
There are software tools that can assist in keeping things under control on the memory and CPU utilization fronts. Memory managers are more common, but there are also some CPU managers out there. Google the keyword “memory manager” or “CPU manager” with your operating system version (XP, Vista, etc.) and you should find many appropriate applications.
Some operating systems are better at controlling malfunctioning applications than others. UNIX used to be the leader in that regard, especially the commercial server platforms such as Solaris, AIX and HP-UX. Server operating systems are considered better at running stably for long periods of time; they incorporate more security and protection for hardware resources than workstations do.
Avner Izhar, CCIE, CCVP, CCSI, is a consulting system engineer at World Wide Technology Inc., a leading systems integrator providing technology and supply chain solutions. He can be reached at editor (at) certmag (dot) com. | <urn:uuid:3db98de4-5f28-4a83-9157-199ed0737be0> | CC-MAIN-2017-04 | http://certmag.com/do-computers-really-get-tired/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00039-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953821 | 850 | 3.046875 | 3 |
NASA's newest telescope is giving scientists their clearest pictures yet of the sun's atmosphere, and in doing so could help mitigate the potentially devastating effects an extreme solar storm could have on our power and communications networks on Earth.
Launched a month ago, the Interface Region Imaging Spectrograph, or IRIS, on Thursday sent some of its first images of the sun back to Earth. The pictures should help scientists form a better understanding of the sun's weather, which is important because its influence on Earth goes well beyond providing sunlight and warmth.
An ever-changing pattern of instability on the sun's surface causes particles to be thrown outward, sometimes directly toward the Earth. These eruptions can take the form of solar flares, which cause the awe-inspiring northern lights, but can also cause the Earth's atmosphere to expand and increase the amount of drag on low-Earth-orbit satellites, such as those used for spying and GPS navigation, shortening their lifespan.
The most violent eruptions can have a much larger impact, including potentially knocking power grids offline and leaving millions without electricity. Such an eruption occurred in 1859, frying parts of the international telegraph system, which at the time was the main medium for long-distance communications.
If such an event occurred today, with electricity and Internet communications such a fundamental part of daily life, it's hard to even fully imagine the potential impact. A recent report from Lloyds of London suggested the damage from a violent eruption could leave 20 million people without power for as long as two years.
All solar weather travels through the lower solar atmosphere, and IRIS contains a powerful spectrograph that will focus on this region of the sun. Thus, scientists hope IRIS will give them a better understanding of these solar events and perhaps help them find a way to predict them.
"These beautiful images from IRIS are going to help us understand how the sun's lower atmosphere might power a host of events around the sun," Adrian Daw, mission scientist for IRIS at NASA's Goddard Space Flight Center, said in a statement. "Any time you look at something in more detail than has ever been seen before, it opens up new doors to understanding. There's always that potential element of surprise."
The Earth is vulnerable to solar weather because the charged particles the sun hurls toward it carry magnetic fields that interact with Earth's own.
Everyone experiences stress at work, but some jobs involve more overall stress than others. That's especially true in the tech industry, where high demand meets a shortage of talent, resulting in understaffed IT departments and a lack of support in up-and-coming areas such as big data, security and mobile development.
For some people, a stressful job is no issue, but for others it can be draining and even become a health issue. And science backs this up; numerous studies have suggested that stress at work can shorten your life span and cause negative health problems. According to the American Psychological Association (APA), "Along with its emotional toll, prolonged job-related stress can drastically affect your physical health. Constant preoccupation with job responsibilities often leads to erratic eating habits and not enough exercise, resulting in weight problems, high blood pressure and elevated cholesterol levels."
The APA also cites a loss of mental energy in addition to the health problems that can stem from a negative working environment or a stressful job. It can also perpetuate a negative and cynical attitude, leading to problems with depression, which can ultimately reduce your overall immunity.
CareerCast released a list of the most stressful and least stressful jobs for 2016 across every industry. For technology, eight jobs made the list of the most stressful in the industry. The study looked at 11 stress factors: amount of travel, growth potential, deadlines, working in the public eye, competitiveness, physical demands, environmental conditions, hazards encountered, one's own life at risk, another's life at risk, and meeting the public. Respondents rated each category on a scale of 1 to 10 to produce a "stress score" for each job.
We continue our focus on security this month by switching gears from “premises” security to securing the data that continually streams from connected devices in the Internet-of-Things. In fact, security (along with privacy – more on that in a future post) probably stands out as the most active point of discussion when it comes to building out an effective Internet-of-Things. Concerns are well-founded because, candidly, there’s a great deal at stake.
Much of this has to do with the impression that the IoT means hundreds of thousands more physical devices in a given enterprise will be connected to the Internet, with every single one of those devices representing a potential point of vulnerability. As an imaginative exercise, think of what could happen if, once driverless cars have become a bona fide component of everyday life, the connections to those cars were to land in the hands of a nefarious party. Highway mayhem could become an ultimate act of cyber-crime, with cars suddenly acting insanely at the hands of a perpetrator.
The security issue is so pervasive that BlackBerry, the one-time king of mobile, has designs on becoming a "once and future king" by providing security for non-phone devices. The matter is spurring innovation in areas where one could be forgiven for thinking innovation had run its course.
That being said, it may be constructive to first look at how the Internet of Things diverges from the "regular" Internet as we know it. Terming it an "Internet" is actually a bit of a misnomer, because it largely consists of wirelessly connected devices or sensors interacting in a client-server, or hub-and-spoke, model; the Internet analogy does not, and should not, apply to most real-world applications coming online today. The level of interconnectivity among devices is fairly low, given the dedicated point-to-point communication and point-to-point service delivery.
Which is to say, the IoT environment is much more closed in the first place than the literature would have us believe. There are architectural differences that go beyond how humans communicate over the Internet.
Moreover, IoT devices generally are not wired down to a network jack. Rather, they're mostly connected over the air via cellular and fixed wireless technologies. There is an important distinction to make here. Compared with fixed network resources such as WiFi, which is built primarily on a single set of protocols, cellular networks are varied, not only in underlying technology but also in the frequencies used. Cellular contains a certain obscurity that works to its advantage.
In addition, 3G cellular networks, for example, have five different sets of security features built into the architecture:
- Network Access Security, which provides identity confidentiality, user authentication, confidentiality, integrity and mobile equipment authentication.
- Network Domain Security, which allows the provider domain to securely exchange signaling data, and prevent attacks on the wired network.
- User Domain Security, which lets a device securely connect to mobile stations.
- Application Security, which lets applications in the user domain and the provider domain securely exchange messages.
- Visibility and Configurability Security, where users can freely find out which security features are available.
On top of these built-in mechanisms, data streams running over cellular are often subject to more stringent security processes such as encryption or SSL support, depending on the application. And sensitive markets such as energy or payment processing implement additional security overlays that go far beyond simple protection against end-point ingress, such as PCI and NIST. We’ve mentioned these before.
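As an illustration of the kind of encryption overlay mentioned above, here is a minimal Python sketch that builds a TLS client context such as an IoT gateway might use before opening a socket to its backend. The policy choices shown are my assumptions for the sketch, not any particular product's requirements:

```python
import ssl

def make_device_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context with certificate checking on,
    suitable for wrapping a device-to-backend data stream."""
    ctx = ssl.create_default_context()            # loads system CA roots
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    ctx.check_hostname = True                     # bind the cert to the hostname
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject unverified servers
    return ctx

# The context would then wrap an ordinary TCP socket, e.g.:
#   with socket.create_connection((host, 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname=host) as tls:
#           tls.sendall(payload)
```

The point is that this overlay rides on top of the cellular network's own access security, so a would-be attacker faces both layers.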
Taken together, it is my contention that, as far as security is concerned, the use of cellular as the primary connector for IoT devices, even when fixed wireless technologies could be employed, presents a more significant set of challenges to would-be IoT hacks wreaking havoc inside device domains. It is not my aim to claim that cellular networks are completely secure by the simple virtue of their being cellular. They are of course open to attacks such as denial of service (DoS), channel jamming, message forgery and the like.
There is, however, something to be said for the up-front layer of access control that cellular networks naturally add to the equation. The idea that these connections can impede access at the outset, like a prison's perimeter fence holding until reinforcements arrive, adds a tangible, even vital, source of value.
Wavelength conversion is an important function in WDM (wavelength-division multiplexing) networks, as it enables better utilization of bandwidth and reduces blocking probability. Blocking can be caused by insufficient network resources (wavelengths or bandwidth), a lack of wavelength converters in the network, or routing and wavelength-assignment decisions made on outdated network state information. The wavelength continuity constraint increases the blocking probability; it can be relaxed by using wavelength converters together with traffic grooming techniques. A wavelength converter is a single-input/output device that converts the wavelength of an optical signal arriving at its input port to a different wavelength as the signal departs from its output port, but otherwise leaves the optical signal unchanged. Different levels of wavelength conversion are shown in Figure 1.
Figure 1: Different levels of wavelength conversion
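The blocking probability discussed above is commonly estimated with the Erlang-B loss formula when a link's wavelengths are modeled as a pool of servers. The article doesn't commit to a particular model, so the following sketch is purely illustrative:

```python
def erlang_b(offered_load: float, channels: int) -> float:
    """Blocking probability for `channels` servers (e.g., wavelengths on a
    link) carrying `offered_load` erlangs, via the stable recurrence
    B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)), with B(E, 0) = 1."""
    b = 1.0
    for m in range(1, channels + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

print(round(erlang_b(2.0, 2), 6))  # 0.4: two wavelengths, two erlangs offered
```

Adding wavelength converters effectively enlarges the pool a connection can draw from, which is why blocking drops when the continuity constraint is relaxed.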
Depending on the mapping functions and the form of control signals, wavelength converters can be classified into three types: optoelectronic (OEO), optical gating and wave-mixing. Figure 2 (b)-(d) shows functional block diagrams for the three types of wavelength converters; (a) shows a general wavelength converter's functional block diagram.
Figure 2: Functional Block Diagrams for Different Wavelength Converters
Optoelectronic (O/E-E/O) Wavelength Conversion
In this method, the incident signal at the input wavelength λ1 is first converted into an electrical bit pattern, which is then amplified and reshaped and finally converted back to an optical signal at the desired wavelength λ2 (shown in Figure 3). This approach is relatively easy to implement, as it uses standard components. Its other advantages include insensitivity to input polarization and the possibility of net amplification. Among its disadvantages are limited transparency to bit rate and data format, speed limited by the electronics, and a relatively high cost, all of which stem from the optoelectronic nature of the conversion.
Figure 3: O/E-E/O Wavelength Conversion
Optical Gating and Wave-mixing Wavelength Conversion
These two methods of wavelength conversion belong to the category of all-optical wavelength conversion, in which the signal remains in the optical domain throughout the conversion process. Note that, in this context, "all-optical" refers to the fact that no O/E conversion is involved. What are optical gating and wave-mixing? The sections below explain each.
Optical gating refers to a family of techniques that use cross-modulation to achieve wavelength conversion. These techniques utilize active semiconductor optical devices such as semiconductor optical amplifiers (SOAs) and lasers. Cross-modulation methods can be further divided into cross-gain modulation (XGM) and cross-phase modulation (XPM). To date, the most promising method for wavelength conversion has been cross-modulation in an SLA (semiconductor laser amplifier), in which either the gain or the phase can be modulated. A basic XGM converter is shown in Figure 4 (a). The idea behind XGM is to mix the input signal with a cw (continuous wave) beam at the new desired wavelength in the SLA. Due to gain saturation, the cw beam is intensity modulated, so that after the SLA it carries the same information as the input signal. A filter is placed after the SLA to terminate the original wavelength. The signal and the cw beam can be either co- or counterpropagating. A counterpropagation approach has the advantage of not requiring the filter, as well as allowing operation without any wavelength shift. A typical XGM SLA converter is polarization independent but suffers from an inverted output signal and a low extinction ratio. Figure 4 (b) shows cross-phase modulation using an SLA for wavelength conversion, which makes it possible to generate a noninverted output signal with an improved extinction ratio. XPM relies on the fact that the refractive index in the active region of an SLA depends on the carrier density. Therefore, when an intensity-modulated signal propagates through the active region of an SLA, it depletes the carrier density, thereby modulating the refractive index, which results in phase modulation of a cw beam propagating through the SLA simultaneously.
Figure 4: Use of an SLA for wavelength conversion. (a) Cross-gain modulation. (b) Cross-phase modulation.
Wave-mixing (Figure 5) arises from the nonlinear optical response of a medium when more than one wave is present. It results in the generation of another wave whose intensity is proportional to the product of the interacting wave intensities. Wave-mixing preserves both phase and amplitude information, offering strict transparency. It also allows simultaneous conversion of a set of multiple input wavelengths to another set of multiple output wavelengths, and could potentially accommodate signals with bit rates exceeding 100 Gb/s. There are two types of wave-mixing: FWM (four-wave mixing) and DFG (difference frequency generation). FWM is an intermodulation phenomenon in nonlinear optics whereby interactions between two wavelengths produce two extra wavelengths in the signal, comparable to the third-order intermodulation distortion in standard electrical systems. DFG is a consequence of a second-order nonlinear interaction of a medium with two optical waves: a pump wave and a signal wave. This technique offers a full range of transparency without adding excess noise to the signal, plus spectrum inversion capabilities, but it suffers from low efficiency. The main difficulties in implementing it lie in phase-matching the interacting waves and in fabricating a low-loss waveguide for high conversion efficiency.
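In the degenerate FWM case, two channels at frequencies f1 and f2 generate sidebands at 2f1 − f2 and 2f2 − f1. The following sketch computes where those products land; the two pump wavelengths are arbitrary example channels chosen by me, not values from the article:

```python
C = 299_792_458.0  # speed of light, m/s

def fwm_sidebands(lam1, lam2):
    """Wavelengths (in metres) of the degenerate four-wave-mixing
    products 2*f1 - f2 and 2*f2 - f1 for two pump wavelengths."""
    f1, f2 = C / lam1, C / lam2
    return C / (2 * f1 - f2), C / (2 * f2 - f1)

low, high = fwm_sidebands(1550.12e-9, 1550.92e-9)
print(round(low * 1e9, 2), round(high * 1e9, 2))  # sidebands just outside the pumps
```

This is why FWM products fall symmetrically (in frequency) on either side of closely spaced WDM channels, where they can interfere with neighbouring signals or, as here, be exploited for conversion.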
Figure 5: A wavelength converter based on nonlinear wave-mixing effects. The value n = 3 corresponds to FWM and n = 2 to DFG.
This article has reviewed the various techniques used in the design of wavelength converters. Each has its own advantages and disadvantages, and the actual choice of technology for wavelength conversion in a network depends on the requirements of the particular system.
Have you ever noticed that big cloud of dust around the moon that makes it nearly impossible to see our lunar friend clearly in the night sky?
Of course you haven't; moon dust is not detectable to the naked Earthbound eye. But researchers at the University of Colorado-Boulder have determined that the pelting of its surface by countless numbers of comet dust particles has created a "permanent, asymmetric dust cloud around the Moon."
Publishing their work in the online journal Nature, the scientists write, "We expect all airless planetary objects to be immersed in similar tenuous clouds of dust."
What a messy universe we live in! Worse, even the most effective air purifier won't work in an airless environment.
Mihaly Horányi, lead author of the study, "A permanent, asymmetric dust cloud around the Moon," tells Quartz that "we're really talking about very small particles." But:
“I think there is a concern about the long-duration exposure to dust impacts, and what happens with mirrors and mechanical devices [in space].”
It's probably similar to the challenges of keeping equipment running in a windy, desert environment. Sand gets into everything! Except, when that stuff happens in space, UPS isn't going to deliver replacement parts overnight.
This story, "Moon dust could ruin space travel for everyone," was originally published by Fritterati.
Despite their name, laptops should not be left on top of your lap for too long, doctors have warned.
In a paper published in the journal Pediatrics, Andreas Arnold, MD, and Peter Itin, MD, of University Hospital Basel in Switzerland describe seeing ten cases of rashes caused by notebook PCs since 2004. The youngest sufferer was a 12-year-old boy.
The child developed the condition, known scientifically as ‘erythema ab igne’, on his left thigh. It was caused by the heat from his laptop, which, according to BBC News, he had been playing games on for hours.
The doctors describe the rash as being caused by “prolonged exposure to a heat or infrared source”.
“In laptop-induced erythema ab igne, the localization on the thighs and asymmetry are characteristic. The heat originates from the optical drive, the battery, or the ventilation fan of the computer,” the report said.
The phishing attack that led to more than 10,000 Hotmail, MSN and Live.com passwords being exposed online earlier this week has provided an interesting glimpse into the mindset of email users when setting up their accounts.
A researcher who managed to look at the 10,000 or so Hotmail, MSN and Live.com passwords published an analysis of the list and the strength of passwords used.
According to the analysis, one of the simplest passwords around, ‘123456’ appeared 64 times in the list. Undoubtedly, those account users would do well to change it as soon as possible but judging by people’s attitudes towards passwords, I doubt that many of those 64 account holders will choose anything more complex than adding an ‘a’ at the beginning.
Some of the other statistics are quite interesting. Forty-two percent of the passwords used only lowercase letters from ‘a’ to ‘z’, while only 6% used mixed alphanumeric and other characters.
The analysis shows that one-fifth of the passwords were only six characters long although the longest had 30 characters. The shortest was 1 character long.
A good number of passwords were formed using first names, which is about as secure as having no password at all.
As Emmanuel Carabott explains, it is very important that people not only create strong passwords but they also change them regularly. Furthermore, it is good practice to use different passwords for different accounts so that if one is compromised, your other accounts or memberships will not be affected.
A lot of people worry that if they use very strong or long passwords, they will forget them and be unable to access their email. While this is a valid concern, it is possible to create a strong password that you can and will remember. For example, you can choose a phrase or a combination of words of particular significance: I love chocolate. By changing a few characters you can create a strong password: !loveCh0c0late.
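A rough sketch of the kind of scoring such guidelines imply follows; the weights and thresholds are my own illustration, not Microsoft's:

```python
import string

def password_score(pw: str) -> int:
    """Crude strength score: one point per character class present
    (lower, upper, digit, symbol), plus bonus points for length.
    The maximum with this scheme is 7."""
    score = sum([
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    ])
    if len(pw) >= 8:
        score += 1
    if len(pw) >= 12:
        score += 2
    return score

print(password_score("123456"))          # weak: one class, short
print(password_score("!loveCh0c0late"))  # strong: four classes, long
```

Real password meters use more sophisticated checks (dictionary words, common patterns), but even this toy version separates ‘123456’ from the chocolate example by a wide margin.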
Read the following Technet article for guidelines on choosing strong passwords.
There’s malware that can steal your social networks, and now there’s malware that can steal your virtual world in order to steal from your in-real-life world as well. Military researcher Robert Templeman of the Naval Surface Warfare Center in Crane, Indiana, and a team from Indiana University created a super-creepy Android app called PlaceRaider; it runs in the background on Android 2.3, Gingerbread. The sensory malware covertly taps into the phone’s camera to capture photos, which attackers can stitch together to recreate a 3D image of the victim’s surroundings and then steal any sensitive information in view. This new “threat to the privacy and physical security of smartphone users” was dubbed “virtual theft.”
Malware that utilizes a smartphone’s sensors to steal sensitive information from the target’s physical environment has previously been developed. Soundminer monitors phone calls and steals credit card numbers either spoken or entered onto the keypad. Another example uses a smartphone accelerometer; spiPhone eavesdrops on the sound of your fingers typing on the keyboard to detect pairs of keystrokes and determine what you're typing. The creators of PlaceRaider, a “novel visual malware,” said sensor malware that remotely exploits a mobile phone’s camera has been “understudied.”
According to the abstract of PlaceRaider: Virtual theft in physical spaces with smartphones:
Through completely opportunistic use of the camera on the phone and other sensors, PlaceRaider constructs rich, three dimensional models of indoor environments. Remote burglars can thus download the physical space, study the environment carefully, and steal virtual objects from the environment (such as financial documents, information on computer monitors, and personally identifiable information). Through two human subject studies we demonstrate the effectiveness of using mobile devices as powerful surveillance and virtual theft platforms, and we suggest several possible defenses against visual malware.
To test whether the visual malware would capture anything other than the ceiling or the inside of a pocket, the Indiana University team handed out infected Android phones to a group unaware of the malware. Not only were the researchers able to reconstruct 3D models of the users’ surroundings, they were able to zoom in and commit “virtual burglary,” meaning they could steal credit card numbers, checks, calendars, documents and other sensitive information in the users’ environment, such as whatever appeared on a computer screen. And if you carried your phone into the bedroom while undressing, it would expose a lot more than your documents to an attacker.
So that the user was not alerted, the research team avoided surreptitiously recording video, since the battery drain might be noticed. Instead, the malicious mobile app muted the camera shutter as it took random images, then stamped each photo with the time and location. The camera snapped one picture every two seconds. The software automatically deleted any blurry or dark images that fell below a quality threshold before uploading the rest to the PlaceRaider command-and-control server. While most Android phones have camera resolutions above 8 megapixels, as seen in the image below, the researchers opted for a lower resolution of 1 megapixel to avoid the extra cost of handling and storing all that data.
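The quality-filtering step described above (discarding frames that are too dark or too blurry) can be approximated with two cheap statistics: mean brightness, and the variance of a Laplacian as a sharpness proxy. This sketch is my own illustration of the idea, not the paper's actual algorithm, and the thresholds are arbitrary:

```python
def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian over a 2D grayscale image
    (list of lists, values 0-255); low variance suggests a blurry frame."""
    vals = []
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            vals.append(img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                        + img[y][x + 1] - 4 * img[y][x])
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def keep_frame(img, min_brightness=40, min_sharpness=25):
    """Keep a frame only if it is bright enough and sharp enough."""
    flat = [p for row in img for p in row]
    if sum(flat) / len(flat) < min_brightness:
        return False
    return laplacian_variance(img) >= min_sharpness

dark = [[5] * 8 for _ in range(8)]                                 # too dark
crisp = [[(x + y) % 2 * 200 for x in range(8)] for y in range(8)]  # high contrast
print(keep_frame(dark), keep_frame(crisp))  # False True
```

Filtering on-device like this is what lets the malware upload only the small fraction of frames worth an attacker's bandwidth.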
Templeman wrote [PDF], “PlaceRaider thus turns an individual's mobile device against him- or herself, creating an advanced surveillance platform capable of reconstructing the user's physical environment for exploration and exploitation.”
Malware such as PlaceRaider could be wrapped and hidden away within another otherwise legitimate app. “These remote services can run in the background, independent of applications and with no user interface.” Although the researchers used the Android platform for the visual malware, they said, “we expect such malware to generalize to other platforms such as iOS and Windows Phone.”
One of the suggested defenses was to check any app permissions before installing, but the researchers said if PlaceRaider was embedded in a camera app, then it would not require escalating privileges. A camera app would ask for the same permissions as the Trojan needed.
Templeman concluded, “We conceptualize a mode of attack where opportunistically collected data is used to build 3D models of users' physical environments. We demonstrate that large amounts of raw data can be collected and define novel approaches that can be used to improve the quality of data that is sent to the attacker. We offer PlaceRaider, an implementation of a virtual theft attack and through human subject studies demonstrate that such attacks are feasible and powerful.”
Remotely exploiting your smartphone camera is certainly scary stuff that could wreak havoc on both privacy and security while covertly stealing a person blind. During an interview about the visual malware app on 720 WGN, security researcher Apu Kapadia said PlaceRaider made him paranoid about his phone. Yet when he looked, he couldn't find any smartphone camera covers. If this gets out into the wild, maybe there would be a market for that . . . or else people might use a tiny piece of masking tape?
If interested, you can download the PlaceRaider cryptography and security research paper from Cornell University Library.
Image courtesy of School of Informatics and Computing at Indiana University
Interconnecting the various IT resources in a data center requires large amounts of cabling (whether fiber, copper or a combination of the two). Perhaps you’ve seen the mess of wires and cables that can quickly build up behind your desk, especially if you have a number of connected gadgets; the situation in the data center can be much worse if not carefully controlled. Finding a good place for cabling that permits good airflow, accessibility for maintenance and expansion, and safety (for personnel and equipment) is critical.
Moving Away From the Under-Floor Plenum
Although raised floors remain common in data centers, they are generally considered a less than optimal solution with respect to energy efficiency. One of a raised floor’s main benefits is that it creates a space under the equipment that can contain the multitude of cables (power and data) required to feed servers and other IT equipment. Even assuming a raised floor isn’t an energy-efficiency hindrance in itself, using the under-floor plenum for cabling can hamper airflow, creating hot spots that force the cooling system to run at a higher capacity. Furthermore, the cable holes in tiles (or, worse, complete removal of tiles) result in air leakage, another drag on efficiency. From a cabling perspective, running cables under the floor makes them much less accessible at maintenance time; lifting tiles on a raised floor must be done carefully, and finding a particular cable can still be difficult or impossible, particularly if cables are not well marked or if many “dead” cables are left under the floor (another cause of clutter).
As a good chunk of the data center industry has moved away from raised floors, instead implementing hot aisle/cold aisle techniques, the matter of cable distribution has also come into focus. Obviously, if cabling cannot be placed under the floor, it must be placed above it.
Benefits of Overhead Cabling
From an energy efficiency standpoint, overhead cabling eliminates one major source of airflow obstruction, helping reduce the likelihood of hot spots. According to an APC by Schneider Electric white paper (“How Overhead Cabling Saves Energy in Data Centers”), “The decision to place network data and power cabling into overhead cable trays can lower cooling fan and pump power consumption by 24%.”
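The 24% figure is APC's. As a back-of-the-envelope illustration of why modest airflow improvements pay off so well, fan power scales roughly with the cube of fan speed (the fan affinity laws); the 9% speed reduction below is my own illustrative number, not APC's methodology:

```python
def fan_power_ratio(speed_ratio: float) -> float:
    """Fan affinity law: power scales with the cube of shaft speed."""
    return speed_ratio ** 3

# If clearing the under-floor plenum lets cooling-unit fans slow to 91% speed:
saving = 1 - fan_power_ratio(0.91)
print(f"{saving:.1%}")  # roughly a quarter of fan power
```

The cubic relationship is why even a small reduction in required airflow translates into an outsized cut in fan energy.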
But another major benefit is accessibility. Instead of being under the floor—and possibly all but inaccessible owing to the arrangement of equipment above the floor or the hassles of lifting floor tiles—overhead cabling can be entirely accessible, easing the process of maintaining existing cables or adding new ones. A TechTarget.com article (“Using overhead cables to tidy your data center: Ask the Expert podcast”) cites Robert McFarlane, a principal at consulting and technology design firm Shen Milsom and Wilke, as identifying another tremendous advantage: “avoiding the need to comply with article 645 of the National Electrical Code (NEC) and the dangerous Emergency Power Off (EPO) button that article requires.” The EPO button is a perennial source of headaches for data center operators: it has been mistaken for a variety of purposes, including a door opener, to the catastrophic detriment of data center uptime. Of course, McFarlane is referring to the use of overhead cabling for power cables in this context. But it is worth noting that the overhead cabling concept can also apply to power cables, delivering the same airflow and maintenance benefits on the facilities side as it does on the IT side.
Thus, if implemented properly, overhead cabling can improve both data center efficiency and uptime—a dual win. But the key is doing so in a way that avoids some common pitfalls.
Cable trays are a means of running cables above the floor while still keeping them accessible for maintenance. A common form of cable tray is a wire mesh or “basket” style. But without proper planning, cable trays can still result in the same types of problems that plague under-floor cabling. For instance, unless “dead” cables are removed regularly when they fail or otherwise are unused, cable clutter can build to the point that identifying a faulty cable for maintenance purposes becomes nearly impossible. Furthermore, the buildup of weight can cause cable trays to sag—if nothing else, creating an unsightly appearance in the facility.
Thus, part of implementing cable trays properly is simply performing necessary maintenance: the trays cannot remove dead cables for you. To aid organization (and thus simplify maintenance tasks), modular cable tray systems are a good option. A Schneider Electric blog (“Overhead Cabling Can Reduce Data Center Energy Costs”) notes, “Creating a modular overhead system provides a solution to this potential problem. Data center personnel can more easily sort and plan cable location, integration and removal with multi-level cable tray organization. In addition, the system facilitates the removal of a ‘dead’ cable, since it will not be tangled or buried among a multitude of other cables.” Modular designs for cable trays (as in almost any other situation) may require some more planning to implement properly, but the return on this investment can accrue quickly in both less maintenance effort and less downtime.
Another important consideration for cable trays is how to install them properly, particularly in existing data centers. As cables accumulate, these trays must bear a significant (although not necessarily huge) weight. Thus, they must be secured to an existing structure—but without causing hazards. According to McFarlane via TechTarget.com, “You really want to avoid...suspending the tray from overhead in an existing facility. Drilling into overhead concrete or removing insulation from beams so you can attach anchors creates dust and contaminates...So in an existing facility you’re usually better off mounting the tray to the tops of the cabinets and racks. Many are made to accept the accessory stand-offs that are made for this purpose.” Installing cable trays in new facilities is generally simpler since the arrangement is planned and can be integrated with the building’s structure, if appropriate.
Yet another concern with cable trays is the possibility of creating zinc whiskers, which are tiny filaments of zinc metal that can “grow” from zinc-galvanized steel as a result of mechanical stress. Zinc whiskers that break off (simple contact with a surface can free zinc whiskers) can be caught by the air handling system and eventually land in sensitive IT equipment, possibly causing faults (unexpected system restarts, short circuits and so on) whose source is difficult to identify. Cable trays that are galvanized with zinc are susceptible to this problem.
A range of cable tray styles and features are available from numerous vendors. The different products, however, generally aim to highlight and build on the particular benefits of overhead cabling: easing maintenance, improving organization and improving airflow. Of course, cable trays are not a solution by themselves: data center operators must implement and follow organization and maintenance procedures to avoid the same clutter and build-up of dead cabling that plagues cabling in under-floor plenums. But the benefits of overhead cabling in energy savings alone (to say nothing of organization, neatness, uptime and maintenance) can quickly yield a return on the investment in cable trays. Furthermore, when hot aisle/cold aisle techniques are implemented, the use of overhead cabling eliminates entirely the need for a raised floor, which is a significant expense in the data center.
This course begins with a general introduction to Software Defined Networks, including discussion of control planes and data planes, APIs, logical layers, types of SDN networks, and more. Students will spend 15+ hours delving into real-world implementation and deployment of OpenFlow, an SDN standard that allows a remote controller to interact with the forwarding plane of a network switch or router over the network, making it easy to deploy innovative routing and switching protocols in a network.
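To make the controller-to-switch interaction concrete: every OpenFlow session exchanges messages that begin with a fixed 8-byte header. The sketch below hand-packs an OpenFlow 1.0 HELLO message, the first message sent on any session. This is illustrative only and not part of the course materials; the field values follow the OpenFlow 1.0 specification, but real deployments use a controller framework rather than hand-packed bytes.

```python
import struct

# OpenFlow 1.0 fixed 8-byte header: version, type, length, transaction id,
# all in network (big-endian) byte order.
OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0  # first message exchanged when a session is established

def ofp_message(msg_type, payload=b"", xid=0, version=OFP_VERSION_1_0):
    """Build an OpenFlow message: 8-byte header followed by the payload."""
    length = 8 + len(payload)
    return struct.pack("!BBHI", version, msg_type, length, xid) + payload

def parse_header(data):
    """Unpack the 8-byte header into (version, type, length, xid)."""
    return struct.unpack("!BBHI", data[:8])

hello = ofp_message(OFPT_HELLO, xid=42)
print(parse_header(hello))  # (1, 0, 8, 42)
```

The same header precedes richer messages such as FLOW_MOD, which is how a controller installs the flow entries this course explores.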
Learn how complete transparency into setting up a flow gives you new insight into how networking can change. You will start with basic tunneling and traffic engineering, then advance to setting up multiple group tables and tuning flow priorities and flow aging, gaining hands-on experience with the tools and tricks needed to quickly deploy SDN.
Jasson Casey is the founder and executive director of Flowgrammable. With more than 15 years’ experience in the telecommunications industry, Jasson is currently a research associate with the Open Networking Foundation, a PhD candidate at Texas A&M University, and a research affiliate with the Center for Secure Information Systems at George Mason University. Jasson’s PhD research formed the basis for the Flowgrammable OpenFlow stack.
Looking for more Software Defined Network training? INE has you covered with our Introduction to OpenStack and Introduction to Open vSwitch (OVS) courses!
Intel Lights Up Laser Chip
By John G. Spooner | Posted 2006-09-18
Intel and UCSB researchers have created a process to build lasers into silicon chips as another step toward photon-based chip interconnects.
Researchers at Intel and the University of California Santa Barbara say they have made another step toward integrating silicon chips and lasers, which could someday speed up computers with high-bandwidth chip-to-chip interconnects.
The researchers are collaborating in a field Intel has dubbed silicon photonics; the creation of on-chip components that can use light to transmit data. The researchers' latest work involves a process of integrating a laser directly into a silicon chip.
Intel has been exploring for some time different ways to use silicon photonics to replace electrical interconnects, which use copper wiring, to speed up the vital connections that move data into and out of its processors. The prospect of moving from electrical interconnects to silicon photonics is a difficult one, however. Among other things, photonics devices are relatively expensive, complex and, to date, have required what Intel says are exotic materials.
The work announced on Sept. 18, which involves combining indium phosphide and silicon, the basic building block of chip making, offers further proof that photonic devices (in this case, lasers themselves) can be built into silicon chips, the researchers said in a statement released by Intel.
To be sure, Intel expects electrical interconnects to remain in use for some time as the technology for making photonic devices is developed and matures. But researchers have said they believe optical interconnects can eventually win out as it becomes more difficult to wring greater and greater performance out of copper wires.
The company also has a vested interest in silicon photonics as creating high-bandwidth interconnects, which can move more data, will become more vital as Intel moves deeper into the realm of multicore chips.
Intel is nearing the launch of its first quad-core chip, which will place four individual processor cores into one processor package. However, its engineers are working on chips with far more than four cores as part of a project called Tera-Scale Computing. Tera-Scale, which could either become or lead to a future processor architecture for Intel, is researching the idea of combining a few specialized processor cores (for jobs like processing TCP/IP) with tens or hundreds of simple cores to divide up a computing task and process it quickly.
Read the full story on eWEEK.com: Intel Lights Up Laser Chip
5.3.3 What is PKCS?
The Public-Key Cryptography Standards (PKCS) are a set of standards for public-key cryptography, developed by RSA Laboratories in cooperation with an informal consortium, originally including Apple, Microsoft, DEC, Lotus, Sun and MIT. The PKCS have been cited by the OIW (OSI Implementers' Workshop) as a method for implementation of OSI standards. The PKCS are designed for binary and ASCII data; PKCS are also compatible with the ITU-T X.509 standard (see Question 5.3.2). The published standards are PKCS #1, #3, #5, #7, #8, #9, #10, #11, #12, and #15; PKCS #13 and #14 are currently being developed.
PKCS includes both algorithm-specific and algorithm-independent implementation standards. Many algorithms are supported, including RSA (see Section 3.1) and Diffie-Hellman key exchange (see Question 3.6.1); however, only these two are specifically detailed. PKCS also defines an algorithm-independent syntax for digital signatures, digital envelopes, and extended certificates; this enables someone implementing any cryptographic algorithm whatsoever to conform to a standard syntax, and thus achieve interoperability.
The following are the Public-Key Cryptography Standards (PKCS):
- PKCS #1 defines mechanisms for encrypting and signing data using the RSA public-key cryptosystem.
- PKCS #3 defines a Diffie-Hellman key agreement protocol.
- PKCS #5 describes a method for encrypting a string with a secret key derived from a password.
- PKCS #6 is being phased out in favor of version 3 of X.509.
- PKCS #7 defines a general syntax for messages that include cryptographic enhancements such as digital signatures and encryption.
- PKCS #8 describes a format for private key information. This information includes a private key for some public-key algorithm, and optionally a set of attributes.
- PKCS #9 defines selected attribute types for use in the other PKCS standards.
- PKCS #10 describes syntax for certification requests.
- PKCS #11 defines a technology-independent programming interface, called Cryptoki, for cryptographic devices such as smart cards and PCMCIA cards.
- PKCS #12 specifies a portable format for storing or transporting a user's private keys, certificates, miscellaneous secrets, etc.
- PKCS #13 is intended to define mechanisms for encrypting and signing data using Elliptic Curve Cryptography.
- PKCS #14 is currently in development and covers pseudo-random number generation.
- PKCS #15 is a complement to PKCS #11 giving a standard for the format of cryptographic credentials stored on cryptographic tokens.
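To make PKCS #5 concrete: its current revision standardizes PBKDF2, which derives a key from a password and a salt by iterating an HMAC. Python's standard library exposes it directly; the first call below reproduces a published test vector from RFC 6070 (the IETF's PBKDF2 test-vector document). This is an illustrative sketch, not text from the PKCS documents themselves.

```python
import binascii
import hashlib

# PBKDF2 as standardized in PKCS #5 v2.0: derive a key from a password
# and salt by iterating HMAC. This call matches RFC 6070 test vector 1
# (HMAC-SHA1, P="password", S="salt", c=1, dkLen=20).
dk = hashlib.pbkdf2_hmac("sha1", b"password", b"salt", 1, 20)
print(binascii.hexlify(dk).decode())
# 0c60c80f961f0e71f3a9b524af6012062fe037a6

# In practice: use a random salt, a modern hash, and a high iteration count.
key = hashlib.pbkdf2_hmac("sha256", b"example passphrase", b"random-salt", 100_000)
assert len(key) == 32  # default dkLen is the hash's digest size
```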
It is RSA Laboratories' intention to revise the PKCS documents from time to time to keep track of new developments in cryptography and data security, as well as to transition the documents into open standards development efforts as opportunities arise. Documents detailing the PKCS standards can be obtained at RSA Security's web server, which is accessible from http://www.emc.com/emc-plus/rsa-labs/standards-initiatives/public-key-cryptography-standards.htm or via anonymous ftp to ftp://ftp.rsasecurity.com/pub/pkcs/doc/.
Questions and comments can be directed to email@example.com.
Closing the Digital Divide
ACCESS TO THE INTERNET IS ESSENTIAL FOR SUCCESS
Today, an Internet connection is a must-have for school assignments, healthcare, communicating with friends, finding a job, and starting a business. While broadband service is now available to most Americans, too many families remain unconnected to this important network.
The cable industry is committed to closing this digital divide by promoting the benefits of broadband, encouraging families to connect, and by offering programs that help families overcome barriers they may face.
Programs that Connect Us
Cable companies are providing discounted Internet services, hardware, digital literacy training, and technology centers across the country to increase accessibility. Here are some great examples:
COMCAST INTERNET ESSENTIALS
offers low-cost broadband services, discounted computers, and digital literacy training in 39 US states.
TIME WARNER CABLE TECH CENTERS & LEARNING LABS
are equipped with Internet, computers, and other devices in New York, Ohio, Missouri, Texas and California.
provides discounted broadband and computers and digital literacy training in the service areas of Bright House Networks, Cox Communications, Eagle Communications, Mediacom, and Suddenlink.
MIDCONTINENT BROADBAND LIFELINE ASSISTANCE PROGRAM
provides low-cost broadband and free wireless modems in Minnesota, North Dakota, and South Dakota.
COMCAST DIGITAL CONNECTORS PROGRAM
teaches young adults technology skills, digital literacy, and financial management, across 15 US states.
COX TECHNOLOGY CENTERS
in partnership with Boys & Girls Clubs, provide Internet services, computers, software, and digital literacy training across the US.
[Infographic: dollars invested in broadband adoption programs, families connected through those programs, and schools offered broadband adoption programs by cable]
Encouraging adoption is more than just delivering the Internet. It’s about education, community, and support.
Ways to Encourage Internet Adoption
Reveal the Internet's Relevance
Many people who do not have home Internet access question its relevance. Through messaging and partnerships, the cable industry is helping families understand the impact broadband can have on their lives and communities. A 2014 survey of homes participating in broadband adoption programs found that 98% of families chose to sign up for the adoption program because their kids need the Internet for school. A majority, 62%, said they need Internet services to look for or apply for jobs, and 68% said a reason for getting broadband access at home was to get health and medical information online.
Assist with Digital Literacy Training
One of the primary barriers to broadband adoption is the absence of digital literacy skills. Many people are intimidated by new, unfamiliar technologies and don’t know how to use the Internet to find information or connect with friends. To help families overcome these challenges, the cable industry partners with a wide variety of national organizations and local community groups to offer digital literacy classes that explore the benefits of being online and teach skills to safely use the Internet.
Help with Costs
Costs associated with Internet access and computers can serve as barriers to adoption for many families. The cable industry has invested more than $300 million in broadband adoption programs, such as Internet Essentials and partnerships with Connect2Compete, to provide discounted Internet services as well as computers and hardware to qualifying families. These programs have connected more than 750,000 families to the Internet.
I NEED THE INTERNET BECAUSE...
When asked how internet adoption programs impact families, here’s what they said.
CONNECTING MORE AMERICANS TO THE INTERNET
Every year, more and more families are discovering the importance of an Internet connection at home. Through awareness, education, broadband adoption programs, and an ever-growing network, more people are connected to the Internet than ever before.
Ever seen a /32 prefix in the IP routing table?
A /32 prefix is commonly referred to as a host route since it identifies a route to a specific IP host address. Since most (but not all) host computers don’t run routing protocols, we could create a host route on a router and then advertise it to other routers using a dynamic routing protocol. The routers would then use the host route to reach that specific host.
To create a host route, we use the ip route command from global configuration mode and specify a “255.255.255.255” (“/32”) mask. This mask denotes that the route pertains to a specific IP host address. For example, if we want our router to reach the host with address 192.168.1.1 via a next hop of 10.1.2.3, we might do this:
Router(config)#ip route 192.168.1.1 255.255.255.255 10.1.2.3
Assuming that the next hop of 10.1.2.3 is reachable, this route would now appear as an S route in our router’s IP routing table.
If the outbound interface is point-to-point, such as a serial link running HDLC or PPP, or a point-to-point Frame Relay subinterface, we also have the option of specifying the outbound interface instead of the next hop like this:
Router(config)#ip route 192.168.1.1 255.255.255.255 Serial0/1
Assuming that the Serial0/1 interface is up/up, the route would appear as an S route in our router’s IP routing table. In either case, the /32 mask specifies that this is a host route.
Now, having configured a static route for the specific host, we could advertise this route to other routers with a dynamic routing protocol (using route redistribution, for example). Since a /32 mask is not the default mask for any classful network, to advertise this specific route we need a classless routing protocol (the type that advertises the mask with the updates) such as RIPv2, EIGRP, OSPF, IS-IS, or BGP.
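For example, on a Cisco IOS router the static host route could be pulled into OSPF roughly like this (a sketch; the process ID is a placeholder, and the subnets keyword is needed so OSPF will redistribute non-classful prefixes such as a /32):

```
Router(config)#router ospf 1
Router(config-router)#redistribute static subnets
```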
Note that while other routes to the host’s subnet or network may exist, because routers use the best (longest) match and nothing beats a /32, the result should be that the routers will use the host-specific route to reach that host.
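The longest-match rule above can be sketched in a few lines of Python. This is illustrative only: real routers use optimized trie lookups rather than a linear scan, and the addresses here are invented.

```python
import ipaddress

# A toy routing table: among all prefixes containing the destination,
# the router picks the longest (most specific) one. Nothing beats a /32.
routes = {
    ipaddress.ip_network("192.168.1.0/24"): "10.9.9.9",  # subnet route
    ipaddress.ip_network("192.168.1.1/32"): "10.1.2.3",  # host route
    ipaddress.ip_network("0.0.0.0/0"): "10.0.0.1",       # default route
}

def next_hop(dst):
    """Return the next hop for dst using longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest wins
    return routes[best]

print(next_hop("192.168.1.1"))  # 10.1.2.3 -- the /32 beats the /24
print(next_hop("192.168.1.2"))  # 10.9.9.9 -- falls back to the subnet route
```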
While host routes are perfectly okay, they aren’t used much because they don’t scale well. A large enterprise may have tens of thousands of hosts, and you wouldn’t want that many entries in the routing tables (think about the RAM, bandwidth, and CPU required by the routing protocol).
When we do see host routes, they are commonly used with router loopback interfaces, which we’ll discuss in the future.
Most of us remember a far off time when computers, books, and telephones were three separate items; the third of which had no place in a classroom. Then in stepped smartphones and tablets. Let’s face it – not only have smartphones become commonplace in schools for teachers, students and staff, handheld digital devices are now becoming a staple in effective teaching.
Rather than restricting digital natives from using their tablets in class, countless schools are adopting bring your own device (BYOD) policies, which embrace the technology. In fact, “a recent survey conducted by the Center for Digital Education and the National School Board Association found that uptake of BYOD in American schools has increased over 30 percent since last year’s survey; currently, 56 percent of school districts are implementing BYOD programs.” (NMC Horizon Report, 2014 K-12 Edition). Another report, the SSIA Vision K-12 Survey, says K-12 schools forecast an increase in the use of BYOD with 85 percent of secondary, 66 percent of elementary and 83 percent of K-12 district participants saying handheld digital devices will be allowed within the next five years.
With such high rates of BYOD adoption in schools, procedures must be put into place to monitor students’ use and activities so that devices are a learning tool rather than a distraction. Although controversial to some, the case can be made that remote monitoring of BYOD devices by instructors and IT staff can and should be a critical part of a district’s BYOD plan and policy. In short, reports suggest that even schools that haven’t created a policy for BYOD will be doing so very soon!
To help you plan and/or revise your district’s BYOD use policies or make the case for remote monitoring software purchases, here are five positives to remote monitoring BYOD electronics:
Positive 1: Complying with federal laws
In order to take part in the Universal Service Program for Schools and Libraries (E-Rate), a federal telecommunications and information services affordability program, schools must comply with the Children’s Internet Protection Act (CIPA). CIPA requirements state that schools must block or filter Internet access to a host of harmful images and activities. Additionally, to comply with CIPA, schools must meet two other requirements:
If your school is already receiving the E-Rate discounts, BYOD policies must comply with CIPA. This makes implementing remote monitoring software mandatory, not optional.
Positive 2: Protecting teachers
Without remote monitoring abilities, a teacher is at risk. In schools with cyberbullying policies (and even schools without them), it is typically the teacher’s responsibility to report any student activity that could be harmful. If a teacher reports that a student is bullying or being bullied, he or she is expected to provide proof. When attempting to monitor a student’s online activities on a student-owned device without remote monitoring software, providing proof often comes down to the student’s word against the teacher’s. Having remote monitoring in place gives the teacher the protection of recorded online activities to back up any report of suspicious behavior. In addition, software that monitors student activity can take the responsibility of reporting off the teacher’s shoulders. When activity is monitored and compiled into reports, there’s no arguing with data.
Positive 3: No more “Sage on the Stage”
“Class, everyone turn to page 23 of your textbook and follow along while I lecture for 45 minutes.” What?! With the new digitally native mindset this old model just doesn’t work anymore. Handheld device technology ramps up the inherent need to shift from teacher-led, lecture-based learning models, to interactive, student initiated educational activities. The “Sage on the Stage” idea of “teacher-say, student-do” instruction can be turned on its head with the combination of BYOD electronics and remote monitoring. Rather than writing websites on the board, telling students to plow through online activities, then having a discussion afterward (BORING!) a teacher can push out a website to all students simultaneously, click through with them, or even allow one student’s screen to show on everyone else’s. Giving students the reigns allows them to showcase their capabilities, and can provide excellent opportunities for formative evaluation of learning.
From simple apps and tools to full STEM projects, this blog post from eSchool News provides 10 resources for mobile learning lesson plans!
Positive 4: Productivity, productivity, productivity!
With remote monitoring technology in place, such as Impero Education Pro, teachers can view iPad screens remotely, in real time, from within the classroom. Students can also share their iPad screens with their classmates. Not only does this keep students engaged in the classroom, but the teacher can ensure that students cannot access illegal websites and has full insight into the terms and phrases students are using during online searches. These features allow a teacher to keep to the business of teaching, rather than running around the classroom trying to keep everyone on task.
Positive 5: Protecting student privacy
As has been reported recently, using remote monitoring software on school-issued computers has been a controversial subject. Several US school districts have been involved in lawsuits over using remote monitoring functions that utilize device cameras for surveillance of students off campus and in their homes. BYOD devices, when accompanied by independent platforms such as Impero Software’s solutions, can only be monitored while on school networks. Impero technologies are installed by professional technicians who set parameters that cannot be adjusted by administrators or teachers. Cameras on portable devices cannot be monitored. This ensures there can be no invasion of a student’s privacy, allowing peace of mind to all parties involved.
Additional resources for school BYOD planning and implementation:
6 benefits of BYOD in the classroom – by Tiziana Saponaro, international teacher
Making BYOD work in schools – 3 case studies – Emerging Ed Tech
Oak Hill Schools BYOD Plan (Don’t re-invent the wheel!)
Impero Remote Manager – remote access software solution