The idea behind "Born to Die" electronics is that most gadgets now have pitifully short lives. Tech products such as cell phones and tablets now have useful lives measured in months, and the result is landfills and recycling facilities overwhelmed with electronic gadgets that need to be broken down and safely disposed of.
Researchers at the University of Illinois at Urbana-Champaign have begun a program to create electronics that can be dissolved by simply immersing them in water. The program is in its early days, but the team has demonstrated a circuit mounted on a film of silk that "melts" when water soaks it.
As an environmentally conscious technology, this has huge potential, assuming that what's left of the circuits isn't problematic in any way. On the other hand, drop your cell phone in water and your insurance company won't need to ask whether the white dot inside the battery compartment has turned blue; there won't be a cell phone left to claim for.
Windows Safe Mode is a way of booting your Windows operating system in order to run administrative and diagnostic tasks on your installation. When you boot into Safe Mode, the operating system loads only the bare minimum of software required for it to work. This mode of operation is designed to let you troubleshoot and run diagnostics on your computer. Windows Safe Mode loads a basic video driver, so your programs may look different than normal.
If you use a computer, read the newspaper, or watch the news, you will know about computer viruses and other malware. These are malicious programs that, once they infect your machine, start wreaking havoc on your computer. What many people do not know is that there are many different types of infections that fall under the general category of malware.
Windows 7 hides certain files so that they cannot be seen when you are exploring the files on your computer. The files it hides are typically Windows 7 system files that, if tampered with, could cause problems with the proper operation of the computer. It is possible, though, for a user or a piece of software to make a file hidden by enabling the hidden attribute in a particular file or folder's properties. Because of this, it can be beneficial at times to be able to see any hidden files that may be on your computer. This tutorial will explain how to show all hidden files in Windows 7.
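For readers who prefer to script the check, here is a minimal sketch of how the hidden attribute itself can be inspected and toggled through the Win32 API. It assumes Python running on Windows, and the file path used is purely illustrative.

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
kernel32.GetFileAttributesW.restype = wintypes.DWORD  # avoid signed -1 results

FILE_ATTRIBUTE_HIDDEN = 0x02          # constant from winnt.h
INVALID_FILE_ATTRIBUTES = 0xFFFFFFFF  # returned when the path is invalid

def is_hidden(path):
    """Return True if the Windows 'hidden' attribute is set on path."""
    attrs = kernel32.GetFileAttributesW(path)
    if attrs == INVALID_FILE_ATTRIBUTES:
        raise ctypes.WinError()
    return bool(attrs & FILE_ATTRIBUTE_HIDDEN)

def set_hidden(path, hidden=True):
    """Set or clear the 'hidden' attribute on a file or folder."""
    attrs = kernel32.GetFileAttributesW(path)
    if attrs == INVALID_FILE_ATTRIBUTES:
        raise ctypes.WinError()
    new = attrs | FILE_ATTRIBUTE_HIDDEN if hidden else attrs & ~FILE_ATTRIBUTE_HIDDEN
    if not kernel32.SetFileAttributesW(path, new):
        raise ctypes.WinError()

print(is_hidden(r"C:\pagefile.sys"))  # system files are typically hidden
```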
By default, Windows hides certain files from being seen with Windows Explorer or My Computer. This is done to protect these files, which are usually system files, from accidentally being modified or deleted by the user. Unfortunately, viruses, spyware, and hijackers often hide their files in this way, making it hard to find and then delete them.
Windows Vista comes with a rich feature set of diagnostic and repair tools that you can use in the event that your computer is not operating correctly. These tools allow you to diagnose problems and repair them without having to boot into Windows. This provides much greater flexibility when it comes to fixing problems that you are not able to resolve normally. This guide focuses on using the Startup Repair utility to automatically fix problems starting Windows Vista. The tutorial will also provide a brief description of the advanced repair tools with links to tutorials on how to use them.
HijackThis is a utility that produces a listing of certain settings found in your computer. HijackThis will scan your registry and various other files for entries that are similar to what a Spyware or Hijacker program would leave behind. Interpreting these results can be tricky, as there are many legitimate programs that are installed in your operating system in a manner similar to how hijackers get installed. Therefore you must use extreme caution when having HijackThis fix any problems. I cannot stress enough how important it is to follow the above warning.
To remove an app directly from your iPad, iPod Touch, or iPhone, press and hold the icon of the app you wish to delete until all of the icons on the screen start to wiggle. Once they are wiggling, you will also see an X symbol appear in the upper left-hand corner of each icon.
Before Windows was created, the most common operating system that ran on IBM PC compatibles was DOS. DOS stands for Disk Operating System and was what you would use when you started your computer, much as you use Windows today. The difference was that DOS was not a graphical operating system but purely textual. That meant that in order to run programs or manipulate the operating system you had to manually type in commands. When Windows was first created, it was actually a graphical user interface designed to make using the DOS operating system easier for the novice user. As time went on and newer versions of Windows were developed, DOS was finally phased out with Windows ME. Though the newer operating systems do not run on DOS, they do have something called the command prompt, which has a similar appearance to DOS. In this tutorial we will cover the basic commands and usage of the command prompt so that you feel comfortable using this resource.
Windows Vista has made the Folder Options settings a little harder to find than they were in previous versions. The easiest way is to use the Folder Options control panel to modify how folders, and the files in them, are displayed. You can still show the Folder Options menu item while browsing a folder, but you will need to hold the ALT key for a few seconds and then let go to see this menu.
The iPad is ultimately a device created to let you consume content in an easy and portable manner. As there is no better source of consumable content than the Internet, being able to connect to a Wi-Fi network so you can access the Internet is a necessity. This guide will walk you through all of the steps required to connect to a Wi-Fi network using your iPad. We have also outlined steps that will allow you to access almost all types of Wi-Fi networks, as well as using proxy servers if your particular scenario requires it.
Making Clouds Secure
The concept of Cloud Computing — what just about every IT community is dreaming about these days — has a multitude of indisputable advantages over more traditional modes of software distribution and usage. But Cloud Computing has a long way to go before it takes over the market — not in terms of technology, but in terms of how it is perceived by potential clients. For the majority of them, Cloud Computing seems like an interesting — but not very secure — idea.
If you were to review the evolution of the concept (which, incidentally, is considerably older than it might seem), you would see the close connections between Cloud Computing and information security. As Enomaly founder and Chief Technologist Reuven Cohen has rightly noted, the Cloud Computing concept was first mastered by cyber criminals who had created rogue networks as early as ten years ago. Not much time passed before people started using Cloud Computing for legitimate purposes, and the technology is just now beginning to come into its own.
What is a “Cloud”?
Let's take a look at the formal definition of the concept before we tackle the modern aspects of security and Cloud Computing. There is still no common or generally recognized definition of “Cloud Computing” in the IT industry, and most experts, analysts, and users have their own understanding of the term.
In order to come to a more precise definition, we first need to move from the general to the specific. In general, Cloud Computing is a concept whereby a number of different computing resources (applications, platforms or infrastructures) are made available to users via the Internet. While this definition seems to capture the essence of Cloud Computing, in practice it is much too abstract and broad. If you wanted to, you could include practically everything even vaguely related to the Internet in that definition. The definition needs to be made more specific, and in order to do so, we will first take a look at the position of the scientific and expert community.
The work “Above the Clouds,” published by the RAD Lab at UC Berkeley, has identified the three most common features of Cloud Computing:
- The illusion of infinite computing resources available on demand, thereby eliminating the need for Cloud Computing users to plan far ahead for provisioning.
- The elimination of an up-front commitment by Cloud users, thereby allowing companies to start small and increase hardware resources only when there is an increase in their needs.
- The ability to pay for use of computing resources on a short-term basis as needed (e.g., processors by the hour and storage by the day) and release them as needed, thereby rewarding conservation by letting machines and storage go when they are no longer useful.
The specifications for building a Cloud platform, such as virtualization, global distribution or scale, are not so much features of Cloud Computing, but merely help put this paradigm into practice. In particular, the use of virtualization technologies helps achieve the “illusion of infinite computing resources” mentioned above.
The main features of any Cloud service are the kinds of resources it offers users via the Internet. Depending on these resources, all services can be divided into a number of different categories (see Figure 1). Each of these carries the suffix *aaS, where the asterisk represents the letter S, P, I or D, and the abbreviation “aaS” means “as a service.”
Figure 1. The ontology of Cloud services
Essentially, Cloud Computing makes resources available through the Internet and has three fundamental features, as noted above. The types of resources made available may be software (SaaS), a platform (PaaS), an infrastructure (IaaS), or storage (DaaS).
Defining security problems on Cloud servers
Practically every expert in the industry approaches Cloud Computing with their own interpretation of the concept. As a result, after examining numerous published works on the subject, one might get the impression that there is really no standardization at all. Questions regarding the security of Skype — a typical consumer Cloud service — get jumbled up with the business aspects of installing SaaS, while Microsoft Live Mesh is already becoming a headache for companies that never even planned on using it in the first place.
That's why it would make complete sense to deconstruct the problem of Cloud Computing security into several high-level categories. In the end, all aspects of Cloud service security can be put into one of four main categories:
- Security issues with consumer Cloud and Web 2.0 services. As a rule, these problems don't have as much to do with security as they do with privacy and the protection of personal data. Similar problems are common among most major Internet service providers — just think about all of the accusations against Google or Microsoft that come up from time to time with regard to tracking user activity.
- Corporate-level security issues resulting from the popularity of consumer Cloud services. This becomes a problem when employees get together on sites like Facebook and gossip about corporate secrets.
- Cloud computing security issues related to corporate usage, and the use of SaaS in particular.
- Issues concerning the use of the Cloud Computing concept in information security solutions.
In order to avoid any confusion or contradictions, we will address only the third category from the list above, since this is the most serious issue in terms of corporate information system security. Consumer Cloud services have already won over Internet users, and there are really no security problems that could break that trend. The hottest topic right now is just how quickly Cloud Computing can become a corporate platform suitable not only for SMEs, but for large international organizations as well.
Deconstructing corporate Cloud services
IDC analysts who spoke at the IDC Cloud Computing Forum in February 2009 stated that information security is the top concern among companies interested in using Cloud Computing. According to IDC, 75% of IT managers are concerned about Cloud service security.
In order to understand why, we need to continue our deconstruction of the security issue. For corporations using Cloud services, all security issues can be further divided into three main categories:
- the security of a platform that is located on the premises of the service provider;
- the security of workstations (endpoints) that are located directly on the client's premises;
- and finally, the security of data that are transferred from endpoints to the platform.
The last point concerning the security of transferred data is de facto already resolved using data encryption technologies, secure connections, and VPN. Practically all modern Cloud services support these mechanisms, and transferring data from endpoints to a platform can now be seen as a fully secure process.
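As a minimal sketch of that secure-connection layer (assuming Python's standard ssl module; the host name is illustrative), an endpoint can wrap its traffic to a Cloud platform in TLS like this:

```python
import socket
import ssl

def connect_tls(host, port=443):
    """Open a certificate-verified TLS connection and report what was negotiated."""
    context = ssl.create_default_context()  # verifies the server against system CAs
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()

version, cipher = connect_tls("example.com")
print(version, cipher)  # e.g. TLSv1.3 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
```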
The platform: trust and functionality problems
Clearly, security issues related to service platform functionality are the biggest headache for IT managers today. For many, figuring out how to ensure the security of something that cannot be directly controlled is not a very straightforward process. The platform of a typical Cloud service is not simply located on the premises of a third-party organization, but often at an unknown data center in an unknown country.
In other words, Cloud Computing's basic security problem comes down to issues of client trust (and verifying trust) in service providers and is a continuation of the same issues that arise with any type of outsourcing: company specialists and management are simply not accustomed to outsourcing something as crucial as the security of business data. However, one can be certain that this problem will be resolved since other forms of outsourcing for the same IT processes and resources no longer give rise to any fundamental concerns.
What is this certainty based on? First of all, it is considerably easier for Cloud service providers to ensure the security of the data centers where available resources are located. This is due to the scale effect: since the service provider is offering services to a relatively large number of clients, it will provide security for each of them at the same time and, as a result, can use more complex and effective types of protection. Of course, companies like Google or Microsoft have more resources to ensure platform security than a small contracting firm or even a large corporation with its own data center.
Second, the use of Cloud services between client and provider organizations is always governed by a service level agreement (SLA), which clearly sets out the provider's responsibility for various information security issues. Third, the provider's business directly depends on its reputation, which is why it will strive to ensure information security at the highest possible level.
In addition to verification and trust issues, Cloud platform clients also worry about the full functionality of information security. While most in-house systems already support this feature (thanks to many years of evolution), the situation is much more complicated when it comes to Cloud services.
Gartner's brochure “Assessing the Security Risks of Cloud Computing” examines seven of the most relevant Cloud service security problems, most of which are directly related to the idiosyncrasies of the way Cloud systems function. In particular, Gartner recommends looking at Cloud system functions from the viewpoint of access rights distribution, data recovery capabilities, investigative support and auditing.
Are there any conceptual restrictions that might make it impossible to put these things into practice? The answer is definitely no: everything that can be done within an organization can technically be executed within a “Cloud.” Information security problems essentially depend on the design of specific Cloud products and services.
When it comes to Cloud Computing platform security, we should address yet another important problem with regard to laws and regulations. Difficulties arise because a separation of data takes place between the client and the service provider within the Cloud Computing environment, and that separation often complicates the process of ensuring compliance with various statutory acts and standards. While this is a serious problem, it will no doubt be resolved sooner or later. On the one hand, as Cloud Computing becomes more widespread, the technologies used to ensure compliance with legal requirements will be improved. On the other hand, legislators will have to consider the technical peculiarities of the Cloud Computing environment in new versions of regulatory documents.
In summary, the concerns about information security as it pertains to the platform component of the Cloud Computing environment lead us to the conclusion that while all of the problems that potential clients have identified do in fact exist today, they will be successfully resolved. There simply are no conceptual restrictions in Cloud Computing.
Endpoint: difficulties remain… and are getting worse
In the theoretically ideal "Cloud World," Cloud Computing security takes place at the platform level and through communication with edge devices, since data is not stored on the devices themselves. This model is still too immature to be put into practice, and the data that reaches the platform is de facto created, processed and stored at the endpoint level.
It turns out that there will always be security problems with edge devices in a Cloud environment. In fact, there is a strong argument that these problems are actually getting worse. In order to understand why this is happening, let us take a look at some conceptual diagrams of traditional in-house IT models compared to the Cloud Computing environment (Figures 2 and 3).
Figure 2. Security threats for traditional models for running software
Figure 3. Security threats in a corporate Cloud environment
In each case, most of the threats are clearly coming from the global network and entering the client's corporate infrastructure. In the in-house system, the main blow is dealt to the platform, in contrast to the Cloud environment, in which the more or less unprotected endpoints suffer. External attackers find it useless to target protected provider Clouds since, as we noted above, the protection level of global Cloud platforms like Google and Microsoft, due to the numerous capabilities, professional expertise and unlimited resources, will be significantly higher than the data protection supplied by any individual corporate IT system. As a result, cyber criminals end up attacking edge devices. The very concept of Cloud Computing, which presumes access to a platform from wherever and whenever it is convenient to do so, also increases the probability of this type of scenario.
On the other hand, having observed an increase in a variety of attacks on endpoint computers, corporate information security services have had to resort to focusing their efforts on protecting edge devices. It is this task in particular that, it would seem, will become a critical problem for corporate information security.
DeviceLock — a developer of software protection systems against data leakages via ports and endpoint computer peripherals — believes this is a crucial trend. Systems like those designed by DeviceLock become especially valuable in the Cloud Computing environment, since they help reduce the risk of corporate data leakages via endpoints, which are the focus of corporate information security service efforts at companies where Cloud services are used.
Instead of a conclusion…
"I think a lot of security objections to the Cloud are emotional in nature, it's reflexive," said Joseph Tobolski, director for Cloud Computing at Accenture. Shumacher Group CEO Doug Menafee is also familiar with the emotional aspects: "My IT department came to me with a list of 100 security requirements and I thought, Wait a minute, we don't even have most of that in our own data center".
Deciding to use Cloud Computing is just like getting behind the wheel of a car for the first time. On the one hand, many of your colleagues may have already made the leap, but on the other hand, getting onto a busy highway for the first time can be scary — especially when you keep seeing stories of horrible accidents on the news. However, it's not much more dangerous to drive than it is to drink coffee on a moving train or to wait at a bus stop.
For the most part, the situation with Cloud Computing is the same as with classic software usage models. The Cloud environment requires attention to information security, but we're confident that solutions will emerge for the problems that currently exist. There are specific nuances in Cloud security, primarily related to a shift of priorities — from perimeter protection to edge device protection. But if data security developers help companies resolve this problem, the future for "Clouds" will be sunny indeed.
Carl Manion is a managing principal of Raytheon Foreground Security.
Targeted attack campaigns by advanced cyber adversaries have become a mainstay that most—if not all—organizations now need to be concerned about. This type of threat may stay hidden on your network, undetected for long periods of time, moving laterally across your systems as the attackers try to find the valuable information they're interested in stealing.
Although such targeted attacks are difficult to detect, there are proven techniques and best practices, such as threat hunting, that can be implemented to significantly improve your chances of finding clues that serve as indicators of ongoing attacks. As such, it’s highly critical for enterprises to incorporate best practices into their security operations to mitigate the risks that targeted attacks pose.
Implementing a threat-hunting capability, along with standard IT security controls and monitoring systems, can improve an organization’s ability to detect and respond to threats. Because threat hunting is primarily a human-based activity, it takes skilled threat-hunting experts to implement an effective program.
So what makes a threat hunter successful? Here’s a list of four critical skills:
1. Pattern Recognition/Deductive Reasoning: Attackers are constantly finding new, creative ways to exploit weaknesses in popular operating systems and applications. Unforeseen zero-day exploits with no existing signatures are nearly an everyday occurrence, therefore, threat hunters need to look for patterns that match the tactics, techniques and procedures of known threat actors, advanced malware and unusual behaviors. To detect such patterns, a skilled threat hunter must also understand what normal behavior and patterns look like on their network. They must also be able to formulate and develop logical theories on how to access a network or exploit a system to gain access to specific critical information. Once they’ve developed their theory, they need to be able to work backward, using deductive reasoning, to look for likely clues and traces that would be left behind by attackers within those scenarios.
2. Data Analytics: Threat hunters rely on technology to monitor environments and collect logs and other data to perform data analytics. As such, threat hunters must have a solid understanding of data analytics and data science approaches, tools and techniques. Leveraging best practices such as the use of data visualization tools to create charts and diagrams significantly helps threat hunters identify patterns so they can determine the best actions to take in conducting threat-hunting activities and related investigations (a minimal example of this kind of frequency analysis appears after this list).
3. Malware Analysis/Data Forensics: When threat hunters find new threats, they often have to analyze and reverse engineer newly discovered malware and data forensics activities to understand how the malware was initially deployed, what its capabilities are and the extent of any damage or exposure it may have caused.
4. Communication: Once a threat hunter detects a threat, vulnerability, or weakness within the target network, they must effectively communicate to the appropriate stakeholders and staff members so the issue can be addressed and mitigated. If threats and related risks aren’t properly communicated to the right stakeholders, attackers will continue to have the upper hand.
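To make the pattern-recognition and analytics skills above concrete, here is a minimal, hypothetical sketch of one common hunting technique: frequency analysis of parent/child process pairs pulled from endpoint logs. The event data and threshold are illustrative, and rarity is only a starting point for a hunt, not proof of compromise.

```python
from collections import Counter

def rare_pairs(events, threshold=0.05):
    """Return parent->child process pairs that are rare across the data set."""
    counts = Counter(events)
    total = sum(counts.values())
    return {pair: n for pair, n in counts.items() if n / total <= threshold}

# Hypothetical events parsed from endpoint logs as (parent, child) tuples.
events = [("explorer.exe", "chrome.exe")] * 40 + [
    ("winword.exe", "powershell.exe"),  # unusual: Office spawning a shell
]

for pair, n in rare_pairs(events).items():
    print("investigate:", pair, "seen", n, "time(s)")
```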
As cyber adversaries continue to evolve, skilled threat analysts are needed to help defend our networks. Fortunately, a recent survey conducted by the National Cyber Security Alliance found 37 percent of young adults say they're more likely to consider a cyber career than they were a year ago. Young adults also said they're interested in career opportunities that will allow them to use their problem-solving, data analysis and communication skills. Threat hunting is an opportunity for them to use all of those skills.
It did not light up the sky like a real aurora borealis can, but researchers with the U.S. Naval Research Laboratory said they have created an artificial version that can be used to explore ionospheric occurrences and their impact on communications, navigation and space weather.
Specifically, what the researchers did was produce what they called a "sustained high density plasma cloud in Earth's upper atmosphere," using the 3.6-megawatt High-frequency Active Auroral Research Program (HAARP) transmitter facility in Gakona, Alaska.
"Previous artificial plasma density clouds have lifetimes of only ten minutes or less," said Paul Bernhardt, Ph.D., NRL Space Use and Plasma Section in a statement. "This higher density plasma 'ball' was sustained over one hour by the HAARP transmissions and was extinguished only after termination of the HAARP radio beam."
According to the lab, the HAARP transmitter creates plasma clouds, or balls of plasma, which are being studied for use as artificial mirrors at altitudes 50 kilometers below the natural ionosphere and are to be used for reflection of high frequency (HF) radar and communications signals. The artificial plasma clouds are detected with HF radio soundings and backscatter, ultrahigh frequency (UHF) radar backscatter, and optical imaging systems, the lab stated.
The test cloud in this case glowed green and could be seen by the naked eye, but it was nowhere near as impressive as a true aurora borealis light display.
The tests are part of research sponsored by the Defense Advanced Research Projects Agency (DARPA) and its program known as Basic Research on Ionospheric Characteristics and Effects (BRIOCHE) which it says explores the "physics of ionospheric storms, scintillations and other ionospheric effects over a broad range of optical and radio frequencies."
Researchers have uncovered an extremely critical vulnerability in recent versions of OpenSSL, a technology that allows millions of Web sites to encrypt communications with visitors. Complicating matters further is the release of a simple exploit that can be used to steal usernames and passwords from vulnerable sites, as well as private keys that sites use to encrypt and decrypt sensitive data.
“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.”
An advisory from Carnegie Mellon University’s CERT notes that the vulnerability is present in sites powered by OpenSSL versions 1.0.1 through 1.0.1f. According to Netcraft, a company that monitors the technology used by various Web sites, more than a half million sites are currently vulnerable. As of this morning, that included Yahoo.com, and — ironically — the Web site of openssl.org. This list at Github appears to be a relatively recent test for the presence of this vulnerability in the top 1,000 sites as indexed by Web-ranking firm Alexa.
An easy-to-use exploit that is being widely traded online allows an attacker to retrieve private memory of an application that uses the vulnerable OpenSSL “libssl” library in chunks of 64kb at a time. As CERT notes, an attacker can repeatedly leverage the vulnerability to retrieve as many 64k chunks of memory as are necessary to retrieve the intended secrets.
Jamie Blasco, director of AlienVault Labs, said this bug has “epic repercussions” because not only does it expose passwords and cryptographic keys, but in order to ensure that attackers won’t be able to use any data that does get compromised by this flaw, affected providers have to replace the private keys and certificates after patching the vulnerable OpenSSL service for each of the services that are using the OpenSSL library [full disclosure: AlienVault is an advertiser on this blog].
It is likely that a great many Internet users will be asked to change their passwords this week (I hope). Meantime, companies and organizations running vulnerable versions should upgrade to the latest iteration of OpenSSL – OpenSSL 1.0.1g — as quickly as possible.
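As a rough first triage (not an exploit test), the snippet below reports which OpenSSL build is linked into the local Python runtime and compares it against the known-vulnerable range. It is a minimal sketch: it says nothing about other OpenSSL copies or services on the machine.

```python
import ssl

# OpenSSL 1.0.1 through 1.0.1f ship the vulnerable heartbeat code.
VULNERABLE = ("1.0.1",) + tuple("1.0.1" + c for c in "abcdef")

version = ssl.OPENSSL_VERSION          # e.g. 'OpenSSL 1.0.1f 6 Jan 2014'
number = version.split()[1]
if number in VULNERABLE:
    print(version, "-> likely vulnerable to Heartbleed; upgrade to 1.0.1g")
else:
    print(version, "-> not in the known-vulnerable range")
```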
Update, 2:26 p.m.: It appears that this Github page allows visitors to test whether a site is vulnerable to this bug (hat tip to Sandro Süffert). For more on what you can do to protect yourself from this vulnerability, see this post.
Emerging protocol can help manage the Internet of Things
- By Kevin McCaney
- May 01, 2013
The ever-expanding networks of sensors and other machine-to-machine devices on the “Internet of Things” are creating huge stores of data for everything from traffic and weather monitoring to health care and finance.
And there are only going to be more of them. Sensors, for example, will play a key role in the Obama Administration's recently released National Strategy for Civil Earth Observations, a plan to increase the efficiency and effectiveness of Earth observations. Along with other steps toward streamlining the efforts of the 11 agencies involved in the observations, it calls for extensive use of sensors in gathering the data.
Of course, having all that data is one thing. Making sense of it — quickly — is another. One key is the emerging Message Queuing Telemetry Transport (MQTT) protocol, a lightweight messaging transport for machine-to-machine communications that recently was proposed as an OASIS standard.
OASIS in March began the process “to define an open publish/subscribe protocol for telemetry messaging designed to be open, simple, lightweight and suited for use in constrained networks and multi-platform environments.” The protocol, which consumes little power, is designed to help sensors and other devices — which tend to be low-power and low-bandwidth — communicate reliably.
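To make the publish/subscribe model concrete, here is a minimal sketch assuming the third-party Eclipse Paho client library (paho-mqtt, 1.x API) and a broker reachable at localhost:1883; the topic names and payload are illustrative.

```python
import paho.mqtt.client as mqtt  # third-party Eclipse Paho client (1.x API)

def on_connect(client, userdata, flags, rc):
    # Subscribe after every (re)connection so the subscription survives drops.
    client.subscribe("sensors/+/temperature")

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, keepalive=60)

# QoS 1 asks the broker to confirm delivery at least once -- useful on
# the constrained, lossy links MQTT is designed for.
client.publish("sensors/gate-7/temperature", "21.5", qos=1)
client.loop_forever()
```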
While OASIS works on a standard, MQTT already is being put to use. IBM said support of MQTT was "foundational" to its just-released MessageSight appliance, which is designed to "help organizations manage and communicate with the billions of mobile devices and sensors found in systems such as automobiles, traffic management systems, smart buildings and household appliances."
IBM cites research estimating that by 2020 there will be 22 billion devices connected to the Web, generating 2.5 quintillion bytes of data daily, enough to fill 7 million DVDs every hour. (For comparison, when HD Moore, lead researcher for Rapid 7, recently pinged the entire Internet “for fun,” he found 3.7 billion connected devices, Technology Review reported. Other estimates are higher; Cisco puts the number at 8.7 billion.)
Products such as MessageSight, part of IBM's Smarter Planet strategy, allow for real-time processing of all that information. The company said MessageSight can support 1 million sensors or smart devices at a time, and it can handle up to 13 million messages per second. In situations such as traffic monitoring or weather emergencies, it can allow agencies to make decisions quickly.
In developing a standard for MQTT, OASIS said it should allow bi-directional messaging, provide reliable messaging on networks with limited bandwidth, and have connectivity awareness for devices and networks that are intermittently connected. It also should be flexible enough to allow for high-volume bursts of data (as weather sensors might produce in a hurricane) and, because of its open architecture, support a growing range of devices.
Kevin McCaney is a former editor of Defense Systems and GCN.
Kids' parties sometimes have entertainers geared at doing all sorts to keep children occupied: magic tricks, juggling and so on. I don't know about you, but I'd much rather have this awesome Japanese robot doing the entertainment at parties than the typical human act.
The folks over at Chiba University demonstrated how one of its hand-arm systems can juggle two balls at once--single-handedly and extremely fast--at the IEEE International Conference on Robotics and Automation. The research is inspired by the human ability and skill to juggle balls, and hopes to transfer this sort of talent to robots.
Due to how dexterous the robotic arm is, it can move in an almost human-like manner, which aids its juggling ability. What makes the arm even cooler is that it's rigged up to a high-speed vision system, which allows it to plan for every catch and throw. When we say high-speed, by the way, we're talking 500 frames per second.
Sadly there is a slight catch: Since it lacks a shoulder joint, the robot can only manage about five cycles before it drops a ball. This is because the robot can't get to a ball if it drifts slightly, either due to the environment or the way the robot throws the ball in the first place.
Of course, the researchers are looking into ways of improving the juggle bot's technique with new ways of throwing the ball. They are also hoping to add in more balls to the robot's act, and then other complicated variants of juggling. Not bad for a metal arm!
This story, "Juggling Robot Does Tricks, Is Cooler Than Your Typical Clown" was originally published by PCWorld. | <urn:uuid:1ad1419c-a9c3-4054-b222-5599e62fd6c7> | CC-MAIN-2017-09 | http://www.itworld.com/article/2726798/it-management/juggling-robot-does-tricks--is-cooler-than-your-typical-clown.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00490-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942719 | 428 | 2.546875 | 3 |
Hackers have targeted politicians, journalists and activists using the 'legal' spyware tool, Galileo, with previously undiscovered mobile trojans that work on Android and iOS.
Kaspersky Lab has published a new research report mapping the presence of a large global infrastructure used to control 'Remote Control System' malware implants.
The report also identifies previously undiscovered mobile Trojans that work on both Android and iOS. The uncovered modules are part of the so-called 'legal' spyware tool, Galileo, developed by the Italian company, HackingTeam.
The list of victims referred to in the new research, conducted by Kaspersky Lab together with its partner Citizen Lab, includes activists and human rights advocates, as well as journalists and politicians, according to Kaspersky. One of the major discoveries has been learning precisely how a Galileo mobile Trojan infects an iPhone. It helps if the device is jailbroken. However, non-jailbroken iPhones can become vulnerable too.
An attacker can run a jailbreaking tool like 'Evasi0n' via a previously infected computer and conduct a remote jailbreak, followed by the infection, according to Kaspersky Lab. The operators behind the Galileo RCS build a specific malicious implant for every concrete target.
The attacker then delivers it to the mobile device of the victim.
Some of the known infection vectors include spear-phishing and social engineering attacks, often coupled with exploits (including zero-days), as well as local infections delivered via USB cable while synchronising mobile devices. Kaspersky Lab's experts recommend that users avoid jailbreaking their iPhones and constantly update the iOS on their devices to the latest version. "The RCS mobile modules are meticulously designed to operate in a discreet manner," the report said.
"This is implemented through carefully customised spying capabilities executed through special triggers.
"For example, an audio recording may start only when a victim is connected to a particular Wi-Fi network; when the user changes the SIM card; or while the device is charging." In general, the RCS mobile Trojans are capable of performing many different kinds of surveillance functions, including reporting the target's location; taking photos; copying events from the calendar; registering new SIM cards inserted in the infected device; and interception of phone calls and messages.
In addition to regular SMS texts, the latter includes messages sent from specific applications such as Viber, WhatsApp and Skype. Kaspersky Lab has been working on different security approaches to locate Galileo's command and control servers around the globe.
For the identification process, experts relied on special indicators and connectivity data obtained through reverse engineering of existing samples. During the latest analysis, Kaspersky Lab's researchers were able to map the presence of more than 320 RCS command and control servers in over 40 countries.
The majority of the servers were based in the United States, Kazakhstan, Ecuador, the UK and Canada. Kaspersky Lab principal researcher, Sergey Golovanov, said the presence of these servers in a given country didn't mean they were used by that particular country's law enforcement agencies.
"However, it makes sense for the users of RCS to deploy C&Cs in locations they control -- where there are minimal risks of cross-border legal issues or server seizures." Although in the past it had been known that HackingTeam's mobile Trojans for iOS and Android existed, no organisation has actually previously identified them or noticed them being used in attacks, according to Kaspersky.
New variants of samples received from victims through Kaspersky Lab's cloud-based Security Network assisted with the investigation.
This story, "Politicians and journalists hacked using 'legal' spyware tool Galileo" was originally published by ARNnet. | <urn:uuid:801f41be-883e-4b25-a6d0-5c0fb3298941> | CC-MAIN-2017-09 | http://www.csoonline.com/article/2367525/politicians-and-journalists-hacked-using-legal-spyware-tool-galileo.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00610-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947063 | 768 | 2.53125 | 3 |
Corporations and government are using information about us in a new – and newly insidious – way. Employing massive data files, much of the information taken from the Internet, they profile us, predict our good or bad character, credit worthiness, behavior, tastes, and spending habits – and take actions accordingly.
As a result, millions of Americans are now virtually incarcerated in algorithmic prisons.
Some can no longer get loans or cash checks. Others are being offered only usurious credit card interest rates. Many have trouble finding employment because of their Internet profiles. Others may have trouble purchasing property, life, and automobile insurance because of algorithmic predictions. Algorithms may select some people for government audits, while leaving others to find themselves undergoing gratuitous and degrading airport screening.
An estimated 500 Americans have their names on no-fly lists. Thousands more are targeted for enhanced screening by the Automated Targeting System algorithm used by the Transportation Security Administration. By using data including "tax identification number, past travel itineraries, property records, physical characteristics, and law enforcement or intelligence information," the algorithm is expected to predict how likely a passenger is to be dangerous.
Algorithms also constrain our lives in virtual space. They determine what products we will be exposed to. They analyze our interests and play an active role in selecting the things we see when we go to a particular website.
Eli Pariser argues in The Filter Bubble: "You click on a link, which signals your interest in something, which means you are more likely to see articles about that topic," and then "You become trapped in a loop…" The danger is that you emerge with a very distorted view of the world.
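The loop Pariser describes is easy to see in a toy model. The sketch below is a hypothetical, deliberately simplified simulation (real ranking systems are far more complex) in which each click multiplies a topic's future exposure, so early random choices get locked in.

```python
import random

def filter_bubble(steps=20, boost=1.5):
    """Toy click-feedback loop: clicking a topic boosts its future exposure."""
    weights = {"politics": 1.0, "sports": 1.0}
    for _ in range(steps):
        topics = list(weights)
        shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
        weights[shown] *= boost  # the click feeds back into the ranking
    return weights

print(filter_bubble())  # one topic usually dominates after a few steps
```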
If you’re having trouble finding a job as a software engineer, it may be because you got a low score from Gild, a company that predicts the skill of programmers by evaluating the open source code they have written, the language they use on LinkedIn, and how they answer questions on software social forums.
Algorithmic prisons are not new. Even before the Internet, credit reporting and rating agencies were a power in our economy. Fitch, Moody’s, and Standard & Poor’s have been rating business credit for decades. Equifax, the oldest credit rating agency, was founded in 1899.
When algorithms get it right (and in general they do a pretty good job), they provide extremely valuable services to the economy. They make our lives safer. They make it easier to find the products and services we want. Amazon constantly alerts me to books it correctly predicts I will want to read. They increase the efficiency of businesses.
But when algorithms get it wrong, real suffering follows.
Most of us would not be concerned if ten or a hundred times too many people ended up on the TSA’s enhanced airport screening list as long as an airplane hijacking was avoided. In times when jobs are scarce and applicants many, most employers would opt for tighter algorithmic screening. There are lots of candidates to hire and more harm may be done by hiring a bad apple than by missing a potentially good new employee. And avoiding bad loans is key to the success of banks. Missing out on a few good ones in return for avoiding a big loss is a decent trade off.
But we’ve reached the point where, in many cases, private companies and public institutions stand to gain more than they will lose if a lot of innocent people end up in algorithmic prison.
A related concern is this: Surveillance has become automated through the use of Internet tools, data captured from cellular phones, low-cost cameras, and the ability to economically analyze big databases. As a result, it has become much easier -- and a lot less costly -- to construct algorithmic prisons. Not only can we expect to see a great increase in the number of algorithmic prisons, but thanks to cheaper and more efficient tools the value derived from establishing them will increase.
A number of services already facilitate the creation of algorithmic prisons. Acxiom, for instance, a marketing services company, monitors 50 trillion transactions annually and maintains about 1,500 data points on 500 million consumers worldwide. That same database can serve as a key component in the construction of an algorithmic prison.
There are other features of algorithmic prisons that a latter-day antagonist in a tale by Kafka might have dreamed up. A consumer or job seeker might know only that he has trouble getting credit or a job interview. What he may not know is that the bars of an invisible prison are keeping him from reaching his goal.
The federal Consumer Financial Protection Bureau lists more than 40 consumer-reporting companies. These are services that provide reports for banks, check cashers, payday lenders, auto and property insurers, utilities, gambling establishments, rental companies, medical insurers, and companies wanting to check out employment history. The good news is that the Fair Credit Reporting Act requires those companies to give consumers annual access to their reports and allows a consumer to complain to the Consumer Financial Protection Bureau if he is being treated unfairly.
Good luck with that.
Even if an algorithmic prisoner knows he is in a prison, he may not know who his jailer is. Is he unable to get a loan because of a corrupted file at Experian or Equifax? Or could it be TransUnion? His bank could even have its own algorithms to determine a consumer’s creditworthiness. Just think of the needle-in-a-haystack effort consumers must undertake if they are forced to investigate dozens of consumer-reporting companies, looking for the one that threw them behind algorithmic bars. Now imagine a future that contains hundreds of such companies.
A prisoner might not have any idea as to what type of behavior got him sentenced to a jail term. Is he on an enhanced screening list at an airport because of a trip he made to an unstable country, a post on his Facebook page, or a phone call to a friend who has a suspected terrorist friend?
Finally, how does one get his name off an enhanced screening list or correct a credit report? Each case is different. The appeal and pardon process may be very difficult—if there is one.
It is impossible to fathom all the implications of algorithmic prisons. Yet a few things are certain: Even if they do have great economic value for businesses, and even if they do make our country a safer place, as they continue to proliferate, many of us will be injured, seriously inconvenienced, or greatly frustrated as a result.
Even if we all believed algorithmic prisons present a serious threat to individual freedom, it would be difficult to come up with a reasonable solution to the problems they create.
I would personally favor requiring all companies to destroy within, say, 48 hours, all data collected about me unless I have given explicit permission otherwise. I would also prohibit the sale of my personal information or its use for advertising.
Well, that is a nice idea but it is fraught with problems. Under those rules, accurate credit reports would be impossible. And I would want law enforcement agencies to have access to all that information subject to the right restrictions and oversight. If the data is destroyed, that would be impossible.
What is clear is that the consumer protections in place at the moment do not suffice. An additional set of carefully constructed restrictions is required. Being held in any number of algorithmic prisons is a scenario I for one do not want to be caught up in. And I doubt I am alone.
Bolivarian Republic of Venezuela
Ministry of Popular Power for University Education, Science and Technology
University Institute of Management (IUPG Professions)
Section: 6 "DB"
Active Voice & Passive Voice
Professor: Norman A. Canaie
Members:
Caracas, 13 October 2016
Contents:
Active Voice
Subject --> Action --> Object
The Passive Voice
Grammatical Rules
Uses of the Passive Voice
Comparison of Active Voice and Passive Voice
Active / Passive Overview
In the English language there are several grammatical devices used in sentences; among them are the voices, which can be active or passive. When we speak of the active we refer to something that produces an effect, and when we speak of the passive we refer to something that lies dormant, letting things happen without its intervention.
Likewise, a sentence can be in the passive voice. The passive voice appears in nominative-accusative languages, and in it the verb has a subject that undergoes the action (a patient subject) rather than one that performs, executes or controls it, as in the active voice. In other words, the active voice is the one in which the action of the verb is carried out by the subject and falls on the object. The difference between the active voice and the passive voice is not merely formal. It is also a difference in meaning: in the active sentence, the subject of the sentence is responsible for the action; in the passive sentence, the subject of the sentence receives the effects of the action.
The active voice is a grammatical category linked to a way of conjugating verbs. Also known as the direct voice, the active voice expresses an agent subject who executes an action.
A sentence is said to be in the active voice when the action expressed by the verb is performed by the grammatical subject:
(Pedro de Mendoza founded Buenos Aires)
In this mode, the prayer expresses that a subject performs an action and that action is received by the object.
Subject --> action --> Object
This is the type of sentence we use most frequently, in all tenses. In the following sentences, the action of sweeping is performed by Julia and falls on the street. We can express it in different tenses.
Present: Julia sweeps the street.
Past: Julia swept the street.
Future (will): Julia will sweep the street.
Future (going to): Julia is going to sweep the street.
Present Continuous: Julia is sweeping the street.
Past Continuous: Julia was sweeping the street.
Future Continuous: Julia will be sweeping the street.
Present Perfect: Julia has swept the street.
Past Perfect: Julia had swept the street.
Future perfect: Julia will have swept the street.
Present Perfect Continuous: Julia has been sweeping the street.
Past Perfect Continuous: Julia had been sweeping the street.
Future Perfect Continuous: Julia will have been sweeping the street.
As we can see, in every sentence we have a subject that acts (Julia) and an object that receives the action (the street). The action performed is expressed in the different tenses.
In the case of the simple tenses (present, past and future), we use the simple conjugation corresponding to each tense.
In the case of the perfect tenses, we use the basic structure of each of them, always with "have" in the corresponding tense plus the participle of the main verb (has swept, had swept, will have swept).
For the continuous tenses, the active voice always uses the gerund form (the verb ending in -ing).
We also use the active voice to express an action performed by the subject without indicating its object:
The car runs.
The flower will bloom.
The dog barked.
We also use the active voice with the modal verbs to indicate that someone can or must perform an action. In these cases the verb is used in its simple present, past, or future conjugations:
You must study.
Mary could win the race.
We may achieve the goal.
They can arrive on time.
PRESENT SIMPLE (SIMPLE PRESENT)
SUBJECT + VERB + COMPLEMENT.
THEY PAINT THE HOUSE.
PRESENT PROGRESSIVE (PRESENT CONTINUOUS):
SUBJECT + VERB TO BE + (VERB + ING) + COMPLEMENT
THEY ARE PAINTING THE HOUSE
SIMPLE PAST (PAST SIMPLE):
SUBJECT + VERB IN PAST + COMPLEMENT
THEY PAINTED THE HOUSE
PAST PROGRESSIVE (PAST CONTINUOUS):
SUBJECT + VERB TO BE IN PAST + (VERB + ING) + COMPLEMENT
THEY WERE PAINTING THE HOUSE
PRESENT PERFECT (PERFECT PRESENT)
SUBJECT + HAVE OR HAS + PAST PARTICIPLE + COMPLEMENT
EXAMPLE: THEY HAVE PAINTED THE HOUSE
PRESENT PERFECT PROGRESSIVE:
SUBJECT + HAVE OR HAS BEEN + (VERB + ING) + COMPLEMENT
THEY HAVE BEEN PAINTING THE HOUSE
GOING TO FUTURE (FUTURE GOING TO)
SUBJECT + VERB TO BE + GOING TO + VERB + COMPLEMENT
THEY ARE GOING TO PAINT THE HOUSE
WILL FUTURE (FUTURE WILL)
SUBJECT + WILL + VERB + COMPLEMENT
THEY WILL PAINT THE HOUSE
The Passive Voice
The passive voice is a construction or verbal conjugation, found in some languages, in which the subject is passive (a patient subject), while the action expressed by the verb is carried out by a complement (the agent complement) rather than by the subject, as happens with the agent subject of the active voice. The passive voice turns a transitive verb into an intransitive one with only one possible main argument (the agent, when expressed, appears as a complement marked with an oblique case or a preposition).
Grammatical voice is the category associated with the verb that indicates the semantic link the verb maintains with the subject and the object. Depending on the grammatical voice, the subject is a patient or an agent, according to whether it receives or performs the action.
Auxiliary verb (to be) + past participle
PRESENT SIMPLE
Am/are/is + PP
Spanish is spoken here.
PRESENT CONTINUOUS
Am/are/is being + PP
Your questions are being answered.
FUTURE (WILL)
Will be + PP
It'll be painted by next week.
FUTURE (GOING TO)
Am/are/is going to be + PP
Terry is going to be made redundant next year.
PAST SIMPLE
Was/were + PP
We were invited to the party, but we didn't go.
PAST CONTINUOUS
Was/were being + PP
The hotel room was being cleaned when we got back from shopping.
PRESENT PERFECT
Have/has been + PP
The President of America has been shot.
PAST PERFECT
Had been + PP
When I got home I found that all of his money had been stolen.
FUTURE PERFECT
Will have been + PP
Our baby will have been born before Christmas.
Present Simple Passive
Present simple: Subject + present the verb to be + main verb in passive participle.
Simple Present is used for common actions and general truths.
Example: two days a week.
Subject + past verb to be + main verb in passive participle.
Note: The past simple of to be is singular Was - Were Plural. (Depending on the subject.)
Simple Past is used for actions that are completed.
Subject + present continuous to be + main verb in passive participle.
Note: The present continuous is to be being Am - Is being - are being. (Depending on the subject)
The Present Continuous is used for actions that are trascurriendo at the time we're talking about.
Example: The children are at school being Taught French.
Subject + past continuous to be + main verb in passive participle.
Note: The past continuous is to be being Was, Were being the past continuous is used for actions that were trascurriendo in the past. Example: The house was being painted.
Subject + present perfect to be + main verb in passive participle.
Note: The present perfect of to be Have been or is He Has been present perfect is used for completed actions that relate to the present time.
Example: Mike has-been told. I Have Been Promoted to general manager.
Subject + Past perfect of to be + main verb in passive participle.
Note: past perfect of to be is HAD Been for all subjects.
The past perfect is used when we have two past actions and precedes the other.
Example: By the time the police arrived, the house had broken into Been.
Subject + will be + main verb in passive participle.
The Simple future passive is used for actions that will occur at a certain time in the future.
Example: Alicia will be in marriage Asked by Steven.
+ Will Have Been + main verb in passive participle.
The future perfect passive is used for actions that were completed in a certain time in the future.
Auxiliary question and denial are: will Have Been
Example: She Will Have Been invited to the party
Would be + subject + main verb in passive participle.
Example: Amanda would be Asked out.
Subject + Would Have Been + main verb in passive participle.
It is used to set unrealistic actions it refers to actions already completed
The passive voice is formed with the auxiliary verb "to be" and the past participle of the verb.
Subject + auxiliary verb (to be) + past participle…
The passive voice, as its name indicates, appears with the subject patient. An add-in is the grammatical element that executes the action of the verb, while the subject receives it
The speech is written for the president.
The house was built in 1975.
My wallet has been stolen. The room will be cleaned while we are out.
Uses of the Passive Voice
We use the passive voice when we do not know who has performed the action.
To civilian has been killed.
The car was stolen.
We use the passive voice when we want to give more importance to what happened, that who performed the action or when we do not want to say who was made.
The letter was delivered yesterday.
A mistake was made.
Note: we cannot use the passive voice with verbs intransitivos as "die", "arrive" or "Go". Intransitivos verbs are verbs which do not bear a direct object.
To pass a sentence in the active voice to passive voice, the direct subject of prayer active (the testimonies) becomes the subject of the passive.
The passive voice of process shape with the corresponding shape of being, while the passive state shape with BE. The main verb (collect) is included in the passive sentence as variable participle, agreeing in gender and number with the noun (collected/to/os/ACE).
The subject of the active prayer can be included as a complement in the passive sentence preceded by the preposition by, but in the passive state will not appear.
The police records the testimonies.
The testimonies are collected (by the police). → Passive process
The testimonies are collected. → Passive state
Testimonies (masculine, plural) → collected
Summary - tenses in the active voice, the passive state and the passive process
The testimonies are collected.
← Present → The police records the testimonies.
The testimonies were collected.
← imperfect → The police collected testimonies.
The testimonies were gathered.
← Undefined → The police collected the testimony.
The testimonies have been collected.
← The Present Perfect Tense → The police has collected testimonies.
The evidence had been collected.
← Past Pluscuamperfecto → The police had collected the testimony.
The testimonies will be collected.
← Future Simple → The police will collect the testimonies.
← Future composed → The police have been collected the testimony.
The testimony would be collected.
← Conditional Simple → The police would reap the testimonies.
The witnesses have been collected.
← Composite Conditional → The police would have collected the testimony.
"The passive voice in English is generally used in written records, in scientific articles and technical documents, but also in newspapers or other formal documents.
Normally we focus on agents that starring the actions, however, the passive voice allows us to speak of objects, processes or people who are passive subjects of actions undertaken by other people, and in this way give prominence in our speech".
It is said that a sentence is in passive voice when the significance of the word is received by the person to whom one is grammatical relates: Buenos Aires was founded by Pedro de Mendoza.
Forms with the assistant of the verb to be and the past participle of the verb that conjugates.
The supplement to the prayer becomes active subject of the passive. As in Spanish, the subject of the active can be retained as subject agent.
When a verb has two add-ons can make two passive structures:
To book was sent to Tom by Mr. Smith.
Tom was sent to book by Mr. Smith (passive language).
Model of verb in Passive Voice TO BE SEEN = BE SEEN
I am seen.
You are seen.
He is seen.
We are seen.
They are seen.
I have been seen.
You have been seen.
I have been seen.
We have been seen.
They have been seen.
I was seen.
You were seen.
I was seen.
We were seen.
They were seen.
I shall be seen.
You will be seen.
I will be seen.
We shall be seen.
You will be seen.
They will be seen.
I had been seen.
I should be seen.
I shall have been seen.
I should have been seen.
Comparison of Active Voice and Passive Voice
S. Liabilities + + verb. Passive + Agent Plug-in
S + verb. + Complement(can be direct, indirect or circumstantial)
The Passive Voice is prayer where the meaning of the word is received by the person to whom it refers
Active Voice is when a person performs the action directly or also on the same person
Passive Voice The Agent Plug-in always has to start with the preposition -By-
Transitive verb without there is no OD and therefore there is no active voice
The carriage is being sought by Me
I seek The Expensive
ACTIVE / PASSIVE OVERVIEW
Once a week, Tom cleans the house.
Once a week, the house is cleaned by Tom.
Right now, Sarah is writing the letter.
Right now, the letter is being written by Sarah.
Sam repaired the car.
The car was repaired by Sam.
The salesman was helping the customer when the thief came into the store.
The customer was being helped by the salesman when the thief came into the store.
Many tourists have visited that castle.
That castle has been visited by many tourists.
Present Perfect Continuous
Recently, John has been doing the work.
Recently, the work has been being done by John.
George had repaired many cars before he received his mechanic's license.
Many cars had been repaired by George before he received his mechanic's license.
Past Perfect Continuous
Chef Jones had been preparing the restaurant's fantastic dinners for two years before he moved to Paris.
The restaurant's fantastic dinners had been being preparedby Chef Jones for two years before he moved to Paris.
Simple Future will
Someone will finish the work by 5:00 PM.
The work will be finished by 5:00 PM.
Simple Future be going to
Sally is going to make a beautiful dinner tonight.
A beautiful dinner is going to be made by Sally tonight.
Future Continuous will
At 8:00 PM tonight, John will be washing the dishes.
At 8:00 PM tonight, the dishes will be being washed by John.
Future Continuous be going to
At 8:00 PM tonight, John is going to be washing the dishes.
At 8:00 PM tonight, the dishes are going to be being washedby John.
Future Perfect will
They will have completed the project before the deadline.
The project will have been completed before the deadline.
Future Perfect be going to
They are going to have completed the project before the deadline.
The project is going to have been completed before the deadline.
Future Perfect Continuous will
The famous artist will have been painting the mural for over six months by the time it is finished.
The mural will have been being painted by the famous artist for over six months by the time it is finished.
Future Perfect Continuous be going to
The famous artist is going to have been painting the mural for over six months by the time it is finished.
The mural is going to have been being painted by the famous artist for over six months by the time it is finished.
Jerry used to pay the bills.
The bills used to be paid by Jerry.
My mother would always make the pies.
The pies would always be made by my mother.
Future in the Past Would
I knew John would finish the work by 5:00 PM.
I knew the work would be finished by 5:00 PM.
Future in the Past Was Going to
I thought Sally was going to make a beautiful dinner tonight.
I thought a beautiful dinner was going to be made by Sally tonight.
It can be noted that even if the active voice is easier and clearer to many people, does not mean that the passive must be put aside.
The active voice and the passive voice are two ways of presenting the same situation focusing it from different perspectives. In the case of the active voice, we want the person responsible for the action (the agent), while in the passive voice we want the patient or the result of this action. So each person depend on voice use but most important is to have clear the grammatical structure of each sentence.
2 | Página Active Voice and Passive Voice
They will have completed the project b | <urn:uuid:ca306da6-d9b2-416b-9184-ae70b51733d6> | CC-MAIN-2017-09 | https://docs.com/jefferson-castro-1/6529/trabajo-de-ingles-06-octubre-traducido | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00134-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94859 | 3,942 | 2.96875 | 3 |
While the world's attention was recently focused on the Syrian crisis and the alleged use of chemical weapons, cyber-criminals were taking advantage of the situation. News seekers, eager to learn about the latest developments, the possibility of a U.S. strike and the diplomatic efforts to end the civil war, became easy targets. Using fake news alerts, cyber-criminals lured unsuspecting readers to malicious websites where their devices were infected with advanced, information-stealing malware.
In one such spear-phishing campaign, emails contained links that directed the reader to a legitimate website that had been compromised -- these sites are often called 'watering holes.' The compromised site contained malicious code that exploited a known Java vulnerability to silently download malware on the victim's machine using the now-familiar infection process known as a 'drive by download.'
Using breaking news to carry out phishing attacks is nothing new, but it is effective. That's because an email containing this type of information is more readily opened than one that claims to offer a unique investment opportunity or new weight loss product. In addition to breaking news, attackers will also exploit the names of trusted institutions to deliver malware.
For example, in July the FBI's Internet Crime Complaint Center and the Department of Homeland Security received complaints regarding a ransomware campaign using the name of DHS to extort money from unsuspecting victims. The scam directed victims to a download website where the Reveton malware was installed on their computers and attempted to coerce them into paying a fine to "unlock" the machine.
The Trojans installed in these cyber-attacks allow the criminals to capture log-in credentials and other sensitive information from the user's machine. This information is typically used to conduct financial fraud or an advanced targeted attack.
In August a hacker group called the Syrian Electronic Army (SEA) used a targeted phishing attack to steal credentials from a reseller for an Australian domain registrar. The stolen information was used to change the DNS (Domain Name System) records for several domain names, including nytimes.com, sharethis.com, huffingtonpost.co.uk, twitter.co.uk and twimg.com. This resulted in traffic to those websites being temporarily redirected to a server under the attackers' control.
Spear-phishing attacks use two techniques to secretly install malware on end-user devices. The first embeds a link to a malicious website in the email message that either takes advantage of application vulnerabilities to secretly install malware in the background or entices the user to download a file that contains malware. The second technique embeds a file in the email message, usually a "weaponized document" that secretly installs malware when opened. Additionally, machines can be compromised when users visit legitimate websites that have been infected with malware installers or by installing legitimate-looking files that actually contain malware (Trojan horses).
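To make the first technique concrete, here is a small, hypothetical heuristic of the kind mail filters apply; it flags only one classic tell (anchor text that displays one domain while the underlying href points to another) and is not Trusteer's or any vendor's actual detection logic:

```python
# Hypothetical phishing heuristic: flag links whose visible text shows one
# domain while the underlying href points somewhere else. (Python 3.9+)
import re
from urllib.parse import urlparse

ANCHOR = re.compile(r'<a[^>]+href="([^"]+)"[^>]*>([^<]+)</a>', re.IGNORECASE)

def suspicious_links(html: str):
    for href, text in ANCHOR.findall(html):
        link_host = (urlparse(href).hostname or "").lower()
        shown_host = (urlparse("http://" + text.strip()).hostname or "").lower()
        # Only meaningful when the visible text itself looks like a domain.
        if "." in shown_host and not link_host.endswith(shown_host.removeprefix("www.")):
            yield text.strip(), href

sample = '<a href="http://malicious.example.net/syria-update">www.cnn.com</a>'
print(list(suspicious_links(sample)))
# [('www.cnn.com', 'http://malicious.example.net/syria-update')]
```

Real filters combine many such signals (sender reputation, attachment analysis, URL blocklists); this naive check would miss look-alike domains, for example.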
Preventing these attacks is getting harder. Cyber-criminals are continuously sharpening their spear-phishing messages so they are more likely to be opened by users. Today, spear-phishing is one of the main tools used to compromise endpoints inside financial institutions. Once a machine is infected, an attacker can access information and has full control over the device. It can be used to commit financial fraud, or to gain a foothold within a corporate network. In fact, on June 25, 2013, the FBI issued a warning about the increase in the use of spear-phishing attacks to target multiple industry sectors.
Given the advancing sophistication and "believability" of phishing and especially spear-phishing attacks, end-user education no longer provides sufficient protection. Making sure that endpoint devices are properly patched to prevent the exploitation of vulnerabilities and drive-by downloads is essential. For stronger, more proactive protection, financial institutions should implement exploit prevention technologies that are now becoming available.
George Tubin is senior security strategist for cyber-crime prevention vendor Trusteer.
The "learn to code" movement is ramping up in full force this week, Computer Science Education Week, with major tech companies and celebrities supporting Code.org in its "Hour of Code" mission to get students of all ages to learn programming. If you'd like some free hands-on training, Apple will help you this Wednesday.
Head to any Apple Retail Store on December 11 at 5pm for a free one-hour workshop. One of the limitations of learning iOS programming is you need a Mac to do it, but by trying it out at an Apple Store, you can see what all the fuss is about and whether it's for you.
If that hour doesn't work for your schedule, there are tons of one-hour online tutorials available at Code.org, including ones taught by Mark Zuckerberg (with the Angry Birds), Bill Gates, and other top names in coding. Even more exciting for educators and students, perhaps: there are "unplugged" computer science lessons, so you can learn the programming mindset without the need for any devices (like expensive Macs).
President Obama put it this way: "Learning these skills isn't just important for your future, it's important for America's future." He continued: "Don't just buy a new video game, make one. Don't just download the latest app, help design it. Don't just play on your phone, program it."
Read more of Melanie Pinola’s Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
Defining the Issue
Consumers around the world continue to turn to mobile devices for voice, data, and Internet access. At the same time, demand continues to grow for technologies relying on wireless fixed use, also known as "transport" links or, in emerging markets, as access connections. The world's leading mobility technologies—incumbent technologies such as GSM and CDMA, and new ones like WiFi and WiMax—are constantly evolving to provide ever more robust broadband services. All of these factors spotlight the need for strategically allocated radio spectrum, both licensed and unlicensed.
Regulators who oversee spectrum play a key role in ensuring access to broadband spectrum is transparent, efficient, and equitable. One of the pitfalls in regulating the wireless spectrum is assigning specific frequencies to particular technologies, which inadvertently leads to "picking a technology winner."
Limiting spectrum to a particular service without flexibility or a plan for future allocations can result in unintended consequences. For example, in some developed countries, spectrum that has been assigned to inefficient 20th century technologies and services now needs to be migrated to 21st century technologies and services. Of course, countries also must work together to harmonize their spectrum allocations, such as 2.4 GHz unlicensed spectrum, to better support the needs of global citizens, businesses and international relations.
In addition to being the largest manufacturer of WiFi devices that use license-exempt spectrum, Cisco is also a vendor of WiMax base stations, antenna systems, and client cards. In addition, many Cisco networking technologies and solutions are designed to work with, and enhance, the operation of wireless networks. As a leading provider of wireless technology solutions, Cisco supports:
- New allocations and re-allocations of spectrum for broadband uses, including:
- 700 MHz, which was globally harmonized for wireless broadband services by the World Radiocommunication Conference of 2007. In countries where 700 MHz is occupied by analog television, Cisco supports the transition to digital broadcasting with the goal of opening up spectrum suitable for broadband applications.
- 2.5 and 3.4-3.6 GHz, frequencies used by WiMax systems. Cisco is working with the WiMax Forum to open this spectrum around the world.
- 2.4 and 5 GHz, license-exempt spectrum used by WiFi networks.
- Regulators should broadly allocate spectrum without regard to specific technologies, and should set minimum technical rules necessary to avoid harmful interference.
- Regulators should not specifically limit particular frequencies to a specific use (e.g., spectrum that may be used today for mobile TV delivery could also be used for bidirectional broadband services).
International Telecommunication Union
The National Telecommunications and Information Administration
That's right, March 14 is international Pi Day. Get it -- pi is 3.14, and March 14 is 3/14?
Most everyone knows pi -- the ratio of the circumference of a circle to its diameter. But how much do you really know about this magical number? Below are 28 fun facts about pi split up into tidy categories. Enjoy!
Pi in society
-Pi Day is also Albert Einstein's birthday, along with the birthdays of Apollo 8 Commander Frank Borman, Astronomer Giovanni Schiaparelli, and last-man-on-the-moon Gene Cernan.
-There is a pi cologne.*
-Computing pi is a stress test for a computer -- a kind of "digital cardiogram."*
-The record for calculating pi, as of 2010, is to 5 trillion digits (source: Gizmodo).
Random pi information
- If you were to print 1 billion decimal values of pi in ordinary font it would stretch from New York City to Kansas (source: Buzzle).
- 3.14 backwards looks like PIE.
- "I prefer pi" is a palindrome.
-If you divide the circumference of the sun by its diameter, what will you have? Pi in the sky! (source: Jokes4us.com)
- What do you get if you divide the circumference of a jack-o'-lantern by its diameter? Pumpkin pi! (source: Jokes4us.com)
Pi in movies and TV
-There's a reference to pi in "Star Trek."*
-Many movies have been made about pi, including "Pi: Faith in Chaos," which is about a man who goes mad trying to rationalize pi.*
-Other movie references to pi include pi being the secret code in Alfred Hitchcock's "Torn Curtain" and in "The Net" with Sandra Bullock.*
-In the book "Contact" by Carl Sagan, humans study pi to gain awareness about the universe.*
-The first million decimal places of pi consist of 99,959 zeros, 99,758 ones, 100,026 twos, 100,229 threes, 100,230 fours, 100,359 fives, 99,548 sixes, 99,800 sevens, 99,985 eights and 100,106 nines.*
-There are no occurrences of the sequence 123456 in the first million digits of pi -- but of the eight 12345s that do occur, three are followed by another 5. The sequence 012345 occurs twice and, in both cases, it is followed by another 5.*
-The first six digits of pi (314159) appear in order at least six times among the first 10 million decimal places of pi.*
-At position 763 there are six nines in a row, which is known as the Feynman Point.^
Pi the number
-The fraction 22/7 is a well-used number for Pi. It is accurate to 0.04025%.^
-Another fraction used as an approximation to Pi is (355/113), which is accurate to 0.00000849%.^
-A more accurate fraction of Pi is (104348/33215). This is accurate to 0.00000001056%.^
-The square root of 9.869604401 is approximately Pi.^
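Those accuracy figures are easy to verify; a few lines of Python reproduce them from the fractions quoted above:

```python
from math import pi

# Check the quoted accuracy of each fraction approximation of pi.
for num, den in [(22, 7), (355, 113), (104348, 33215)]:
    approx = num / den
    err_pct = abs(approx - pi) / pi * 100   # relative error as a percentage
    print(f"{num}/{den} = {approx:.10f}, off by {err_pct:.11f}%")
```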
The symbol pi
-In the Greek alphabet, pi (π) is the 16th letter. In the English alphabet, p is also the 16th letter.*
There are pi haters
-Some prefer tau (τ = 2π) and celebrate Tau Day (June 28) as an alternative to Pi Day.
-Around 2000 B.C., Babylonians established the constant circle ratio as 3 1/8 or 3.125. The ancient Egyptians arrived at a slightly different value of 3 1/7 or 3.143.*
-One of the earliest known records of pi was written by an Egyptian scribe named Ahmes (c. 1650 B.C.) on what is now known as the Rhind Papyrus. He was off by less than 1% of the modern approximation of pi (3.141592).*
-Plato (427-348 B.C.) supposedly obtained for his day a fairly accurate value for pi: √2 + √3 = 3.146.*
-The father of calculus (meaning "pebble used in counting," from calx or "limestone"), Isaac Newton, calculated pi to at least 16 decimal places.*
-William Jones (1675-1749) introduced the symbol "π" in 1706, and it was later popularized by Leonhard Euler (1707-1783) in 1737.*
What if researchers could access and share scientific simulation and modeling tools as easily as YouTube videos with the power of the cloud to drive it all? That’s the underlying premise for the HUBzero Platform for Scientific Collaboration, a cyberinfrastructure developed at Purdue University.
HUBzero was created to support nanoHUB.org, an online community for the Network for Computational Nanotechnology (NCN), which the U.S. National Science Foundation has funded since 2002 to connect the theorists who develop simulation tools with the experimentalists and educators who might use them.
Since 2007, HUBzero’s use has expanded to support more than 30 hubs — and growing — in fields ranging from microelectromechanical systems and volcanology to translating lab discoveries into new medical treatments and the development of assistive technologies for people with disabilities.
HUBzero is now supported by a consortium including Purdue, Indiana, Clemson and Wisconsin. Researchers at Rice, the State University of New York system, the University of Connecticut and Notre Dame use hubs. Purdue offers a hub-building and -hosting service and the consortium also supports an open source release, allowing people to build and host their own. HUBbub2010, the first of planned annual HUBzero conferences, drew more than 100 people from 33 institutions as far away as Korea, South Africa and Quebec, along with U.S. universities nationwide.
Although they serve different communities, the hubs all support collaborative development and dissemination of computational models running in an infrastructure that leverages cloud computing resources and makes it easier to take advantage of them. Meanwhile, built-in social networking features akin to Facebook create communities of researchers, educators and practitioners in almost any field or subject matter and facilitate communication and collaboration, distribution of research results, training and education.
“Contributors can structure their material and upload it without an inordinate amount of handholding; that’s really a key because you want people to contribute,” says Purdue chemical engineering Professor Gintaras Reklaitis. He’s the principal investigator for pharmaHUB.org, a National Science Foundation-supported Virtual Engineering Organization for advancing the science and engineering of pharmaceutical product development and manufacturing.
One could cobble some of this functionality together with commercial Web software, but HUBzero integrates everything in a single package. Add the research tool-enabling features and research-oriented functions like tracking the use of tools (useful for quantifying outreach) and citation tracking and you have something quite different — and powerful.
HUBzero can be a prime tool for satisfying cyberinfrastructure requirements, such as data management and access, of granting agencies like the NSF. HUBzero's emphasis on interdisciplinary collaboration only makes it more attractive in funding proposals. A hub is central to the Purdue-based Network for Earthquake Engineering Simulation (NEES), a $105 million NSF program announced in 2009, the largest single award in Purdue history. Purdue's PRISM Center for micro-electro-mechanical systems and C3Bio biofuels research center, both funded by the U.S. Department of Energy, are some other recent major award winners employing hubs.
Such an infrastructure can have an impact on scientific discovery, as nanoHUB.org clearly shows.
As of December 2010, NCN identified 719 citations in the scientific literature that referenced nanoHUB.org. In addition, user registration information indicates that more than 480 classes at more than 150 institutions have utilized nanoHUB. Because the site is completely open and notification of classroom usage is voluntary, the actual classroom usage undoubtedly exceeds these numbers. There are nanoHUB.org users in the top 50 U.S. universities (per the listing by U.S. News and World Report) and in 18 percent of the 7,073 U.S. institutions carrying the .edu extension in their domain name. “Nano” is a tiny area in science and technology, but nanoHUB is big in many institutions.
The nanoHUB.org community Web site now has more than 740 contributors and 195 interactive simulation tools. In 2010 more than 9,800 users ran 372,000 simulations. In addition to online simulations, the site offers 52 courses on various nano topics as well as 2,300 seminars and other resources, which have propelled the annual user numbers to more than 170,000 people in 172 countries.
Likewise, the cancer care engineering hub cceHUB.org, one of the early hubs following nanoHUB, has proven to be the linchpin in building an online data tracking, access and statistical modeling community aimed at advancing cancer prevention and care.
“We were looking for a solution for sample tracking and data storage that would not cost $5 million and it was a true logistical challenge needing a comprehensive cyberinfrastructure support system,” says Julie Nagel, managing director of the Oncological Sciences Center in Purdue’s Discovery Park. “The hub is the core of the CCE project and has brought the project forward so much faster than we could have if we had started from scratch.”
The success of nanoHUB.org is what attracted the attention of Noha Gaber, who was seeking a good way to facilitate collaboration in the environmental modeling field when she came across the thriving international resource for nanotechnology research and education.
HUBzero, the technology powering nanoHUB, could obviously be used to build a Web-based repository of models and related documentation for projecting the spread and impact of pollutants. It also had built-in features, such as wiki space, enabling environmental researchers to share ideas and information. But the ability to make the models operable online, right in a Web browser window, and allow researchers to collaborate virtually in developing and using models was the deal closer.
“It’s not just providing a library of models, but providing direct access to these tools,” says Gaber, executive director of the U.S. Environmental Protection Agency’s Council for Regulatory Environmental Modeling. She’s a driving force behind the new iemHUB.org for integrated environmental modeling.
Under the hood, HUBzero is a software stack developed by Purdue (and being refined continuously by the consortium and hub users) and designed to work with open source software supported by active developer communities. This includes Debian GNU/Linux, Apache HTTP Server, LDAP, MySQL, PHP, Joomla and OpenVZ.
HUBzero’s middleware hosts the live simulation tool sessions and makes it easy to connect the tools to supercomputing clusters and cloud computing infrastructure to solve large computational problems. HUBzero’s Rappture tool kit helps turn research codes written in C/C++, Fortran, Java, MATLAB, and other languages into graphical, Web-enabled applications.
On the surface, the simulation tools look like simple Java applets embedded within the browser window, but they’re actually running on cluster or cloud hosts and projected to the user’s browser using virtual network computing (VNC). Each tool runs in a restricted lightweight virtual environment implemented using OpenVZ, which carefully controls access to file systems, networking, and other server processes. A hub can direct jobs to national resources such as the TeraGrid, Open Science Grid and Purdue’s DiaGrid as well as other cloud-style systems. This delivers substantial computing power to thousands of end users without requiring, for example, that they log into a head node or fuss with proxy certificates.
The tools on each hub come not from the core development team but from hundreds of other researchers scattered throughout the world. HUBzero supports the workflow for all of these developers and has a content management system for tool publication. Developers receive access to a special HUBzero “workspace,” which is a Linux desktop running in a secure execution environment and accessed via a Web browser (like any other hub tool). There, they create and test their tools in the same execution environment as the published tools, with access to the same visualization cluster and cloud resources for testing. HUBzero can scale to support hundreds of independent tool development teams, each publishing, modifying, and republishing their tool dozens of times per year.
If a tool already has a GUI that runs under Linux, it can be deployed as-is in a matter of hours. If not, tool developers can use HUBzero’s Rappture toolkit to create a GUI with little effort. Rappture reads an XML description of the tool’s inputs and outputs and then automatically generates a GUI. The Rappture library supports approximately two dozen objects — including numbers, Boolean values, curves, meshes, scalar/vector fields, and molecules — which can be used to represent each tool’s inputs and outputs. The input and output values are accessed within a variety of programming languages via an Application Programming Interface (API). Rappture supports APIs for C/C++, Fortran, Java, MATLAB, Python, Perl, Ruby, and Tcl, so it can accommodate various modeling codes. The results from each run are loaded back into the GUI and displayed in a specialized viewer created for each output type. Viewers for molecules, scalar and vector fields, and other complex types can be automatically connected to a rendering farm for hardware-accelerated 3-D data views.
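For flavor, a minimal Rappture-wrapped tool might look like the sketch below. It follows the pattern of the published nanoHUB examples (Rappture.library, get/put, Rappture.result), but the XML paths assume a hypothetical tool.xml that defines a "temperature" number input and a "factor" output curve, and API details can vary by Rappture version:

```python
# Sketch of a Rappture tool wrapper, modeled on published nanoHUB examples.
# Assumes a tool.xml defining input.number(temperature) and output.curve(factor).
import sys
import math
import Rappture

driver = Rappture.library(sys.argv[1])   # run file handed to the tool by the hub

T = float(driver.get('input.number(temperature).current'))
kT = 8.617e-5 * T                        # Boltzmann constant in eV/K

driver.put('output.curve(factor).about.label', 'Fermi-Dirac factor')
for i in range(201):
    E = -2.0 + i * 0.02                  # energy sweep in eV
    f = 1.0 / (1.0 + math.exp(E / kT))
    driver.put('output.curve(factor).component.xy', f"{E} {f}\n", append=1)

Rappture.result(driver)                  # hands results back for display in the GUI
```

The GUI itself, including the curve viewer, is generated automatically from the tool.xml description; the wrapper only reads inputs and writes outputs.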
HUBzero sites also provide ways for colleagues to work together. For example, because of the unique way the HUBzero middleware hosts tool sessions, a single session can be shared among any number of people. A group of people can look at the same session at the same time and discuss ideas over the phone or instant messaging. If some of the people aren’t online or available, they can access the session later from their My Hub page and follow up at their convenience. Some commercial collaboration tools, such as Adobe Presenter, also work within HUBzero (hub builders are required to license these).
As people are using the tools, questions arise and sometimes things go wrong. HUBzero supports many ways for users to find help and help one another and includes a built-in trouble report system. Users also can post questions in a community forum modeled after Amazon.com’s Askville or Yahoo! Answers. In practice, many tickets aren’t really problems but are actually requests for new features. HUBzero supports a wish list capability for collecting, prioritizing and acting on such requests. Users can post an idea to the wish list associated with each tool or to the general list associated with the hub itself.
HUBzero’s unique blend of simulation power and social networking seems to resonate across engineering and science communities. As hub use continues to grow, a goal is to develop new capabilities to connect related content so that tools published on one hub can be easily found on all others. Another goal is to improve tool interconnection, so that one tool’s output can be used as input to another, letting developers solve larger problems by connecting a series of models from independent authors.
Michael McLennan is the senior research scientist and hub technology architect at Purdue. Greg Kline is the science and technology writer for Information Technology at Purdue (ITaP).
Sometimes, when a network is breached, when servers are compromised, or when unencrypted data is at risk, companies will get, or even seek, assistance from government offices. The nature of cybercrime points to the ways in which our digital architectures are interconnected – over the Internet, but also in terms of how sensitive information plays different roles in business and in civic life.
All this to say that leaders in the security community are always focusing on how to define threats, how to promote specific levels of response, and generally, how to more robustly protect systems.
With that in mind, it surprises some security-minded people to know that in some ways, the U.S. government and the Pentagon have not fully come to terms with the scope of cyberwarfare, and that key pieces of counter-cyber-espionage strategy are not yet in place.
At this late date, with the infamous DNC hack and big breaches of many Fortune-500 data systems, with the tech media fairly screaming about cybersecurity, the federal government still has no concrete idea of when a cybercrime constitutes an act of war.
The Cybercrime Controversy
This Slate piece by Fred Kaplan highlights some of the back-and-forth that has gone on over the issue, starting with queries from Robert Gates as Defense Secretary in 2006, and revealing a bit of dissembling on the part of the Pentagon Defense Science Board, along with implications of thorny questions such as how to create a "proportional response" or how to "expel" a piece of malware as you could a human spy.
It also shows the limits of government involvement. Indeed, even common-sense federal protections to private infrastructure can easily be seen as “Orwellian” or as a government overreach.
However, steps to clarify something like a cyber act of war are unilateral, and therefore not so controversial. It seems likely that what has delayed the implementation of this type of standard is not so much dissent as simple procrastination.
Federal News Radio and other outlets have covered the investigation of Senator Mike Rounds (R-S.D.) into the issue, as well as a bill sponsored by Rounds, the Cyber Act of War Act of 2016, which was introduced in the Senate in May. The bill still has to go through committee review, and a quick look at tracking site Congress.gov shows no action on the bill since its introduction.
Why is this Important for Private Businesses?
The less leaders address cybercrime and its corrosive effects on both business and civic life, the more businesses have to innovate and pioneer in the field of cyberdefense. In essence, a company is on its own to arm itself with what it needs to ward off hordes of hackers and assorted cybercriminals operating on a global network with few fences.
SentinelOne’s next-generation endpoint and server security tools anticipate this important work, and help to standardize the responses of enterprises. These versatile, proactive security tools are focused on the new perimeter – the endpoint – offering protection from unknown and zero-day attacks using automated behavior detection and machine learning. To what end? Using a heuristic model and machine learning principles, these resources promote threat visibility, where companies can see danger a mile away. Endpoint protection and related processes reduce dwell time, a term that has become something of a spine-shaking buzzword evoking unknown malice lurking in digital systems. There’s a real need for businesses to take the initiative, to “expel” the attempts of hackers and keep a clean house, in an age when no place seems safe from cybercrime.
Concerns about the safety of cellular telephones-whether they create health risks or are safe to use in all operating environments-have spread to other wireless devices, such as the wireless networking equipment (WLANs)* manufactured by Cisco Systems® and Linksys®. These issues are of concern not only to Cisco customers, but to Cisco as well.
There is no proven correlation between these low-power devices and any health risk to the user or the general public. Further, Cisco and Linksys wireless products are required to be evaluated for compliance with international RF regulations before being placed on the market for sale.
This document discusses the results of research into the possible health effects of RF devices.
Low-Power Wireless Devices Pose No Known Health Risk
Do low-power wireless devices such as WLAN client cards, access points, or RFID tags pose a health threat? Available evidence today suggests that there is no clear correlation between low-power wireless use and health issues.
Recent studies strongly suggest that the use of cellular telephone equipment does not create health risks. Two important recent studies that reached this conclusion are:
• A report written by Dr. John D. Boice, Jr. and Dr. Joseph K. McLaughlin of the International Epidemiology Institute in the United States in September 2002 for the Swedish Radiation Protection Authority.
• A report to the European Commission from the Scientific Committee on Toxicity, Ecotoxicity, and the Environment, titled "Opinion on Possible Effects of Electromagnetic Fields, Radio Frequency Fields, and Microwave Radiation on Human Health."
Few studies deal directly with the effects of WLAN devices. The emission levels of WLAN devices and RFID tags are below the RF emission levels of typical cellular telephones. Therefore, any conclusions relating to the safety of cellular telephone equipment can almost certainly be applied to WLAN or RFID devices**.
The RF emission levels from a typical WLAN are well within the safety emission level thresholds set by the World Health Organization (WHO).***
* These devices are also referred to as RLANs by the ITU-R; however, this paper refers to these devices as WLANs.
** Though Cisco does not make RFID devices, vendors and customers will require Cisco in some cases to use RFID devices to track products. Hence, the customer needs to be aware of RF issues concerning these devices.
*** The RF emission limits adopted by various national agencies are based on guidelines from the WHO International Commission on Non-Ionizing Radiation Protection (ICNIRP).
CISCO AND LINKSYS COMPLIANCE WITH RF EXPOSURE REQUIREMENTS
All Cisco and Linksys wireless products are evaluated to ensure that they conform to the RF emissions safety limits adopted by agencies in the United States and around the world. These evaluations are in accordance with the various regulations and guidelines adopted or recommended by the Federal Communications Commission (FCC)* and other worldwide agencies**.
Compliance for these devices is typically based on the Maximum Permissible Exposure (MPE) levels for mobile or fixed devices*** or per Specific Absorption Rate (SAR) tests for portable**** devices. Depending on the type of product, compliance is based on modeling, technical analysis, or RF measurement testing. The analysis or testing is performed in accordance with the various national and international standards adopted by independent third-party accredited labs.
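For a sense of what an MPE evaluation estimates, the usual first-pass calculation is the far-field power density S = P*G / (4*pi*R^2) described in FCC OET Bulletin 65. The sketch below applies it to a generic low-power radio; the numbers are illustrative, not a compliance calculation:

```python
import math

def power_density(p_tx_mw: float, gain_dbi: float, r_cm: float) -> float:
    """Far-field power density S = P*G / (4*pi*R^2), in mW/cm^2."""
    g_linear = 10 ** (gain_dbi / 10.0)   # convert antenna gain from dBi
    return p_tx_mw * g_linear / (4 * math.pi * r_cm ** 2)

# Illustrative only: a 100 mW transmitter with a 2.2 dBi dipole, 20 cm away.
s = power_density(100.0, 2.2, 20.0)
print(f"{s:.4f} mW/cm^2")   # ~0.033, versus the 1 mW/cm^2 uncontrolled-exposure
                            # MPE limit the FCC applies at 2.4 GHz
```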
Before any wireless device can be placed on the market, Cisco submits MPE technical analysis or SAR test data results to the appropriate agencies for review. These studies and test reports must demonstrate that the devices meet the RF emissions safety limits, or they cannot be placed on the market. Cisco and Linksys make sure that all of their products adhere to the stricter standards imposed by the worst case-the uncontrolled environment that imposes the tightest compliance limits.
The Cisco and Linksys manuals include statements on compliance with the various RF safety regulations, as well as guidance on proper installation and operation of these systems, to ensure that they remain in compliance with all applicable regulations.
IMPACT ON MEDICAL DEVICES
Another concern about cellular telephones has been their potential impact on medical devices. Many hospitals ban such phones from emergency rooms or other sensitive areas. Again, this has led some to question whether wireless networking devices can be used in proximity to medical equipment.
To address these concerns, Cisco wireless networking devices are specifically designed to reduce emissions that could interfere with medical devices. Cisco radio module products meet both the FCC and European Commission emission levels required for devices operating in a medical environment, specifically the EN 55011 emission standards.
In September 1996, an independent test was conducted by a hospital before the installation of a Cisco spread spectrum wireless network. The results showed that the Cisco 2.4-GHz wireless network devices did not interfere with or degrade the performance of heart pacemakers, even when operated at close proximity to these devices. In 2003, Cisco did further research testing with medical implant devices from two major medical equipment manufacturers, and tested its WLAN system with an MRI system at a major hospital research center. The results of this research were that the Cisco WLAN systems degraded the performance of neither the MRI machine nor the pacemakers used in the testing. This research is continuing, including testing with Cisco 5-GHz devices, whose initial tests are yielding similar results.
* The requirements as referenced are in Office of Engineering and Technology Bulletin 65C Revision 01-01, Evaluating Compliance with FCC Guidelines for Human Exposure to Radiofrequency Electromagnetic Fields.
** Such as ITU-T Recommendation K-52, Guidance on complying with limits for human exposure to electromagnetic fields.
*** For discussion purposes, Cisco and Linksys access points and bridges are classified as either mobile or fixed, depending on antenna gain and installation requirements.
**** For discussion purposes, Cisco and Linksys client cards and voice over IP (VoIP) phones are classified as portable devices and may be subject to SAR testing.
OPERATION IN HAZARDOUS ENVIRONMENTS
Another occasional RF safety concern is the use of RF devices in hazardous locations such as oil refineries, mines, or construction sites where explosives are used. Several countries, including Australia and countries in the European Union, have adopted guidelines for operating wireless devices in hazardous environments, although they do not specifically address low-power wireless networking systems.
In most circumstances, low-power radios (such as WLANs) operating at less than 100 mW Effective Isotropic Radiated Power (EIRP) at 2.4 and 5.8 GHz should not pose any risk if operated under normal circumstances. However, it is recommended that you first consult the facility's safety administration to determine its policy on the use of RF devices in certain areas. The chances are extremely low that the radio will cause interference that could lead to a safety problem, or produce a heating effect that could cause an accident; however, caution is urged.
It is recommended that the installation of radio devices in hazardous areas be done by professional installers in accordance with the recommendations of the group responsible for safety at that site.
5. Epidemiologic Studies of Cellular Telephones and Cancer Risk: Dr. John Boice and Dr. Joseph McLaughlin, October 2002.
6. European Commission Report, Scientific Committee on Toxicity, Ecotoxicity, and the Environment: "Opinion on Possible Effects of Electromagnetic Fields, Radio Frequency Fields, and Microwave Radiation on Human Health," 10/30/2001.
7. International Telecommunication Union Telecom Sector (ITU-T) Recommendation K-52, Guidance on complying with limits for human exposure to electromagnetic fields, September 2004.
Statistics sometimes get a bad rap, as being somehow divorced from the real world of complex events and relationships. But in several notable cases, statistics helped provide a useful view of seemingly diverse and sporadic events.
Back in the 1980s, for example, Jack Maple, a cop in New York City's subways, got tired of responding to crimes after the fact, and decided to put together information that would predict where crimes would occur. He had no computers or fancy analytic technology -- just crayons and butcher paper -- but Maple's analysis of crime statistics superimposed on maps of the subway revolutionized police work, and CompStat, as it is now called, has enlisted the help of computerized analytical tools and has spread to police departments around the world.
The South Carolina Office of Research and Statistics (ORS) is also breaking new ground in the use of statistical data. ORS crunches the numbers to help analyze a broad spectrum of social services programs -- from health to justice, education and corrections -- to provide a sort of "information dashboard" for some 20 state agencies and private health-care providers, in order to help the state assess the effectiveness of various programs and focus social services money and attention where it will make the biggest difference in the lives of those being served.
"One of the projects we did," said Pete Bailey, health and demographics section chief, "was to look at what happened to children that aged out of the juvenile justice system, what proportion of them were incarcerated later, and so on. The Department of Juvenile Justice itself didn't have any data on adult arrests or incarceration, but we do, because we receive that from state law enforcement. So with permission, we conducted a study."
Bailey says ORS is also doing a study for the Department of Education. "Unfortunately, in South Carolina we did not always have the ability to track a kid from year to year." Bailey said the study will tie educational data to Medicaid system data for low-income children, and to the social services and juvenile justice systems. "And what that means is that ... you would be able to do analysis to see how Medicaid children are doing in school versus food stamp children, foster care, or protective services cases that weren't removed from the home. You could look at the impact of all of those. And the next step we're going to is ... to be able to look individually at each of those kids with a tracking number and -- without knowing who they are -- look at their history in terms of how did they get where they were in the educational system, in health, or with social services, or with law enforcement ... What caused their blocks and their breaks and their successes?
"That's an awesome capability. Government has a responsibility to use all the information that's sitting in every computer they can get their hands on, to better understand and evaluate why our programs work or don't work and how to come up with better outcomes. If we do that, you add to government a volume way of work per employee and who gets the best outcomes. We can use that to improve those that aren't doing so well."
Connecting human services data to elected officials responsible for funding programs is one of the possibilities, said Bailey. "Tell them the problems people have in their districts ... How many people are on food stamps or in foster care? How are kids doing in school?" continued Bailey. "And once you do this, when they are elected, you can evaluate every year how things have gone. It sort of feels like democracy."
So if this is such a great idea, why isn't every state doing it? David Patterson, deputy chief of health and demographics, said that while South Carolina probably has the largest and most comprehensive state-level warehouse in existence, other states are also looking at doing something similar. For example, they have had some contact with Arizona, Arkansas and Maryland in that regard. But as every government agency knows, sharing data, protecting privacy and knocking down stovepipes to get an enterprise view is not always easy. There are technological as well as human barriers. So how did ORS build its data-sharing system, and what advice do they have for others?
Workable Data Sharing
ORS collects data from over 20 state agencies, said Patterson, "and we don't release anything without prior approval of the originating source. We apply algorithms to the data that allow us to maintain entity relationships at the person level across all these data."
"We're like a data Switzerland," he said. "We develop a lot of applications for customers, everything we do is customer-driven and requires their approval. The [memo of understanding] process resulted in some statutes to clarify and extend ORS' authority."
When it comes to sharing data, said Bailey, there are some wrong ways to go about it. "A lot of states tend to try to have one agency grab another agency's data, and people might be willing to share their data, but they're not willing to have you put your data in their computer, because we live in a world where whoever has the most data or the biggest computer is thought to be the winner ... And if you put your data in my computer, you're likely to get up the next morning and see in the paper that I've done an analysis showing you did some stupid stuff."
Instead, ORS has an agreement with data-providing agencies so that those agencies have full control of their data. "They allow us to run through the unique IDs," said Bailey, "and build a tracking number so we can link data across all of these systems without using the identifiers. Once we have the tracking number, that's what's linked to the statistic, so we can link a massive amount of data from some 27 blocks of agencies and God only knows how many programs, and [in this way] they love to share data and do research together, because that's where the great answers are. So you see, that's different from saying 'I want you to give me your data.'"
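The linkage Bailey describes can be pictured as keyed pseudonymization. The sketch below is hypothetical (the key, fields and helper are invented for illustration, not ORS's actual algorithm): a secret key held only by the neutral broker turns identifiers into a stable tracking number, so extracts from different agencies join on the token without ever exchanging the identifiers themselves:

```python
import hashlib
import hmac

LINKAGE_KEY = b"secret-held-only-by-the-broker"   # assumed: known only to the neutral party

def tracking_number(ssn: str, dob: str, name: str) -> str:
    """Derive a stable, de-identified tracking token from personal identifiers."""
    message = f"{ssn}|{dob}|{name.strip().lower()}".encode()
    return hmac.new(LINKAGE_KEY, message, hashlib.sha256).hexdigest()[:16]

# The same person yields the same token in every agency's extract:
print(tracking_number("123-45-6789", "1990-01-01", "Jane Doe"))
print(tracking_number("123-45-6789", "1990-01-01", " jane doe "))  # identical token
```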
Patterson said that preceding the agreements "is the issue of privacy protection, consensus on the mission, and transparency on what we do. If we weren't neutral, then none of these other things would happen."
ORS is not competitive for budget with any of these agencies, and it has the trust of the private sector, which enables the collection of data on hospitalizations, emergency room visits, outpatient surgeries, even home health and free clinic visits, etc., and that in turn can be linked with state agency data.
So ORS crosses the boundaries between private, public, not for profit, health, social services, criminal justice and education systems -- without making anybody unhappy or upsetting the balance of power between organizations.
However, said Bailey, each state agency has a "federal godfather," and if those federal agencies don't get along well at the national level, it can make it difficult at the state level.
Connecting social services data to geographical location gives it additional import. "Panorama has helped us to build the mapping application," said Randy Rambo, IT/DBA manager of health and demographics, "using ESRI mapping to drill down to any type of grouping that you want to have mapped -- legislative boundaries, census tracts, virtual neighborhoods, etc."
The enthusiasm for data use at this level is evident as ORS staff suggested combining data -- such as people moving in or out of a community, crime data, kids' progress in schools, emergency room injuries or violence -- in such a way as to help isolate the actual causes of community decay or other problems that may develop.
"We have massive data sitting in government computers that represent pieces of the puzzle," said Bailey. "If you put it together we can better understand our children, and our parents and us as humans, so that we could better make a substantial difference between the world we have versus the world we could have. And the world we could have is awesome." | <urn:uuid:53bee30f-805d-4f59-86c6-9e899d6b515a> | CC-MAIN-2017-09 | http://www.govtech.com/health/South-Carolina-Builds-Enterprise-Social-Services.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00474-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.968644 | 1,621 | 2.578125 | 3 |
This type of virus infects the Master Boot Record or DOS Boot Record of a hard drive, or the Floppy Boot Record of a floppy drive.
Depending on the settings of your F-Secure security product, it will either automatically delete, quarantine or rename the suspect file, or ask you for a desired action.
More scanning & removal options
More information on the scanning and removal options available in your F-Secure product can be found in the Help Center.
You may also refer to the Knowledge Base on the F-Secure Community site for more information.
A boot virus (also known as a boot infector, an MBR virus or DBR virus) targets and infects a specific, physical section of a computer system that contains information crucial to the proper operation of the computer's operating system (OS).
Though boot viruses were common in the early 90s, they became much rarer after most computer motherboard manufacturers added protection against such threats by denying access to the Master Boot Record (the most commonly targeted component) without user permission.
In recent years however, more sophisticated malware have emerged that have found ways to circumvent that protection and retarget the MBR (e.g, Rootkit:W32/Whistler.A).
How a boot virus infects
Boot viruses differ based on whether they target the Master Boot Record (MBR), the DOS Boot Record (DBR) or the Floppy Boot Record (FBR):
- The MBR is the first sector of a hard drive and is usually located on track 0. It contains the initial loader and information about partition tables on a hard disk.
- The DBR is usually located a few sectors (62 sectors after on a hard disk with 63 sectors per track) after the MBR, and contains the initial loader for an operating system and logical drive information.
- The FBR is use for the same purposes as DBR on a hard drive, but it is located on the first track of a diskette.
A boot virus can be further subdivided into either overwriting or relocating:
- An overwriting boot virus overwrites MBR, DBR or FBR sector with its own code, while preserving the original partition table or logical drive information.
- A relocating boot virus saves the original MBR, DBR or FBR somewhere on a hard or floppy drive. Sometimes, such an action can destroy certain areas of a hard or floppy drive and make a disk unreadable.
All boot viruses are memory-resident . When an infected computer is started, the boot virus code is loaded in memory. It then traps one of BIOS functions (usually disk interrupt vector Int 13h) to stay resident in memory.
Once resident in memory, a boot virus can monitor disk access and write its code to the boot sectors of other media used on the computer. For example, a boot virus launched from a diskette can infect the computer's hard drive; it can then infect all diskettes that are inserted in the computer's floppy drive. | <urn:uuid:7ad87887-e6f4-4a1a-9ac7-1c82d285c227> | CC-MAIN-2017-09 | https://www.f-secure.com/v-descs/boovirus.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00474-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.909069 | 621 | 2.953125 | 3 |
Google Captcha Dumps Distorted Text ImagesTired of reading those wavy words? Changes to Google's reCaptcha system -- which doubles as quality control for its book and newspaper scanning projects -- prioritize bot-busting puzzles based on numbers.
9 Android Apps To Improve Security, Privacy (click image for larger view)
Google is making changes to its reCaptcha system: distorted text images are out, while numbers and more-adaptive, puzzle-based authentication checks are in.
The change is necessary because text-only Captchas are no longer blocking a sufficient number of automated log-in attempts, according to Google's reCaptcha product manager, Vinay Shet. "Over the last few years advances in artificial intelligence have reduced the gap between human and machine capabilities in deciphering distorted text," he said in a Friday blog post. "Today, a successful Captcha solution needs to go beyond just relying on text distortions to separate man from machine."
Based on extensive user testing, Google thinks it can better separate real users from bots by using better risk analysis. This is based in part on watching what a supposed user is doing before, during and after the check, and serving up multiple puzzle-based checks. Although Shet didn't spell out exactly what these puzzles might look like, he did say that unlike humans, bots have a tough time with numbers.
[ Twitter's new security measures can be a double-edged sword. Read Twitter Two-Factor Lockout: One User's Horror Story. ]
"We've recently released an update that creates different classes of Captchas for different kinds of users. This multi-faceted approach allows us to determine whether a potential user is actually a human or not, and serve our legitimate users Captchas that most of them will find easy to solve," he said. "Bots, on the other hand, will see Captchas that are considerably more difficult and designed to stop them from getting through."
The Captcha -- an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart -- challenge-response technique was first developed at Carnegie Mellon University in 2000. The approach is designed to create a test that humans can pass, but computers can't. In theory, Captchas can be used for a variety of tasks, including preventing automated spam from appearing in blog comments, blocking automated spam-bot signup attempts for email services -- such as free Gmail accounts -- and safeguarding Web pages that site administrators don't want to be tracked by search bots.
In fact, Google purchased reCaptcha in 2009, in a bid to better block spammers who signed up for free accounts. The approach offered by reCaptcha was notable not just for presenting users with a Captcha phrase, but drawing those images from scans of books. That squares with Google's own Google Books and Google News Archive Search projects, which rely on optical character recognition (OCR) scans of printed source material, which aren't 100% accurate. By designating scanned content for use with the reCaptcha system, however, Google killed two birds with one stone: creating a security check, while also tapping users to manually enter or verify scanned text for free.
In short order, Google also rolled out -- and still offers -- reCaptcha as "a free anti-bot service that helps digitize books," and is available for use by any website. "Answers to reCaptcha challenges are used to digitize textual documents," according to Google's reCaptcha overview. "It's not easy, but through a sophisticated combination of multiple OCR programs, probabilistic language models, and most importantly the answers from millions of humans on the internet, reCaptcha is able to achieve over 99.5% transcription accuracy at the word level."
But no information security challenge-response system -- at least to date -- is perfect. Spam rings also have access to OCR tools, and have duly defeated many Captcha systems. Other criminal groups, echoing Google's crowd-sourced reCaptcha approach, have even tricked users into recording target sites' Captcha phrases -- most sites have a finite pool of possibilities -- with the lure of free porn.
By adopting a more adaptive approach to verifying people's identities via reCaptcha, Google has taken a page from Facebook's login verification system, which looks at a variety of factors when someone attempts to log into an account, including their geographic location, and whether they're using a computer that Facebook has seen before. For unusual types of log-ins, Facebook's system can hit would-be users with an escalating series of security challenges.
Similarly, RSA's Adaptive Authentication system, which is used by about 70 of the country's 100 biggest banks to verify their customers' identity, assesses a number of risk factors before granting access. Based on different risk factors, furthermore, users can also be made to jump through more hoops before the system believes that they are who they say they are.
It's been a busy month for Captcha researchers. Earlier this month, a team of Carnegie Mellon researchers unveiled an inkblot-based Captcha system that's designed to defeat automated attacks.
This week, startup firm Vicarious claimed it has created an algorithm that can successfully defeat any text-based Captcha system, as well as defeat reCaptcha -- widely seen as the toughest Captcha system available -- 90% of the time, New Scientist reported. But Luis von Ahn, who was part of the Carnegie Mellon team that created Captchas, remains skeptical, saying he's counted 50 such Captcha-breaking claims since 2003. "It's hard for me to be impressed since I see these every few months," he told Forbes. | <urn:uuid:14152d75-9af4-4f67-806c-c47dc9db708c> | CC-MAIN-2017-09 | http://www.darkreading.com/attacks-and-breaches/google-captcha-dumps-distorted-text-images/d/d-id/1112111?cid=sbx_bigdata_related_slideshow_vulnerabilities_and_threats_big_data&itc=sbx_bigdata_related_slideshow_vulnerabilities_and_threats_big_data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00350-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951296 | 1,171 | 2.71875 | 3 |
Education isn't a one-size-fits-all endeavor. Nowhere is that more true than in New York City's schools, which educate more than one million students who among them speak more than 800 languages.
One of the ways the city is working to meet the challenge of effectively educating such a diverse population is through School of One (So1), an approach that uses technology to improve and personalize math instruction in five city middle schools. So1 is part of New York's iZone, an ambitious effort focused on figuring out what each student needs to learn and then providing it.
Instead of the usual 25 or so students and a teacher, So1 uses a larger room with 60 to 90 students and three teachers, with support from student teachers and/or paraprofessionals. Students are assessed at the beginning of the year so lessons can be created that are appropriate for the level each child is at.
But that's just the beginning. Every day, the program's software produces a lesson customized for each student based on the strengths and weaknesses he or she has demonstrated in work completed up to and including the previous day. Part of the lesson might be online, or it might involve individual or small-group instruction.
So far, the results have been encouraging. Radical changes in how education is delivered usually result in an initial drop in test scores, but 2011 results found that students' performance held steady during their first year in So1.
Scores rose again last year, but the improvement wasn't consistent across the board. While students at every performance level did better, the improvement was much greater among students who began at lower levels, meaning that So1 would appear to be succeeding at the most elusive of educational goals: closing race- and income-based achievement gaps.
Because the So1 approach is such a departure from the norm for both students and teachers, New York introduces the concept to each new school as an after-school program before it becomes part of the regular school day. That allows teachers to gain the training they need to be effective in the new environment.
Teacher professional development and the technology So1 requires don't come cheap. But the good news is that they are largely one-time costs that New York has been able to cover with outside funding, such as a federal Investing in Innovation Fund grant. Once the program is underway, costs are comparable to those of more traditional approaches.
While So1's additional costs are largely incurred upfront, there is reason to believe that the benefits may be more permanent. Teachers report that enhanced professional development and the experience of teaching in the So1 environment make them more adept at identifying and addressing the needs and levels of individual students, whether they are in So1 or a traditional setting.
New York is currently determining what So1's future pace of expansion will be. With so-called differentiated or individualized education attracting ever-more attention, educators and public officials -- particularly those who serve diverse student populations -- would be well advised to take a close look at School of One.
This article originally appeared on Data-Smart City Solutions. | <urn:uuid:ce8d0da1-d5ae-469a-92f1-4a324b75e7e9> | CC-MAIN-2017-09 | http://www.govtech.com/education/How-New-York-City-Uses-Technology-to-Teach-Math-One-on-One.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00050-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.970339 | 629 | 2.84375 | 3 |
It’s no secret there is a pronounced gender gap in technology fields. In 2014, 70 percent of the employees at the top tech companies in Silicon Valley, such as Google, Facebook and Twitter, were male. In technical roles, this phenomenon is even more pronounced; for example, only 10 percent of the technical workforce at Twitter is female.
But things haven’t always been this way. The numbers of enrollments among men and women in computer science were on their way toward parity in the 1970s and early 1980s. In 1984, 37 percent of computer science graduates were women, but those numbers began to drop dramatically in the middle of the decade.
By 2016, that number had been whittled down to 18 percent. This dip in the 1980s has created a chasm that the past 30 years hasn’t been able to overcome—and the dude-centric computer marketing campaigns of that time may be to blame.
Programming is fast becoming the most lucrative skill you can have in the modern world. According to a recent study conducted by Glassdoor, 11 of the 25 best-paying jobs are technology related, with an average earning potential of between $106,000 and $130,000 a year. And this trend is showing no signs of letting up: The Bureau of Labor Statistics projects that employment of computer and information-technology occupations will grow by 12 percent between 2014 and 2024, faster than the average for any other occupation.
However, women will not have the same access to the opportunities presented by this industry. Many of these companies blame the pipeline, citing poor enrollment and graduation rates among women in technology. And they have a point: In states like Mississippi, Montana and Wyoming not a single girl took an AP-level computer-science examination in 2014.
Although a whole host of factors played a role in this phenomena, Elizabeth Ames from the Anita Borg Institute for Women in Technology believes one of the primary reasons can be traced back to the close relationship between computing and gaming in the 1980s.
“A lot of early computers were used for game playing,” Ames says. “Those games tended to be more aimed more at boys and men, so it was easy for boys to get a leg up in that area through gaming.”
For example, the Apple personal computer that was released at the time was marketed specifically to boys (included them teasing girls’ computer skills), as were a whole range of other consoles. This gave rise to male computing culture. As a result, a 1985 study reported 73 percent of men used a computer on a weekly basis, compared to 45 percent of women surveyed.
This bias toward boys in advertising has a fascinating history in its own right. In 1983, the U.S. experienced a video game recession. The market shrank from $3.2 billion in 1983 to $100 million in 1985, a drop of 97 percent. The crash was primarily brought about by low-quality games flooding the market, which smothered consumer confidence. Subsequently, marketers trying to rebuild the industry sought to leverage the small audience they had left, which, according to their research, was mostly boys.
From then on, a kind of chicken-and-egg cycle took hold. Advertisers sold games to boys because boys were the ones buying them, and boys were the ones buying them because of the advertisers’ targeted marketing. It’s no coincidence the console touted to have saved the industry was called a “Game Boy.”
The experience gap widens
This led to what researcher Jane Margolis calls the “experience gap.” In a study she conducted in 1995, she found among first-year computer-science students at Carnegie Mellon University, 40 percent of male respondents passed the advanced-placement computer-science exam, meaning they could skip the introductory-level programming class. None of the first year women achieved the same result.
Men were also more familiar with programming languages than women and were more likely to report having an “expert” level of programming proficiency before enrolling at Carnegie Mellon. Unsurprisingly, many women opted out of the computer-related courses early.
An American Association of University Women review of more than 380 studies from academic journals, corporations and government sources found that more early exposure to engineering and computing among boys in school creates “more positive attitudes toward and interest in STEM subjects.”
By the time students now reach university, 20 percent of men plan to take on a career in engineering or computing. Among women, that number is just 5.8 percent. Women start out so far behind, they often can’t catch up.
Even when women do have considerable experience with coding and mathematics, the male-dominated environment that has arisen becomes an obstacle to entry for many. Professor Linda Sax, a researcher at UCLA’s Graduate School of Education and Information Studies, recalls, “I was someone who grew up very confident in my math abilities, but it wasn’t until I went to college that I began to doubt myself.”
Sax says she felt intimidated by the male-dominated culture she encountered at university. She remembers one incident in which she asked the professor a question he curtly dismissed. A male student asked the same question moments later and got a positive response.
“It just felt very isolating,” she says. Sax ended up not completing her degree in programming, choosing a career in quantitative research in education instead.
The few women who do make it into the field are far less likely to stay than their male counterparts. The Center for Talent Innovation, a research think tank, found that U.S. women are 45 percent more likely than men to leave careers in technology. The research revealed that women often feel isolated because of a lack of female role models and the sense of being excluded from “buddy networks” among men. Once you add in the fact only 38 percent of U.S. women get their ideas endorsed by leadership (compared to 44 percent of men), you soon end up with the scenario in which almost a third of women say they want to quit within the first year.
Though there are isolated examples of both vintage and contemporary computer advertising aimed at women, it is clear that the advertising narrative around women and technology needs to be more inclusive if the gender gap is going to close. Until that happens, as Ames argues, advertising will continue to drive “a subtle message to girls and women that it’s not a place where they belong.” | <urn:uuid:42850cd2-5ea6-4a23-86eb-c0fa821fb8cd> | CC-MAIN-2017-09 | http://m.nextgov.com/cio-briefing/wired-workplace/2017/02/silicon-valleys-gender-gap-result-computer-game-marketing-20-years-ago/135494/?oref=m-ng-river | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00226-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.975635 | 1,344 | 3 | 3 |
Cisco Basics – User Exec Mode
In preparation of your CCNA exam, we want to make sure we cover the various concepts that we could see on your Cisco CCNA exam. So to assist you, below we will discuss the CCNA concept of Entering a Cisco Router's User Mode. As you progress through your CCNA exam studies, I am sure with repetition you will find this topic becomes second nature.
Let's see what it looks like to be in each one of these modes. Here I have telneted into our lab router and I am in User Exec Mode:
The easiest way to keep track of the mode you're in is by looking at the prompt. The “>” means we are in User Exec Mode. From this mode, we are able to get information like the version of IOS, contents of the Flash memory and a few others.
Now, let's check out the available commands in this mode. This is done by using the “?” command and hitting enter:
Wow, see all those commands available ? And just to think that this is considered a small portion of the total commands available when in Privileged Mode ! Keep in mind that when you're in the console and configuring your router, you can use some short cuts to save you typing full command lines. Some of these are :
Tab: By typing the first few letters of a command and then hitting the TAB key, it will automatically complete the rest of the command. Where there is more than one command starting with the same characters, when you hit TAB all those commands will be displayed. In the picture above, if i were to type “lo” and hit TAB, I would get a listing of “lock, login and logout” because all 3 commands start with “lo”.
?: The question mark symbol “?” forces the router to print a list of all available commands. A lot of the commands have various parameters or interfaces which you can combine. In this case, by typing the main command e.g “show” and then putting the “?” you will get a list of the subcommands. This picture shows this clearly:
Other shortcut keys are :
CTRL-A: Positions the cursor at the beginning of the line.
CTRL-E: Positions the cursor at the end of the line.
CTRL-D: Deletes a character.
CTRL-W: Deletes a whole word.
CTRL-B: Moves cursor back by one step.
CTRL-F: Moves cursor forward by one step.
One of the most used commands in this mode is the “Show” command. This will allow you to gather a lot of information about the router. Here I have executed the “Show version” command, which displays various information about the router:
The “Show Interface < interface> ” command shows us information on a particular interface. This includes the IP address, encapsulation type, speed, status of the physical and logical aspect of the interface and various statistics. When issuing the command, you need to replace the < interface> with the actual interface you want to look at. For example, ethernet 0, which indicates the first ethernet interface :
Some other generic commands you can use are the show “running-config” and show “startup-config”. These commands show you the configuration of your router.
The running-config refers to the running configuration, which is basically the configuration of the router loaded into its memory at that time.
Startup-config refers to the configuration file stored in the NVRAM. This, upon bootup of the router, gets loaded into the router's RAM and then becomes the running-config !
So you can see that User Exec Mode is used mostly to view information on the router, rather than configuring anything. Just keep in mind that we are touching the surface here and not getting into any details.
This completes the User Exec Mode section. If you like, you can go back and continue to the Privileged Mode section.
We hope you found this Cisco certification article helpful. We pride ourselves on not only providing top notch Cisco CCNA exam information, but also providing you with the real world Cisco CCNA skills to advance in your networking career. | <urn:uuid:d2b70b2d-86f0-489c-bff2-9e87a8fdd2b9> | CC-MAIN-2017-09 | https://www.certificationkits.com/cisco-router-user-mode/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00402-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.918175 | 908 | 2.546875 | 3 |
While it may seem counterintuitive for hackers to attack small- and mid-sized companies, it’s not just big companies who are at risk from cybercriminals. Cyberattacks on small- and medium-sized companies have been on the increase since 2004; in fact, up to 85 percent of the targets for cyber-related crimes are small- and mid-sized businesses. This high percentage represents a digital pandemic.
What’s Fueling the Increase in Attacks
Gone are the days where small- and mid-sized companies could ignore network security with little consequence. Cybercriminals now realize that these companies are an easy target, because they often don’t have the IT resources for protection and don’t invest money in cybersecurity. Another reason for the increase is that many small- and mid-sized companies lack a formal or even an informal Internet security policy for employees. And many of these companies have employees who use the Internet daily, and some depend on the Internet for daily operations. Other reasons for the uptick in cyberattacks on small- and mid-sized companies include lack of risk awareness, lack of employee training, failure to secure endpoints, and failure to keep security defenses updated. Plus, small- and mid-sized businesses are more interconnected today. Instead of just having email accounts and a website, they often have more complex networks, including cloud, mobile and interactive connections with partners and customers. All of these factors make it easy for an opportunistic cybercriminal.
According to Greg Shannon, chief scientist at the CERT Division of the Software Engineering Institute at Carnegie Mellon, “Size is somewhat of a red herring. It’s more about scale. Small- and mid-sized businesses are a huge target because attacks are automated. The criminals don’t care who they’re attacking, and while any given business isn’t worth much, they have viruses or ransomware that allow them to attack thousands or millions.”
How Small- and Mid-Sized Companies Can Improve Security
Small and mid-sized companies can implement a number of strategies to ramp up security and prevent cyberattacks:
Outsourcing a professional IT team brings many benefits to small- and mid-sized businesses. Companies get a team of professionals who have collective knowledge in all areas of IT. Plus, these professionals are available 24/7 in case of an emergency. Apex is the trusted choice when it comes to staying ahead of the latest information technology tips, tricks, and news. Contact us at (800) 310-2739 or send us an email at firstname.lastname@example.org for more information. | <urn:uuid:64aeffbc-dc56-4a97-9551-bf4d2ee8024a> | CC-MAIN-2017-09 | https://www.apex.com/small-mid-sized-businesses-risk-cyberattacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00398-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946292 | 538 | 2.671875 | 3 |
Table of Contents
Windows 8 comes with a new user interface called the Windows Start Screen that is the first thing you see when you login to Windows 8. This is the main interface that Windows 8 user's use to launch applications, search for files, and browse the web. This Start screen contain tiles that represent different programs that you can launch by clicking on the title. One of the features of this new interface is that the tiles themselves are able to show you real-time information directly on the Start screen. This will allow you to use the Start screen not only as a way to start an application, but also as a way to quickly see data such as the weather, e-mail information, new RSS feed articles, etc. For example, the weather tile will not only allow you to launch the main Weather application, but will also display your actual weather conditions directly on the Windows 8 Start screen. This type of real-time information can be seen in the image of the Windows 8 Start Screen that is shown below.
Programs that are designed for the Start Screen interface are called Apps . These Apps are designed to work with the Start Screen so that you can share information with other Apps , synchronize them with other computers, and easily be deployed via the Windows Store.
With the release of Windows 8, Microsoft also introduced a new Windows Store. The Windows Store allows you to download and purchase apps that are designed to operate in the Start Screen. Similar to the iTunes Store, you are able to login to the Windows Store and download free Apps and trials that you can run on your computer. If you decide to purchase an app, then it will be automatically downloaded and installed on your computer and you will be able to use it on 4 other Windows 8 computers as well.
When you create an account in Windows 8 you will have the option to make your account a local account or a Microsoft account. If you choose to create a local account, then this account will only be able to logon to your local computer and your information will not be synchronized with other computers you may use. On the other hand, if you choose to create a Microsoft account then Windows 8 will synchronize certain data, such as app settings, profile pictures, and passwords to the Microsoft Cloud. This data will then be synchronized to other computers that you use with the same Microsoft account allowing a desktop experience that travels with you from computer to computer.
In order to use this feature you will need to have the same Microsoft account on every computer you use. You will then need to enable synchronization through the Sync PC settings settings area. Once synchronization is enabled you can then fine tune what you want synchronized between the various computers.
It is important to note that you do not need to use a @live.com or @hotmail.com e-mail address in order to use this feature. All you need is an e-mail address that you control and that has been registered as a Microsoft account at Live.com.
The Start Screen is a very simple interface to navigate. When you first install Windows 8, your start screen will be comprised of various Apps designed for the Start Screen as well as programs that you can launch from the classic Windows desktop. Each of these programs or Apps are represented as a tile. These tiles can be configured to display as a small square or a rectangle. If the title is set to be the square, then it will just act as a program launcher when you click it. If you make the tile into the rectangle, though, then this tile will display real-time information, if available, from the application directly onto the start screen.
The Start screen also has numerous pages, where each page contain different tiles. Therefore if you run out of room on one page, then you can simply start adding tiles to other pages. To organize your tiles based on how often you use them, or by a particular category, you can move the tiles between groups or pages and even create brand new groups of tiles. Information on how to do this can be found in the further reading section of this tutorial.
To configure the characteristics of a particular tile, you can hold down a particular tile with your finger or right-click on it with your mouse. Once you do that, the tile will become checked and a new panel will be displayed at the bottom of the Start screen where you can change various characteristics. These characteristics include pinning or unpinning the tile, the size of the tile, and various advanced characteristics such as running the program as an Administrator. An example of this panel can be seen below.
It is also possible to redirect the output of any App you are using to a connected projector by pressing the Windows key and the P key at the same time. When you do this , a new screen will be displayed asking how you would like the screen to be displayed on the projector.
Finally, when using a particular App you can also modify its settings from within the application by pressing the Windows key and the I key at the same time or by using the Start Menu that is described below. More information about the Start Menu and App settings can be found in the next section.
The Windows 8 Charms Bar is a small menu that appears when you hover your mouse over the bottom right corner or the upper right of the screen or by pressing the Windows and C key at the same time. This menu contains five options labeled Search, Share, Start, Devices, and Settings. An example of the Charms Bar can be seen in the image below.
Below are descriptions of what each menu option performs in the Start Screen.
Clicking on the search option displays the search interface. From here you can type in a keyword and Windows 8 will search through your Apps , files, and Settings for items that match the keyword. You can then select the Apps , Files, or Settings categories to see what was found for each of them.
Clicking on the share option allows you to share the data from the various Apps with another App, program, or service. For example, when in the Weather app you can share a screen shot with others and in Internet Explorer you can share that page as a tweet in Twitter or a wall post in Facebook.
Clicking on the start option simply bring you to the classic Windows desktop.
Clicking on the devices option allows you to specify what devices you would like to play the App to. This would allow you to specify a device that a particular Apps will display its content on.
Clicking on the settings option allows you to configure the settings for any App that you are currently using. When using an App you can also access this Settings screen by pressing the Windows key and the I key at the same time. Once in the settings screen you modify various options for the App as well as change the volume, shutdown or restart the computer, change your language, enable notifications, and monitor network connections.
Though Windows 8 no longer has a Start Menu as we have known in the past, Windows does include a basic Start Menu that can be used to quick launch commonly used programs. To access this Start Menu, you should hover your mouse over the lower left hand corner of the desktop or Start Screen and then right-click on your mouse. This will open up the Start Menu as shown below:
From the Start Menu you have quick access to various tasks and programs on your computer.
Below are other tutorials that discuss how to use the Windows 8 Start Screen as well as manipulate Apps:
As you can see the Windows Start Screen was designed to be fully customized by the user so that they can use it in a way that is best for them. By arranging the tiles and deciding what tiles you wish to display real-time information on, you can effectively manage your screen space without losing access its benefits. If you have questions regarding this new interface or just want to chat with others about Windows 8, please feel free to post in our Windows 8 Forum.
The Windows 8 Metro Start screen contains small squares and rectangles, called tiles, that are used to represent various programs that you can access. The default tiles that are on your Start screen are not, though, the only programs that you can add. It is possible to add other programs by searching for them or using more advanced techniques to make them available. This guide will explain how to ...
In the past when you wanted to uninstall an application in Windows, you would uninstall it from the Uninstall a Program control panel. Though this option still exists for installed programs, Metro Apps that are purchased from the Windows Store or that come with Windows 8 are not shown in this control panel. In order to uninstall these Apps, you will need to use a different procedure. To uninstall ...
The Windows 8 Metro Start screen is designed to make it so that you can easily resize and move tiles as well as make new tile groups. This allows you to organize the interface in a way that works best for you. The instructions below will explain how you can perform these tasks in the Windows 8 Start screen.
Windows 8 allows you to customize the background and text color of the Start screen so that it is to your preferences. To change the Windows 8 Start screen background you should go to the 8 Start screen and type start screen.
When you run apps from the Windows 8 Start Screen and switch to another one, the original app that you were using is not actually closed. Instead this App is left running in the background so that you can easily switch between them. When you leave apps running in the background they use resources such as memory and CPU power that could be better used by other programs on your computer. Therefore ... | <urn:uuid:5460eae8-9784-49df-b3d4-c5229e51408d> | CC-MAIN-2017-09 | https://www.bleepingcomputer.com/tutorials/how-to-use-windows-8-start-screen/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00574-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.920831 | 1,954 | 2.8125 | 3 |
Exploring the deep web
Internal and external federated systems lead users to treasures that regular search engines can't find
GCN Illustration by Sam Votsis
For the past decade, the Energy Department's Office of Scientific and Technical Information in Oak Ridge, Tenn., has been using the Internet to speed research processes.
'When we first started posting information on the Web in 1997, we relied on search engines provided by the database vendors,' said OSTI Director Walt Warnick. 'It soon occurred to us that it would be helpful to provide our patrons with the ability to search across multiple databases at one time.'
That led the agency to install federated search software ' a search engine that simultaneously executes a query against a number of databases in real time, then aggregates and ranks those results. In April 1999, OSTI launched the EnergyFiles site (www.osti.gov/EnergyFiles/), providing access to over 500 DOE databases and sites. That was followed in 2002 by Science.gov, which allows a single query to pull data from 30 scientific research databases at 12 federal agencies. February 2007 saw the release of Science.gov 4.0 with greatly enhanced relevance ranking. OSTI is now working to expand the system to include government research sites worldwide.
'Our mission is to accelerate the spread of knowledge to accelerate the advance of science,' Warnick said. 'Federated search is a very useful way for making that happen.'The dark Web
Google may dominate the search market, but it has two major shortcomings. The first is that it barely accesses what is known as the deep Web, invisible Web or hidden Web ' data that is available over the Internet but cannot be indexed by Web crawlers, at least not without Webmasters preparing a text file listing all the entries of that database. All this material that resides in databases can only be summoned by dispatching a query or by filling out a form.
'In 2000/2001 we did some analysis and realized that the quantity of documents from these deep-Web databases was far bigger than what everyone was calling the Internet,' said Jerry Tardif, vice president at search firm Bright Planet.
Tardif estimated that the deep Web is several hundred times the size of the surface Web ' the data that search engines normally capture. Others give a lower figure ' Abe Lederman, president at Deep Web Technologies, the company that makes the Explorit search software used by Science.gov and the Defense Technical Information Center (DTIC) ' said the deep Web contains about 94 percent of what is on the Internet. But whatever that size, if you are only using Google or Yahoo, you are missing most of what is out there.
'Google makes search look simple, but in fact, search is not simple, particularly when completeness is important,' said David Fuess, a computer scientist at Lawrence Livermore National Laboratory's Nonproliferation, Homeland and International Security (NHI) directorate.
The other problem is information overload. Public search engines may be fine for locating a hotel in Singapore, but not for professional research.
Federated search engines address both of these problems. By searching multiple databases simultaneously ' an organization's own internal databases, in addition to other public or private databases ' they expose that massive quantity of data hiding on the invisible Web. They address information overload by searching only those databases required by a particular type of information customer. The Science.gov search engine, for example, doesn't even access all the data available on the DOE site.
'Science.gov is mostly [research and development] findings,' Warnick said. 'There are a lot of things that Science.gov does not have on it. For example, the Energy Information Administration is not a Science.gov site.'
Instead, it gives searchers in-depth access to research papers from CENDI (originally the Commerce, Energy, NASA, Defense Information Managers Group), an interagency working group of senior scientific and technical information (STI) managers from a dozen agencies, including DTIC, the National Agricultural Library, the National Library of Medicine and the National Science Foundation. Together, CENDI members control more than 95 percent of the federal R&D budget, so accessing their databases provides a near-comprehensive overview of federally funded research. OSTI also hosts several other federated search sites including E-Print Network (www.osti.gov/eprints) and Science Accelerator (www.scienceaccelerator.gov).
DTIC (www.dtic.mil) has its own federated search engine ' STINET (Science and Technical Information Network) Federated Search at www.dtic.mil/ dtic/search/federated_search.html ' specializing in providing research information to the Defense Department community. Databases include DTIC's own research collection, periodicals from the Air University Library and Joint Forces Staff College, and certain databases maintained by other federal agencies. Users can click on which databases they want to search before submitting their query.
'Our customers wanted to come to a single site and search for scientific information from both the DTIC and our sister organizations in other federal agencies,' said Ricardo Thoroughgood, chief of the STINET Management Division. 'Initially, it was an internal DOD resource, but we shut down that site and made it available to the public with all unclassified and unlimited information, so that data is readily available to the public through the STINET databases.'
In addition to the publicly available federated search sites, both Energy and DOD use federated search internally. Lawrence Livermore National Laboratory in Livermore, Calif., for example, uses Bright Planet's Deep Query Manager to provide custom searches for different types of users. In the case of NHI, Fuess said, federated search is used to find information on non-U.S. users and consignees who may receive dual-use, export-controlled goods from U.S. vendors.Search setup
Setting up a federated search system is not simply a matter of installing software.
'Information technology staff need to understand that this is not a trivial undertaking,' Lederman said. 'It is very unlikely that this is something an IT person at an agency can just purchase a copy of, set up and run.'
The process starts with defining exactly what types of searches your users will perform and what databases contain the desired information.
'If an agency is federating search on their own databases, they generally know what they have, where it is and the type of information that is in there,' Tardif said. 'But if they are doing something on the outside, they need subject-matter expertise on what public sources are available.'
Then there is the matter of setting up the user interface to be intuitive and easy to use, but also with enough detail to let users narrow searches to the exact source of relevant information. The California Digital Library (www.cdlib.org), for example, uses MetaLib from Ex Libris but found that extensive customization was needed.
'Since the user interface of the commercial product was not as flexible as we required, we needed to build our own user interface layer and use the application program interface of the commercial application to handle the connections to multiple sources, the searching, merging of search results, deduplication and ranking,' said Roy Tennant, the library's user services architect.
Then comes the matter of establishing the links to the data sources and keeping those up-to-date. BrightPlanet has scripts for searching more than 70,000 public databases, and the appropriate ones can be used as part of an agency's federated search engine. You might need custom scripts for any internal databases. But establishing those search links is not a one-time activity; they must be updated whenever the database owners make changes to their sites.
Finally, there is the matter of ensuring that the data returned is comprehensive and relevant.
'The real question you must answer is the consequence of missing a critical piece of available information vs. overwhelming your researchers with huge volumes of information,' Fuess said. 'To be effective, you must strike a proper balance that maximizes the probability that the information you seek is in the results and that the results can be reviewed within the response time allowed.' | <urn:uuid:5d4ded06-bcbe-4e4d-b04f-b1669c51efba> | CC-MAIN-2017-09 | https://gcn.com/articles/2007/06/02/exploring-the-deep-web.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00518-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.915631 | 1,703 | 2.78125 | 3 |
Up To: Contents
See Also: State Types
Nagios supports optional detection of hosts and services that are "flapping". Flapping occurs when a service or host changes state too frequently, resulting in a storm of problem and recovery notifications. Flapping can be indicative of configuration problems (i.e. thresholds set too low), troublesome services, or real network problems.
How Flap Detection Works
Before I get into this, let me say that flapping detection has been a little difficult to implement. How exactly does one determine what "too frequently" means in regards to state changes for a particular host or service? When I first started thinking about implementing flap detection I tried to find some information on how flapping could/should be detected. I couldn't find any information about what others were using (where they using any?), so I decided to settle with what seemed to me to be a reasonable solution...
Whenever Nagios checks the status of a host or service, it will check to see if it has started or stopped flapping. It does this by:
A host or service is determined to have started flapping when its percent state change first exceeds a high flapping threshold.
A host or service is determined to have stopped flapping when its percent state goes below a low flapping threshold (assuming that it was previously flapping).
Let's describe in more detail how flap detection works with services...
The image below shows a chronological history of service states from the most recent 21 service checks. OK states are shown in green, WARNING states in yellow, CRITICAL states in red, and UNKNOWN states in orange.
The historical service check results are examined to determine where state changes/transitions occur. State changes occur when an archived state is different from the archived state that immediately precedes it chronologically. Since we keep the results of the last 21 service checks in the array, there is a possibility of having at most 20 state changes. In this example there are 7 state changes, indicated by blue arrows in the image above.
The flap detection logic uses the state changes to determine an overall percent state change for the service. This is a measure of volatility/change for the service. Services that never change state will have a 0% state change value, while services that change state each time they're checked will have 100% state change. Most services will have a percent state change somewhere in between.
When calculating the percent state change for the service, the flap detection algorithm will give more weight to new state changes compare to older ones. Specfically, the flap detection routines are currently designed to make the newest possible state change carry 50% more weight than the oldest possible state change. The image below shows how recent state changes are given more weight than older state changes when calculating the overall or total percent state change for a particular service.
Using the images above, lets do a calculation of percent state change for the service. You will notice that there are a total of 7 state changes (at t3, t4, t5, t9, t12, t16, and t19). Without any weighting of the state changes over time, this would give us a total state change of 35%:
(7 observed state changes / possible 20 state changes) * 100 = 35 %
Since the flap detection logic will give newer state changes a higher rate than older state changes, the actual calculated percent state change will be slightly less than 35% in this example. Let's say that the weighted percent of state change turned out to be 31%...
The calculated percent state change for the service (31%) will then be compared against flapping thresholds to see what should happen:
If neither of those two conditions are met, the flap detection logic won't do anything else with the service, since it is either not currently flapping or it is still flapping.
Flap Detection for Services
Nagios checks to see if a service is flapping whenever the service is checked (either actively or passively).
The flap detection logic for services works as described in the example above.
Flap Detection for Hosts
Host flap detection works in a similiar manner to service flap detection, with one important difference: Nagios will attempt to check to see if a host is flapping whenever:
Why is this done? With services we know that the minimum amount of time between consecutive flap detection routines is going to be equal to the service check interval. However, you might not be monitoring hosts on a regular basis, so there might not be a host check interval that can be used in the flap detection logic. Also, it makes sense that checking a service should count towards the detection of host flapping. Services are attributes of or things associated with host after all... At any rate, that's the best method I could come up with for determining how often flap detection could be performed on a host, so there you have it.
Flap Detection Thresholds
Nagios uses several variables to determine the percent state change thresholds is uses for flap detection. For both hosts and services, there are global high and low thresholds and host- or service-specific thresholds that you can configure. Nagios will use the global thresholds for flap detection if you to not specify host- or service- specific thresholds.
The table below shows the global and host- or service-specific variables that control the various thresholds used in flap detection.
|Object Type||Global Variables||Object-Specific Variables|
States Used For Flap Detection
Normally Nagios will track the results of the last 21 checks of a host or service, regardless of the check result (host/service state), for use in the flap detection logic.
Tip: You can exclude certain host or service states from use in flap detection logic by using the flap_detection_options directive in your host or service definitions. This directive allows you to specify what host or service states (i.e. "UP, "DOWN", "OK, "CRITICAL") you want to use for flap detection. If you don't use this directive, all host or service states are used in flap detection.
When a service or host is first detected as flapping, Nagios will:
When a service or host stops flapping, Nagios will:
Enabling Flap Detection
In order to enable the flap detection features in Nagios, you'll need to:
If you want to disable flap detection on a global basis, set the enable_flap_detection directive to 0.
If you would like to disable flap detection for just a few hosts or services, use the flap_detection_enabled directive in the host and/or service definitions to do so. | <urn:uuid:5514c7e3-fcbc-46e1-b21d-1c52d537ab87> | CC-MAIN-2017-09 | https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/flapping.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00394-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.917473 | 1,377 | 2.578125 | 3 |
With private-sector backing, California’s state government and the state’s flagship public university are teaming up to develop an intelligent transportation solution that will help drivers avoid congested traffic.
The California Department of Transportation (Caltrans), the University of California, Berkeley’s California Center for Innovative Transportation, and IBM Research hope to improve the reliability of estimated commute times, and give drivers personalized travel recommendations that save time and fuel.
U.S. commuters waste 28 gallons of gas and $808 each year because they are stuck in traffic, according to the IBM announcement of the intelligent transportation project. Traffic snarls are notoriously acute in California.
The average person in Los Angeles wasted 38 hours per year in highway traffic jams, according to 2007 U.S. Bureau of Transportation Statistics. In San Francisco, it was 30 hours; in San Diego, it was 29 hours.
“As the number of cars and drivers in the Bay Area continue to grow, so too has road traffic. However, it’s unrealistic to think we can solve this congestion problem simply by adding more lanes to roadways, so we need to proactively address these problems before they pile up,” said Greg Larson, chief of the Office of Traffic Operations Research for Caltrans.
The collaborative research team hopes to give California reliable real-time traffic information before drivers get behind the wheel. “Even with advances in GPS navigation, real-time traffic alerts and mapping, daily commute times are often unreliable, and relevant updates on how to avoid congestion often reach commuters when they are already stuck in traffic and it is too late to change course,” according to IBM.
The company said its researchers have developed a new traffic modeling tool for travelers that continuously analyzes existing congestion data, commuter locations and expected travel start and arrival times throughout a metropolitan region for a variety of transportation modes, including mass transit. The tool could someday recommend the most efficient travel route and also integrate parking information. | <urn:uuid:85885a03-7bf7-4697-b48a-654110cdd30d> | CC-MAIN-2017-09 | http://www.govtech.com/innovationnation/California-Traffic-Jams.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00446-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.935814 | 404 | 2.75 | 3 |
Are we facing an internet of too many things?
Much of the buzz in the tech world at the moment surrounds the internet of things, the idea that every piece of electronic kit might one day be connected via the web.
There are plenty of benefits from this but it also presents a number of challenges. Home automation specialist Custom Controls has released an infographic showing what needs to happen for the internet of things to work.
These include improved standards which will mean that companies and developers will need to work together in order to ensure that their devices speak a common language to enable them to exchange information. Security is also important to prevent information leakage and to prevent hackers from interfering with your TV or your fridge.
Speed is a factor too, with more than 50 million devices predicted to be online by 2020 there will be a greater need for fast and reliable internet connections. There's also the issue of service as much of the world is still unable to access high speed internet.
You can read more about the problems and solutions presented by the internet of things on the Custom Controls blog and view the infographic below. | <urn:uuid:7eed2c48-e47f-443a-a74d-946e3f236543> | CC-MAIN-2017-09 | https://betanews.com/2014/04/23/are-we-facing-an-internet-of-too-many-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00446-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.956712 | 220 | 2.734375 | 3 |
With the proliferation of smart phones, tablets, notebooks and various other electronic devices in our everyday work and personal lives comes another thing -- e-waste.
The average person replaces his or her mobile phone every 18 months, and in the U.S. alone, more than 130 million mobile phones are discarded each year (as of 2010 -- that number is likely higher at this point). That equals 17,000 tons of e-waste, some of which is recycled, but much of which is just thrown out.
But increased recycling means that the valuable and precious metals found in complex electronics can be harvested, according to the infographic by Fonebank.com below. What's the primary metal that can be pulled? Find the answer below:
According to the NEC (National Electrical Code), a building's interior contains three types of spaces: plenums, risers and general-purpose areas. As demands on building network safety grow higher, fiber cables are no longer limited to general-purpose types. Nowadays, particular materials are used in fiber cable jackets so the cables can suit more applications, including the spaces above. This article mainly discusses plenum spaces and the cables they require, as Fiberstore recently launched its new plenum (OFNP) fiber cables.
What is Plenum Space?
A plenum is a space used for air handling, air flow and air distribution, usually kept at greater than atmospheric pressure. In a building, the plenum usually refers to the space above a drop ceiling or under a raised floor that serves as the air return (source of air) for the air conditioning; it is also typically where communication cables for the building's computer and telephone networks are run. However, the plenum is also a potential safety hazard. Because plenum spaces are full of fresh oxygen, fire can spread quickly into them. In addition, burning cables give off toxic fumes, which the air-conditioning system then feeds to the rest of the building.
What is Plenum(OFNP) Fiber Cable & Why we need it?
Because of the potential safety hazard of plenum spaces, many solutions have been proposed; the plenum (OFNP) fiber cable is one result. OFNP stands for Optical Fiber Non-conductive Plenum. Plenum (OFNP) fiber cables carry one of the highest ratings under the UL (Underwriters Laboratories) fire safety test. Ordinary plastic-jacketed cables burn quickly, give off toxic fumes and black smoke, and, worse, can spread fire to other parts of a building once they ignite. To limit fire risk, any cable installed in a plenum space must be plenum rated: because plenum cables are routed through air-circulation spaces that contain very few fire barriers, they are coated in flame-retardant, low-smoke materials.
How to Buy Plenum(OFNP) Fiber Cable with A Low Cost?
Although the NEC may allow non-plenum cable, fire marshals in many jurisdictions require local buildings to use plenum cables in network deployments within plenum spaces. Because of the particular materials used in plenum (OFNP) fiber cables, they may cost more than plastic-jacketed cables, and they are not yet universally used in many developing countries. If you want a low-cost, safe cabling option for your plenum space, Fiberstore is your best choice! Fiberstore offers plenum (OFNP) fiber cables from 1m to 30m long with all kinds of connectors, and even a custom service for special requirements. All of our plenum (OFNP) fiber cables meet the UL standard, which promises you high quality.
To learn more about fiber optic cables, visit our website or contact us directly!
Residents in Miami-Dade County, Fla., will have a new Web tool at their disposal in the event Hurricane Irene comes ashore on the Eastern Seaboard later this week.
The new tool, called the Storm Surge Simulator, allows users to calculate storm surge levels in Miami-Dade’s three evacuation zones based on location and hurricane severity. The tool is accessible on the county’s website.
Located near the southeastern tip of Florida, a portion of Miami-Dade County is on the seashore. The county’s three designated evacuation zones total 800 square miles, with nearly half a million residents living there.
To view predicted storm surge levels, users click the hurricane category level (1 through 5), and then select if a person, house or villa would be affected by the surge. Users then either type in an address or click on the Google map provided on the site. The inputted information is used to calculate the storm surge levels.
For example, the simulator projects that a Category 3 hurricane in South Miami Heights would cause three feet of water surge at a house. During a Category 4 hurricane in Palmetto Bay, the tool projects a person would be standing in four feet of water during a storm surge.
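The county hasn't published the simulator's internals, but the core idea is a lookup keyed on storm category and location, with heights drawn from SLOSH output. A toy sketch, with invented numbers:

```python
# Toy sketch of a storm-surge lookup -- NOT the county's actual model.
# The surge heights below are invented for illustration; the real tool
# is driven by NOAA SLOSH model output for each evacuation zone.
SURGE_FEET = {  # (hurricane category, evacuation zone) -> feet of surge
    (1, "A"): 2, (1, "B"): 0, (1, "C"): 0,
    (3, "A"): 6, (3, "B"): 3, (3, "C"): 1,
    (5, "A"): 12, (5, "B"): 8, (5, "C"): 4,
}

def projected_surge(category: int, zone: str) -> int:
    """Return projected surge in feet, defaulting to 0 if unmapped."""
    return SURGE_FEET.get((category, zone), 0)

print(projected_surge(3, "B"), "feet")
```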
But calculating projected storm surge levels online shouldn’t encourage residents to ignore hurricane-related evacuation warnings, said Curtis Sommerhoff, director of the Miami-Dade Department of Emergency Management.
“We don’t want people to go and use the simulator and try to make their own call on whether they should evacuate or stay at home,” Sommerhoff said. “The main purpose of the simulator is to really bring to light the impact of storm surge.”
Sommerhoff said the tool was developed during the past year by the Miami-Dade County Department of Emergency Management and Florida International University’s International Hurricane Research Center and School of Computing and Information Sciences. Data for the tool was collected from the National Hurricane Center (NHC) from its computerized model called the Sea, Lake and Overland Surges from Hurricanes (SLOSH).
The SLOSH model collects data on surge levels such as estimated storm surge heights and winds based on pressure, size, forward speed and track, according to the NHC’s website.
“The SLOSH model is generally accurate within plus or minus 20 percent,” according to the NHC. “For example, if the model calculates a peak 10-foot storm surge for the event, you can expect the observed peak to range from 8 to 12 feet.”
According to Vicki Mallette, external affairs coordinator for the Miami-Dade Emergency Management Department, the Storm Surge Simulator’s total cost was $2,800, all of which was funded by a state grant. As new surge data is evaluated and finalized, the information will be added to the simulator.
The tool was announced Wednesday, Aug. 24, in conjunction with the nineteenth anniversary of Hurricane Andrew, a Category 5 hurricane that made landfall south of Miami in 1992. Andrew caused $27 billion in damage and was responsible for 23 deaths.
Hurricane season in the Atlantic Ocean officially begins June 1 and ends Nov. 30.
Discussion Starter: What other technologies are useful for hurricane-related information? Share your comments below. | <urn:uuid:aa12dcd3-af1d-49fa-b733-69bec3012491> | CC-MAIN-2017-09 | http://www.govtech.com/public-safety/Hurricane-Storm-Surge-Calculated-Web-Tool.html?elq=426ac008a9af4cbc9e841569a6bb7941 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00435-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.934563 | 685 | 2.671875 | 3 |
The White House on Tuesday welcomed some of America’s most innovative students for the fourth-ever White House Science Fair, which this year emphasized the specific contributions of girls and young women who are excelling in science, technology, engineering and math.
Among those highlighted at the conference was Elana Simon, 18, who was diagnosed with a rare liver cancer at age 12, and her work with one of her surgeons to find a common genetic mutation across samples of other patients coping with the same cancer.
Cassandra Baquero, 13, Caitlin Gonzolez, 12 and Janessa Leija, 11, of Los Fresnos, Texas, also showcased their work as part of an all-girl team of app builders who built “Hello Navi,” an app that gives verbal directions to help their visually-impaired classmates navigate unfamiliar spaces based on measurements of a user’s stride and digital building blueprints. Girl Scout Troop 2612 of Baltimore, Md., demonstrated their computer program designed to automatically retract a bridge when flood conditions are detected by a motion sensor embedded in the river bed.
In remarks after viewing this year’s science projects, President Obama cited statistics that just one in five Bachelor’s degrees in engineering and computer science are earned by women, while fewer than three in 10 workers in science and engineering fields are women.
“That means we have half of our team we’re not even putting on the field,” Obama said. “We have to change those numbers. These are the fields of the future.”
Obama announced new efforts to invest in STEM education, including a $35 million grant competition by the Education Department to help train and prepare STEM teachers in support of the President’s goal to train 100,000 excellent STEM teachers.
The president also announced an expanded effort to provide STEM learning opportunities to more than 18,000 low-income students this summer through the STEM AmeriCorps program, which launched at the 2013 White House Science Fair. The summer program will bring together AmeriCorps members with community groups, educational institutions and corporate sponsors to help students learn about STEM – from building robots to writing code for the International Space Station to participating in “scientist-for-a-day” programs to explore various careers.
Seven cities across the country also will launch STEM mentoring efforts through the US2020 City Competition, sponsored by Cisco, which challenges cities to develop innovative models for scaling STEM mentorship for young students, particularly girls, minorities and low-income families. The goal of the program is to mobilize 1 million STEM mentors annually by the year 2020.
“Last week, we had the Super Bowl champion Seattle Seahawks here, and that was cool,” Obama said. “But I believe what’s being done by these young people is even more important. As a society, we have to celebrate outstanding work by young people in science at least as much as we do Super Bowl winners.”
All 30 users on a single floor of a building are complaining about network slowness. After investigating the access switch, the network administrator notices that the MAC address table is full (10,000 entries) and all traffic is being flooded out of every port. Which action can the administrator […]
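To see why a full MAC address table produces exactly this symptom, consider a toy model of a switch's CAM table: once the table can no longer learn new source addresses (for example, under a MAC-flooding attack), every frame to an unlearned destination must be flooded out all ports, hub-style. The usual remedy is port security, which caps how many MAC addresses each port may learn.

```python
# Toy model of switch CAM-table behavior, assuming the 10,000-entry
# limit mentioned in the question above.
TABLE_CAPACITY = 10_000
mac_table: dict[str, int] = {}  # MAC address -> port

def learn(src_mac: str, port: int) -> None:
    if src_mac in mac_table or len(mac_table) < TABLE_CAPACITY:
        mac_table[src_mac] = port  # normal learning
    # else: table full -- new addresses can no longer be learned

def forward(dst_mac: str) -> str:
    port = mac_table.get(dst_mac)
    return f"port {port}" if port is not None else "FLOOD to all ports"
```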
A network printer has a DHCP server service that cannot be disabled. How can a layer 2 switch be configured to prevent the printer from causing network issues?
A switch is being configured at a new location that uses statically assigned IP addresses. Which will ensure that ARP inspection works as expected?
Which of the following would need to be created to configure an application-layer inspection of SMTP traffic operating on port 2525?
Which command is used to nest objects in a pre-existing group?
Which threat-detection feature is used to keep track of suspected attackers who create connections to too many hosts or ports?
What is the default behavior of an access list on the Cisco ASA security appliance?
What is the default behavior of NAT control on Cisco ASA Software Version 8.3?
Which three options are hardening techniques for Cisco IOS routers? (Choose three.)
Which three commands can be used to harden a switch? (Choose three.) | <urn:uuid:bc354545-4ea5-47c0-88aa-3811cce8e1ae> | CC-MAIN-2017-09 | http://www.aiotestking.com/cisco/category/exam-300-206-implementing-cisco-edge-network-security-solutions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00487-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.923742 | 258 | 2.578125 | 3 |
Programming Cars to Kill

By Samuel Greengard | Posted 2016-03-28
What happens when a mechanical part fails or there's a landslide, and a self-driving car must choose between saving its passenger or a motorist in another car?
The MIT Technology Review recently presented a story titled "Why Self-Driving Cars Must Be Programmed to Kill." Although the topic seems to careen entirely into the sensationalistic category, it actually represents a very real and disturbing dilemma for companies manufacturing products. There's a growing need to embed ethical decision making into systems that rely on artificial intelligence (AI) and algorithms.
Self-driving vehicles, as the article points out, are at the nexus of this technology conundrum. As automakers embed automatic and autonomous functions in cars and trucks—things like automatic braking, automated steering and self-parking functions, for instance—there's a need to think about what happens during an unavoidable accident (rather than the human negligence we typically describe as an "accident").
For example, what happens when a mechanical part fails or a landslide takes place and the car must make a choice between saving its passenger or a motorist in another car? How does the motor vehicle steer, brake and sense the environment around it? Which safety systems spring into action and how do they work?
It's a given that manufacturers will embed features and capabilities that make autos and driving safer. Heck, simply removing phone- and food-wielding humans from the equation is a huge step forward. And while there's a clear need to understand liability laws and design products that operate in an ethical and legally permissible way in a digital world, there's also a gray area that is completely unavoidable.
And that's where the rubber hits the proverbial road. As the article points out: "If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation."
Unfortunately, there are no clear answers, and right and wrong are highly relative terms in this context.
When researchers at the Toulouse School of Economics in France presented the question of how autonomous vehicles should operate to several hundred Amazon Mechanical Turk participants, the results were fairly predictable: Cars should be programmed to minimize death tolls. However, respondents also noted that they had strong reservations about these systems. Simply put: People were in favor of cars that sacrifice the occupant to save other lives ... but they didn't want to ride in such a vehicle.
As we wade deeper into robotics, drones, 3D printing and other digital technologies, similar questions and ethical conundrums will occur. It may not be long until every organization requires a chief ethical officer to sort through the moral and ethical implications of technology. | <urn:uuid:c8fe4bd9-ac01-44a7-b2ee-6fe15439c86b> | CC-MAIN-2017-09 | http://www.baselinemag.com/blogs/programming-cars-to-kill.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00607-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959581 | 574 | 2.96875 | 3 |
In search of better flu forecasting
The Centers for Disease Control and Prevention recently released its weekly flu forecasts for the 2016-2017 season on FluSight, a beta website housing influenza activity forecasts provided by 21 research teams.
The research initiative was begun by the CDC in 2013. "Predict the Influenza Season Challenge" is a public contest to encourage people from around the world to predict the timing, peak and intensity of each year's flu seasons using social media data alongside data from the organization's own flu "surveillance" systems. The latter parses reports describing flu-like symptoms from doctor offices and health clinics. Accurate flu predictions help public health officials and other healthcare professionals schedule their vaccination campaigns, issue flu information and make staffing decisions.
Last year, Carnegie Mellon University’s Delphi research group contributed the three most accurate national-level flu forecasts during the 2015-2016 flu season, besting the forecasts of 10 other external groups working with CDC.
At CMU, the research team's top-ranked forecasting system uses machine learning to make predictions based on historic and current data from the CDC. Its second-ranked system uses weekly predictions fed by ordinary people into the "Influenza Edition" of Epicast. Each week, registered users spend a couple of minutes predicting current and future flu activity within one or more of the various health and human services regions of the United States.
This human-powered approach was actually the top-ranked forecasting system for the 2014-2015 flu season, said Roni Rosenfeld, professor in the School of Computer Science's Machine Learning Department and Language Technologies Institute, in a CMU News article about the work. "Any one human did not do better than the statistical system," he emphasized. "They did worse. But in the aggregate, the human system did better that season."
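Delphi's production forecasters don't fit in a few lines, but the flavor of a minimal statistical system can be sketched: fit an autoregression to recent weekly influenza-like-illness (ILI) rates and extrapolate one week ahead. The data below are invented for illustration.

```python
# Minimal autoregressive flu forecast -- a toy stand-in for the far
# richer machine-learning systems described in the article.
import numpy as np

ili = np.array([1.2, 1.4, 1.9, 2.6, 3.5, 4.1, 4.0, 3.2])  # toy weekly %

# Least-squares fit of ili[t] ~ a*ili[t-1] + b*ili[t-2] + c
X = np.column_stack([ili[1:-1], ili[:-2], np.ones(len(ili) - 2)])
y = ili[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

next_week = coef @ [ili[-1], ili[-2], 1.0]
print(f"Forecast for next week: {next_week:.2f}%")
```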
This year, the team is hoping to beat its record.
They'll be monitoring the results of various approaches to figure out which one is more accurate: using artificial intelligence and machine learning, or crowdsourcing the predictions. What they learn from the process isn't limited to the flu, however, as their work can be applied to predict other kinds of outbreaks.
"Our predictions last season proved to be reasonable, but the truth is that when it comes to forecasting epidemics, whether it be for the flu or for other diseases, we're just getting our feet wet," Rosenfeld said.
It's too early to tell how the forecasting will turn out this year. The season typically runs from October through May. Due to reporting lags, however, the true flu activity levels won't be known until the season is done.
While the researchers are refining their prediction methods and models on the flu, they're also applying their systems to forecasting outbreaks of dengue fever, which strikes about 100 million people around the world each year, killing thousands, according to the CDC. Eventually, the team would also like to apply its forecasting efforts to HIV, drug resistance, Ebola, Zika and Chikungunya.
This article was first posted to Campus Technology, a sister site to GCN.
Dian Schaffhauser is a writer who covers technology and business. Send your higher education technology news to her at email@example.com. | <urn:uuid:cec4408d-2291-4b56-8447-f2e256e60f0c> | CC-MAIN-2017-09 | https://gcn.com/articles/2016/12/21/cdc-flu-predictions.aspx?admgarea=TC_EmergingTech | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00007-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957524 | 670 | 2.765625 | 3 |
Understanding what makes an SSD different than an HDD is vital for any VAR hoping to sell SSDs to their clients. So here is a short breakdown of what makes SSDs different – and better.
How they work: An SSD is basically a package of memory chips, typically the same ones used in common flash memory cards. This means that SSDs have no moving parts, and they do not require any power to retain data once it has been recorded to the chips.
One benefit of SSDs: Data fragmentation is not an issue. It doesn’t matter if data from a single file are stored side-by-side or in different locations.
The reason: Unlike an HDD, an SSD doesn't have lots of moving parts that have to get from one area of its storage to another to retrieve data. It's all done electronically. There is some 'seek time' expended, but it is so minimal as to be unnoticeable.
An HDD is a complex machine that contains multiple magnetically coated disks (platters), which spin for data access and recording. Data is either recorded to or read from the spinning platters using ‘drive heads’, which are mounted on moving arms.
The need to spin the platters and move the arms to access data is what accounts for an HDD's slow speed compared to an SSD's. It is also why HDDs generate heat, need cooling, and make noise due to all these parts being in motion.
The need for spinning and moving parts really becomes a problem when a file is fragmented on the HDD; that is, parts of it are stored in different locations on the platters. The drive has to physically search for and then read these sections, which takes time when all these moving parts are involved.
These fundamental differences explain why you can replace an older computer’s HDD with an SSD, and see remarkably improved performance. This improvement is a direct result of the SSD not spending the same amount of time as an HDD when it comes to accessing and recording data.
This is also why computers equipped with SSDs multitask better than those with HDDs. They take less time to do the same jobs.
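The seek-time argument is easy to observe directly. A rough micro-benchmark, assuming a pre-existing large file at testfile.bin (use one bigger than RAM, or the OS page cache will mask the difference): on an HDD the random pattern is dramatically slower, while on an SSD the two times are close.

```python
# Sequential vs. random 4 KB reads -- illustrates the seek penalty.
import os, random, time

PATH, BLOCK, COUNT = "testfile.bin", 4096, 1000
size = os.path.getsize(PATH)

def timed_reads(offsets):
    with open(PATH, "rb") as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        return time.perf_counter() - start

seq = timed_reads(range(0, COUNT * BLOCK, BLOCK))
rnd = timed_reads(random.randrange(0, size - BLOCK) for _ in range(COUNT))
print(f"sequential: {seq:.3f}s  random: {rnd:.3f}s")
```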
There are other reasons why SSDs look better than HDDs when you compare them.
First is robustness: Imagine dropping a solid state digital display watch on the floor. Then imagine taking one of those expensive windup clocks in a clear glass dome, and dropping it. Smash!
In this comparison, the SSD is the solid state watch. The HDD is the glass-domed windup clock. The first doesn't have the parts to misalign and break; the second does. (Even without the glass dome.)
Second is reliability: Lacking moving parts, SSDs have a lot less to fail than HDDs do. This doesn't mean SSDs don't wear out. They do, because the number of read/write cycles is not infinite. Nevertheless, having no moving parts makes an SSD less vulnerable to failure, because there is less that can fail.
Third is power consumption: SSDs use up to ten times less power than HDDs. This can be a big plus when SSDs are installed in battery-powered devices such as laptop computers. Reduced power consumption can make a real difference when an employee is off-site and not able to plug in their laptop to keep it running; especially if the laptop is older and has batteries that no longer take a 100% charge.
Put everything together, and SSDs are just plain better than HDDs -- hands down. This doesn’t just mean in terms of performance, but long-term value. This is because the time savings offered by SSDs add up for businesses in improved productivity and better employee morale.
What else would you include in an SSD comparison to HDD?
Last week I wrote about Apple’s new default encryption policy for iOS 8. Since that piece was intended for general audiences I mostly avoided technical detail. But since some folks (and apparently the Washington Post!) are still wondering about the nitty-gritty details of Apple’s design, I thought it might be helpful to sum up what we know and noodle about what we don’t.
To get started, it’s worth pointing out that disk encryption is hardly new with iOS 8. In fact, Apple’s operating system has enabled some form of encryption since before iOS 7. What’s happened in the latest update is that Apple has decided to protect much more of the interesting data on the device under the user’s passcode. This includes photos and text messages — things that were not previously passcode-protected, and which police very much want access to.*
So to a large extent the ‘new’ feature Apple is touting in iOS 8 is simply that they’re encrypting more data. But it’s also worth pointing out that newer iOS devices — those with an “A7 or later A-series processor” — also add substantial hardware protections to thwart device cracking.
In the rest of this post I’m going to talk about how these protections may work and how Apple can realistically claim not to possess a back door.
One caveat: I should probably point out that Apple isn’t known for showing up at parties and bragging about their technology — so while a fair amount of this is based on published information provided by Apple, some of it is speculation. I’ll try to be clear where one ends and the other begins.
Password-based encryption 101
Normal password-based file encryption systems take in a password from a user, then apply a key derivation function (KDF) that converts a password (and some salt) into an encryption key. This approach doesn’t require any specialized hardware, so it can be securely implemented purely in software provided that (1) the software is honest and well-written, and (2) the chosen password is strong, i.e., hard to guess.
The problem here is that nobody ever chooses strong passwords. In fact, since most passwords are terrible, it’s usually possible for an attacker to break the encryption by working through a ‘dictionary‘ of likely passwords and testing to see if any decrypt the data. To make this really efficient, password crackers often use special-purpose hardware that takes advantage of parallelization (using FPGAs or GPUs) to massively speed up the process.
Thus a common defense against cracking is to use a ‘slow’ key derivation function like PBKDF2 or scrypt. Each of these algorithms is designed to be deliberately resource-intensive, which does slow down normal login attempts — but hits crackers much harder. Unfortunately, modern cracking rigs can defeat these KDFs by simply throwing more hardware at the problem. There are some approaches to dealing with this — this is the approach of memory-hard KDFs like scrypt — but this is not the direction that Apple has gone.
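PBKDF2 is in Python's standard library, so the 'deliberately slow' property is easy to see for yourself -- each tenfold increase in the iteration count costs the legitimate user only milliseconds per login but multiplies a cracker's per-guess work the same way:

```python
# A deliberately slow KDF: PBKDF2-HMAC-SHA256 from the standard library.
import hashlib, os, time

password, salt = b"correct horse", os.urandom(16)

for iterations in (1_000, 100_000, 1_000_000):
    t0 = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    print(f"{iterations:>9} iterations: {time.perf_counter() - t0:.3f}s")
```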
How Apple’s encryption works
Apple doesn’t use scrypt. Their approach is to add a 256-bit device-unique secret key called a UID to the mix, and to store that key in hardware where it’s hard to extract from the phone. Apple claims that it does not record these keys nor can it access them. On recent devices (with A7 chips), this key and the mixing process are protected within a cryptographic co-processor called the Secure Enclave.
The Apple Key Derivation function ‘tangles’ the password with the UID key by running both through PBKDF2-AES — with an iteration count tuned to require about 80ms on the device itself.** The result is the ‘passcode key’. That key is then used as an anchor to secure much of the data on the phone.
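As a conceptual stand-in only -- Apple's real construction keys an AES primitive with the UID inside the PBKDF2 loop, which isn't directly expressible with standard-library calls -- the tangling idea looks roughly like this:

```python
# NOT Apple's implementation: a sketch of 'tangling' a passcode with a
# device-unique secret so the derived key is useless off-device. HMAC
# keyed with a stand-in UID plays the role of the UID-keyed AES step.
import hashlib, hmac, os

UID = os.urandom(32)  # in real hardware this never leaves the chip

def passcode_key(passcode: bytes, salt: bytes, iters: int = 50_000) -> bytes:
    tangled = hmac.new(UID, passcode, hashlib.sha256).digest()
    return hashlib.pbkdf2_hmac("sha256", tangled, salt, iters)
```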
Since only the device itself knows UID — and the UID can’t be removed from the Secure Enclave — this means all password cracking attempts have to run on the device itself. That rules out the use of FPGA or ASICs to crack passwords. Of course Apple could write a custom firmware that attempts to crack the keys on the device but even in the best case such cracking could be pretty time consuming, thanks to the 80ms PBKDF2 timing.
(Apple pegs such cracking attempts at 5 1/2 years for a random 6-character password consisting of lowercase letters and numbers. PINs will obviously take much less time, sometimes as little as half an hour. Choose a good passphrase!)
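Apple's 5 1/2-year figure checks out as a full search of that keyspace at 80ms per on-device guess:

```python
keyspace = 36 ** 6            # 6 chars over [a-z0-9]: 2,176,782,336
seconds = keyspace * 0.080    # 80 ms per guess, every candidate tried
print(seconds / (365.25 * 24 * 3600))  # ~5.5 years
```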
So one view of Apple’s process is that it depends on the user picking a strong password. A different view is that it also depends on the attacker’s inability to obtain the UID. Let’s explore this a bit more.
Securing the Secure Enclave
The Secure Enclave is designed to prevent exfiltration of the UID key. On earlier Apple devices this key lived in the application processor itself. Secure Enclave provides an extra level of protection that holds even if the software on the application processor is compromised — e.g., jailbroken.
One worrying thing about this approach is that, according to Apple’s documentation, Apple controls the signing keys that sign the Secure Enclave firmware. So using these keys, they might be able to write a special “UID extracting” firmware update that would undo the protections described above, and potentially allow crackers to run their attacks on specialized hardware.
Which leads to the following question: How does Apple avoid holding a backdoor signing key that allows them to extract the UID from the Secure Enclave?
It seems to me that there are a few possible ways forward here.
- No software can extract the UID. Apple’s documentation even claims that this is the case; that software can only see the output of encrypting something with UID, not the UID itself. The problem with this explanation is that it isn’t really clear that this guarantee covers malicious Secure Enclave firmware written and signed by Apple.
Update 10/4: Comex and others (who have forgotten more about iPhone internals than I’ve ever known) confirm that #1 is the right answer. The UID appears to be connected to the AES circuitry by a dedicated path, so software can set it as a key, but never extract it. Moreover this appears to be the same for both the Secure Enclave and older pre-A7 chips. So ignore options 2-4 below.
- Apple does have the ability to extract UIDs. But they don’t consider this a backdoor, even though access to the UID should dramatically decrease the time required to crack the password. In that case, your only defense is a strong password.
- Apple doesn’t allow firmware updates to the Secure Enclave firmware period. This would be awkward and limiting, but it would let them keep their customer promise re: being unable to assist law enforcement in unlocking phones.
- Apple has built a nuclear option. In other words, the Secure Enclave allows firmware updates — but before doing so, the Secure Enclave will first destroy intermediate keys. Firmware updates are still possible, but if/when a firmware update is requested, you lose access to all data currently on the device.
All of these are valid answers. In general, it seems reasonable to hope that the answer is #1. But unfortunately this level of detail isn’t present in the Apple documentation, so for the moment we just have to cross our fingers.
Addendum: how did Apple’s “old” backdoor work?
One wrinkle in this story is that allegedly Apple has been helping law enforcement agencies unlock iPhones for a while. This is probably why so many folks are baffled by the new policy. If Apple could crack a phone last year, why can’t they do it today?
But the most likely explanation for this policy is probably the simplest one: Apple was never really ‘cracking’ anything. Rather, they simply had a custom boot image that allowed them to bypass the ‘passcode lock’ screen on a phone. This would be purely a UI hack and it wouldn’t grant Apple access to any of the passcode-encrypted data on the device. However, since earlier versions of iOS didn’t encrypt all of the phone’s interesting data using the passcode, the unencrypted data would be accessible upon boot.
No way to be sure this is the case, but it seems like the most likely explanation.
* Previous versions of iOS also encrypted these records, but the encryption key was not derived from the user’s passcode. This meant that (provided one could bypass the actual passcode entry phase, something Apple probably does have the ability to do via a custom boot image), the device could decrypt this data without any need to crack a password.
** As David Schuetz notes in this excellent and detailed piece, on phones with Secure Enclave there is also a 5 second delay enforced by the co-processor. I didn’t (and still don’t) want to emphasize this, since I do think this delay is primarily enforced by Apple-controlled software and hence Apple can disable it if they want to. The PBKDF2 iteration count is much harder to override. | <urn:uuid:95ea2cf0-bc25-4582-8134-280f5d5c1153> | CC-MAIN-2017-09 | https://blog.cryptographyengineering.com/2014/10/04/why-cant-apple-decrypt-your-iphone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00475-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944584 | 1,973 | 2.546875 | 3 |
Accurately modeling the flow of shockwaves across a fluid body can be a difficult thing to do. Attempts to address the problem by dialing up the computational accuracy of the models can actually make it worse. Now, researchers at the A*STAR Institute of High Performance Computing (IHPC) have come up with an innovative way that models shockwaves with a higher level of overall accuracy.
Like scientists in many fields, researchers working in the field of computational fluid dynamics are accustomed to varying their models to suit the particular needs of their experiment. A scientist may test a fresh hypothesis with a low-order approximation that delivers a similarly low level of accuracy. He may follow that up by using a higher order model that is tuned to deliver more accuracy and similarity to real-world conditions. Simultaneously, he may tighten up the three-dimensional computational mesh to get more data into the equation.
While one may assume that the higher order model would deliver a better result, that is not the case when it comes to modeling shockwaves. As Vinh-Tan Nguyen of the IHPC explains, shockwaves are a special case.
“Simulating flows using high-order approximations triggers oscillations, which cause miscalculations at the front of shock waves where the flow is discontinuous,” Nguyen tells Phys.org. “It therefore becomes counterproductive to have high-order approximations in place right across shock regions.”
Nguyen and his team addressed this problem by basically de-tuning the model and using lower-order approximations in the specific regions where shockwave fronts are active, which they detect by using a sensor. The researchers simultaneously increased the resolution of the 3D computational mesh to compensate for the lower-order approximations.
Nguyen explains the outcome: “With precise detection through the shockwave sensor we can apply the right capturing scheme to treat each shockwave, regardless of its strength,” he tells Phys.org. “Our mesh adaptation procedure then simultaneously refines the mesh in shockwave regions and coarsens it in areas of least change, reducing computational costs significantly.”
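IHPC's solver works on 3D meshes with high-order methods, but the sensor-plus-fallback idea can be sketched in one dimension: flag cells where the jump between neighbors is large, and use a robust first-order value there while keeping a higher-order reconstruction in smooth regions.

```python
# 1-D sketch of a shock sensor with local order reduction -- a toy
# illustration of the idea, not the IHPC scheme itself.
import numpy as np

u = np.where(np.linspace(0, 1, 50) < 0.5, 1.0, 0.1)  # step "shock"

def face_value(u, i, threshold=0.2):
    jump = abs(u[i + 1] - u[i])
    if jump > threshold * max(abs(u[i]), 1e-12):   # sensor fires at shock
        return u[i]                                # 1st-order (robust)
    slope = (u[i + 1] - u[i - 1]) / 2.0            # central difference
    return u[i] + 0.5 * slope                      # 2nd-order in smooth flow

faces = [face_value(u, i) for i in range(1, len(u) - 1)]
```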
The new technique is applicable to modeling any high-speed shockwave, and is an improvement over the previous approaches, which were specific to particular flow problems. The approach is expected to have real-world application in the fields of aerodynamics and blast analysis. The researchers say the computational scheme may also be useful for simulating the interface between air and water, which would be useful in the marine industry.
The IHPC was established in April 1998 under the Agency for Science, Technology and Research (A*STAR). The organization promotes and spearheads scientific advances and technological innovations through computational modeling, simulation and visualization methodologies and tools. | <urn:uuid:40eebec6-6206-48cd-a123-0ff6ce51fb9e> | CC-MAIN-2017-09 | https://www.hpcwire.com/2013/08/08/singapore_researchers_build_a_better_shockwave_model/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00651-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.936964 | 569 | 2.953125 | 3 |
Iceland's National Energy Authority has created the world's first magma-based geothermal energy system after drilling 1.3 miles (2,100 meters) through the Earth's crust.
It is only the second time that a drilling operation has broken through to the mantle, the next layer after the Earth's crust, the group said. It is the world's first magma-based enhanced geothermal system (EGS).
The drilling operation was the work of the Iceland Deep Drilling Project (IDDP), a consortium of the National Energy Authority of Iceland and the nation's leading energy companies.
The borehole is located in Krafla, in northeast Iceland, near a volcanic crater. The hole created a shaft with high-pressure, super-heated steam that could power a nearby electrical plant, the project leaders said.
"According to the measured output, the available power was sufficient to generate up to 36 megawatts electricity, compared to the installed electrical capacity of 60 megawatts in the Krafla power plant," IDDP stated in a document.
The team was able to bore the deep hole by pumping in cold water to break up the rock next to the magma in a process known as hydrofracking.
Once the IDDP reached molten magma of the Earth's mantle, it lined the bottom of the bore hole with a steel casing, creating a shaft of high-pressure steam that exceeded 842 degrees Fahrenheit (450 Celsius). The project broke a world record for geothermal heat and power.
The team said the steam from the IDDP-1 well, as it's called, could be fed directly into the power plant at Krafla.
Iceland's National Power Company was preparing to connect to the magma-powered steam pipe just before the hole had to be closed due to a valve failure.
IDDP, however, is planning to attempt a reopening of IDDP-1, as well as to drill a second borehole (IDDP-2) in Reykjanes, Iceland, in the coming years.
"In various parts of the world so-called EGS geothermal systems ... are being created by pumping cold water into hot dry rocks at 4 to 5 km depths. Then the heated water is taken up again as hot water or steam from nearby production wells. In recent decades, there has been considerable effort invested in Europe, Australia, USA, and Japan, with uneven results and typically poor results," the IDDP stated.
The Earth's layers
Scientists theorize that the Earth is made up of four layers: a crust, a mantle, an outer core and an inner core. The sub-ocean crust is three to five miles thick, and the continental crust is 20 to 30 miles thick. That crust only makes up about 1% of the Earth's mass.
The mantle is the next layer below the crust. The mantle is about 1,800 miles thick and makes up about 70% of the planet's mass. Researchers believe the mantle is where most of the Earth's internal heat is located because of its sheer size and because most of it is molten rock.
The Earth's layers (Source: Kelvinsong, CC BY-SA 3.0)
While the IDDP-1 is not the first bore hole to reach the planet's magma, it is the first time the IDDP was able to harness the mantle's heat to produce a steam pipe that could power a plant. In 2007, Puna Geothermal Venture, which was looking for ways to produce geothermal power using Hawaii's volcanoes, drilled 2.5 kilometers into Hawaii's Big Island, and broke through to the mantle.
"The success of this drilling and research is amazing to say the least, and could in the near future lead to a revolution in energy efficiency in high-temperature geothermal areas of the world," the IDDP stated.
While the hole ultimately had to be closed after a few months, the IDDP said by successfully drilling the hole and carrying out experiments, it demonstrated that a high-enthalpy (energy) geothermal system can be created using the Earth's magma.
The IDDP-1 geothermal boring operation as seen from a distance on Iceland's barren landscape. (Photo: IDDP)
"What is the future and do the results have a practical value? Sure, the future is bright and the answer is 'yes'. Although the IDDP-1 hole is unusable at the moment, in [the] future the aim is to drill a similar hole and/or to repair IDDP 1 hole," IDDP stated. "The experiment at Krafla suffered various setbacks and tried personnel and equipment throughout. However, the process itself was very instructive, and... comprehensive reports on practical lessons learned are nearing completion."
Lucas Mearian covers consumer data storage, consumerization of IT, mobile device management, renewable energy, telematics/car tech and entertainment tech for Computerworld.
This story, "Iceland Taps the Ultimate Renewable Energy Source: Earth's Magma" was originally published by Computerworld. | <urn:uuid:6c3099c8-acfe-414b-8e50-a1d602336c03> | CC-MAIN-2017-09 | http://www.cio.com/article/2379058/energy/iceland-taps-the-ultimate-renewable-energy-source--earth-s-magma.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00527-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.960442 | 1,123 | 3.21875 | 3 |
(TNS) - The critical document that determines how much space should be left in Lake Oroville for flood control during the rainy season hasn’t been updated since 1970, and it uses climatological data and runoff projections so old they don’t account for two of the biggest floods ever to strike the region.
Independent experts familiar with the flood-control manual at Oroville Dam said Wednesday there’s no indication the 47-year-old document contributed to the ongoing crisis involving the dam’s ailing spillways. The current troubles stem from structural failures, not how the lake’s flood-storage space was being managed.
But the experts say Oroville’s manual does point to larger operational issues that affect most of California’s primary flood-control dams. Like the dams, most of the manuals were designed decades ago by engineers using slide rules instead of computers. Many of the documents and licenses that govern dam operations don’t account for advances in hydrology, meteorology and engineering, or for a changing climate.
“California’s flood infrastructure is based on the hydrology of the past,” said Jeffrey Mount, a senior fellow at the Water Policy Center at the Public Policy Institute of California. “They use the hydrology of the past to design the infrastructure of the future.
“I don’t know a scientist anymore who thinks the future is going to look anything like the past.”
The flood-control manuals are created by the U.S. Army Corps of Engineers. California has more than 1,500 dams, 54 of which are considered primary flood-control structures. The owners of those 54 dams – they include the federal government, the state and in some cases local water districts – must abide by the modeling outlined in the manuals during the rainy season. The modeling is designed to ensure there’s ample space in the reservoirs to capture heavy river flows and mountain runoff, and to prevent catastrophic flooding downstream.
The majority of the manuals haven’t been updated since at least the 1980s. Some are so old, their pages include charts drawn by hand in pen.
The California Department of Water Resources, which operates Oroville Dam, is required to make releases according to charts outlined in the dam’s manual. It’s dated August 1970, two years after the dam’s construction was completed.
Ann Willis, a researcher at the UC Davis Center for Watershed Sciences, is among the critics who say the manuals are too rigidly tied to outdated weather models. At Oroville, the manual cites weather patterns prior to the 1950s, and its data doesn't account for the catastrophic floods of 1986 and 1997. Plus, the manuals are designed around weather patterns that include capturing water from spring snowmelt, an annual occurrence expected to shift, in both timing and amount, with continued climate change.
Army Corps officials say the manuals have done their jobs, despite their age.
“Just because a water-control manual is old doesn’t mean it’s obsolete,” said Joe Forbis, chief of water management at the Corps’ Sacramento office. “It still allows the reservoir to be operated appropriately.”
He acknowledged his agency would prefer to have updated manuals. But, he said, it’s difficult because the updates require complex engineering and environmental studies. Funding would have to be approved by Congress.
Most recently, the issue of outdated dam manuals came up in the context of California’s five-year drought. At Folsom Dam near Sacramento, local water agencies complained that federal dam operators were releasing too much water from the reservoir during a lengthy dry spell when no major storms were forecast and the state was trying to conserve water. Federal operators said they had no choice, because Folsom’s manual dictated that it create flood-control space based on the time of year.
Unlike many dams, Folsom will get an update to its manual as part of a $900 million installation of a new auxiliary spillway scheduled to be completed later this year.
Mount, of the Public Policy Institute, said dam operations are generally guided by rigid sets of rules that don’t allow for necessary operational flexibility.
“Adjusting course on dams – whether by changing the infrastructure or the way they are operated – is difficult,” Mount wrote in a post on the PPIC’s website Wednesday. “Licenses for non-federal dams like Oroville – administered by the Federal Energy Regulatory Commission – last for 30-50 years. These lock in place all aspects of dam operation for several generations and require herculean efforts to overcome.”
Butte and Plumas counties raise similar concerns as part of a lawsuit pending in California’s 3rd District Court of Appeal. The suit, filed in 2008, argues that another document critical to operations at Oroville Dam fails to reflect modern climate science.
In 2008, the Department of Water Resources conducted an environmental review of dam operations as part of the structure’s 50-year relicensing process. Plumas and Butte counties – whose communities sit in the Feather River watershed above and below the dam – sued, alleging the analysis was inadequate because it did not properly account for climate change.
“They called it ‘speculation,’ ” Butte County Counsel Bruce Alpert said Wednesday.
The case eventually was moved to Yolo Superior Court, and in 2012 Judge Daniel P. Maguire ruled in favor of the state. He echoed arguments made by Department of Water Resources lawyers when he wrote in his statement of decision that an environmental review “need not (and should not) speculate about the future.”
“It is a long step from the relatively generalized climate change data in the record to the project-specific forecasting demanded here,” the judge wrote, “and Petitioners have not carried their burden of showing that DWR could have taken this step.”
The counties appealed, saying the review relied on outdated forecasting models that “fail to protect the public against the hazards of more severe flood events or water supply shortages under climate change.” They are asking the higher court to direct the Yolo judge to set aside DWR’s certification of the environmental review.
DWR lawyers countered in their opposition papers that the environmental report “adequately considers climate change ... based on the limited information available at the time the EIR was certified in 2008.”
“We absolutely account for climate change in all of our planning processes, and the impacts of climate change are integral to the California Water Action Plan,” department spokesman Doug Carlson said in an email.
Jay Lund, director of the Center for Watershed Sciences at UC Davis, said he expects the malfunctions now crippling Oroville Dam will prompt a review of operations and likely an update of its operating manual as part of any retrofit. It also may focus attention on other aging flood infrastructure in the state.
“One thing you learn from civil engineering is you have to have failures in order to make progress,” Lund said.
©2017 The Sacramento Bee (Sacramento, Calif.)
This series starts with an overview of wireless's most often-overlooked but fundamental elements: the properties of RF and waves.
RF and Waves

Wireless networking is an RF (radio frequency) technology. Air is the vehicle through which the data is carried, just as Ethernet uses copper cables. WLAN frequency ranges are in the 2.4GHz and 5GHz bands. The most common legacy wireless standards, 802.11b and 802.11g, use the 2.4GHz range. IEEE 802.11a uses 5GHz exclusively. The newer 802.11n operates mostly in 5GHz but can also use the 2.4GHz band. The forthcoming 802.11ac standard operates in 5GHz.
To give you a visual, 2.4GHz waves are about 5 inches long. 5GHz waves are approximately double the frequency, and therefore half the length, about 2.5 inches.
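Those inch figures follow directly from wavelength = speed of light / frequency; a quick check:

```python
# Wavelengths of the two WLAN bands.
C = 299_792_458  # speed of light, m/s

for ghz in (2.4, 5.0):
    wavelength_m = C / (ghz * 1e9)
    print(f"{ghz} GHz -> {wavelength_m * 39.37:.1f} inches")
# 2.4 GHz -> 4.9 inches; 5.0 GHz -> 2.4 inches
```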
The size of these WLAN waves isn't just fodder for good water cooler trivia. As we all learned in middle school, the higher the wireless frequency is, the more compact the RF wave will be. The size of a wave has a significant effect on how it moves through the air, what it bounces off of, what will cause it to shatter, and how fast it loses power and fades away. We'll discuss that fading, called attenuation, later.
Wireless waves will bounce off and be deflected from any smooth surfaces large enough to impact the wave, such as metal ducts, lockers and large equipment. The tiny metal strings embedded in safety glass in many windows and doors will shatter those waves into little bits and scatter them in various directions. Thus, the physical properties of a building or structure where a wireless network is deployed will have a significant effect on the network's performance. The next time you walk through a building, look around and take a mental note of what may cause RF waves to be deflected or damaged.
Attenuation

A fundamental property of waves is attenuation, or loss of power as a wave moves. There are many materials and environmental factors that can lead to attenuation. The most obvious is when wireless signals move through walls, brick, concrete and even sheetrock. But attenuation also happens through air. In a perfect vacuum, there would be no loss of power, as waves move uninhibited through space. On Earth, waves move through air and lose strength as they travel from their source.
Attenuation is the scientific word for concepts we already know intuitively. When that teenager in the car goes by with his stereo cranked up, you'll most likely still hear the bass long after the higher-pitched treble sound has faded from earshot. The lower frequencies of the music will travel farther, with lower attenuation than the higher ones.
Similarly, in WLANs, a 2.4GHz signal will last longer and go farther than the higher-frequency 5GHz signal. This has practical implications for designing a wireless network. When planning RF coverage for an area, if you intend to use a dual-radio AP that will serve clients on both 2.4GHz and 5GHz, you'll need to plan for the lowest common denominator in signal coverage--the 5GHz. For that reason, any predictive wireless planning or onsite survey planning should be based on 5GHz. Understand that as your client landscape changes and you have more 802.11n and 802.11ac clients using 5GHz, you may find you need to add more APs or move them closer together.
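To put numbers on that gap, the standard free-space path loss formula (one component of real-world attenuation) shows roughly 6-7 dB more loss at 5GHz than at 2.4GHz over the same distance:

```python
# FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
from math import log10

def fspl_db(d_km: float, f_mhz: float) -> float:
    return 20 * log10(d_km) + 20 * log10(f_mhz) + 32.44

d = 0.03  # 30 meters
print(fspl_db(d, 2400))  # ~69.6 dB at 2.4 GHz
print(fspl_db(d, 5200))  # ~76.3 dB at 5.2 GHz
```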
In part two, we'll review two concepts that govern much of the complex behavior of radio frequencies: half duplex and collision avoidance.
Jennifer Jabbusch Minella is CISO and infrastructure security specialist at Carolina Advanced Digital. | <urn:uuid:e88967ab-c9f1-4b0f-bac6-4d3e3befad91> | CC-MAIN-2017-09 | http://www.networkcomputing.com/networking/wireless-beginners-part-1-rf-and-waves/1640728721?cid=sbx_nwc_related_mostpopular_default_next_gen_network_tech_center&itc=sbx_nwc_related_mostpopular_default_next_gen_network_tech_center&piddl_msgorder=thrd | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00051-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94944 | 772 | 3.859375 | 4 |
Mobile communications have come a long way. Citrix Systems including XenMobile and many features such as TCP Westwood and SPDY integrated into NetScaler have played a big role. Nevertheless there is one source of irritation that causes angst among mobile clients as they roam about their day.
Standard TCP connections are not maintained when mobile devices switch from one network to another. This causes loss of state information for applications using a TCP connection that fails. For example, if the user is streaming a video on a mobile phone over a 4G network, streaming will be interrupted when connecting to a Wi-Fi system. TCP connectivity is lost and must be reestablished, causing the user to start the video from the beginning.
To ensure reliable mobile sessions when the network changes, multiple client-to-server paths for the same TCP flow are required. In most deployments today hosts and clients typically have the option of multiple network paths between them, including 3G/4G and 802.11n access. This allows the use of the optimal path for data transfer and improves the end user experience for faster data transmissions. Mobile device communications, particularly running LTE and other 4G networks, are especially enhanced.
As one would expect, the technology supporting multiple client/server paths is called Multipath TCP (MPTCP). This connectivity method is an extension of the TCP/IP protocol and leverages multiple paths available between MPTCP-enabled hosts and clients to maintain the TCP sessions. It is a major modification to TCP that allows these paths to be used simultaneously by a single transport connection. With MPTCP enabled, transactions can continue even if one of the network paths is not available. MPTCP offers better robustness and availability than standard TCP, because the application session does not fail if one link goes down. MPTCP is an IETF charter en route to standardization. MPTCP has been adopted by at least one major mobile manufacturer and is expected to be broadly adopted by the end of this year.
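This article predates it, but mainline Linux (kernel 5.6 and later) now exposes MPTCP directly to applications: one extra argument at socket creation opts in, and the kernel manages subflows across the available interfaces.

```python
# Opting in to MPTCP on a recent Linux kernel, with a plain-TCP
# fallback where the protocol is unavailable.
import socket

IPPROTO_MPTCP = 262  # Linux protocol number; not named in older Pythons

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
except OSError:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s.connect(("example.com", 80))
```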
The idea behind MPTCP is to make use of the additional paths that are otherwise ignored by the routing system. Doing so can provide more bandwidth, fault tolerance and higher network utilization for network operators. MPTCP involves modifying TCP to give it the capability to send a given packet over a given path. The TCP New Reno congestion control algorithms are run separately for each path, so each has its own transmission window that reflects that path's available bandwidth. MPTCP can therefore send many packets over paths with a large transmission window and/or a small round-trip time (RTT), and fewer packets over paths that have a small window and/or a large RTT. This way, Multipath TCP can automatically and quickly adjust to congestion in the network, moving traffic away from clogged paths and towards uncongested routes.
NetScaler is unique among Application Delivery Controllers in its full support for multiple TCP paths. Through an integrated MPTCP Gateway feature, NetScaler holds a dual TCP stack between the service and the client to help with mobility and efficient access. Initially a mobile client is switched on and connects to the carrier network. The Client-MPTCP detects this network, opens a connection, and NetScaler starts transmitting data over the wireless carrier network. If available, the client can then detect and connect to a new Wi-Fi network. The Client-MPTCP detects this link and opens a new TCP connection. If NetScaler detects congestion on the carrier link, it then dynamically sends data over Wi-Fi. The result is seamless load distribution and path failover with faster data transmission for a superior end user experience.
Just about the time you thought you were getting a grip on computer networking -- or were at least clinging to it by your fingernails -- there's another 18-wheeler racing toward you on the information superhighway. Electronic commerce and digital signatures will become central issues in public access to government in the very near future.
Electronic commerce, like ol' fashioned commerce, includes written contracts, legally binding signatures and associated payments.
Contracts are the easy part -- text is text, whether it is on paper or on screen. But what about guaranteeing that all copies of an electronic contract are identical and remain unchanged? And how does one sign paperless contracts or authenticate such signatures? And how can one send or receive payments online?
CRYPTOGRAPHY IS THE KEY
Crypto is useful for far more than merely scrambling files and messages that one wishes to keep secret. A particular class of crypto, known as public key cryptography, is especially useful.
All crypto uses one or more keys to lock and unlock scrambled information. And, of course, the keys are not metal, and [un]locking is simply a computer process that [un]scrambles the [un]protected information. The keys are merely long sequences of 0's and 1's (binary digits or "bits"), and can be represented by decimal digits or even alpha-numeric sequences that are much shorter (but still tediously long).
Public key crypto uses matched pairs of keys -- one known as a "public" key; the other known as the matching "private" key. Either key can be used to scramble information, and the other matching key is the only one that can be used to unscramble the information.
One's public key can be freely shared and is often published in lists of public keys, just like people publish their phone numbers. Then, anyone wishing to securely communicate with the key owner uses their public key to encrypt the communication; sends it to them; and the recipient uses their other key -- that they keep secret to themselves -- to decrypt the message.
Furthermore, any change whatsoever in the encrypted document will make it un-decryptable, thus guaranteeing against undetected modifications -- accidental or intentional.
Similarly, a public key owner can encrypt a document with their secret key, and the document can be decrypted only by using their matching public key, thus providing authentication that the document came from the key owner. Such authentication used to be done with much-more-forgeable written signatures, thus this is called "digital signatures" -- which has NOTHING to do with digitizing an actual hand-written signature.
The same techniques are used to protect and exchange what is often called "digital cash" or "electronic money" -- and to conduct global "anonymous banking," an exciting prospect for taxing agencies.
Public key crypto techniques have been world-published for 15 years, and well-tested computer programs that provide very robust implementations of "p.k." are available globally. The most popular version -- known as PGP -- is available worldwide as freeware! (PGP stands for "Pretty Good Privacy," but it is actually very good -- so good that surveillance agencies are very upset that it is globally available. Cryptographers consider p.k. in general, and PGP in particular, to be uncrackable, if keys are used that are 100 bits long or longer.)
How can these cheap and free global computer programs enhance public access to government?
* They can guarantee to agencies and recipients that agency-scrambled copies of public documents remain unchanged as they are distributed and circulated online.
* They permit secure financial transactions so citizens and corporations can electronically pay for government records received online -- limited, of course, only to the actual cost of providing the incremental access or copy, as one would expect in a nation that advocates equal access.
* They also facilitate electronic exchange of unmodifiable documents with unforgeable, automatically verifiable digital "signatures" -- everything from driver's license applications and voter registrations to electronic ballot initiatives and even online voting -- now that states and the federal government are beginning to consider and adopt standards for conducting electronic commerce with public agencies.
Jim Warren has served on the California Secretary of State's Electronic Filings Advisory panel, received John Dvorak's Lifetime Achievement Award, the Northern California Society of Professional Journalists' James Madison Freedom-of-Information Award, the Hugh M. Hefner First-Amendment Award, and the Electronic Frontier Foundation Pioneer Award in its first year. He founded the Computers, Freedom & Privacy conferences and InfoWorld magazine. He lives near Woodside, Calif. E-mail: email@example.com | <urn:uuid:7a2ef5fd-65a9-4f36-b463-128b99125909> | CC-MAIN-2017-09 | http://www.govtech.com/magazines/gt/100556254.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00575-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9421 | 959 | 3.1875 | 3 |
What makes people healthy?
That's a question researchers at Google X, along with scientists at Duke University and Stanford University, are looking to answer.
Google has launched a new project, dubbed the Baseline Study, that seeks to develop a greater understanding of what it means to be healthy.
"Most research studies focus on a particular disease. We're going to study health. We want to understand what it means to be healthy, down to the molecular and cellular level," the company noted in a release. "We think this could someday yield powerful insights for how diseases are understood, detected, and treated."
Google said it is in the process of enrolling 175 healthy people this summer. As the project, which is being led by Dr. Andrew Conrad of Google X, goes on, the patient base will be expanded.
Any results, according to the company, will be made available to other medical researchers.
Everyone in the study will undergo a physical exam similar to what they would get from a primary care physician, including the collection of body fluids like blood and saliva, Google noted.
"The biochemical fingerprint of a healthy individual would be a hugely important contribution to medical science, and it's possible that this study could bring that within reach," said Rob Califf, vice chancellor for clinical and translational research at Duke, in a statement. "This is worth striving for, because it could speed the pace of clinical research for decades to come and enable the development of new tests and techniques for detecting and preventing disease."
According to the researchers, it's important to study the makeup -- down to the molecular and cellular level -- of the healthy so researchers can create what's being called a baseline map of health. It's a "biochemical fingerprint" of what makes up a healthy person.
The map is important because a healthy person doesn't suddenly fall ill, for example, with heart disease or cancer.
"In reality, our body's chemistry moves gradually along a continuum from a state of health to a state of disease, and we only have observable symptoms when we're already far along that continuum," Google noted. "But long before those symptoms appear, the chemistry of the body has changed -- its cells, or the molecules inside cells. Unfortunately, the medical profession today doesn't understand at that molecular level what happens when a body starts to get sick."
Since doctors can't discover that someone is ill until the patient shows symptoms, the disease may have had time to progress to a dangerous state.
"If we could somehow detect those changes earlier, as soon as a body starts to move away from a "healthy" chemistry, this could change how diseases are detected, treated, or even prevented," Google noted.
Google said it's not working on the project to develop a new product but to contribute to medical science.
"It may sound counter-intuitive, but by studying health, we might someday be better able to understand disease," said Conrad. "This research could give us clues about how the human body stays healthy or becomes sick, which could in turn unlock insights into how diseases could be better detected or treated."
In December 2010, Google launched a tool to provide users with a Body Browser, the tool explores inside the human body, using features initially created for the company's popular Google Earth and Maps applications.
Body Browser was built to enable users to identify various parts of the human body while searching for bones, organs and muscles. With the tool, users can rotate a 3D image of the body, peel away skin and investigate the different layers inside.
This article, Google's next frontier: What it means to be healthy, was originally published at Computerworld.com.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is email@example.com. | <urn:uuid:ae984d9e-1d93-4ada-af25-d9ecf611032e> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2490451/healthcare-it/google-s-next-frontier--what-it-means-to-be-healthy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00219-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959071 | 811 | 3.5625 | 4 |
As people’s constant pursuit of enjoyment of life, nowadays most consumer electronics are applied HDMI(High-Definition Multimedia Interface) for multimedia data transmission, such as the digital televisions, Blu-ray player, PC, laptop and smart phone with a variety of devices to the HDMI adopted. Although the HDMI technology is known as an advance technology and favoured by people, it is limited by the transmission distance. A test report shows if transmission distance is equal to more than 10 meters, data will be corrupted. To solve this problem, engineers combine the fiber optic communication technology and make data transfer a few kilometers or more possibly and can carry 1080P video by using fiber optic cables. Thanks to such perfect combination, the HDMI video fiber optic transmission system comes into being. A variety of video fiber optic transmission products such as the HDMI converter, optical HDMI extender etc. enter the market. It means a new chapter of Video transmission technology has opened.
What is Optical HDMI Extender ?
In the HDMI video optic transmission system, uncompressed and high definition HDMI video and audio signal could be transmit over one 1-core fiber up to 100Km. Optical HDMI extender plays an important role in such transmission system. Optical HDMI extender, also called HDMI video multiplexer, is a device which is used to long-distance transmission images or other signals of lossless high-definition multimedia data. Optical HDMI extender is to serialize the HDMI electrical signal using the SerDes(Serializer/Deserializer). A SerDes converts a parallel data source to one or more serial data lanes and vice-versa. The serial data channels use differential signaling and a point to point configuration with a Transmitter(T) and a Receive(R) function comprising a lane. Even though both optical HDMI extender and fiber video converter can convert the signal, obviously the former’s functionality is more powerful.
Optical HDMI extender is widely used in many fields such as the large-scale multimedia HD display, information dissemination and information centers, traffic guidance and information display systems, outdoor large screen display systems, large stage display box AV entertainment center, sports arena, live television, multimedia conference systems, control centers, medical image processing, and multimedia teaching systems etc. which are closely with our life.
More video optic transmission system products
According to the interface on the device, there are SDI, VGA and DVI video extender among the video optic transmission system. The working principle is similar with the HDMI’s but there are some differences between optical HDMI extender, such as the transmission distance and the transport protocol.
Fiberstore supplies all kinds of video optic transmission system products which are in accordance with international standards, easily used in various operating environments. Know more information, welcome to our website or contact us directly! | <urn:uuid:b24ab87a-8a54-46f1-ab60-b151f6c99119> | CC-MAIN-2017-09 | http://www.fs.com/blog/introduction-to-hdmi-video-optic-transmission-system.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00219-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.884147 | 574 | 3 | 3 |
Emergency managers know that having a foolproof disaster communications plan is nothing more than a fantasy. That's because even the most redundant backup strategies can leave responders unable to communicate. Consequently agencies remain focused on providing diversified options for communications.
Why? If a disaster has cut the phone lines, it might not have disabled the radio towers, which would enable responders to rely on land mobile radios (LMR). But what if a disaster paralyzed both telephones and LMRs? Responders who come prepared with other means of communication stand a better chance at continuing their operations.
This is where satellite enters the equation. It's becoming "technological catnip" for some agencies that are seeking that diversity during emergencies. If a hurricane or terrorist attack disabled phone lines and destroyed local radio towers, perhaps responders could still point a dish toward a satellite that's safely orbiting in space.
Recent disasters, especially Hurricane Katrina, have magnified the need for diversified communications. The private sector has stepped up and made products that meet this need, including affordable tools for satellite communication. Federal, state and local responder agencies have deployed several of these devices and applications, and are using them as a partial solution for interoperable communications.
Photo: Federal Emergency Management Agency Mobile Emergency Response Support vehicle/Photo Courtesy of Mark Wolfe/FEMA
Since 9/11, government officials, experts and vendors have led a steady drumbeat of advocacy for interoperable responder communications equipment. An inability of different disciplines and jurisdictions to communicate during emergencies typically gets the blame for inefficient operations. These days, most responder agencies seem to agree on the importance of interoperable equipment, but conflicting opinions between agencies on proper equipment specifications and differing funding cycles tend to slow the process.
In 2007, the U.S. Department of Justice (DOJ) devised a relatively simple solution for giving agencies at least limited interoperable communications. Rather than laboring over equipment specifications that all agencies must agree to, the DOJ told SkyTerra Communications, the satellite vendor many agencies already used, to figure out the details.
SkyTerra and the DOJ created the Satellite Mutual Aid Radio Talkgroup (SMART) program, which consists of multistate regions that each have an interoperable "talkgroup" accessible to various responders, like fire services, police, hospitals and others. Each SMART region has one talkgroup that all the different disciplines can use simultaneously. Discipline-specific talkgroups are also provided for incidents that only require certain agencies. The regions comprise neighboring states: For example, Kentucky shares a region with Tennessee, the Carolinas, Georgia, Mississippi, Alabama and Florida. However, some states are in two regional talkgroups -- a southeastern state in a talkgroup might have a Midwest state as a neighbor.
The DOJ knew the primary obstacle to organizing government agencies into talkgroups would be funding-related, so the DOJ negotiated an agreement with SkyTerra to offer free SMART usage to its subscribers. The financial benefit to SkyTerra was obvious: a likely increase in subscribers. But it also gave emergency responders access to interoperable communications for little or no investment.
"That's the nice thing about SMART. If you're an existing customer, you're eligible to participate. You fill out an application that says which pieces of equipment you want it downloaded into and that's it," explained Drew Chandler, communications manager of the Kentucky Department for Public Health.
That downloading process happens quickly too. During a recent Kentucky ice storm, an environmental team from Mississippi downloaded SMART access within two hours, Chandler said.
Though it's easy to participate in the SMART program, it's not a
comprehensive answer to interoperable communications because agencies only have purchased a limited number of handheld devices for satellite communications. Giving all responders satellite devices would be too expensive, said Chandler. In Kentucky, 350 SkyTerra devices are used by various state and local responders. The monthly subscription cost is roughly $70 per unit. Their functionality is worth the expense, he said.
"That's like paying another cell phone bill for a lifeline," said Chandler.
The market activity that's making satellite an affordable failover strategy also has produced many satellite-related applications. Among the most critical is technology that lets responders transmit satellite images of an incident to a command center for processing on high-performance computers, said Eric Frost, co-director of the San Diego State University Immersive Visualization Center.
Here's how it works: Responders send an unmanned aerial vehicle (UAV) to the emergency site they need to survey. The UAV then sends video and hundreds of photos back to the command center. Frost's team does the image processing from the San Diego area. Access to high-performance computers gives responders on the ground a view of terrain that's difficult to see with the naked eye. For example, if someone wearing blue clothes got lost in a vast, wooded area, Frost's team could use software to isolate instances of the color. It would stand out in the altered image. "It's like seeing a red spaghetti dot on a white shirt," Frost said.
Another satellite technology allows those images to travel faster than what's otherwise possible via satellite's limited bandwidth. High-resolution photos and videos are large files that usually clog a satellite's bandwidth. To alleviate the problem, many responders now use software from GeoFusion, which breaks up the files into chunks so that only the content immediately on a responder's screen travels from the command center to that responder. Once the responder needs a different piece of content, the new content then travels from the command center to the responder's computer. This process ensures that responders get content faster because smaller files travel through the satellite.
Increased speed is a benefit many new satellite technologies share, according to Craig "Gator" Gallagher, IT specialist of the Federal Emergency Management Agency.
Photo: San Diego firefighters like satellite's ability to offer cell phone communication in backcountry areas that lack cell tower coverage./Photo by Andrea Booher/FEMA
"Most satellite systems are so automated these days that it only takes pressing a couple buttons to turn them on and, with the aid of built-in GPS, the system finds the satellite, locks on it and is ready to pass information in a matter of only minutes," Gallagher said via e-mail.
One feature becoming popular with San Diego firefighters is satellite's ability to offer cell phone communication in backcountry areas that lack cell tower coverage. The phone connectivity reaches cell phones by using a broadband global area network, which delivers satellite-powered broadband using a portable terminal the size of a laptop. Communicating requires no special training because everyone knows how to operate a cell phone.
An especially affordable satellite technology that's improving emergency management, Frost said, is Spot Satellite Messenger, a small device the San Diego Fire Rescue Department uses to track firefighters and trucks in the field. The retail price is a little more than $100 per unit, plus a $100 annual service fee. Using the device, location updates are provided every 15 minutes, which lets command centers track the locations of trucks and individual firefighters.
Photo: Spot Satellite Personal Tracker
Frost said having precise, updatable location data is important.
"If you're at a command center trying to manage what's going on, the reality is you're not really managing it. You're just sort of keeping track of it because you don't actually know where most of your people are," Frost said. "If the fire is coming up over one ridge and you have people on the other ridge, you often don't know that and they don't know that because you don't know where the people actually are."
Satellite, however, has its vulnerabilities. Chandler said the technology becomes useless when responders lose line of sight, which can happen during a hurricane or windstorm.
"Maybe your dish gets blown over. It's a very directional signal, and if you blow the dish and twist it several degrees, it's not looking at the satellite, it's just looking out into space," Chandler said. "We've had that happen frequently here in Kentucky in the western part of the state. There aren't a lot of mountains or anything to cut up some of that wind so we get straight line winds." | <urn:uuid:e758aa25-e8a3-4658-aa79-d027b0b1fa75> | CC-MAIN-2017-09 | http://www.govtech.com/featured/99853319.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00447-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942908 | 1,684 | 2.703125 | 3 |
Question 2) A+ Operating Systems Technologies
This post is outdated. For an updated guide see Jed Reisner’s A+ 220-801 and 220-802 guide.
SubObjective: Identify the names, locations, purposes, and contents of major system files
Single Answer Multiple Choice
Which file loads the Windows 98 graphical user interface (GUI)?
In Windows 98, the Win.com file loads the GUI.
In Windows 98, you can use the Autoexec.bat file to load applications automatically at startup, and you can use the Config.sys file to load real-mode drivers. In general, you should only load real-mode drivers for a device if a protected-mode driver is not available because using real-mode drivers can degrade system performance.
The System.1stfile is a backup of the Windows 98 Registry; this file is created when you complete a successful installation of Windows 98. You can use the System.1stfile to return Windows 98 to an operational state if the Registry ever becomes corrupted or is deleted.
A+ Training Guide, Chapter 22: Microsoft Windows Operating Systems, Major Operating System Components, Windows 9x/Me Structure, pp. 791-792.
These questions are derived from the Self Test Software Practice Test for CompTIA exam #220-302 – A+ Operating Systems Technologies, 2003 Objectives | <urn:uuid:8061a436-8673-4616-aa97-cd42967d4e00> | CC-MAIN-2017-09 | http://certmag.com/question-2-test-yourself-on-a-operating-systems-technologies-2003-objectives/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00040-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.762814 | 280 | 2.65625 | 3 |
How to manage the security of network services according to ISO 27001 A.13.1.2
Everybody knows that information is stored in information systems (workstations, laptops, smartphones, etc.), but to exchange the information via a network is necessary.
Most of the information systems in this world are connected to the same main network – Internet – and, without this network, our society would look pretty different; in fact, the current society as we know it would not be possible.
Anyway, the Internet is not the only network relevant for information security. Other, commonly used networks are, for example, local area networks (LAN), mobile communication networks, Internet of Things (IoT) networks, etc. They are hosts to many services that need to be protected as well.
The A.13.1.2 control of Annex A of ISO/IEC 27001:2013 basically was developed for the security of network services, and the basic principle of this control is to identify security mechanisms, service levels, and management requirements related to all network services.
So, the important thing here is to manage the security of the network services, including those cases where the service is outsourced.
Security features of network services
Well, but what is a network service? According to ISO/IEC 27002:2013, network services are basically the provision of connections, private network services, firewalls, and Intrusion Detection Systems. ISO/IEC 27002:2013 also defines security features of the network services, which could be:
- Network security technology – This can be implemented through the segregation of networks, for example configuring VLANs with routers/switches, or also if remote access is used, secure channels (encrypted) are necessary for the access, etc.
- Configuring of technical parameters – This can be implemented through Virtual Private Networks (VPN), using strong encryption algorithms, and establishing a secure procedure for the authentication (for example, with electronic certificates).
- Mechanisms to restrict access – This can be implemented with firewalls, which can filter internal/external connections, and also can filter access to applications. Intrusion Detection Systems can also be used here, referenced specifically by the ISO 27002:2013 standard. Basically, Intrusion Detection Systems (IDS) are devices that can be based on hardware or software, and they constantly monitor connections to detect possible intrusions to the network of the organization. They can also help firewalls to accept or reject connections, depending on the defined rules. Here it is important to note that an IDS is a passive system, because it can only detect; but, there are also Intrusion Prevention Systems, known as IPS, which can prevent intrusions. The IPS are not specified by the standard, but are very useful and can also help firewalls.
So, basically, if you want to manage the security of network services, you can use these types of hardware/software:
- Routers/switches (for example, for the implementation of VLANs)
- Firewalls or similar perimeter security devices (for example, for the establishment of VPNs, secure channels, etc.)
- IDS/IPS (for intrusion detection/intrusion prevention)
By the way, this article about firewalls might be interesting for you: How to use firewalls in ISO 27001 and ISO 27002 implementation.
Network services agreements in ISO 27001
At this point, we have identified the network services, but if we want to align with ISO 27001, we need to go one step further. This means that these network services should be included in network services agreements (or SLA, Service Level Agreements), being applicable to internal services provided in-house, and also to services provided from outside, by which I mean those that are outsourced.
So, for the development of a network service agreement, basically you need to consider what network services are established, how they are offered (from inside, or outside, resources, etc.), service levels (24×7, response and treatment of incidents, etc.), and other key components. If the network service is outsourced, it is also important to consider periodic meetings with the external company, and in these meetings it is important to review the SLAs (following the A.15.2 Supplier service delivery management control).
This article might also be interesting for you: 6-step process for handling supplier security according to ISO 27001.
For the security mechanisms included in the SLA, the selection could be based on the results of the risk assessment (basically, for the highest risks, the strongest security mechanism will be necessary), using the security controls from Annex A of ISO 27001), or even using the organization’s contacts with special interest groups for specific environments like government, military, etc., where the implementation of specific regulations could be needed (following the A.6.1.4 Contact with special interest groups).
This article can provide you with more information: Special interest groups: A useful resource to support your ISMS.
Feel secure in your organization’s protection of network services
Remember that all your information is stored in information systems, and they are connected by networks, and the exchange of information is possible through network services (firewalls, IDS, IPS, VPNs, VLANs, etc.). So, if you want to feel secure in your organization, you need to be careful with the network, controlling the network services, identifying firewalls, IDS, IPS, VPNs, etc., and including them in network services agreements.
ISO 27001 control A.13.1.2 is a good resource on the increasing requirements for the security of networks. It is case-specific, and that could be exploited to the maximum – meaning you can tailor security mechanisms to your own requirements using the technology already in place. Your organization will gain results; but, even more importantly – so will your customers and users. And they know how to appreciate having a partner in business who sees security as a highly important topic.
See this eBook: ISO 27001 Annex A Controls in Plain English to learn more about security controls. | <urn:uuid:e35f402a-1013-4404-891f-9e9504498f21> | CC-MAIN-2017-09 | https://advisera.com/27001academy/blog/2017/02/13/how-to-manage-the-security-of-network-services-according-to-iso-27001-a-13-1-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00040-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.934989 | 1,274 | 3.0625 | 3 |
Data center networking topology has improved significantly for a few years. With the developments of high speed switching device and multilayer network architecture, we have more powerful data centers with low latency, scalability and higher bandwidth. But due to the exponential increase of internet traffic and emerging web application, still data centers performance is not up to the market to meet the requirement. In order to face the increasing bandwidth demand and power requirement in the data centers, new connection scheme must be developed that can provide high throughput, low latency and less power consumption. So the optimal solution would be using optical fiber link between server to access switch.
Figure 1 shows a block diagram of a typical data center with four layers hierarchical network architecture from the bottom to top. The first layer is the servers which are connected to upper access layer switch. The second layer is access switch where many servers are connected with ToR switch. The third layer is aggregation layer which is connected with bottom access layer switch. Forth or highest layer is core layer where core switches are connected with router at top and aggregation layer at bottom to form a data center block. When a request is generated by a user, the request is forwarded through the internet to the top layer of the data center. The core switch devices are used to route the ingress traffic to the appropriate server. The main advantage of this architecture is that it can be scaled easily and has a good fault tolerance and quick failover. But the main drawback is high power consumption due to the several layer architecture switches and latency introduce due to multiple store-and-forwarding processing.
Figure 1. Traditional data center architecture
As the amount and size of traffic increase exponentially, the architecture of traditional data center is not sufficient to handle traffic and to meet the future challenges. An electrical-optical hybrid network has been proposed. It connects servers with upper layer switch with high availability and low latency. Connecting servers directly with upper layer using both electrical and optical link would fulfill the requirement for the propose scheme. Figure 2 depicts the proposed architecture. The blue lines indicate electrical connectivity and black dashed lines indicate optical connectivity.
Figure 2. Proposed data center architecture
Due to the emerging demand, servers require high bandwidth and low latency communication. Optical connectivity consumes less power at the same bandwidth. Connecting server directly using optical and electrical links will meet the demand and at the same time better load balancing is achieved.
Data center interconnection could be provided by several schemes. Interconnection could be at layer 1, 2 or 3. Considering transportation option and layered architecture of data center, interconnection is recommended at layer 2 aggregation layer. Figure 3 shows a data center interconnectivity solution. Each data center is connected at its aggregation layer using high speed optical fiber.
Figure 3. Data center interconnects architecture
Each server has both optical and electrical connectivity. The electrical connectivity is usually useful for communication between servers and handling short data transfer, whereas optical connectivity is used for long bulked data transfer. This is necessary when transferring data between data centers that require large bandwidth and low latency. At the same time because of layered architecture server can communicate with each other via access layer and don’t need to go the upper layer. Load balancing is achieved due to both hybrid electrical and optical links. Due to the optical connectivity virtual Ethernet port aggregator switching situation enhanced, it becomes easier to move server virtually not only within the data center but also among the data centers. The hybrid links intra and inter data center networking scenarios have significantly improved.
As the number of optical switches is introduced in every layer, the load sharing capability of the network is increased and power consumption for data center is reduced. In each layer, the electrical switch is reduced and replaced with optical switch. So the overall cost and power consumption associated with electrical switches have reduced.
A electrical and optical architecture is presented. Use of optical connectivity has some advantages such as less power consumption, higher bandwidth and low latency. Introducing both types of switches in each layer helps in load balancing. This also helps us when we interconnect our data centers where we require large bandwidth between servers. Because of higher bandwidth, it is easier to move server virtually not only within the data center but also among the data centers. So a hybrid electrical and optical networking topology improves the overall scenarios for data center and also for big data network. | <urn:uuid:fc10d993-0a6f-4dd0-81c1-6565e8728d03> | CC-MAIN-2017-09 | http://www.fs.com/blog/deploying-hybrid-electrical-and-optical-network-to-achieve-big-data-transfer.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00216-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.908771 | 876 | 2.671875 | 3 |
Tip: Prompt magic
Enhancing the system prompt
As Linux/UNIX people, we spend a lot of time working in the shell, and in many cases, this is what we have staring back at us:
If you happen to be root, you're entitled to the "prestige" version of this beautiful prompt:
These prompts are not exactly pretty. It's no wonder that several Linux distributions have upgraded their default prompts that add color and additional information to boot. However, even if you happen to have a modern distribution that comes with a nice, colorful prompt, it may not be perfect. Maybe you'd like to add or change some colors, or add (or remove) information from the prompt itself. It isn't hard to design your own colorized, tricked-out prompt from scratch.
Under bash, you can set your prompt by changing the value of the PS1 environment variable, as follows:
$ export PS1="> " >
Changes take effect immediately, and can be made permanent by placing the "export" definition in your ~/.bashrc file. PS1 can contain any amount of plain text that you'd like:
$ export PS1="This is my super prompt > " This is my super prompt >
While this is, um, interesting, it's not exactly useful to have a prompt that contains lots of static text. Most custom prompts contain information like the current username, working directory, or hostname. These tidbits of information can help you to navigate in your shell universe. For example, the following prompt will display your username and hostname:
$ export PS1="\u@\H > " drobbins@freebox >
This prompt is especially handy for people who log in to various machines under various, differently-named accounts, since it acts as a reminder of what machine you're actually on and what privileges you currently have.
In the above example, we told bash to insert the username and hostname into the prompt by using special backslash-escaped character sequences that bash replaces with specific values when they appear in the PS1 variable. We used the sequences "\u" (for username) and "\H" (for the first part of the hostname). Here's a complete list of all special sequences that bash recognizes (you can find this list in the bash man page, in the "PROMPTING" section):
|\a||The ASCII bell character (you can also type \007)|
|\d||Date in "Wed Sep 06" format|
|\e||ASCII escape character (you can also type \033)|
|\h||First part of hostname (such as "mybox")|
|\H||Full hostname (such as "mybox.mydomain.com")|
|\j||The number of processes you've suspended in this shell by hitting ^Z|
|\l||The name of the shell's terminal device (such as "ttyp4")|
|\s||The name of the shell executable (such as "bash")|
|\t||Time in 24-hour format (such as "23:01:01")|
|\T||Time in 12-hour format (such as "11:01:01")|
|\@||Time in 12-hour format with am/pm|
|\v||Version of bash (such as 2.04)|
|\V||Bash version, including patchlevel|
|\w||Current working directory (such as "/home/drobbins")|
|\W||The "basename" of the current working directory (such as "drobbins")|
|\!||Current command's position in the history buffer|
|\#||Command number (this will count up at each prompt, as long as you type something)|
|\$||If you are not root, inserts a "$"; if you are root, you get a "#"|
|\xxx||Inserts an ASCII character based on three-digit number xxx (replace unused digits with zeros, such as "\007")|
|\[||This sequence should appear before a sequence of characters that don't move the cursor (like color escape sequences). This allows bash to calculate word wrapping correctly.|
|\]||This sequence should appear after a sequence of non-printing characters.|
So, there you have all of bash's special backslashed escape sequences. Play around with them for a bit to get a feel for how they work. After you've done a little testing, it's time to add some color.
Adding color is quite easy; the first step is to design a prompt without color. Then, all we need to do is add special escape sequences that'll be recognized by the terminal (rather than bash) and cause it to display certain parts of the text in color. Standard Linux terminals and X terminals allow you to set the foreground (text) color and the background color, and also enable "bold" characters if so desired. We get eight colors to choose from.
Colors are selected by adding special sequences to PS1 -- basically sandwiching numeric values between a "\e[" (escape open-bracket) and an "m". If we specify more than one numeric code, we separate each code with a semicolon. Here's an example color code:
When we specify a zero as a numeric code, it tells the terminal to reset foreground, background, and boldness settings to their default values. You'll want to use this code at the end of your prompt, so that the text that you type in is not colorized. Now, let's take a look at the color codes. Check out this screenshot:
To use this chart, find the color you'd like to use, and find the corresponding foreground (30-37) and background (40-47) numbers. For example, if you like green on a normal black background, the numbers are 32 and 40. Then, take your prompt definition and add the appropriate color codes. This:
export PS1="\w> "
export PS1="\e[32;40m\w> "
So far, so good, but it's not perfect yet. After bash prints the working directory, we need to set the color back to normal with a "\e[0m" sequence:
export PS1="\e[32;40m\w> \e[0m"
This definition will give you a nice, green prompt, but we still need to add a few finishing touches. We don't need to include the background color setting of 40, since that sets the background to black which is the default color anyway. Also, the green color is quite dim; we can fix this by adding a "1" color code, which enables brighter, bold text. In addition to this change, we need to surround all non-printing characters with special bash escape sequences, "\[" and "\]". These sequences will tell bash that the enclosed characters don't take up any space on the line, which will allow word-wrapping to continue to work properly. Without them, you'll end up with a nice-looking prompt that will mess up the screen if you happen to type in a command that approaches the extreme right of the terminal. Here's our final prompt:
export PS1="\[\e[32;1m\]\w> \[\e[0m\]"
Don't be afraid to use several colors in the same prompt, like so:
export PS1="\[\e[36;1m\]\u@\[\e[32;1m\]\H> \[\e[0m\]"
I've shown you how to add information and color to your prompt, but you can do even more. It's possible to add special codes to your prompt that will cause the title bar of your X terminal (such as rxvt or aterm) to be dynamically updated. All you need to do is add the following sequence to your PS1 prompt:
Simply replace the substring "titlebar" with the text that you'd like to have appear in your xterm's title bar, and you're all set! You don't need to use static text; you can also insert bash escape sequences into your titlebar. Check out this example, which places the username, hostname, and current working directory in the titlebar, as well as defining a short, bright green prompt:
export PS1="\[\e]2;\u@\H \w\a\e[32;1m\]>\[\e[0m\] "
This is the particular prompt that I'm using in the colortable screenshot, above. I love this prompt, because it puts all the information in the title bar rather than in the terminal where it limits how much can fit on a line. By the way, make sure you surround your titlebar sequence with "\[" and "\]", since as far as the terminal is concerned, this sequence is non-printing. The problem with putting lots of information in the title bar is that you will not be able to see info if you are using a non-graphical terminal, such as the system console. To fix this, you may want to add something like this to your .bashrc:
if [ "$TERM" = "linux" ] then #we're on the system console or maybe telnetting in export PS1="\[\e[32;1m\]\u@\H > \[\e[0m\]" else #we're not on the console, assume an xterm export PS1="\[\e]2;\u@\H \w\a\e[32;1m\]>\[\e[0m\] " fi
This bash conditional statement will dynamically set your prompt based on your current terminal settings. For consistency, you'll want to configure your ~/.bash_profile so that it sources your ~/.bashrc on startup. Make sure the following line is in your ~/.bash_profile:
This way, you'll get the same prompt setting whether you start a login or non-login shell.
Well, there you have it. Now, have some fun and whip up some nifty colorized prompts!
- rxvt is a great little xterm that happens to have a good amount of documentation related to escape sequences tucked in the "doc" directory included in the source tarball.
- aterm is another terminal program, based on rxvt. It supports several nice visual features, like transparency and tinting.
- bashish is a theme engine for all different kinds of terminals. | <urn:uuid:e461c253-c6f8-482a-b143-7b26abfb3343> | CC-MAIN-2017-09 | http://www.ibm.com/developerworks/library/l-tip-prompt/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00568-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.894035 | 2,256 | 2.515625 | 3 |
The popular Apple iPad is certainly no supercomputer by today’s standards, but if you could transport one back in time 25 years, it would be one of the fastest machines on the planet. According to a recent article by John Markoff of the New York Times, the performance of today’s iPad 2 would rival that of a 1985-era Cray 2 machine.
University of Tennessee’s Jack Dongarra has been running Linpack on the iPad and found the little dual-core wonder tablet does quite well with the linear algebra benchmark. That’s a little bit surprising considering it uses an ARM-based chip and is certainly not optimized for floating-point performance.
To date, the researchers have run the test on only one of the iPad microprocessor’s two processing cores. When they finish their project, though, Dr. Dongarra estimates that the iPad 2 will have a Linpack benchmark of between 1.5 and 1.65 gigaflops (billions of floating-point, or mathematical, operations per second). That would have insured that the iPad 2 could have stayed on the list of the world’s fastest supercomputers through 1994.
The Cray-2 was a custom-built vector supercomputing that delivered 1.9 peak gigaflops in the 8-processor version, and unlike the iPad, certainly was built for heavy-duty number crunching. But the Cray-2 took up a small room and needed to be immersed in a special refrigerant called Flourinert to keep the machine cool.
Apparently Dongarra and his team have been tossing around the idea of building an iPad cluster using a couple of stacks of the tablets. Unfortunately, the iPad is basically a closed system, so each one would have to be hacked into in order to hook them together via the built-in wireless communication.
Plus at $400 a pop, that’s going to make for a pretty expensive cluster, price-performance wise. But Dongarra thinks the low power consumption (and the fact that it runs on batteries) is the compelling part. Of course, the whole idea of a mobile supercomputer is pretty interesting too, especially if the cluster software can take into account nodes which can come and go as they please. | <urn:uuid:e7f6697f-e78d-442c-9c4b-abc8d1eba691> | CC-MAIN-2017-09 | https://www.hpcwire.com/2011/05/10/apples_ipad_gets_a_linpack_workout/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00444-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95677 | 471 | 2.890625 | 3 |
An American and a Frenchman have won the 2012 Nobel Prize for Physics for their work on quantum optics, which could one day lead to faster computer processors, better telecommunications or more accurate timepieces.
Prize winners David Wineland of the University of Colorado Boulder and Serge Haroche of the CollA"ge de France and the Ecole Normale SupA(c)rieure in Paris worked independently, approaching the field from different directions. Wineland developed ways to isolate individual ions (electrically charged atoms), measuring their quantum state with photons, while Serge Haroche traps individual photons and measures them with atoms.
While the behavior of electrical currents or beams of light can be described by the laws of classical physics, those rules no longer apply at the scale of individual atoms, electrons or photons. At that level a new set of rules, the laws of quantum mechanics, come into play -- and they're increasingly important as the IT industry moves towards chips so densely packed that only a few atoms or electrons are used to store each bit, and fiber optic communications systems so fast that only a few photons make up each pulse of light.
Researchers had faced a number of challenges in studying quantum phenomena, including difficulty in isolating individual particles of matter or light, and in observing or measuring their quantum behavior without influencing or destroying it. Wineland and Haroche were the first to solve those problems, taking the first steps towards the creation of a new generation of computers.
When, or even if, such computers will appear on the market is not a question Haroche is ready to answer.
"I don't know," he said in a telephone conversation with reporters at the Royal Swedish Academy of Science, which awards the prizes.
"We do fundamental research. We are studying trying to understand the way things behave at the quantum level," he said, just 20 minutes after learning he had won.
"With a lot of research, the final application is not the one that was foreseen in the first place. It was the case with lasers. It was the case with nuclear magnetic resonance. The manipulation of quantum systems belongs to the same kind of physics," he said.
Lasers, at first limited to applications such as range finding or the creation of holograms, are now found in CD players and long-distance telecommunication, while nuclear magnetic resonance, initially conceived as a way to identify individual atoms based on their magnetic properties, is the basis for the magnetic resonance imaging or MRI scanners used to diagnose many diseases.
"There are a lot of things to learn at the fundamental level and there are so many potential applications it is very hard to see which ones will happen. Maybe some kind of computer, some kind of useful quantum simulations, or some kind of communications," Haroche said of his own research.
As for Wineland's work, that could give rise to clocks 100 times more accurate than today's atomic clocks.
"They can measure gravitational shift with very high precision. The clock could be used to measure anomalies in the gravitational field for geology or earthquake detection," Haroche said.
Wineland and Haroche, both born in 1944, will share the prize of 8 million Swedish krona.
Peter Sayer covers open source software, European intellectual property legislation and general technology breaking news for IDG News Service. Send comments and news tips to Peter at email@example.com. | <urn:uuid:07d5eb57-2c5a-4e70-95ac-aa8c28c9be1f> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2492145/high-performance-computing/quantum-computing-pioneers-from-u-s---france-win-2012-nobel-prize.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00564-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959849 | 691 | 2.6875 | 3 |
Data centers are the backbone of today’s digital economy, and yet far too many are vulnerable to advanced attacks. Despite $13 billion spent every year to secure them,* attackers are compromising data centers in a flood of cyber assaults.
In many breaches, compromised data centers are used in attacks against new targets. These attacks are disrupting business, stealing priceless customer data and intellectual property — and damaging reputations in their wake. Read this paper to learn why so many attacks against data centers are successful and how organizations can better protect them.
You will learn:
In many cases, compromised targets unwittingly become attackers themselves. At the bidding of cybercriminals who can control comprised systems remotely, the data centers are commandeered as potent weapons in attacks against fresh targets.
In one example of this trend, the U.S. Department of Labor became an involuntary attacker in May 2013 when attackers compromised one of its Web pages. Site visitors received malware that exploited a zero-day vulnerability in Internet Explorer 8 to install a variant of the Poison Ivy remote-access Trojan (RAT).
To read more, complete the form to the right.
*IDC. “Worldwide Datacenter Security 2012–2016 Forecast: Protecting the Heart of the Enterprise 3rd Platform.” November 2012.
Download the Report | <urn:uuid:eeaaa04f-2d95-4963-8288-8be5e4b1cc90> | CC-MAIN-2017-09 | https://www2.fireeye.com/wp-data-center-security.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00564-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944572 | 269 | 2.515625 | 3 |
Ah, Mondays. Wonder why your employees are all yawning and glassy-eyed today? It may be easy to blame it on the hot weather or overindulgences over the weekend, but researchers are pointing fingers at another potential culprit that is increasingly interfering with our ability to get a good night's sleep: Smartphones and tablet PCs.
Don't blame those pesky Angry Birds. Rather, it's the light generated by mobile devices' screens that researchers are particularly interested in. Here's why it's a problem.
Right before bedtime, bright lights are the enemy, inhibiting the production of melatonin, which helps you fall (and stay) asleep. Smart phones and tablets have the advantage of being small, but because they are so bright and so close to your face, the overall impact is similar to being in a fairly well-lighted room. (The light from your phone alone is equal to about half that of "ordinary room light.") Making things even worse, short-wavelength light in the blue portion of the spectrum is the most disruptive to sleep patterns, and that's the type of light that is typically over-produced by modern LCD screens. Poor sleep, it should go without saying, is a factor in all kinds of problems, ranging from low productivity at work to increased traffic accidents to diseases like diabetes and cancer.
This isn't an entirely new phenomenon. Experts have been telling us about this problem for years, only in other ways. Everything from watching TV in bed to having bright lights on in the house at night has been demonized as a sleep disruptor. To combat the problem, one source has suggested using red light bulbsA for late-night bathroom runs in order to minimize sleep disruptions--as a way to keep exposure to that blue wavelength light to a minimum. The problems are compounded over time, so the more you use your phone or tablet at night, the worse it gets. Since more and more consumers are using these devices in bed--to send emails, watch movies, read books, play games, and more--the problem is becoming nearly universal. It's especially problematic with younger users, who habitually use their portable devices in bed every night.
The fix for all of this is easy, but for most of us it's tough love to the extreme: Experts say screen time should end a full two hours before bedtime. (Even having your cell phone in the bedroom next to you is on the no-no list: With your handset in arm's reach, you're more prone to wake up and check your messages in the middle of the night.)
If you can't cut the habit--and who can blame you?--one thing that might help is at least relocating where you spend after-hours time on your phone or tablet. Light exposure isn't the only problem associated with using gadgets at night; another risk factor is the "learned association" between the bedroom and these devices. In other words, your mind begins to associate the bedroom and the bed with studying, work, or gaming, rather than training it to recognize these as places where you sleep. The more distance and time you can put between your phone or tablet and going to bed, the better.
This story, "Tablets in Bed are Damaging Sleep and Killing Productivity" was originally published by PCWorld. | <urn:uuid:1f43bf0e-2a49-4cb5-bfa1-7fc2d668ac39> | CC-MAIN-2017-09 | http://www.cio.com/article/2384426/it-strategy/tablets-in-bed-are-damaging-sleep-and-killing-productivity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00440-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958599 | 679 | 2.515625 | 3 |
NWS poised for flood season with sharper tools for mapping risk
- By Patrick Marshall
- Mar 06, 2014
Record snowfalls this winter mean government weather forecasters will be looking for signs of heavier melt – and potential flooding – as spring nears.
The National Weather Service has a number of tools to help gauge the risk of seasonal flooding, including topographical maps that show which areas in a region are prone to be under water when a river overflows its banks.
But while these older maps help chart areas at risk, they leave out an important piece of information: How deep will the water be in a given location?
"The reason the water depth is so important is that allows people to really understand what their flood risk is," said Laurie Hogan, program manager of the National Weather Service's Hydrologic Services Division.
If people know their property will be under eight feet of water, different measures are called for than if they're expecting one foot of water. "Depending on the lead time they have, they can move things to the second floor," she said. "If they see that they'll have roadways that will be blocked they can decide to go stay with someone else."
To address the need for that kind of information, the National Weather Service in September 2007 began producing inundation maps that emergency planners and residents alike can use to view the extent and depth of flooding in selected areas. To date, 28 locations have been mapped.
The key technologies that have enabled inundation mapping are high-resolution LiDAR data and improved hydrologic modeling. LiDAR (light detection and ranging) devices measure elevation of the landscape – a key factor in flood prediction – by emitting laser pulses and measuring the time required for the pulse to reflect back to the device.
"The underlying data keeps getting better and better," Hogan said. "We get the water depth information by using the water surface estimation from the hydraulic model and subtracting the elevation data."
In addition to displaying terrain, the maps also indicate where streets, buildings, airports and other infrastructure are likely to be impacted by floodwaters. "It does pan and zoom," Hogan said. "It does geolocation. You click on the map and the nearest address pops up along with information about what you clicked on."
"We are on our second version of our interactive flood map viewer," Hogan said. "In 2012 we updated it to use Google Maps. Prior to that, we didn't really have zoom-in features." What's more, according to Hogan, moving from a custom viewer to Google Maps cut the cost of creating maps in half.
"There's really a lot of information that people can take away from these maps," Hogan said. "And emergency managers in river communities can use these maps to help determine which neighborhoods need to be evacuated and which streets need to be closed."
Patrick Marshall is a freelance technology writer for GCN. | <urn:uuid:b05851e6-a417-40e9-bf42-3353379fc82f> | CC-MAIN-2017-09 | https://gcn.com/articles/2014/03/06/national-weather-service-maps.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00616-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958793 | 595 | 2.8125 | 3 |
We can protect our voice network with simple Auxiliary VLAN but sometimes to be more secure Auxiliary VLAN are not enough. In this case we can use Security appliances such firewalls or VPN termination devices or both.
Firewall maybe seems like very clean and simple mechanism to protect RTP protocols transmitted voice packets but there’s a problem. Protecting voice networks with a firewall is not so simple because we are not sure what UDP port will be used by the RTP voice packets flow.
If we look at some Cisco network architecture and Cisco device environment, a UDP port for an RTP stream is an random port selected from the pool of 16,384 to 32,767. We surely don’t want to open all those ports on firewall just to be sure that the VoIP will function well. So many open ports may be seen from other side like a bunch of security holes.
Firewalls from Cisco are smarter than that, PIX and Cisco ASA – Adaptive Security Appliance have the possibility to dynamically inspect calls packets and read the setup protocol traffic like H.323 to learn the used UDP ports for every RTP flows. The firewall will then open those UDP ports for the duration of the RTP connection and then close those ports again.
Let’s be clearer about this. In the image here you can see that the first thing that happens is the Phone’s usage of SCCP protocol to initiate a call to the PSTN.
SCCP uses TCP port 2000 will make the communication between the Cisco IP Phone and the UCM server possible. After the communication is established UCM is reading the numbers dialed by user’s phone and using this numbers he is deciding that the call needs to be sent out the H.323 gateway.
In the next step, using TCP port 1720, UCM initiates a call setup with the H.323 gateway. The firewall will allow the communication between these devices using H.323 protocol. The firewall will also analyze H.323 data and determine which UDP ports are in use for the voice path.
The next step is important because the firewall will need to allow bidirectional RTP communication. There will be need for two random UDP ports, every one for one direction of communication. Let’s take an example in which UDP ports 20,548 and 28,642 are selected. Firewall analyzes the H.323 protocol and based on this information dynamically learns about UDP ports that are used. The firewall then permits the RTP flow in every direction until the call is over.
Using security appliance to protect voice network
There’s not only the ability to deny or permit some ports. The firewall may have some additional methods of protection of voice network. Let’s say that someone is attacking our VoIP network with DoS attack. In that case firewall has the ability to see that there are too many messages of a certain type sent in a short period of time. In some other case a firewall can be configured to use of policies, in that configuration a firewall can determine if necessary which phone to block.
Biggest part of Cisco IP phones can authenticate and encrypt call packets on the network. Some other phones from Cisco and other vendors don’t have this capability. If you however want to implement encryption and authentication in you voice network is not impossible. The solution is to use IPsec-protected VPN tunnel to send all call traffic across the network.
Cisco Unified Communications Manager but also a whole bunch of other devices from different vendors have the capability to be used for VPN termination | <urn:uuid:08188123-566f-45cc-8c7c-c08869e0e902> | CC-MAIN-2017-09 | https://howdoesinternetwork.com/2012/protecting-voip-appliances | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00616-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.923865 | 737 | 2.859375 | 3 |
Manufacturing Breakthrough Blog
Friday April 29, 2016
In my last post we completed our series of posts on variation by discussing the basics of a Queuing System, two very important “laws of variability” and finally, a ten point summary of primary points, principles, and conclusions relative to understanding variability. These ten included:
- Variability always degrades performance.
- Variability buffering is a fact of manufacturing life.
- Flexible buffers are more effective than fixed buffers.
- Material is conserved.
- Releases are always less than capacity in the long run.
- Variability early in a line is more disruptive than variability late in a line.
- Cycle time increases nonlinearity in utilization and efficiency.
- Process batch sizes affect capacity.
- Cycle times increase proportionally with transfer batch size.
- Matching can be an important source of delay in assembly systems.
In today’s post, I will present the first of three posts on a subject I refer to as Paths of Variation along with a real case study to demonstrate the teachings of paths of variation.
Paths of Variation
We’re all familiar with the positive effects of implementing Cellular Manufacturing (CM) in our workplaces such as the improved flow through the process, overall cycle time reduction, throughput gains as well as other benefits. But there is one other positive effect that can result from implementing CM that isn’t discussed much. This potential positive impact is what CM can do to reduce variation. But before we reveal how this works, let’s first discuss the concept of paths of variation.
When multiple machines performing the same function are used to produce identical products, there are potentially multiple paths that parts can take from beginning to end as we progress through the entire process. There are, therefore, potential multiple paths of variation. These multiple paths of variation can significantly increase the overall variability of the process.
Even with focused reductions in variation, real improvement might not be achieved because of the number of paths of variation that exist within a process. Paths of variation, in this context, are simply the number of potential opportunities for variation to occur within a process because of potential multiple machines processing the parts. And the paths of variation of a process are increased by the number of individual process steps and/or the complexity of the steps (i.e. number of sub-processes within a process).
The answer to reducing the effects of paths of variation should lie in the process and product design stage of manufacturing processes. That is, processes should/must be designed with reduced complexity and products should/must be designed that are more robust. The payback for reducing the number of paths of variation is an overall reduction in the amount of process variation and ultimately more consistent and robust products. Let’s look at a real case study.
Many years ago I had the opportunity to consult for a French pinion manufacturer located in Southern France. For those of you who are not familiar with pinions (i.e. pignons in French), a pinion is a round gear used in several applications: usually the smaller gear in a gear drive train. Here is a drawing of what a pinion might look like and as you might suspect, pinions require a complicated process to fabricate.
When our team arrived at this company, based on our initial observations, it was very clear that this plant was being run according to a mass production mindset. I say this because there were many very large containers of various sized pinions stacked everywhere.
The actual process for making one particular size and shape pinion was a series of integrated steps from beginning to end as depicted in the figure below. The company received metal blanks from an outside supplier which were fabricated in the general shape of the final product. The blanks were then passed through a series of turning, drilling, hobbing, etc. process steps to finally achieve the finished product.
The process for this particular pinion was highly automated with two basic process paths, one on each side of this piece of equipment. There was an automated gating operation that directed each pinion to the next available process step as it traversed the entire process which consisted of fourteen (14) steps. It was not unusual for a pinion to start its path on one side of the machine, move to the other side and then move back again which meant that the pinion being produced was free to move from side to side in random fashion. Because of this configuration, the number of possible combinations of individual process steps, or paths of variation, used to make these pinions was very high.
In my next post, we’ll introduce you to the multiple paths of variation that these pinions could traverse and discuss how these paths can significantly increase the overall variability of processes. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond.
Until next time. | <urn:uuid:2d1a24e7-9439-4774-ab3e-b112a91bce93> | CC-MAIN-2017-09 | http://manufacturing.ecisolutions.com/blog/posts/2016/april/paths-of-variation-part-1.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00140-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958675 | 1,016 | 2.65625 | 3 |
What's the Role of SDN and UC?
Software-Defined Networks promise a more modern approach to allocating network resources for real-time traffic
Maybe the only technology that has received more recent hype than WebRTC is Software Defined Networking. SDN represents the latest evolution of an approach that first reared its head in the late 1990s, then referred to as "policy-based networking." The basic idea was that applications could talk to the network to request appropriate services such as QoS/CoS and bandwidth.
SDN evolves that concept even further, breaking apart the pieces of the network responsible for making forwarding decisions from the underlying hardware responsible for implementing those decisions. Proponents tout a variety of benefits such as simplified and cheaper network infrastructure, more flexibility in routing and forwarding algorithms, and potentially the ability to provide application interfaces into the control plane (for a great tutorial on SDN, and the OpenFlow approach to controlling the data forwarding plane, see http://www.openflow.org/wk/index.php/OpenFlow_Tutorial)
So what does SDN have to do with unified communications? Quite simply, SDN offers the potential to implement dynamic provisioning of networking resources (again, QoS/CoS, and bandwidth) to support the needs of real-time applications.
Configuring IP networks to support voice and video used to be fairly easy--you assign voice and video endpoints into separate VLANs and use 802.1p at Layer 2 and DiffServ code points and corresponding queuing algorithms to prioritize rich media at Layer 3. If you want to get really granular, you can even prioritize the media stream (usually RTP) higher than the signaling layer (e.g., SIP).
But that straightforward approach is under fire as organizations increasingly move away from "desktop phones for all" to a mixed environment that includes softphones running on PCs and laptops, and mobile UC clients running on smartphones and tablets (almost half of companies are extending UC services to tablets, according to Nemertes' latest benchmark). Once IPT and video are simply just another application on a computing device, it becomes much more difficult to prioritize them across the network.
Couple the trend toward software-based UC with the increasing reliance on WiFi and cellular networks (about 17% of employees on average are wireless-only), and delivering network services to support voice/video gets even more difficult. And there's another new wrinkle, as vendors turn on encryption technologies such as SRTP by default. If the LAN and WAN can't recognize the type of traffic (because it's encrypted), they can't prioritize it!
So this brings us back to SDN. As Kevin Kieller noted back in April, UC vendors are exploring SDN to potentially solve some of these expanding challenges. One such example is Microsoft's partnership with Aruba to enable Aruba WiFi access points to detect Lync traffic and adjust QoS parameters accordingly, prioritizing real-time traffic generated from Lync ahead of other types of traffic traversing the access point.
Another example was covered in April as well by Phil Edholm, in which an HP OpenFlow SDN controller could dynamically allocate network resources to support Lync. Yet another example comes from startup Ubicity, which is developing SDN capabilities to support voice and video. More recently, Sonus and Juniper announced a partnership to integrate SIP services with SDN. Juniper had previously partnered with Polycom to enable Polycom platforms to request network resources to support video conferencing sessions.
We've come a long way from the policy-based networking framework of 10+ years ago, but the problems we are trying to solve remain the same--ensuring that the underlying transport network can dynamically react to, and support the needs of real-time applications. IT leaders should spend some time with their trusted advisors and their infrastructure vendors to learn how SDN can potentially solve ever-increasing bandwidth and QoS provisioning challenges. And they should ensure that those responsible for network management (or managing outsourced service providers) are working hand-in-hand with those responsible for UC deployments to ensure a successful rollout of the overall service. It probably wouldn't hurt to take a look at RFC3198 again too. | <urn:uuid:bf1856f3-4699-4e1b-8e61-23f4c96aeabf> | CC-MAIN-2017-09 | http://www.nojitter.com/post/240158913/whats-the-role-of-sdn-and-uc?google_editors_picks=true | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00492-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94374 | 871 | 2.75 | 3 |
Linux Tools for Your Desktop
Though Microsoft Windows still leads all other desktops by a comfortable margin, the market share for Linux continues to go no place but up. In the wake of recent decisions by system vendors as different as Dell and IBM to offer desktop-ready PCs with Linux installed and widespread adoptions of Linux on the desktop and elsewhere, it should come as no surprise that Linux is finding a home in more PCs than ever before. That said, software tools are particularly important in the Linux environment because they provide necessary or desirable functionality, but also because so many of them help to simplify what’s involved in installing, using and maintaining a Linux desktop.
Because there is such a plethora of tools, I’ll cover the major categories that pertain specifically to desktop implementations of Linux, or that help to humanize, simplify or spice up the Linux interface. While this pertains to both server and desktop versions of Linux alike, it’s so important to general Linux functionality that it simply must be included in any reasonable survey of this topic.
If you take a look at any serious Linux distribution, such as Red Hat, SUSE, Debian, Mandrake and countless others, you’ll find added tools that fall into many categories. But while the tools chosen for inclusion can (and often do) vary from distribution to distribution, the categories themselves tend to stay the same.
- Package management: This refers to a class of tools used to manage the code base upon which a Linux distribution is built, and create customized groupings of and configurations for both source and binary files. Red Hat uses its own Red Hat Package Manager (RPM); there’s a Gnome RPM (GnoRPM) that offers a GUI interface; and other distributions offer their own unique tools as well. But if you’re going to keep a Linux installation patched and up-to-date, you’ll have to work with some kind of package or update management software.
- Graphical interface: The command line is the native interface for Linux, and although it’s still used a lot, many other alternatives are available. Some of the most popular graphical alternatives (most or all of which are bundled in numerous distributions) include GNOME (the Gnu Network Object Model Environment, a complete desktop environment that’s Windows-like enough to look familiar, but different enough to require some learning to master), KDE (the K desktop environment) and numerous implementations of the venerable X Window System (aka X or XWindows), such as XFree86.
- Linux shell: A shell represents the complete collection of commands and syntax that a Linux system recognizes. Shells are valuable because they define scripting languages that are easy to turn into compact, simple, text-driven “programs” (scripts, really) for all kinds of tasks. There are many types of shells that Linux can (and does) use. The most commonly included and used shells include the Bourne (or “Bourne again”) shell (aka bash), the C shell, the Korn shell and the increasingly popular TC shell (tcsh, which combines all of the functions of the C shell with emacs style command-line editing).
- Device management: Adding and configuring hardware devices to a Linux installation requires obtaining correct device drivers and any necessary supporting code, and adding and configuring the device drivers so Linux can use them to talk to the device. Most distributions include their own command-line tools, but the X Windows-based LinuxConf tool offers a nice and somewhat friendlier alternative.
- Web browser: As on any other platform, access to the Web on Linux is not an option—it’s a dire necessity. Mozilla (a distant relative of Netscape), Amaya and Opera are among the best known browsers with strong Linux implementations.
- File manager: Atop the standard Linux command line and basic file system, there are any number of file manager programs you can run, both GUI- and command-based. In fact, one is already included with KDE, if you use that desktop environment—Konqueror, which also provides Web browser capability. Other popular alternatives include Midnight Commander, Filerunner and Xplore.
- Windows compatibility software: Samba, a software add-on for Linux is by far the leading tool for non-Windows platforms to add client support for access to Windows servers (and NetBIOS-based file collections on just about any kind of host).
- Linux security tools and information: These are incredibly numerous and varied, and they cover lots of different kinds of functionality. Basic must-haves include the Simple Log Watcher (aka SWATCH), Trinux, a firewall (normally Iptables or Ipchains) and antivirus software.
- Productivity suite: Though Microsoft Office may not be readily available for Linux, other alternatives are. Among many suite options you’ll find that KOffice, Sun’s StarOffice (a commercial product well worth its purchase price of about $80) and Open Office are among the best known and most used.
Although many other categories of desktop tools come to mind for Linux, the preceding categories comprise the bulk of Linux desktop or related tools likely to be included in a Linux distribution or to appear on a large number of Linux users’ desktops. Because this area is so involved in open-source software and downloads, it’s really just the tip of a truly enormous collection of potential members. Whatever you seek, finding options won’t be a problem, but sometimes making good choices will be. Use newsgroups, mailing lists and Linux trade press coverage to help you separate wheat from chaff.
Ed Tittel is president of LANwrights Inc. and is technology editor for Certification Magazine. You can e-mail Ed with your questions and comments at firstname.lastname@example.org. | <urn:uuid:c1bbf534-df2f-4c30-8e6b-dc2ace95133d> | CC-MAIN-2017-09 | http://certmag.com/linux-tools-for-your-desktop/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00436-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.914678 | 1,216 | 2.640625 | 3 |
In my last article (http://news.tgc.com/msgget.jsp?mid=339671), I demonstrated how 1 + 1 can be equal to 1 by using a simple program of two instructions: I1 and I2 executing on two processors at the same time and simultaneous completing in about 1 clock tick. I concluded by posing the question: “can we do better?” In order to explain this, I need to shed some light on algorithmic complexity and run time fabric.
For our purpose, algorithmic complexity is best viewed with Amdahl's law. So, it is helpful to prove it. Don't be alarmed, it only requires middle-school algebra! To compare the run time of a parallel execution with a serial execution, we need to have a measure of speedup. The standard approach is to define speedup as the ratio of Ts, serial time of execution, over time of execution of the same algorithm in parallel. So, if we have N processors and p, and s are the parts of the algorithm than can execute in parallel and serial respectively, using s + p = 1
Speedup = (s + p) Ts / ((s + p/N) Ts), applying middle school algebra
Speedup = 1 / (s + (1 – s) / N)
So this is the law that tells us that as s gets smaller and smaller, my speedup approaches N. Incidentally, for stage I or embarrassingly parallel problems, s approaches or is equal to zero. Of course, the key assumption is the amount of computation is exactly the same in serial or parallel execution on N processors. Plugging into the formula the numbers from MPT' claims – 102 speedup on 127 processors – we can conclude that the serial part of the problem they are solving must be equal to 0.001945.
Run time fabric consists of main shared memory, L1 and L2 cache, disk I/O, network bandwidth, speed of light, etc. And modeling a parallel or even serial execution very quickly gets very complex. There are a number of papers discussing performance engineering using queuing theory, but some simpler models exist for basic algebraic operations (see BLAST, NAS, etc.).
Putting one and one together now, we need to ask yet another key question: do I really solve the same problem parallel and serially? The answer – not always! Algorithmic complexity changes from parallel to serial. This is well understood, for example, on the so called branch and bound algorithms, such as techniques for integer programming and global optimizations. See references D. Parkinson and Phillips et al. demonstrating arbitrary speedup gains. In these type of algorithms, you can find exactly the optimal number of processors that get you maximum speedup.
In terms of the run time fabric, consider a problem requiring very large data or large enough not to fit in the serial memory cache, where in parallel execution everything fits nicely in local memory. The penalty of memory paging and hit misses can be sufficiently big to demonstrate a speedup of greater than N. In fact, some of our benchmarks of Monte Carlo simulation of large portfolios, at ASPEED Software, have demonstrated speedup greater than N.
I conclude this article by saying that the underlying value of this conversation is not just an academic exercise but has a real business value: predictability in completing mission critical operations and knowing how exactly I can scale my operations in support of business growth.
Until next time: keep the GRIDS crunching.
HPC Product and Marketing specialist
D. Parkinson, Parallel efficiency can be greater than unity, Parallel Computing 3 (1989)
A. T. Phillips and J.B. Rosen, Anomalies acceleration in parallel multiple cost row linear programming, ORSA Journal on Computing (1989(.
Labro Dimitriou is a High Performance Computing product and marketing specialist. He has been in the field of distributed computing, applied mathematics, and operations research for over 20 years, and has developed commercial software for trading, engineering, and geosciences. Labro has spent the last five years designing and implementing HPC and BPM business solutions. Currently employed by ASPEED Software. | <urn:uuid:5d2b0670-4e60-476f-993d-a160945b4ac4> | CC-MAIN-2017-09 | https://www.hpcwire.com/2005/02/24/amdahl_s_law_1_1_1-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00612-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.919959 | 860 | 3.1875 | 3 |
In this post I'll go through the basics for "stochastic use case testing". It is sometimes called also "Markov chaining" or "Markov testing". There are variations of this technique, of course, but my aim here is to cover the common ground and share some thoughts on where methods like this are best applied.
Here (on the right) is a simple "stochastic use case model" or "probabilistic user model". This model represents the basic activity flow for a user who logs into a web site to purchase something. [Click on the image to zoom it if it appears too small.] The three gray circles are "states", and the one with bold circumference is the "start state". The yellow rectangles denote actions that the user can take, and the probabilities on them denote their estimated or measured probabilities. All the actions that can be taken from a single state have their probabilities sum up to 100%, so that for example "Successful login" and "Failed login" together add to 80% + 20% = 100%.
Because a system that has "states" and "transitions" with probabilities on them is called a "Markov chain" by mathematicians, the test generation paradigm I'm covering here is sometimes also known as "Markov chaining" or "Markov testing". However, "probabilistic use case testing" or something like that makes perhaps more sense to a non-mathematician.
The big idea now is that every path that begins from the start state ("Logged out") and ends at the same start state can be converted into a test. One example of such a path is:
- Failed login
Another, slightly more interesting path, is:
- Successful login
- Add items
Because the actions carry probabilities, it is possible to calculate the compound probabilities for these two paths. The first path has compound probability 20%, and the second 80% x 80% x 10% x 98% = 6.272%.
Now, when you have a probabilistic user model like the one here, you can use a tool or a hand-written program to calculate an interesting set of paths, i.e. an interesting set of test cases, from the model, and I'll now cover some ways to generate test sets next.
One method to generate paths from a user model is to random sample paths according to the probabilities on the actions. For example, with this model every 5th randomly generated path would be "Failed login", and 4/5 out of the generated paths would start with "Successful login". Most stochastic test generation tools implement this simple option.
Another option is to generate the most common paths through the model. For example, I wrote a simple computer program that enumerated all those paths through our example model that have compound probability of 10-5 or higher (that is, 0.001% or more). There are 1750 such paths, and when I sum up their probabilities I get to 96.3%. This means that executing those 1750 paths as (preferably automated) tests, I have covered 96.3% of the typical user behavior.
The most probable path is still "Failed login", because it is the only path that gets very quickly back to the start state (remember that I'm only considering paths that begin and end at "Logged out"). The next path is "Successful login", "Checkout", "Logout" and the next one is "Succesful login", "Add items", "Checkout", "Logout". The least probable path in this set, with probability 10-4.983, is "Successful login", do "Add items" 37 times, and then "Logout".
Executing the common paths as tests can be useful, because this approach focuses on the "normal" use of the system. However, sometimes you might want to explicitly test for the uncommon case.
I implemented another option to my simple program to enumerate also those paths that (1) have at most 10 action steps and that (2) have compound probability below 0.001% (as these cases are already covered in the Common Path test set). There are 2107 such additional uncommon path tests, and together with the initial tests these take the coverage figure from 96.3% to 96.5%. This shows that from a risk management perspective, these 2107 additional tests have less value than the original test set. However, they could be very good in exposing difficult-to-spot errors in the system. For example, the most improbable test in the whole test set is "Successful login", do "Checkout / Continue shopping" four times and then "Logout" (without adding anything to the shopping basket throughout the test).
It is also possible to ignore the probabilities and generate a small test set for transition cover, i.e. generate just enough paths so that every action in the model is covered at least one. A simple transition cover test set for my example model could be:
- Failed login
- Successful login
- Successful login
- Add items
- Remove items
- Continue shopping
These ten steps are enough to cover all the eight actions in the model. Two actions have to be executed twice (Successful login and Checkout) because there are two distinct states where one can logout and because "Done shopping" has two actions behind it, and you can only get to "Done shopping" by "Checkout".
Comparing the Methods
Transition cover, random sampling, common cases and uncommon cases are all valuable test generation methods. The unique benefit of transition cover is verifying rapidly that all the actions have been implemented. Random sampling is unbiased and its unique benefit is that you can execute tests based on random sampling as long as you can, and the longer you can, the better your chances of spotting additional errors. The unique benefit of enumerating the common cases is that you cover system's typical use in a robust manner and thus significantly reduce the risk of failure in day-to-day use. And testing the uncommon cases is good because that can easily spot implementation inconsistencies.
Where it Fits
Stochastic use case testing can be applied well in those contexts where, on some level, the system under test can be depicted as a relatively simple finite state machine. Of course, an actual web shop is not a simple finite state machine, but rather a system consisting of presentation and business logic layers, databases, web frameworks, interpreters and so on. However, the key is that if it possible with meaningful effort to realize the paths that you can find from an abstract finite state model as actual tests (for example, during test execution, fill in the login parameters and put in some examples of items that can be added to the shopping basket), then that can be enough.
How to Implement Stochastic Use Case Testing
There exist relatively simple commercial tools that can help you in enumerating the paths through a finite-state model, the available test generation options and mathematical methods employed varying. If you can program, it is also simple to write such an enumerator by yourself. The enumerator I used to get the numerical results in this post takes a total of 155 lines of C++ code and less than an hour to write. | <urn:uuid:062c4d91-2d1b-4173-93c4-9f35c7329f30> | CC-MAIN-2017-09 | https://www.conformiq.com/2012/08/understanding-stochastic-use-case-testing-and-markov-chaining/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00312-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942513 | 1,502 | 2.75 | 3 |
Despite all the news coverage about successful cyberattacks, developers are still writing code full of security vulnerabilities.
Of course, nobody is perfect. We all make mistakes, and as software projects get more and more complex, it can be easy to mix potential problems.
But that doesn't explain why so much software is full of the most basic errors.
According to a report released this month by Veracode, 61 percent of all internally-developed applications failed a basic test of compliance with the OWASP Top 10 list on their first pass. And commercially developed software did even worse, with a 75 percent failure rate.
These are basic, well-known problems, like SQL injections and cross-site scripting.
Or take hard-coded passwords. Who still does that? According to Veracode, 35 percent of all applications they tested.
Eliminating these basic vulnerabilities would go a long way towards making software more secure. And the earlier on in the process they're caught, the easier they are to fix.
Today's integrated development environments can already catch common syntax errrors, like missing semicolons, said Ron Arden, COO at security vendor Fasoo.
"If there's a function you're using, it shows the parameters," he added. "But it won't tell you if there's a SQL injection or cross-site scripting or something stupid like that."
Wouldn't it be nice if software developers had something like a spellchecker, but instead of catching typos and simple grammar mistakes, it caught basic security problems?
Developers would be able to fix them immediately, and also learn to write more secure code in the process.
The traditional approach is to test software for vulnerabilities after it has been written. But today the testing is moving to earlier in the development process, to when commits are made, or even earlier, while the developer is actually writing the code.
"We really need to be implementing this type of application security in our software development stage," said Doug Cahill, analyst at research firm Enterprise Strategy Group. "There are some organizations that are integrating these types of security best practices into their software methodology, but not enough. One part is just lack of awareness, and one part is the need for automation. If we can hit the easy button, more of us will do it."
According to Veracode, there are some signs that development is moving in that direction.
Although 40 percent of applications only get scanned once, 9 percent of applications get scanned much more frequently, suggesting that those companies are running some kind of continuous testing program, with some applications in developer sandboxes getting tested as much as six times a day.
According to the report, the number of vulnerabilities per applications can improve dramatically with this approach. Flaw density goes down 46 percent when application scanning is added to the development process. When e-learning features are added, there's a six-fold improvement in flaw density reduction.
Enterprises aren't just looking at their internal development processes, but are also starting to ask their software vendors to improve their security.
"It's happening more and more because the supply chain is responsible for security incidents and breaches," said Cahill. "And one of the things they ask is, do you do software scanning and security analysis?"
The tools are also getting better, he said, with a number of vendors offering software scanning and orchestration tools so that companies can integrate the security checks earlier into the development process.
"But it should be contextual," he added. "If you just get 'you made a coding mistake,' that's not especially helpful. But if you get an advisory that because of the way you structured your code, it could be exploited by a SQL injection, and here are some ways to adjust your code ... we can improve our security posture."
It's important to avoid alert fatigue, he added, or having a "Clippy" of security -- annoying and unhelpful.
"These types of alerts need to be prescriptive, consultative, and actionable," said Cahill.
Citigal, an application security vendor, first looked at doing a security "spellchecker" back in 1999, but creating another "Clippy" was a serious concern.
"Clippy was universally hated," said John Steven, Internal CTO at Cigital. "It was hated because it was in your face, you were typing and it distracted you, and its advice was always daft. It was telling you the wrong thing all the time."
It would have been too easy to do the same for application security.
For example, he said, take cross-site scripting.
"Every line of code you're writing could potentially be vulnerable to cross-side scripting," he said.
But developers are now more willing to consider tools that help them write code, he said. Plus, the new early-state software security tools are not being used to find all possible vulnerabilities, but are used as training tools, instead.
Say, for example, a developer is considering linking to an insecure open source library. Cigital offers a tool that can catch that problem right away, suggest a better library, and even automatically convert existing code.
"We want to find the choke points to help them make a good decision," he said, "And cut out whole swathes of later opportunities to create problems."
In fact, the security education aspect is one of the top benefits of early-stage application testing.
According to a Sans report released earlier this year, the lack of application security skills is the top challenge when it comes to improving software security, ahead of funding and management buy-in.
Built-in security education
Checkmarx is one of several vendors looking to address that very issue.
"We take source code, and do the analysis on 10 or 100 lines of code, allowing the developers to see the vulnerabilities at a very early stage," said Amit Ashbel, director of product marketing at Checkmarx. "And then we take them to a brief, five to 10 minute session on how to fix the code. We show them how to hack the code, and they can try it in real time. Then they understand what that vulnerability could have exposed to their code to."
As a result, the learning is delivered exactly when the developers need it most, he said.
"They don't have to move away from their desk, they don't have to spend too much time sitting in a room and listening to lectures," he said. "I think this is the way to do secure coding education."
On the issue of whether the product is more helpful or more annoying, he pointed to its page on Gartner PeerInsights, where the reviews were very positive.
"What I like most is the level of adoption usage and impact within our engineering department the product has made," wrote one CISO at a large manufacturing company.
"The feedback from our developers had been very positive, which has aided our adoption of code scanning as a routine activity," wrote a technical specialist at a large financial firm.
Early-stage testing can miss big problems
The security testing that takes place while code is being written is a type of static analysis.
With static analysis, the tools simply look at the code the way it is written, while dynamic analysis actually follows the flow of logic. That means that static analysis can miss many problems.
"The tools can only protect against errors that have certain patterns that it knows about," said Mike Milner, co-founder and CTO at Immunio, which offers run-time application security detection.
Meanwhile, as more companies move to agile development, the dynamic analysis tools are identifying problems quicker and quicker.
"You write and deploy several times a day," he said, "So it becomes a development tool."
When companies first began moving from traditional waterfall development to agile, security was often sidelined, said Mike Kail, chief innovation officer at Cybric, which offers a service that scans code whenever a developer commits it to GitHub or BitBucket, using Veracode or other commercial and open source vulnerability scanners.
"Currently, companies are testing for SQL injections or cross-site scripting once a week, or maybe once a quarter," he said. "We need to make this a continuous process because the hackers are attacking companies continuously."
It makes sense to have security testing tools as part of the software development process -- but not during the writing phase, said Brian Doll, vice president of marketing at SourceClear, which makes tools that look for open source security vulnerabilities during the built process, instead.
"Logistically, it's time consuming and painful to interrupt your authoring process to get feedback from those tests," he said.
And it's almost impossible to get good results during these early stages, he added.
"Until you build the software product and understand the relationship between components, you're just guessing," he said.
It's the difference between static and dynamic analysis, he explained.
For example, if a developer calls a particular open source library, the exact version of the library that's used in isn't locked in until the build takes place, once the package managers resolve all the dependencies -- and dependencies of dependencies.
"We can get much better insights and can tell you exactly where in your software you're linking to vulnerable methods or vulnerable libraries," he said. "And you're not going to do a build every time you type a word. It just wouldn't be efficient."
But a tool that checks for problems during the writing stage doesn't have to catch all potential vulnerabilties, said Cahill, the ESG analyst.
"This is just the first step," he said. "There are no silver bullets in security, but you can at least reduce the mistakes and the attack surface area along the way."
The optimal approach is to use each kind of security tool at the point where it works best, he said.
"Static and dynamic analysis should happen at the appropriate stage of the software life cycle," he said. "There should be scanning done in each environment. If you layer, you can dramatically reduce the security attack surface in production."
This story, "Why don't developers have a 'spellchecker' for security'?" was originally published by CSO. | <urn:uuid:a0bf80da-5ac8-46f9-b361-f75ff5141df4> | CC-MAIN-2017-09 | http://www.itnews.com/article/3136264/software/why-dont-developers-have-a-spellchecker-for-security.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00488-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.969977 | 2,117 | 2.53125 | 3 |
Networking 101: Understanding Multicast Routing
Multicast has become a buzzword more than once in history. IP multicast means that one sender is sending data to multiple recipients, but only sending a single copy. It's very useful for streaming media, so let's explore how this works.
Much like broadcast, there are special addresses designated for multicast data. The difference is that some of these can be routed, and used on the Internet. The multicast space reserved by IANA is 18.104.22.168/4. We do not say, "Class D" anymore. The addresses spanned by 224/4 are 22.214.171.124 through 126.96.36.199.
Multicast is more efficient than broadcast, because broadcast packets have to be received by everyone on the local link. Each OS takes an interrupt, and passes the packet on for inspection, which normally involves some data copies. In multicast, the network card doesn't listen to these multicast packets unless it has been told to do so.
By default, with multicast-enabled network cards, the NIC will listen to only 188.8.131.52 at boot. This is the address assigned to "all systems on this subnet." Yes, that's very similar to broadcast. In fact, many people say that broadcast is a special case of multicast.
Multicast is selective in who it sends to, simply by nature of how network cards can ignore uninteresting things. This is how the local link works, but how about the Internet? If someone wants to stream the birth of a celebrity's baby in Africa via multicast, we don't want every router on the Internet consume the bandwidth required to deliver it to each computer. Aside from the NIC being able to make decisions locally, there are multicast routing mechanisms that serve to "prune" certain subnets. If nobody wants to see it within your network, there's no reason to let it travel into the network.
People who are interested in seeing such a spectacle will run a special program, which in turn tells the NIC to join a multicast group. The NIC uses the Internet Group Management Protocol (IGMP) to alert local multicast routers that it'd like to join a specific group. This only works one-way, though. If someone wants to send and receive multicast, the IP layer will need to be fancier. For sending, IP will map an IP address to an Ethernet address, and tell the NIC driver so that it can configure the card with another MAC address.
IGMP itself is very simple. It's very similar to ICMP, because it uses the IP layer, only with a different protocol number. The header consists of only four things: a version; a type; a checksum; and the group, i.e. multicast address, to be joined. When that packet is sent, a multicast router now knows that at least one host is interested in receiving packets for a specific multicast address. Now that router must somehow do multicast routing with other routers to get the data.
Here it gets interesting. There are a few multicast routing mechanisms that we'll talk about today: DVMRP and PIM. Pausing for just a moment, it's important to realize that even today multicast isn't widely supported. Back in the day there was a mbone, or multicast backbone, that people connected to via IPIP (IP encapsulated in IP) tunnels. The Unix application mrouted understood DVMRP and IGMP when the Internet routers did not. Most people who wish to use multicast nowadays still find themselves asking their ISPs why certain protocols aren't working.
DVMRP is the Distance Vector Multicast Routing Protocol. It uses IGMP sub-code 13, and does what's called Dense Flooding. Dense flooding is very effective, but very inefficient. A router will flood to everyone in the beginning, and then prune back uninterested subnets. PIM, or Protocol-Independent Multicast, is independent of unicast routing mechanisms. In dense mode operation, it is very much like DVMRP. PIM dense mode is essentially the same as DVMRP, except PIM uses IP protocol 103. PIM implements joins, prunes, and grafts. A graft is the opposite of a prune: it grafts a branch back onto the tree.
Dense mode multicast routing, regardless of protocol, works by sending data to everyone and then pruning back parts of the tree. A tree, as always, is used to represent a set of routers. When a bunch of branches get pruned, routers can eventually eliminate bigger and bigger chunks. If no branches are interested within an AS, the border router can send a prune message to the upstream router, hence it stops wasting bandwidth.
Sparse mode multicast routing utilizes a Rendezvous Point, or RP. All join messages are sent to the RP's unicast address, so this clearly requires a bit of prior knowledge. PIM sparse mode also operates a bit more intelligently. It uses shared trees, but if a router notices that it's closer to the source it can send a join upstream to ensure traffic starts flowing through the best point. The newly designated router then becomes the source distribution point for the network.
This is all fine and dandy, except for one little detail: the Internet isn't a vertical tree. Enterprises want to connect redundantly, so naturally giant loops will form. Reverse Path Forwarding (RPF) is used in multicast too, to make sure that loops don't happen. The basic idea is verify that the interface a multicast packet arrives on is the shortest unicast path back to the sender. If not, then it probably didn't come from the sender, so the packet is dropped. If the RPF check is successful, the packet is duplicated and sent to everyone in the group.
Quite a few other multicast routing protocols exist in the wild. OSPF has MOSPF, but that can really only be used within one domain. BGP has BGMP, but it's never been seen outside of captivity. Most are not really used, but people are always coming up with new and interesting ideas to make widespread use of multicast a reality. It's such a shame to watch the same video streamed separately from a Web site, when it would save tremendous bandwidth to use multicast and let the router duplicate when it needs to.
In a Nutshell
- Multicast uses special addresses to send data from a single sender to multiple recipients, even though the recipient only sends one copy.
- Hosts or routers can join multicast groups via IGMP to tell other routers that they are interested.
- Dense protocols flood and prune, sparse modes will utilize an RP to avoid flooding unnecessarily. | <urn:uuid:fed0c059-5d2f-40c0-9bbd-9d6806cef0e8> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/3623181/Networking-101--Understanding-Multicast-Routing.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00608-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940729 | 1,419 | 3.890625 | 4 |
Many of us are lucky to live in a place where, for most people, the first two levels of Maslow’s Hierarchy of Needs are met. These consist of physiological needs, such as food and water, and safety needs, e.g., physical, economic, and health. These needs are usually met through personal resources or help from the government and charitable organizations. To meet the next two levels of need (friendship/belonging and self-esteem), however, more than half of the 60 million disabled people in the United States turn, at least partially, to the world of gaming.
“That’s where video gaming for people with disabilities becomes very important because you can free yourself from your disability through a video game, you can make friends, you can present yourself in a way that has less stigma around your disability,” says Mark Barlet, founder of AbleGamers, an eight year-old organization that is devoted to supporting gamers with disabilities.
AbleGamers works to educate content producers as well as hardware and software developers on the development of accessible games, and to educate and support caregivers about the benefits of gaming for those with disabilities. They also host events such as their Accessibility Arcades to show disabled gamers and caregivers equipment and technology that already exists to help them enjoy video games like anyone else.
I recently spoke with Barlet about the role gaming plays in the lives of many disabled people and the current state of accessible gaming. I learned a number of interesting things, such as:
- There are roughly 33 and a half million disabled gamers in the United States, mostly (two-thirds) male, with more of them over the age of 50 than under 18, which mimics the general population of gamers. Game developers, having been educated about the size of this market, have become very open to making their games accessible. Developers are “competing in a very big marketplace and are really looking to draw in as many people as they can. So, if they can add in 5 or 6 accessibility features to help make it more appealing to the mass audience then they’re going to,” said Barlet.
- There currently isn't any legislation in the United States requiring video games to be accessible - and, surprisingly, disabled gamers prefer it that way. Barlet argues that gaming helps to push the envelope in computer development and that government legislation would only hurt the development of games and, hence, computer technology. Instead, he feels that the number of disabled gamers is large enough to provide incentives for developers to ensure their games are accessible. “I think that’s a far better path than legislation,” said Barlet.
- In terms of gaming platform (PC vs. console vs mobile), while there is apparently some disagreement in the disabled community, Barlet said, “I am firm believer that the most flexible platform for a gamer with disabilities is the PC. There are a truckload of devices and peripherals out there... that you can plug into USB and they’re fairly inexpensive.” Consoles, on the other hand, while offering the most cutting edge games, are worse for disabled gamers because they’re closed systems. “Adaptive controllers and custom controllers… have to go through incredible hoops to try to get the Xbox to talk to the peripheral, because they’ve locked it down through proprietary processes.” Mobile gaming, is still fairly new, though Barlet notes that “independent developers that key on the mobile gaming space are much more creative and much more responsive to accessible features.”
- In order to support gaming among the disabled, caregivers need to be educated about it. “The caregiver is governor of what a person with disabilities can do,” said Barlet. “The understanding has to be there in the caregiver, because they’re the ones that have to support the cause.” To that end, AbleGamers has recently begun producing simple videos like “How to set up an Xbox.” | <urn:uuid:14010e34-9710-41fa-bff8-cc3e673d737f> | CC-MAIN-2017-09 | http://www.itworld.com/article/2719734/mobile/for-some-with-disabilities--gaming-fills-a-basic-need.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00132-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.97223 | 833 | 2.78125 | 3 |
Fortunately, two browser-side mechanisms have been introduced to mitigate or limit the impact from externally-included resources: Subresource Integrity and sandboxing.
Subresource Integrity (SRI) allows developers to pin down certain versions of scripts or stylesheets which are included from external domains. The goal is to ensure that the external script has not been modified inadvertently, or intentionally replaced or altered by an attacker. A cryptographic hash is used to ensure that the version included in the website has not been tampered with:
<script src="https://external.cdn.ch/example-framework.js" integrity="sha512-z4PhNX7vuL3xVChQ1m2AB9Yg5AULVxXcg/SpIdNs6c5H0NE8XYX ysP+DGNKHfuwvY7kxvUdBeoGlODJ6+SfaPg==" crossorigin="anonymous"></script>
As soon as the content of the external resource "example-framework.js" changes, the hash set on the consuming website no longer matches the script's hash and the script will not be run by the browser. Note that Cross-Origin Resource Sharing (CORS) must be enabled on the Content Delivery Network's side to be able to use the integrity attribute, or else the script will not load at all, even with a correct hash.
The SHA-512 hash to pin down external resources can be calculated as follows:
cat example-framework.js | openssl dgst -sha512 -binary | openssl base64 -A
While only the script and link tags support the integrity attribute at the time of writing, other tags will probably follow, enabling developers to also ensure the integrity of images or other content embedded from external sources.
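For a stylesheet, the attribute is applied to the link tag in the same way. A minimal sketch — the URL is fictitious and the hash is a placeholder that would have to be computed over the actual file:

<link rel="stylesheet" href="https://external.cdn.ch/example-styles.css" integrity="sha384-BASE64_HASH_OF_THE_STYLESHEET" crossorigin="anonymous">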
The crossorigin attribute defines whether or not the browser should send credentials when fetching the external resource. It defaults to anonymous if an invalid value is given. The values are defined as follows:
|anonymous (or empty string)||Anonymous||Requests for the element will have their mode set to “cors” and their credentials mode set to “same-origin”. (i.e. CORS request and credentials for the external domain are not sent with the request)|
|use-credentials||Use Credentials||Requests for the element will have their mode set to “cors” and their credentials mode set to “include”. (i.e. CORS request and credentials for the external domain are sent with the request)|
|[crossorigin attribute not present]||No CORS||Requests for the element will have their mode set to “no-cors”. As a consequence, data cannot be read cross-origin.|
SRI only makes sense if the website is secured with TLS. If an attacker is able to intercept and alter the website he can inject his own scripts and strip the hashes or generate them on-the-fly.
Since hashes like MD5 or SHA-1 are prone to collision attacks, they should be avoided for pinning external resources. SHA-384 or SHA-512 are considered secure.
Note that only Firefox, Chrome and Opera support SRI at the time of writing. This means that users of Internet Explorer or Edge do not benefit from the SRI protection. Therefore it is still preferable to host the resources on your own domain, than to depend on SRI.
While SRI can be used to ensure the integrity of external resources, it does not feature a report mechanism that tells the including party if a script has been modified and therefore does no longer run in the user’s browsers. A report-uri mechanism, as known from HSTS, HPKP or CSP, would need to be custom-developed.
From a security standpoint, it is a bad idea to include external resources directly into the website, because this is giving them full access to the website’s Document Object Model (DOM). A resource could thus access and alter the whole content of the website. When confined in an iframe, access to the parent’s DOM is restricted. It is however still possible to open pop-up windows from within the iframe, display dialogs or perform other unwanted actions. With the HTML5 feature sandboxing, it is possible to further restrict the behavior of “iframed” content.
<iframe src="//external.cdn.ch/example_resource.html" id="sandboxed_frame" sandbox="" height="500" width="700"></iframe> <iframe src="//external.cdn.ch/example_resource.html" id="sandboxed_frame" sandbox height="500" width="700"></iframe>
|allow-popups||Enables iframe content to open pop-up windows.|
|allow-pointer-lock||Enables iframe content to use the Pointer Lock API. The mouse movements can be tracked and the mouse pointer can be hidden.|
|allow-popups-to-escape-sandbox||Enables iframe content to create pop-up windows without sandbox restrictions. (Why would you want to do that?!?)|
|allow-modals||Enables iframe content to display modal windows such as alert();, prompt(); or through showModal().|
|allow-top-navigation||Enables iframe content to load content to the top-level browsing context, e.g. via href or target=”_top”.|
Sandboxed iframe content resides in a custom origin, isolating it from the originating domain. It can therefore not perform requests or read data from its original origin. This directive allows the iframe content to run in the originating domain, thus removing those restrictions.
|allow-forms||Enables iframe content to submit forms.|
|allow-presentation||Enables iframe content to start presentations.|
Our next Web Application Security courses
- 14./15.03.2017, Web Application Security Basic, Bern
- 16./17.03.2017, Web Application Security Advanced, Bern
- 04./05.04.2107, Web Application Security Basic, Berlin
- 06./07.04.2017, Web Application Security Advanced, Berlin
- 26./27.09.2017, Web Application Security Basic, Zurich
- 28./29.09.2017, Web Application Security Advanced, Zurich | <urn:uuid:16adc447-6ac7-455d-bc71-4f0ccf668d90> | CC-MAIN-2017-09 | https://blog.compass-security.com/tag/javascript/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00132-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.847943 | 1,367 | 2.765625 | 3 |
A VPN (virtual private network) offers network connection possibility over an extensive physical distance (remoteness). But you need to know that it can work over both on private networks and public networks (Internet).
VPN in simple words make possible for clients or whole LAN-s on other side of the internet to connect into main LAN pesmises and have the “technical impression” that they are localy connected to this site. This includes gaining the local IP address from local DHCP pool, possibility to use all the LAN resources that are defined by the administrator etc.
Features shows that the VPN is a type of WAN (wide area network). The purpose of its use in network is: file sharing, video conferencing and facilitating from other similar network services. Though, such services are available already in other alternative mechanisms and technologies, but use of a VPN is made for getting more efficiency of available remote resources, sharing data and better communicating. Most of all, this technology can be implemented in relatively a low cost.
Technologies working at the back of VPN
A number of network protocols like: PPTP, L2TP, IPsec and SOCKS can be employed in the mechanism of a VPN. The presence of such protocols in the VPN is necessary to carry out the processes of authentication (verification of users) and encryption (to hide sensitive data from the other online public).
With the help of tunneling, a VPN can use existing hardware infrastructure on the Internet or intranet. Three different modes of VPN are possible for the following purposes: remote client’s connections over internet, LAN-to-LAN internetworks and for restricted access inside an intranet.
VPNs for distant Connectivity over the Internet
To increase the mobility of any organization’s workers and to be connected to the company’s networks, a VPN deployment can be a good solution. This device can be employed to handle such circumstances of distance and in order to get protected access to those offices of organization, connected over the Internet. But according to the client/server environment, a client (remote user) is required to log on first to his/her ISP (internet service provider) in this process of getting access to company network. Then company’s VPN server connection is required.
Once connection is established between the remote client and server, the communication process will begin soon after this with the internal company systems over the Internet in the same way as a local host can do. Cisco VPN client can be employed for decidedly protected connectivity. And with these devices, encrypted tunneling is established for the remote employees.
VPN Extended network
It is the quality of a virtual private network to link together two networks for remote access. Moreover, in such cases of operation, the united remote network (combination of two remote networks) can further be linked to another company’s network. This kind of networking structure (extended network) is possible with a VPN server plus this VPN server’s connection.
After reviewing the payback that is attached with a VPN networking, downloading Cisco VPN client can be the desire of any organization. But certain system requirements should be fulfilled in order to get benefits from such networks like:
- Windows 98 or newer MS operating system, Mac OS X, Linux OS or Solaris Unix OS
- VPN (Cisco) client must be compatible with the VPN (Cisco) servers: VPN 3000 series concentrator 3.0 software or later and IOS Software 12.2(8)T or later etc
- PPTP, L2TP/IPsec, L2TP or any other VPN tunneling protocol | <urn:uuid:aa80deb2-cdfa-4553-8014-95e2c5210ea0> | CC-MAIN-2017-09 | https://howdoesinternetwork.com/2011/vpn | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00484-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.90522 | 744 | 3.203125 | 3 |
Exploiting Big Data For Creating New Products And Innovation
With the development of technology resources, organizations will increasingly depend on exploiting data for creating new products, services, innovation as well as changes in business processes. Big data is currently one of the most talked about issues in business and organizations. Big data is a collection of unstructured and multi-structured data that come from traditional and digital sources inside and outside companies representing sources for ongoing discovery and analysis.
Unstructured data refers to information that is not organized such as metadata, Twitter tweets, and other social media posts.
Multi-structured data refers to different types of organised data and data which can be generated from interactions between people and machines, for example, web applications, social networks and web log. Three significant characteristics of big data; high-volume, velocity, and variety; are important in obtaining, extracting, manipulating and interpreting it within an organization.
Many scholars believe that analysing, interpreting, and managing big data will help companies to understand their business environments, to respond to changes, and to create new products and services in order to keep their business fresh and new. Increasingly, long term commercial success is based on an ability to manage change and using data effectively. Many commentators stress the importance of leadership in the big data era. Leaders’ vision, their ability to communicate that vision, their creative thinking, their ability to spot a great opportunity, and the way that they support and deal with employees, customers, stockholders, and other stakeholders can surpass or mobilize change process. Moreover, effective leaders are able to bring together a group of competent professionals and data scientists to work with large quantities of data and information. Data scientists not only should possess statistical, analytical and creative IT skills but also they should be familiar with operations, processes and products within organizations. However, people with these competencies are difficult to find and in great demand.
Technologies such as the Hadoop framework, cloud computing and data visualization tools help skilled professionals to cope with the technical challenges. In fact, the increased volumes of data require major improvements in database technologies. Nowadays, open source platforms such as Hadoop have been designed to load, store and query massive data sets on a large, flexible grid servers, as well as perform advanced analytics. Analysing big data will not be valuable if professionals cannot understand and interpret the results of the analysis. Hence, organizations need great decision makers to examine all the assumptions made and retract the analysis to increase productivity and innovation at all organizational levels. Many scholars believe that automated administrative decision making can save time and improve efficiency. However, there is possibility of some error in computer systems that can lead to errors in interpretation of results. Hence, users should try to examine and verify the results produced by the computer.
By Mojgan Afshari | <urn:uuid:f023e73b-4d0c-4ed8-bc70-eeb211ca0e0e> | CC-MAIN-2017-09 | https://cloudtweaks.com/2014/05/big-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00008-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.950498 | 572 | 2.734375 | 3 |
14 Amazing DARPA Technologies On TapGo inside the labs of the Defense Advanced Research Projects Agency for a look at some of the most intriguing technologies they're developing in computing, electronics, communications, and more.
1 of 14
The Defense Advanced Research Projects Agency has announced its latest technological advance, a combination of "mind and machine" to help soldiers on the battlefield respond more quickly to deadly threats. It's the latest in a series of technical breakthroughs from the Penatgon's research arm, some of which can be applied in areas other than national defense.
A few months ago, DARPA revealed it had successfully tested a camera (pictured above) with 1.4 gigapixel resolution. To achieve that resolution--the equivalent of 1,400 megapixels--the camera builds a panoramic image from more than 100 micro cameras.
DARPA's newest development, called the Cognitive Technology Threat Warning System (CT2WS), includes a 120-megapixel camera, radar, computers with cognitive visual-processing algorithms, and brainwave scanners worn by soldiers. It aims to help scouts assess battlefield input using a portable visual threat-detection device.
DARPA is trying to solve a common problem with CT2WS, said program manager Gill Pratt, in a statement on the initiative's progress: "How can you reliably detect potential threats and targets of interest without making it a resource drain?"
CT2WS is based on the concept that humans have a natural ability to "detect the unusual," according to DARPA. The soldier wears an electroencephalogram (EEG) cap that monitors brain signals and records when a threat is detected. Users are shown images, about 10 per second, and their brain signals indicate which images are significant.
Launched in 2008, the program is being transitioned to the Army's Night Vision Lab. Field tests and demonstrations resulted in a low rate of false alarms--five out of 2,304 "target events" per hour--and the technology identified 91% of threats. Common alternatives such as binoculars and cameras have a much higher error rate.
DARPA draws a lot of attention for far-out research projects like the world's fastest robot and a plan to capture and recycle space junk, but electronics, communications, and IT are core to its mission. That's been true since the agency created ARPANET, the predecessor to the Internet, in 1969.
The research agency has dozens of projects underway in various research offices. Its Information Innovation Office focuses on IT research and development, its Microsystems Technology Office on electronics and photonics, and its Strategic Technology Office on communications, networks, and electronic warfare.
Dig into our InformationWeek Government visual guide to 14 of DARPA's most innovative technology projects. Image credit: DARPA
1 of 14 | <urn:uuid:511d0d43-f1c4-4fa1-a2eb-6da649ec5f89> | CC-MAIN-2017-09 | http://www.darkreading.com/risk-management/14-amazing-darpa-technologies-on-tap/d/d-id/1106551?cid=sbx_byte_related_mostpopular_byte_news_google_announces_android_42_new_nexus_ta&itc=sbx_byte_related_mostpopular_byte_news_google_announces_android_42_new_nexus_ta | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00304-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.930689 | 573 | 2.9375 | 3 |
Wrapping a firewall around the perimeter is no longer sufficient to meet the needs of modern networks. Technologies such as IPS need to be pushed into the network, not just at the edge, but throughout the entire infrastructure.
During Network World's recent Security Technology Tour, we received a lot of questions about intrusion-prevention systems. The problem is that there is little agreement on what an IPS really is.
The security experts on the tour agreed on one thing: An IPS must be inline. That is, packets have to move through the IPS to prevent intrusions. While the idea of resetting connections and changing firewalls is a good interim step, enterprise-class intrusion prevention will require that the IPS handle packets, dropping them when something is wrong.
A second assumption about IPS is that it is a "permissive" technology. In other words, an IPS will drop a packet if it has a reason to, but the default behavior is to pass traffic along. In contrast, a firewall is a "prohibitive" technology: It lets a packet through only if it has a reason to.
Obviously, firewalls are also intrusion-prevention devices. Some experts say that all IPS vendors are talking about is what firewalls should be doing. But the difference in the orientation of these technologies suggests that they are not the same.
More importantly, because they are different, you can use a firewall or an IPS or both at any point in your network. At the perimeter, it's reasonable to expect that a firewall also will have an IPS built in. But at the core of the network, inline IPS might be built into switches and routers.
How do you convince purse holders to buy into IPS? There's no easy answer to that. The "fear factor" approach can be useful. Make the decision-makers afraid. Point out the new legislation regarding liability. And perhaps you'll see the money start to flow. But that's not a long-term solution.
For some, an IPS can be justified on the "nuisance factor" instead. By blocking the thousands of Code Red and MS-SQL Slammer attacks coming into the network every hour, the load on the firewall is lightened, the Internet connection is faster and the Web server logs are easier to analyze.
For others, IPS justification will have to be part of a larger program of security, justified on the basis of traditional ROI analysis.
What's clear from tour attendees is that wrapping a firewall around the perimeter is no longer sufficient to meet the needs of modern networks. Technologies such as IPS need to be pushed into the network, not just at the edge, but throughout the entire infrastructure. | <urn:uuid:a5e3d2de-a181-454c-bdba-c4b3b0896012> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2335574/network-security/what-is-an-ips--anyway-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00476-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.955178 | 546 | 2.625 | 3 |
Following NASAs LeadBy Larry Dignan | Posted 2005-08-31 Email Print
The UPS Brown Voyager? It could happen if private companies take over low-earth space travel and free up NASA to shoot for the stars.
Meyers hopes to use NASA's suppliers to build what would be a second-generation space shuttle to ferry space-station components, cargo and people into space. The company's shuttle—pending $5 billion to $7 billion in funding—would be built on NASA's work on the Delta Clipper, an experimental vehicle shelved in July 1997, and the X-33, which was scrapped in March 2001.
"The problem with the shuttle was that it was a 30-year prototype," Meyers says. "There were never second and third generations that improved on the first shuttle."
Clint Wallington, a professor at the Rochester Institute of Technology, says it remains to be seen whether there's a payoff from transferring NASA's knowledge of astronaut training, supplier contracts and operating launch facilities.
"You can transfer it all [to the private sector], but at what cost?" Wallington says. "You can give away the launch facilities and everything, and it could still take $250 million to do a launch. If you get seven passengers paying $20 million each, you're still [more than] $100 million short."
Challenge:Making a business case
Solution:Push tourism. Find multiple
According to Beichman, engineering a commercial manned space flight is nothing compared to making a profit. "It's not obvious where the money is going to be made," he says.
Will Whitehorn, president of Virgin Galactic, British entrepreneur Richard Branson's effort to launch space tourism, says business cases will emerge. In July, Virgin Galactic and Scaled Composites, a Mojave, Calif.-based aerospace design company, announced a joint venture to build a spaceship that could take two pilots and seven passengers into a sub-orbital flight (minimum of 62 miles above Earth). The service, initially targeted for 2008, will cost $200,000 a person for a nearly three-hour trip after three days of training.
Space travel is expensive, but NASA has figured out some ways to get customized systems at a lower cost. See if the approach makes sense for you in: Custom Software on the Cheap"In five to six years, we hope to get that down to $100,000," Whitehorn says.
Whitehorn envisions a trip where passengers can see Earth at the edge of space, float around the cabin and see some stars along the way.
Virgin's partner, Scaled Composites founder Burt Rutan, built SpaceShipOne, which in October 2004 reached a height of 69.6 miles to collect the $10 million Ansari X Prize, an award for the first spacecraft to reach 328,000 feet twice within 14 days.
Next up: Develop SpaceShipTwo, which will carry people and payload for Virgin Galactic, and then build SpaceShipThree, which will be an orbiting craft, according to Whitehorn. | <urn:uuid:9bdb867f-cfe2-4d6d-8351-7c9dd2c1961b> | CC-MAIN-2017-09 | http://www.baselinemag.com/c/a/Business-Intelligence/Should-NASA-Open-LowOrbit-Space-to-Business/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00176-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949305 | 631 | 2.71875 | 3 |
It's been a rough year for the IT industry. The death of Apple co-founder Steve Jobs in October grabbed international headlines. But we also lost other major figures from almost every area of technology, including Xerox PARC founder Jacob E. Goldman, who died in late December. Here's one last look at some of the people who made a big difference.
Dennis M. Ritchie
Godfather of Unix, Father of C
September 1941 - October 2011
Arguably the most influential programmer of the past 50 years, Dennis Ritchie helped create the Unix operating system, designed the C programming language. And he promoted both, starting in the 1970s.
Ritchie worked closely with Unix designer Ken Thompson starting in 1969, integrating work by other members of the Bell Labs research group. And in 1971, when Thompson wanted to make Unix more portable, Ritchie radically expanded a simple language Thompson had created, called B, into the much more powerful C. Just how influential has all that work been? Unix spawned lookalikes such as Linux and Apple's OS X, which run devices ranging from smartphones to supercomputers. And by one account, eight of today's top 10 programming languages are direct descendants of C. (Read more about Unix in Computerworld's 40th anniversary of Unix package.)
While Ritchie was serious about Unix and its potential for creating a computing community, he knew better than to take himself too seriously. He quipped that Unix was simple, "but you have to be a genius to understand the simplicity." And Ritchie wasn't above an office prank. In 1989, he and Bell Labs cohort Rob Pike, with the help of magicians Penn and Teller, played an elaborate practical joke on their Nobel prize-winning boss, Arno Penzias. (You can see the prank in this video clip.)
A Knack for Encryption
July 1932 - June 2011
Among the Bell Labs researchers who worked on Unix with Thompson and Ritchie was Bob Morris, who developed Unix's password system, math library, text-processing applications and crypt function.
Morris joined the Bell Labs research group in 1960 to work on compiler design, but by 1970 he was interested in encryption. He found a World War II U.S. Army encryption machine, the M-209, in a Lower Manhattan junk shop. Morris, Ritchie and University of California researcher Jim Reeds developed a way to break the machine's encryption system and planned to publish a paper on the subject in 1978.
Before they did, they sent a copy to the National Security Agency, the U.S. government's code-breaking arm -- and soon received a visit from a "retired gentleman from Virginia," according to Ritchie. The "gentleman" didn't threaten them, but he suggested discretion because the encryption techniques were still being used by some countries. The researchers decided not to publish the paper -- and eight years later, Morris left to join the NSA, where he led the agency's National Computer Security Center until 1994.
Ironically, it was Morris's son, Robert Tappan Morris, who brought him into the national spotlight: In 1988, the younger Morris, then 22, released an early computer worm that brought much of the Internet to its knees. The senior Morris said at the time that he hadn't paid much attention to his son's interest in programming: "I had a feeling this kind of thing would come to an end the day he found out about girls," he said. "Girls are more of a challenge."
Intelligence, Artificial and Otherwise
September 1927 - October 2011
He may be best known as the creator of the Lisp programming language and as the "father of artificial intelligence" (he coined the term in 1956), but John McCarthy's influence in IT reached far beyond would-be thinking machines. For example, in 1957 McCarthy started the first project to implement time-sharing on a computer, and that initiative sparked more elaborate time-sharing projects including Multics, which in turn led to the development of Unix.
In an early 1970s presentation, McCarthy suggested that people would one day buy and sell goods online, which led researcher Whitfield Diffie to develop public-key cryptography for authenticating e-commerce documents. In 1982, McCarthy even proposed a "space elevator" that was eventually considered by a government lab as an alternative to rockets.
But McCarthy's first love was A.I., which turned out to be harder than he first thought. In the 1960s, McCarthy predicted that, with Pentagon funding, working A.I. would be achieved within a decade. It wasn't -- as McCarthy later joked, real A.I. would require "1.8 Einsteins and one-tenth of the resources of the Manhattan Project." | <urn:uuid:27a5efe6-efe1-46fc-ba9a-bd707330b8e5> | CC-MAIN-2017-09 | http://www.itworld.com/article/2733552/it-management/tech-luminaries-we-lost-in-2011.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00528-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966381 | 975 | 2.703125 | 3 |
Researchers working with the Oak Ridge Leadership Computing Facility’s (OLCF’s) Titan supercomputer are making the most of the system’s hybrid design, which pairs traditional CPUs with highly-parallel GPUs. As the fastest computer in the United States, Titan has a max theoretical speed of 27 petaflops, but as a recent OLCF article points out, “Titan is only as powerful as the applications that use its unique architecture to solve some of our greatest scientific challenges.”
“The real measure of a system like Titan is how it handles working scientific applications and critical scientific problems,” reports Buddy Bland, project director at the OLCF. “The purpose of Titan’s incredible power is to advance science, and the system has already shown its abilities on a range of important applications.”
To help users extract the highest benefit from the multi-million dollar machine, the OLCF understands that having the right application set is essential. With this in mind, they launched the Center for Accelerated Application Readiness (CAAR) two years before Titan’s scheduled arrival. The program brings together staff from Cray and NVIDIA, who had helped to build Titan, with application developers and OLCF’s scientific computing experts. The team identified a set of six target applications based on their benefit to science and predicted performance and/or fidelity gains on Titan. Because these application codes were originally developed for CPU-based machines, the team was tasked with modifying the codes so they could fully exploit the power of GPUs at scale. The goal was to enable productive work to begin as soon as Titan passed acceptance.
Winnowed to five codes, the final application set includes the following:
- S3D – provides insight into chemistry-turbulence interaction of combustion processes.
- LSMS – used to compute the magnetic structure and thermodynamics of magnetic structures.
- LAMMPS – a popular molecular dynamics application that provides understanding of molecular processes such as cellular membrane fusion.
- Denovo – used to model radiation transport for reactor safety and nuclear forensics.
- CAM-SE – used in climate climate change adaptation and mitigation scenarios; accurately represents regional-scale climate features of significant impact.
OLCF holds that Titan’s design will have a profound effect on simulation speed. Running the climate change application CAM-SE on the GPU-accelerated Cray XK7 system will enable 1 and 5 years per computing day, compared to just three months per computing day on Jaguar, Titan’s Cray XT5 predecessor. “This speed increase is needed to make ultra-high-resolution, full-chemistry simulations feasible over decades and centuries and will allow researchers to quantify uncertainties by running multiple simulations,” observes the lab’s science writer Scott Jones.
Going forward, OLCF expects the S3D code, in addition to modeling simple fuels, will be able to take on more complex, larger-molecule hydrocarbon fuels and biofuels, paving the way for advanced internal combustion engines with higher energy efficiency yields.
The molecular dynamics code LAMMPS has also benefited from the Oak Ridge supercomputer, especially the system’s GPU parts. As a code that simulates the movement of atoms through time, LAMMPS is a staple application in biology, materials science, and nanotechnology. When GPUs were added to Titan, LAMMPS achieved a seven-fold performance speedup compared with an earlier CPU-only version of the machine.
Another CAAR application, called WL-LSMS, has also proven to be a good match for the hybrid Titan supercomputer. Used to study candidate magnetic materials, WL-LSMS ran 3.8 times faster on a GPU-enabled Titan than its CPU-only predecessor. The run harnessed an astounding 18,600 of Titan’s compute nodes, out of a total 18,688. Equally as significant, the GPU-enabled version of Titan used 7.3 times less energy than the CPU-only system. The result of faster performance with reduced energy consumption was considered a big win since these tandem benefits were cited as the main motivators for the move to GPU-accelerated supercomputing.
OLCF reports that all of the CAAR applications have seen significant speedups using Titan’s GPUs. As the rest of Titan’s users begin the process of ramping up their codes to make use of Titan’s massive core count, they can rely on the guidelines and practices established by CAAR. | <urn:uuid:b65666c3-5b9c-457a-9dbb-e9d7d06fdd25> | CC-MAIN-2017-09 | https://www.hpcwire.com/2014/01/06/gpus-speed-early-science-apps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00404-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.926965 | 946 | 2.84375 | 3 |
The race for ever more powerful mobile processors may be keeping Qualcomm, Nvidia, and Samsung occupied for now, but ARM is focusing on another goal – designing ultra-low power processors. After years of research and various internal designs, the microchip company is now developing a new low-power microcontroller core, which will be quite slow compared with the processors that we’re more familiar with.
Low power chips aren’t anything particularly new – a few companies already offer sub 2V microcontrollers for battery powered devices. But in order to take advantage of minute power sources, ARM intends to push the voltage requirements right down to the threshold of where a transistor can be turned on and off. However, there’s a trade-off with much slower performance.
The core will be working down at the bare minimum voltage of traditional transistors, meaning operating voltages of just 0.3-0.6 volts, and will be clocked in in the low kilohertz range, so you’re more likely to see this one ticking over as a 50 kHz chip, rather than a 2 GHz multi-core processor.
Don’t expect to see these chips powering a new range of super battery efficient smartphones, but such a development has interesting implications for low power communication devices and the Internet of Things. Speaking with a group of UK journalists, Mike Muller, chief technology officer at ARM, talked about the strategies required for processing small amounts of data and transmitting these small packets, and how such a device could be powered by energy scavenged from the local environment.
Normally, the best strategy is to do processing as fast as possible and then go to sleep for as long as possible—get in and get out. But for energy scavenging, it can be different.
As these chips could be made to work with limited power supplies, especially if they have to scavenge energy from other devices or sources, there might not always be the energy available to transmit information on demand. For the Internet of Things to become a reality, these microprocessors need to be able to cope with unreliable supplies, and that’s an unexplored area when it comes to processor technology.
Remember the “ambient backscatter” concept we covered a couple of weeks ago, whereby devices can communicate by piggybacking on background radio waves? Well ARM’s new chip seems to be based on the idea that it could potentially be powered by weaker power sources such as this, allowing for some level of computer processing without requiring a large main source or a battery for a power supply. | <urn:uuid:18b54a86-e719-4607-938d-c85c24700770> | CC-MAIN-2017-09 | http://www.machinetomachinemagazine.com/2013/08/27/arm-processors-take-us-closer-to-internet-of-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00348-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942631 | 529 | 3 | 3 |
This FAQ lists all the popular VoIP definitions.
- VoIP – Voice over Internet Protocol (also called IP Telephony, Internet telephony, and Digital Phone) – is the routing of voice conversations over the Internet or any other IP-based network.
- SIP – Session Initiation Protocol – is a protocol developed by the IETF MMUSIC Working Group and proposed standard for initiating, modifying, and terminating an interactive user session that involves multimedia elements such as video, voice, instant messaging, online games, and virtual reality.
- PSTN – Public Switched Telephone Network – is the concentration of the world’s public circuit-switched telephone networks, in much the same way that the Internet is the concentration of the world’s public IP-based packet-switched networks.
- ISDN – Integrated Services Digital Network – is a type of circuit switched telephone network system, designed to allow digital (as opposed to analog) transmission of voice and data over ordinary telephone copper wires, resulting in better quality and higher speeds, than available with analog systems.
- PBX – Private Branch eXchange (also called Private Business eXchange) – is a telephone exchange that is owned by a private business, as opposed to one owned by a common carrier or by a telephone company.
- IVR – In telephony, Interactive Voice Response – is a computerised system that allows a person, typically a telephone caller, to select an option from a voice menu and otherwise interface with a computer system.
- DID – Direct Inward Dialing (also called DDI in Europe) is a feature offered by telephone companies for use with their customers’ PBX system, whereby the telephone company (telco) allocates a range of numbers all connected to their customer’s PBX.
- RFC – Request for Comments (plural Requests for Comments – RFCs) – is one of a series of numbered Internet informational documents and standards very widely followed by both commercial software and freeware in the Internet and Unix communities. | <urn:uuid:23fcb17c-7e06-4f81-a408-dd6163c0d4e4> | CC-MAIN-2017-09 | https://www.3cx.com/pbx/voip-definitions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00348-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.910797 | 418 | 3.421875 | 3 |
Unbreakable encryption remains a pipe dream, even on a quantum Internet
- By John Breeden II
- May 08, 2013
The goal of unbreakable encryption has been a dream of governments since time immemorial. The ancient Greeks sent coded messages by way of a Scytale, which consisted of cloth wrapped around rods on which messages were written. The cloth was unwrapped during transit. An authorized viewer would then re-wrap the cloth around an identically sized rod to read the complete message.
Believe it or not, the Scytale, though easy to break, is in some ways similar to quantum encryption, which is likely unbreakable.
In a quantum computing code system, an object like a photon has its state measured, which is always changing. The state of the photon is the encryption key, which is sent along with a message. Any attempt to monitor this state slows down the data, which ruins the key and makes it very obvious on the other end that someone is trying to tap into the feed.
Cambridge University and Toshiba have put this quantum theory into practice, and they’ve been fairly successful in laboratory settings. The problem, which is where the Scytale has the advantage, is that these unbreakable encryption set-ups are point to point in nature. One computer can send data to another that is pre-programmed to get the signal, and that’s it. The Toshiba/Cambridge setup has a maximum limit of 56 miles too.
The reason for the limitation is because if the signal is sent through a router, that router has to read at least part of the message to know where to forward it. And that is no different from someone trying to eavesdrop on the line. It corrupts the data about the quantum state ever so slightly, but more than enough to ruin the key and destroy and therefore protect the message.
Recently, MIT Technology Review reported that scientists at the Los Alamos National Labs in New Mexico have been running a quantum Internet for almost two years, with all computers on the network able to send and forward secure messages to every other one.
How are they able to do this? Simple. They set up a series of point-to-point connections between computers and a specialized router. Computer A is not sending a quantum-protected signal to Computer B. It’s sending it to the hub. The hub then converts that message back to normal, sees where it’s supposed to go and then sets up a second quantum-state-protected communication to its destination. It’s not Computer A to Computer B. It’s Computer A to hub and then hub to Computer B, or C, or D.
The problem with a system like that is two fold. First, the hub interjects a non-secure element into the communications. The message can be snooped, at least in theory, while it sits in its unencrypted and unprotected state at the hub before being sent off to its destination. Second, all of the connections are pre-programmed, which works fine in what is really a Los Alamos-based Intranet, but could not be setup on the Internet where destinations are constantly in flux. There would have to be many hubs to send a quantum-secured message cross the country, and every one would need to know every possible destination.
But the system at Los Alamos is a good start. Perhaps secure routers could be created and implemented along paths, giving users the option to send a quantum-state secured message if a path is available. For government, this is even more attractive right now. Imagine the Pentagon setting up all of its systems on a completely secure network, something that would easily be possible within a single building, or even a small campus.
John Breeden II is a freelance technology writer for GCN. | <urn:uuid:1143213f-ee49-4085-a9bd-be823a12ca52> | CC-MAIN-2017-09 | https://gcn.com/articles/2013/05/08/unbreakable-encryption-quantum-internet.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00044-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.958975 | 792 | 2.71875 | 3 |
SIP Methods / Requests and Responses
SIP uses Methods / Requests and corresponding Responses to communicate and establish a call session.
There are fourteen SIP Request methods of which the first six are the most basic request / method types:
- INVITE = Establishes a session.
- ACK = Confirms an INVITE request.
- BYE = Ends a session.
- CANCEL = Cancels establishing of a session.
- REGISTER = Communicates user location (host name, IP).
- OPTIONS = Communicates information about the capabilities of the calling and receiving SIP phones.
- PRACK = Provisional Acknowledgement.
- SUBSCRIBE = Subscribes for Notification from the notifier.
- NOTIFY = Notifies the subscriber of a new event.
- PUBLISH = Publishes an event to the Server.
- INFO = Sends mid session information.
- REFER = Asks the recipient to issue call transfer.
- MESSAGE = Transports Instant Messages.
- UPDATE = Modifies the state of a session.
SIP Requests are answered with SIP responses, of which there are six classes:
1xx = Informational responses, such as 180 (ringing).
2xx = Success responses.
3xx = Redirection responses.
4XX = Request failures.
5xx = Server errors.
6xx = Global failures. | <urn:uuid:5adfb6d4-2f97-4ea5-b823-77161652da62> | CC-MAIN-2017-09 | https://www.3cx.com/pbx/sip-methods/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00096-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.796287 | 307 | 2.515625 | 3 |
Two weeks have passed since a magnitude 9.0 earthquake and subsequent tsunami rocked eastern Japan, and while a recovery among the country's technology manufacturers has begun, it could be several months before things start to normalize.
Many factories were closed immediately following the quake, and most have been gradually returning to production in the last week. A handful of plants were hit harder and could be offline for months.
Companies face a daunting task.
Japan's biggest earthquake ever recorded, and the tsunami it spawned, left more than 10,000 people dead and an even larger number missing; several nuclear power plants remain in emergency condition and continue to spew radioactive contamination to the environment; hundreds of thousands are homeless; and the economy is being forced to adapt to power failures and supply disruptions. The end of the disaster is still not in sight.
For IT companies, the loss of production at these plants could have widespread effects on the electronics industry.
Texas Instruments' plant in Miho, northeast of Tokyo, is one of the factories that was hard hit. The plant, which produced chips and DLP devices for projectors, suffered "substantial damage" and it won't be until May when partial production resumes. Full production is not due until mid-July, and that could be further delayed by power problems, the company said.
Toshiba estimates production at its mobile phone display factory in Saitama, north of Tokyo, will be stopped for a month because of damage sustained in the earthquake.
Further north in Miyagi prefecture, a number of factories near the quake-hit city of Sendai suffered high levels of damage.
A Sony plant responsible for magnetic tape and Blu-ray Discs was inundated with water when a tsunami washed through the town of Tagajyo and is one of six Sony plants currently idle. Two Nikon plants were severely damaged and won't be back online until at least the end of March. And Fujitsu's major chip plant in Aizu Wakamatsu is still closed with no estimate of when production will begin again.
But some of the potentially biggest disruptions could come from the closure of two plants run by Shin-Etsu Chemical. Although not a well-known name to consumers, the company is a major supplier of silicon wafers. One of the halted plants, its Shirakawa facility in Fukushima prefecture, is responsible for around 20 percent of the world's supply of such wafers, according to IHS iSuppli.
"The wafers made by this facility mainly are used in the manufacturing of memory devices, such as flash memory and DRAM," said Len Jelinek, an IHS iSuppli analyst, in a statement. "Because of this, the global supply of memory semiconductors will be impacted the most severely of any segment of the chip industry by the production stoppage."
The knock-on effects of the quake to the global supply chain are already being felt.
Sony suspended production of Bravia LCD televisions, digital cameras and other products at five factories far from the quake zone because it can't get raw materials and components. Suppliers are unable to deliver because of either quake and tsunami damage or because of disruptions to the distribution network.
Industries beyond consumer electronics are also likely to feel the effects of these problems.
The automobile industry is a big customer of chip companies and the products it buys are often custom-made.
"Products like microcontrollers and DSPs can't simply be swapped out for another chip, whether from the same vendor or another," said Tom Starnes, an embedded processor analyst at Objective Analysis in Austin, Texas. "The programs aren't easily transferable between processors, and even changing other chips like analog may introduce cost, quality, or reliability issues not originally anticipated."
The long-term effect on Japanese electronics manufacturers and the supply chain remains difficult to gauge. Several major companies have said they will delay the hiring of new workers, usually done on April 1, and some have adjusted or canceled dividend payments to shareholders. While a nascent recovery appears to be underway and some factories are coming back online, it will be weeks before the full extent of damage to the global IT supply chain becomes clearer. | <urn:uuid:16ce0ad3-eee9-48cf-aed3-6bdbcbd8aa8f> | CC-MAIN-2017-09 | http://www.cio.com/article/2409849/it-strategy/two-weeks-after-japan-earthquake--it-industry-faces-hurdles.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00569-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964027 | 860 | 2.5625 | 3 |
Many Americans think the next 50 years will bring custom-ordered, made-to-order organ generation, teleportation and robots that will take care of the elderly and sick.
And they're largely hopeful about technology in the future, with 59% optimistic that coming technological and scientific changes will make life better, according to a study from The Pew Research Center.
However, 30% said they think the coming changes will make them worse off than they are now.
"Many Americans pair their long-term optimism with high expectations for the inventions of the next half century," Pew researchers said in the report. "But at the same time that many expect science to produce great breakthroughs in the coming decades, there are widespread concerns about some controversial technological developments that might occur on a shorter time horizon."
On the positive side, the telephone survey of 1,001 adult Americans in February showed that 81% expect that within the next 50 years people who need new organs, whether it's a liver, kidney or heart, will have them custom grown in a lab. And 51% said they expect computers will be able to create art that is so good it will be indistinguishable from art produced by humans.
Ezra Gottheil, an analyst with Technology Business Research, said he's glad that people are largely excited about what the future of technology will hold.
"I think this is both pretty positive and pretty sensible," he told Computerworld. "The progress in medical technology has been incredible. And that's where mobile technology could be world-changing. I'm very positive about medicine."
However, the study also showed that there are several areas where people aren't so enthused.
If scientists figure out how parents could alter their children's DNA to produce smarter, healthier or more athletic kids, 66% said it would be a bad thing.
The study also showed that 65% are against robots becoming primary caregivers for the elderly and sick, while 53% are against implants and other devices that give people information about the world around them. A full 63% are against commercial drones using U.S. airspace.
A lot of people also think technology advancements might not get us to a sci-fi-like world so soon.
Only 39% said teleportation will be a reality within 50 years, and just 33% said humans will colonize other planets in the same timeframe.
However, 19% said humans will be able to control the weather.
While Patrick Moorhead, an analyst with Moor Insights & Strategy, said he's happy about the general optimism, he thinks there's virtually no chance of scientists perfecting teleportation in the next 50 years and he's surprised that a half of those surveyed wouldn't ride in a driverless car .
"Imagine the time savings if you could just be a passenger," he added. "I am excited about the positive aspects of self-driving cars, as they will lead to shorter and safer commutes and the eradication of drinking and driving deaths."
Rob Enderle, an analyst with the Enderle Group, said the high-tech industry needs to pay attention to what makes people nervous.
"If you have dissenting people who are really against it, they can work hard to block it," he added. "With robotics, if we put a lot of people out of work in a short period of time, it's going to be bad. We have to think through how we implement the technology and the concerns people have. If they're worried about alien cows taking over the planet, well, that's not a big worry. But if people are worried about losing their jobs, we have to really address that."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "Future tech: Americans foresee made-to-order organs, teleportation and robots" was originally published by Computerworld. | <urn:uuid:74bbd2db-f5bc-4719-84f1-601f26e5b53b> | CC-MAIN-2017-09 | http://www.itworld.com/article/2698446/hardware/future-tech--americans-foresee-made-to-order-organs--teleportation-and-robots.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00269-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.965892 | 847 | 2.71875 | 3 |
Cisco on Cisco
Optical Networking Case Study: How Cisco IT Used CWDM to Interconnect Japanese Data Center Sites
Cisco Systems® maintains two sales offices in Tokyo, Japan. One is located in the government district of the city known as Akasaka and the other in a commercial area known as Shinjuku. As is typical for enterprises in the region, Cisco® Japan IT colocated data centers in each of the offices to support the IT needs of sales, Cisco Technical Assistance Center (TAC), and engineering staff within the facilities, but needed to connect these two locations together. In Japan, many service providers offer managed services like Gigabit Ethernet service to end customers. Cisco leased a Gigabit Ethernet circuit from a service provider at a cost of one million yen per month (about US$9000 at an exchange rate of 111 yen per dollar) to tie the two data centers (and offices) together.
Although these sites had been in place for some time, it is common for sales offices to change locations-to relocate to larger facilities or move closer to customers, etc. Reestablishing a colocated data center can be costly and disruptive. In 2003, Cisco Japan IT considered relocating the two data centers to a single, dedicated facility. A more permanent facility would allow Cisco Japan IT to engineer a higher level of availability with more robust redundancy. Situating the data center outside the expensive city center also would reduce leasing costs.
Although a dedicated data center appeared to be a good solution, the cost of providing reliable connectivity between the data center and the two sales offices seemed prohibitive. To provide an acceptable level of reliability and redundancy, a circuit would be needed from the data center to Akasaka, from Akasaka to Shinjuku, and from Shinjuku back to the data center. With three circuits required, the one-million-yen-per-month Gigabit Ethernet lease cost would triple to approximately $27,000.
As early as 2001, Cisco Japan IT had investigated leasing dark fiber from carriers as an alternative to the costly Gigabit Ethernet service. Because the great majority of the cost of laying fiber-optic cable is labor, not the fiber itself, telecom carriers typically install more fiber strands than they need. In many areas of the world, private enterprises can sometimes lease these "dark," unused strands from carriers at low rates to connect company sites in metropolitan areas. Cisco Japan IT had abandoned this solution in 2002, however, because carriers in Tokyo were not offering dark fiber to enterprises. Furthermore, the dense wavelength-division multiplexing technology that enabled multiple channels over a pair of optical fibers, a critical requirement for Cisco's future bandwidth needs, was expensive and difficult to manage.
To provide viable connectivity between the new data center and the two sales offices, Cisco Japan IT needed a high-speed, low-cost solution that would be reliable and easy to manage.
In 2003, with the construction of a dedicated data center under serious consideration, Cisco Japan IT again investigated dark fiber. In Japan, Class 1 carriers can provision and lease fiber, but can lease it only to Class 2 carriers and service providers. Class 2 carriers can in turn lease them to corporations. This time Cisco Japan IT found that there were several carriers in Tokyo who were receptive to leasing dark fiber, and at prices far lower than the current managed Gigabit Ethernet services. In addition, the Cisco coarse wavelength-division multiplexing (CWDM) gigabit interface converter (GBIC) solution was now available, which could provide economical optical bandwidth scalability with little or no management requirements. CWDM employs multiple light wavelengths to transmit signals over a single optical fiber. CWDM technology is a crucial component of Ethernet LAN and MAN networks because it maximizes the use of installed fiber infrastructure at an attractive price point.
The dark fiber and CWDM GBIC solution offered another benefit. "If we had chosen to lease Gigabit Ethernet circuits from a service provider, we would have a single provider with the risk of a single point of failure," says Zhengming Zhang, Cisco IT network engineer. "With dark fiber, however, we have the ability to select circuits from different providers, ensuring physically diverse routes, which was an important requirement for us."
A building was located for the new data center about 25 kilometers from Akasaka and 30 kilometers from Shinjuku. Efforts to lease the dark fiber links began in October 2003. The task of relocating nearly 50 racks of equipment from Akasaka and 30 from Shinjuku to the new data center began on February 28, 2004, with the first move on March 6.
Although Cisco IT uses CWDM for network access in other locations, the Tokyo Internet data center (IDC) is the first place in which Cisco has used CWDM to interconnect an IDC with multiple Cisco offices. The advantage of CWDM technology is that it can transmit and receive signals over a single strand of fiber. With Cisco CWDM GBICs, a maximum of four channels can be multiplexed over that single fiber, so CWDM provides bandwidth for growth and secure traffic separation on a single strand. The Tokyo IDC currently uses one channel, which carries a single gigabit circuit.
Several dark fiber providers are located in Tokyo, and Cisco Japan IT included the dark fiber vendor selection process as part of the overall data center evaluation process. Seven venders were sent Requests for Proposal (RFPs) for the data center project, and Cisco Japan IT selected three separate fiber providers to ensure redundant paths to all three sites. A single provider also was responsible for supporting service-level agreements (SLAs) on all three fibers and for terminating the fibers in each of the sites. "Installation was very simple," says Greg Duncan, Cisco IT Manager. "They pulled the dark fiber into our racks and attached the SC connector, and that was it." Although the fiber providers had estimated eight to nine weeks for completion, they installed the first circuit in less than four weeks and the remaining circuits within another week.
Because the CWDM equipment is passive, it does not amplify the signal traveling through the fiber. The signal naturally weakens, or attenuates, over the length of the fiber based on factors such as the quality of the fiber, distance, and number of splices. If too much loss occurs, the signal at the receiving end will be too weak to detect and could cause packets to be dropped. Testing carried out by Cisco Japan IT showed that the CWDM equipment could tolerate a loss of 30 dB with no packets dropped. The fiber provider SLA guaranteed that these fibers would not exceed 24 dB. "This is one of those few instances where everything has gone exactly to plan," says Duncan. "Our fiber provider installed every fiber link with less loss (less than 16 dB for all three paths) and in less time than what they promised. It went very, very smoothly."
Connecting the fiber to the LAN environment at each of the three locations is the Cisco CWDM GBIC solution. The primary components of the CWDM GBIC solution are the Cisco CWDM GBIC and Cisco CWDM optical add/drop multiplexer (OADM) modules. The Cisco CWDM GBICs are active components that convert Gigabit Ethernet electrical signals into an optical single-mode fiber (SMF) interface. The CWDM GBIC plugs into standard GBIC ports on Cisco switches and routers. No dedicated or additional routers were required for the deployment. "The CWDM solution is very cost effective," says Zhengming Zhang. "You can use existing routers or switches as long as the hardware has the Gigabit Ethernet module." At the Akasaka and Shinjuku offices, existing Cisco 7603 routers were used, as shown in Figure 1. A Cisco 7603 Router also was used at the data center.
Figure 1. CWDM Network Diagram
The CWDM OADM modules used in this deployment (CWDM-MUX-4-SFx) are passive optical components that multiplex multiple wavelengths from multiple SMF pairs into one SMF strand. Other CWDM OADM modules are designed to multiplex multiple wavelengths into a pair of SMF fibers where a dual-fiber topology is used. The CWDM OADM modules are connected to the CWDM GBICs with SMF using dual SC connectors. Because they are passive devices, no power is required. Neither the CWDM GBIC nor OADM modules require any configuration. The technicians simply matched the GBIC color with the color of the channel interface on the respective OADM module. As with a Gigabit or Fast Ethernet interface, an IP address must be configured for the GBIC interface if it is used as a Layer 3 router port. If a Layer 2 switch is used, spanning tree configuration may be required.
The CWDM (CWDM-MUX-4-SFx) solution supports up to four channels over a single fiber. When additional channels are needed, a technician simply plugs another CWDM GBIC into a GBIC port on the Cisco 7603 Router and connects it to the OADM with a pair of single-mode fibers, as shown in Figure 2. No new fibers or changes to the dark fibers are required. "Adding a gigabit takes only about five seconds and costs about $750," says Zhengming Zhang.
Figure 2. Adding a Second CWDM GBIC
The Cisco CWDM GBIC solution supports point-to-point, ring, hub-and-spoke, and mesh network topologies. The solution offers both path protection (using two fiber paths for the same wavelength) and client protection at the channel endpoints through the CWDM GBICs. Availability redundancy schemes such as EtherChannel technology, Spanning Tree Protocol, and Hot Standby Router Protocol (HSRP) can be used to provide redundancy.
Cisco Japan IT chose a multisite point-to-point topology for the Tokyo network deployment because of its simplicity and cost. Using Enhanced Interior Gateway Routing Protocol (EIGRP), the network detects a failure in one of the links and automatically reroutes traffic to the redundant path. A full-mesh solution would have required two fibers between each location, more extensive hardware, and greater management requirements, such as Spanning Tree Protocol. "If two fibers were as inexpensive as one, we might have chosen a full mesh solution, but this solution is also good," says Zhengming Zhang.
The Tokyo IDC hosts all the regional mission-critical services and applications and supports all WAN connectivity for Cisco Japan offices. Some of these services include Internet access, extranet connections, VPN concentrators for site-to-site and user-based IPSec VPN connections, content networking and IP/TV® streaming video broadcasts, CallManagers, storage filers, printing servers, and many more. The high performance of the IDC network makes all services available to users as if the resources were located nearby. In addition, critical user and application data in Japan is replicated to the Hong Kong IDC, which provides redundancy in the event of a critical hardware failure. Cisco quality-of-service technology allows near-real-time replication without degrading other critical services such as Web, video, and voice.
Circuit diversity was an essential factor for building a highly available IDC in Tokyo. In this case both the physical circuit routes and the carriers are diversified. Normally, circuit backup is sufficient; physical diversity of circuit paths is valuable but sometimes hard to achieve, and carrier diversity is more valuable still but usually the most difficult to achieve. Carrier diversity matters because a single carrier's network can occasionally suffer multiple simultaneous outages (for example, a port module board failure that causes multiple circuits to fail at the same time). In Tokyo, Cisco IT was able to select a unique carrier for each circuit. This was fortunate because high availability is crucial. In addition to hosting all mission-critical services and applications for Cisco Japan, the IDC connects Cisco Japan's large locations to the rest of the network, and to services and applications located in other regions. This diversity helps ensure that mission-critical services and applications remain available no matter which circuit or carrier fails.
Before being relocated to the Tokyo IDC, all mission-critical devices were hosted in either the Shinjuku office or the Akasaka office. Every year, each building conducted mandatory electrical maintenance that shut down the building's entire power supply for 48 hours. During this maintenance, all customer devices in the building lost power and none of the applications or services were available. Cisco IT used uninterruptible power supplies (UPSs) to provide backup for a few hours, but battery backup for 48 hours was not realistic: it would require a huge number of batteries, and the cost, weight, space, and safety factors made it too expensive to consider. During the 96 hours of power outages (the two buildings were shut down on different dates), users were unable to use file and printing servers, DHCP, DNS, ACS, DC (directory service), and local VPN concentrators. Some field sales offices in Japan were unable to connect to the corporate network through site-to-site IPSec VPN. Users had to connect to the VPN concentrators in San Jose to access corporate resources and the Internet, and the long distance between Japan and the United States made VPN performance slow.
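A rough energy calculation shows why 48-hour battery backup was dismissed. All the inputs below are hypothetical round numbers chosen to illustrate the scale of the problem; the article does not state the actual electrical load of either office:

```python
# Rough sizing of a 48-hour battery backup. All inputs are hypothetical
# round numbers chosen to illustrate scale; the actual load is not given.
load_kw = 100                          # assumed total IT load
outage_hours = 48
energy_kwh = load_kw * outage_hours    # 4,800 kWh

LEAD_ACID_WH_PER_KG = 35               # typical lead-acid energy density
battery_mass_kg = energy_kwh * 1000 / LEAD_ACID_WH_PER_KG

print(f"Energy required: {energy_kwh:,} kWh")
print(f"Battery mass: ~{battery_mass_kg / 1000:.0f} metric tons")
# Roughly 137 metric tons of batteries for one two-day outage, before
# accounting for depth-of-discharge limits, floor loading, or cooling.
```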
Relocating all servers and network equipment to a single Tokyo IDC has resolved all these problems. The Tokyo IDC has three power generators, each one with an independent power source. The Tokyo IDC supplies power to each server rack with at least two separate power feeds (and some have three separate power feeds). The N+1 redundancy has greatly improved service availability.
Had Cisco Japan IT chosen to lease Gigabit Ethernet circuits from a service provider, the cost would have been approximately 3 million yen (about $27,000) per month. Instead, the three dark fibers cost approximately 1.1 million yen (about $9,900) per month, a saving of more than 60 percent. And by adding relatively inexpensive CWDM GBICs and OADM modules to the existing infrastructure, bandwidth can be doubled, tripled, or even quadrupled without additional monthly fiber leasing expense.
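The saving follows directly from the two monthly figures quoted above:

```python
# Monthly cost comparison using the figures quoted above.
managed_gige_yen = 3_000_000   # leased Gigabit Ethernet service, per month
dark_fiber_yen = 1_100_000     # three dark fiber circuits, per month
saving = 1 - dark_fiber_yen / managed_gige_yen
print(f"Monthly saving: {saving:.0%}")   # 63%, i.e. "more than 60 percent"
```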
Total route diversity has eliminated single points of failure and ensures high availability between sites. The network has been in operation since March 2004 with no problems. On May 15, Cisco Japan IT took offline one of the Cisco 7603 routers to replace line modules. The network rerouted automatically and connectivity between sites was never affected.
Improvements in other areas were achieved as well:
More usable space available at the two Tokyo offices: Relocation of shared services to the Tokyo IDC allows us to reuse the expensive downtown Tokyo office space for showcasing services, labs, and customer support staff. When new Cisco technology or products go to market, the existing spaces can be used for sales and marketing purposes.
Removed duplicate hardware costs: Some shared services and applications were duplicated in the Shinjuku and Akasaka offices (for example, storage filers and printing servers). When new services were deployed, the same hardware and software had to be installed in both offices. With the Tokyo IDC, existing services are combined into fewer, higher-capacity hardware devices, which perform the same tasks at a lower price per task.
Simpler management: With the centralized colocation of services in the Tokyo IDC, management, troubleshooting, and maintenance are easier than when equipment was located in two separate offices.
Reduced IT labor: Cisco IT Japan used to spend a significant amount of time on cabling, mounting hardware, simple hardware replacement, and circuit installation. These tasks are now handled by the Tokyo IDC support staff, and Cisco IT Japan can concentrate on new service design and implementation.
Unlike most deployments, the CWDM project went exactly as planned. "I can't think of a single instance during the CWDM deployment that trapped us or caused us to reconsider our plans," says Duncan. Several factors made this deployment simple and trouble-free. Among them was the willingness of different service providers that, even as direct competitors in the same market, were willing to share cable path information to provide diversified routing. And the fiber vendor performed as promised. "They lived up to their word without exception, even beating their schedule," notes Duncan. And finally, the CWDM equipment offered no surprises. "I think the lesson learned for me is, it was as easy to do as what the product information said it would be," says Duncan.
Cisco Japan IT plans to use the Tokyo CWDM solution for several new applications over the next year. Prior to CWDM, separate access paths had to be provided for labs that needed direct demilitarized zone (DMZ) access, resulting in additional Internet access points distributed throughout the different labs. One of the four CWDM channels is already being used to carry secure, segregated lab traffic from the existing engineering lab at the Shinjuku office into the DMZ located at the new data center. Without CWDM, a dedicated leased line, which might cost at least 300,000 yen (about $2,700) monthly, would have been required to connect the DMZ lab in Shinjuku to the DMZ backbone in the Tokyo IDC. Other labs will follow, replacing their separate Internet trunks with a channel on the CWDM.
If more circuits between the same sites are required in the future, Cisco IT Japan can easily add CWDM GBIC modules to expand bandwidth to 2 Gbps, 3 Gbps, or 4 Gbps. Because each channel uses a different wavelength to transmit and receive, each one can be treated as an independent physical circuit. This allows us to use another CWDM channel to interconnect the DMZ lab in the Shinjuku office to the DMZ backbone in the Tokyo IDC over the same dark fiber without compromising security.
Another advantage of the CWDM dark fiber solution is its ability to support technology demonstrations without negatively affecting production traffic. Before customers spend a large sum of money on a Cisco solution, they want to see that it works, so Cisco sales and engineering teams set up demos for different solutions. Often, the customer might be at the Akasaka sales office while the servers and critical resources that make the demo work are in Shinjuku. Engineers would have to connect the two sites, but IT and information security policies prevented them from using the existing Gigabit Ethernet circuit, forcing them to find another circuit. With CWDM, they will be able to use a separate channel without raising security concerns. In addition, the extra bandwidth will allow Cisco IT Japan labs and sales locations to interconnect servers to storage using SAN protocols such as iSCSI and FCIP, or other applications.
Cisco TAC and engineering groups currently occupy a sizeable portion of the leased space at the Shinjuku facility. At some point within the next year, those groups probably will be relocated to lower-cost facilities outside of Shinjuku. "That's going to be a lot simpler for us to do because we just need to hook them into the new data center and extend another dark fiber to their new location," says Zhengming Zhang.
Think of all the websites you use that require passwords to protect your sensitive information. You have a password for your online bank account, your email, your credit card accounts; the list goes on.
Out of all of those websites, however, how many unique passwords do you have? Not too many? If a hacker deciphers even one password to your account, your entire online life would be in serious jeopardy.
According to credit firm Experian, the typical online user has 26 accounts but only five passwords. Couple that with the fact that 90 percent of passwords are vulnerable to cracking, and it becomes clear that passwords alone are no longer effective at protecting sensitive information.
Quite simply, passwords are failing to get the job done in an era where digital security is of the utmost importance. They are failing because they are easy to crack. A better solution, therefore, is to take a multilayered approach to online security providing protection beyond the initial scope of a password. A multilayered approach to online security involves implementing advanced authentication requirements to verify a user or application’s identity.
In this regard, mobile is playing a pivotal role in the multilayer approach to security at the enterprise level. One advantage of mobile is that applications can use a process called sandboxing, in which applications on a device cannot access the digital information of other applications. This is imperative when it comes to the prevention of advanced malware, as taking the completion of a transaction out of the compromised desktop channel may be the only way to defend against evolving malware threats.
Additional security involves PIN locks and embedded, transparent one-time passcodes (OTP), as well as digital certificates for mobile devices.
Believe it or not, authenticating an identity is much easier to accomplish on a mobile device than on a desktop or laptop computer, because those traditional platforms were designed around shared device memory, unlike sandboxed mobile applications.
While 71 percent of IT executives still believe that the traditional desktop or laptop computer is more secure than a mobile device, the reality is that mobile devices are in fact more secure. That is why 65 percent of organizations are placing mobile security as a critical priority moving forward.
Once you understand how comprehensive a multilayered mobile approach to overall security is, a basic password for an important account seems about as safe as locking a bicycle with a rope. While passwords are still the status quo for consumers, organizations looking for advanced security measures should seriously consider the comprehensive security benefits that mobile technology currently affords.
Wikipedia Often Omits Important Drug Information
By Reuters | Posted 2008-11-25
Dr. Kevin A. Clauson of Nova Southeastern University in Palm Beach Gardens, Florida and his colleagues found few factual errors in their evaluation of Wikipedia entries on 80 drugs, but some entries were often missing important information.
NEW YORK (Reuters Health) - Consumers who rely on the user-edited Web resource Wikipedia for information on medications are putting themselves at risk of potentially harmful drug interactions and adverse effects, new research shows.
Dr. Kevin A. Clauson of Nova Southeastern University in Palm Beach Gardens, Florida and his colleagues found few factual errors in their evaluation of Wikipedia entries on 80 drugs. But these entries were often missing important information, for example the fact that the anti-inflammatory drug Arthrotec (diclofenac and misoprostol) can cause pregnant women to miscarry, or that St. John's wort can interfere with the action of the HIV drug Prezista (darunavir).
"If people went and used this as a sole or authoritative source without contacting a health professional...those are the types of negative impacts that can occur," Clauson told Reuters Health.
Wikipedia is an online, free encyclopedia covering millions of topics in more than 250 languages. Users add and edit content themselves. Clauson and his colleagues decided to investigate the accuracy and completeness of drug information on Wikipedia given that one third of people doing health-related Internet searches are looking for information on over-the-counter or prescription drugs, and that a Wikipedia entry is often the first to pop up with a Google search.
The researchers compared Wikipedia to Medscape Drug Reference (MDR), a peer-reviewed, free site, by looking for answers to 80 different questions covering eight categories of drug information, for example adverse drug events, dosages, and mechanism of action.
While MDR provided answers to 82.5 percent of the questions, Wikipedia could only answer 40 percent. Answers were less likely to be complete for Wikipedia, as well. Of the answers the researchers found on Wikipedia, none were factually inaccurate, while there were four inaccurate answers in MDR. But the researchers spotted 48 errors of omission in the Wikipedia entries, compared to 14 for MDR.
"I think that these errors of omission can be just as dangerous" as inaccuracies, Clauson told Reuters Health. He pointed out that drug company representatives have been caught deleting information from Wikipedia entries that make their drugs look unsafe.
The researchers did find that after 90 days, the Wikipedia entries showed a "marked improvement" in scope.
Wikipedia can be a good jumping-off point for Internet research, Clauson said, but it shouldn't be seen as the last word on any topic, and should certainly not be used as a resource by medical professionals. "You still probably want to go to medlineplus.gov or medscape.com for good quality information that you can feel confident in," he said.
SOURCE: The Annals of Pharmacotherapy, December 2008.
© Thomson Reuters 2008 All rights reserved
- Test for Heartbleed Vulnerability
- Get details of an SSL certificate.
- Detect weak ciphers and SSLv2, a version of SSL with known security vulnerabilities.
About SSL Certificate Checking
SSL (and its successor, TLS) provides encrypted communication over the Internet. SSL 2.0 has known vulnerabilities, and it is recommended that it no longer be used. PCI compliance, for example, mandates that SSL 2.0 not be used and that SSL 3.0 be used instead.
While SSL can be used for any TCP-based service, such as FTP, NNTP, or SMTP, it is most commonly used to encrypt web traffic. User awareness has become such that even non-technical users know to look for the HTTPS in the URL and the "padlock" in the browser status bar when browsing secure sites such as Internet banking and email.
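For readers who want to script the certificate-details check themselves, a minimal sketch using only the Python standard library follows; the hostname is a placeholder. (Note that a default TLS context will not negotiate SSLv2 at all, so weak-protocol testing requires more specialized tooling.)

```python
# Minimal certificate-details fetch with the Python standard library.
# The hostname is a placeholder; any HTTPS site with a valid cert works.
import socket
import ssl

def get_cert_details(host, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

cert = get_cert_details("example.com")
print("Subject:", dict(item[0] for item in cert["subject"]))
print("Issuer: ", dict(item[0] for item in cert["issuer"]))
print("Valid until:", cert["notAfter"])
```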
“There are many different silos of information that have been painstakingly collected; and there are a number of existing tools that bring some strands of data into relation. But there is no overarching tool that can be used across silos.”
The sentiments behind this quote could apply to a wide range of scientific disciplines, not to mention to enterprises that have collected vast amounts of data but are still piecing together the puzzle of how to integrate and make sense of it.
In fact, the above quote came from quantitative biologist Michael Schatz, as he reflected on the need for massive data integration for scientists worldwide, and on the computational models needed to produce connected information sets.
Schatz is one of several biologists involved in the Systems Biology Knowledgebase, also known as Kbase. This DOE project was started in 2008 to make data more accessible and integrated for biological researchers. Just last year the Genomic Science program completed the research and development required to design and implement the Kbase effort, but there is still plenty of work ahead.
As Ariella Brown noted, “Kbase should be a boon both for those who want to gain better understanding of such life forms for the sake of pure science and to those who would apply the Kbase data, metadata, and tools for modeling and predictive technologies to help the production of renewable biofuels and a reduction of carbon in the environment.”
Brown goes on to describe the Kbase program and its goals:
“The plan is for Kbase to start off with seven data centers on ESnet (the Department of Energy Energy Sciences Network). That is one for each of the six defined scientific objectives of Kbase; the seventh is devoted to coordinating the infrastructure development of the project. According to the current timetable, it should take 12 months to get the Kbase hardware platform operational. Version 1.0 is anticipated to be accessible after 18 months and version 2.0 after 36 months; five years is the estimated time to achieve operation and support at target levels.
The idea is to implement a system that can grow as needed and be easily used by scientists without extensive training in applications. It should produce understandable results based on clear scientific assumptions, engage all members of the scientific community, and encourage further discovery, with findings that inspire “new rounds of experiments or lines of research.”
While Kbase is an ongoing project, the model for its integration and collaboration developments will extend to other disciplines, allowing greater, more open access to scientific data across the world. As the graphic below shows, the need for such integration is clear—but it is a slow climb to full data integration, sharing and use for biology researchers.
Image Source: Genomic Science Program, US Dept. of Energy
In 2020, both Harlingen and San Benito will have all the water they need, according to a state water agency website.
However, Primera will fall short by 45 percent, says the new interactive State Water Plan website launched by the Texas Water Development Board.
“This website is an example of the changes we are making to provide transparency to Texans about the important work TWDB does,” Carlos Rubinstein, TWDB chairman, said.
The data offers communities important information as they plan projects for which they will apply for funding from the State Water Implementation Fund for Texas (SWIFT), board member Bech Bruun said.
“It’s another way to get relevant information to those who want to get involved in the water planning process — something we strongly encourage all citizens to do,” Bruun said.
To find the website, go to www.twdb.texas.gov, and then click on “Interactive 2012 State Water Plan Website.” A map of Texas will appear. The map has the state divided into sections A – P. Beneath the map is a list of years a decade apart beginning with 2010 and ending with 2060.
The Valley is located in a dark blue area labeled M that stretches all the way to Webb County. Clicking on that section opens it up, and multiple dots appear, ranging in color from deep green to red.
As the user moves the cursor from one dot to the next, the name of the city it represents will appear. It will also show how much of its demand for water will be met in a given year.
The legend in the lower left-hand corner shows what the dots represent. A light green dot means that the area represented will lack between 0.5 percent and 10 percent of its total demand, said Dan Hardin, senior planning advisor for TWDB.
“What we’re trying to represent is a measure of how serious an entity’s water problems might be in the future,” Hardin said.
“Another way to think of need in planning terminology is, that’s the shortage. It’s essentially how much the available existing supply is going to fall short of meeting demands.”
He said orange and red dots show that the entity is looking at a shortage of at least 25 percent of its water demands.
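The shortage math behind the legend is straightforward. The sketch below mirrors the article's description; the bucket between 10 and 25 percent is an assumption, since the article does not spell it out:

```python
# Shortage math behind the map legend. Thresholds follow the article's
# description; the bucket between 10 and 25 percent is an assumption.
def shortage_pct(demand_af, supply_af):
    """Unmet demand as a percentage of total demand (acre-feet)."""
    return max(0.0, (demand_af - supply_af) / demand_af * 100)

def legend_color(pct):
    if pct < 0.5:
        return "deep green"    # demand fully (or nearly fully) met
    if pct <= 10:
        return "light green"   # 0.5 to 10 percent unmet
    if pct < 25:
        return "amber"         # assumed intermediate bucket
    return "orange/red"        # at least 25 percent unmet

# Primera in 2020 falls 45 percent short of its total demand:
print(legend_color(shortage_pct(demand_af=100, supply_af=55)))  # orange/red
```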
The website also includes a section called “Regional Water Needs.” It shows how much water each region will need for six uses, including municipal, manufacturing and irrigation.
Municipal, Hardin said, refers to water that cities provide to homes, schools, stores and offices.
In 2020 the M region will need 64,277 acre feet of water for municipal use. It will need 2,355 acre feet for manufacturing and 333,246 acre feet of water for irrigation.
While the outlook is pretty good for 2020, more red and orange dots appear in 2040. However, Harlingen and San Benito are still projected to meet all of their water demands.
Twenty years later in 2060, even more red dots pop up and Harlingen shows a shortage of 15 percent and San Benito lacks 11 percent of its total water demand.
©2014 Valley Morning Star (Harlingen, Texas)
The European Space Agency says it has completed what it calls the largest digital camera ever built for a space mission: a one-billion-pixel array camera that will help create a three-dimensional picture of the Milky Way Galaxy.
Set to be launched onboard the ESA's galaxy-mapping Gaia mission in 2013, the digital camera was "mosaicked together from 106 separate electronic detectors." ESA says that Gaia's measurements will be so accurate that, if it were on Earth, it could measure the thumbnails of a person on the Moon.
More on space: Gigantic changes keep space technology hot
According to the ESA, the camera was developed by e2v Technologies of Chelmsford, UK and uses rectangular detectors a little smaller than a credit card, each one measuring 4.7x6 cm but thinner than a human hair. The completed mosaic is arranged in seven rows of charge coupled devices (CCDs). The main array comprises 102 detectors dedicated to star detection. Four others check the image quality of each telescope and the stability of the 106.5º angle between the two telescopes that Gaia uses to obtain stereo views of stars.
The 0.5x1.0 m mosaic was assembled at the Toulouse facility of Gaia prime contractor Astrium France. Technicians spent much of May carefully fitting together each CCD package on the support structure, leaving only a 1 mm gap between them.
According to ESA, the Gaia satellite will operate at the Earth-Sun L2 Lagrange point, 1.5 million kilometers behind the earth, when looking from the sun. "As the spinning Gaia's two telescopes sweep across the sky, the images of stars in each field of view will move across the focal plane array, divided into four fields variously dedicated to star mapping, position and motion, color and intensity and spectrometry," the space agency stated.
Gaia is expected to map a billion stars within the Milky Way Galaxy over the course of its five-year mission, charting brightness and spectral characteristics along with their three-dimensional positions and motions.
From the ESA on Gaia's mission:
- Gaia's transmitter is weak, much less powerful than a standard 100 W light bulb. Even so, this equipment will be able to maintain the transmission of an extremely high data rate (about 5 Mbit/s) across 1.5 million km. ESA's most powerful ground stations, the 35 m-diameter radio dishes in Cebreros, Spain, and New Norcia, Australia, will intercept the faint signal transmitted by Gaia.
- The numbers foreseen in Gaia's celestial census are breathtaking. Every day it will discover, on average, 10 stars possessing planets, 10 stars exploding in other galaxies, 30 'failed stars' known as brown dwarfs, and numerous distant quasars, which are powered by giant black holes.
- Estimates suggest that Gaia will detect about 15,000 planets beyond our Solar System. It will do this by watching for tiny movements in the star's position caused by the minute gravitational pull of the planet on the star.
The HAI problem
99,000 people die from HAIs each year. – U.S. Centers for Disease Control
Healthcare Associated Infections, or HAIs, are caused by patient exposure to any of a number of dangerous pathogens that can be passed by direct hand contact or by high-touch room surfaces, like bed rails, faucet handles, door knobs, telephone handsets and TV remote controls.
Patients who catch HAIs are often the most vulnerable, very ill or elderly, with little ability to fight off these resilient infections. Some strains have become resistant to antibiotics, making them particularly dangerous.
HAIs infect 5 to 10 percent of hospitalized patients and cost hospitals an average of $15,000 per incident; nationally, they account for more than $25 billion per year in extraordinary additional expenses for readmissions and extended hospital stays. HAIs also put hospital staff at risk and increase labor expenses for paid sick time.
Some of the more common and dangerous HAIs include:
- Clostridium difficile (C. diff) – in 2011, responsible for half a million infections, with 29,000 people dying within 30 days of the initial diagnosis
- Methicillin-resistant Staphylococcus aureus (MRSA) – can cause severe problems such as bloodstream infections, pneumonia and surgical site infections.
- Vancomycin-resistant Enterococci (VRE) – most commonly transmitted person-to-person through hand contact or by touching infected surfaces, VRE usually infects hospitalized patients who have received antibiotic treatment for long periods of time or who have weakened immune systems or have undergone surgical procedures.
- Acinetobacter baumannii (A. baumannii) – typically infects people who have weakened immune systems, chronic lung disease, or diabetes. Hospitalized patients are also at greater risk, especially very ill patients on a ventilator, those with prolonged hospital stays or open wounds, and anyone with invasive devices like urinary catheters.
Nearly 20% of pathogens reported from all HAIs in 2009-2010 were multidrug-resistant organisms. – National Healthcare Safety Network
Manual Cleaning Not the Complete Answer
Manual terminal cleaning has been proven to be only partially effective in neutralizing harmful HAI-causing pathogens, according to The Society for Healthcare Epidemiology of America.
- Only 34% of high-touch surfaces are cleaned (terminal clean)
- After cleaning, 71% of VRE and 78% of C. diff infected rooms still tested positive
- Even after four rounds of disinfection with bleach, 25% of rooms were still contaminated with MRSA and Acinetobacter baumannii.
UV-C Disinfection Far More Effective
UV light disinfection robots, like those manufactured by Infection Prevention Technologies, attack HAI-causing pathogens with germ-killing radiation, offering fast, effective whole-room treatment of all direct, indirect and shadow areas, including high-touch surfaces that often are missed in terminal cleaning.
Read reports about the effectiveness of UV-C disinfection:
- Study Finds Ultraviolet Cleaning Reduces Hospital Superbugs by 20 Percent, Infection Control Today, May 28, 2014
- Study Shows Effectiveness of Ultraviolet Light in Hospital Infection Control, Infection Control Today, Oct. 25, 2012
Today, we’d like to get back to a few basics about the relationship between the public-switched telephone network and voice over IP.
The PSTN includes a signaling system, a series of central offices and a distribution network. The PSTN employs a packet-based network called Signaling System 7 (SS7) or Common Channel Signaling System 7 (CCSS7) to determine the best call route, connect the callers and control calls. Private voice network systems like PBX and key systems work with the PSTN to create a hybrid public/private network.
Using IP to signal and transport voice brings several fundamental shifts to traditional voice communications. In the legacy PSTN environment, unused bandwidth cannot be shared; using packetized transmission (like an IP packet) for voice shares unused bandwidth and allows for greater efficiency, thereby reducing cost. IP is the packet protocol of choice for voice because the overall volume of users’ WAN traffic is dominated by IP.
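A classic back-of-envelope calculation makes the packetization trade-off concrete: a G.711 call at a common 20-millisecond packetization interval carries 160 bytes of voice per packet plus RTP, UDP, and IP headers. The header sizes below are the standard ones; the Ethernet figure adds layer 2 framing:

```python
# Per-call bandwidth for G.711 voice packetized over RTP/UDP/IPv4.
CODEC_BPS = 64_000   # G.711 payload bit rate
PACKET_MS = 20       # common packetization interval

payload_bytes = CODEC_BPS / 8 * PACKET_MS / 1000   # 160 bytes per packet
rtp_udp_ip = 12 + 8 + 20                           # header bytes
pps = 1000 / PACKET_MS                             # 50 packets per second

ip_kbps = (payload_bytes + rtp_udp_ip) * 8 * pps / 1000
eth_kbps = (payload_bytes + rtp_udp_ip + 18) * 8 * pps / 1000  # + L2 framing
print(f"IP-level bandwidth per call: {ip_kbps:.1f} kbps")   # 80.0
print(f"With Ethernet framing:       {eth_kbps:.1f} kbps")  # 87.2
```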
In the PSTN, voice network features are delivered to a user on a static pair of copper wires to a static local central office switch or PBX. VoIP allows the traditionally switched services to be delivered to a user anywhere the user is connected.
The three most common ways to deploy private VoIP include the use of VoIP gateways, VoIP-enabled routers or an IP-PBX.
VoIP gateways represent one of the easiest ways to deploy VoIP. A gateway transforms SS7 signaling and traditional voice transmissions into IP-based signaling and transmission techniques. By installing a gateway, a business can connect to an IP or other data network and a TDM network simultaneously.
VoIP-enabling routers means adding a gateway function to a router. Routers can be upgraded to include the gateway and voice-specific features.
IP-PBX and IP-enabled PBX deployments are similar in that they start with PBX features and include a gateway function.
Steve and Larry have co-authored a technology backgrounder about basic telephony and VoIP. If you'd like to read more about the basics or see a presentation featuring Steve and Larry, please see the links below.
Picture it: 2004, RSA Conference. Bill Gates proclaims that passwords are dead, explaining “People use the same password on different systems, they write them down and they just don’t meet the challenge for anything you really want to secure.”
Flash forward to 2011: despite frequent reports of email hacks and enterprise data breaches, the username and password method for authentication is still one of the primary security measures used today. What is wrong with this picture?
Think about how your online life was the first time you connected to the Internet, and compare it to now. Very likely it is a lot less about reading news and chatting on AOL, and a lot more about storing private or work-related information, banking, and filing taxes. In a recent study The NPD Group found that three quarters of U.S. consumers had used a cloud computing service, where you use a provider to actively store information on the Internet, in the past 12 months.
In other words, our online lives are a lot less anonymous and a lot more personal.
The costs of your data being breached, whether personal, financial or corporate, are high. A recent study conducted by the Ponemon Institute found that data breaches cost, on average, $214 per compromised record in 2010, up significantly from $204 per compromised record in 2009. On a personal level, even losing 100 dollars from a checking account to a fraudster can leave you feeling violated and vulnerable.
How can we prove that we are who we say we are when accessing online services? How do we create secure online identities?
When authenticating ourselves to cloud-based applications, banking and government sites, a good first step toward adding a layer of security is one-time password (OTP) authentication. You may have a physical token or a mobile application for this solution, which generates a different password that you must enter at every login. OTPs provide a higher level of identity assurance than a simple password. They make your online identity stronger, which is why this is often called strong authentication.
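To show how little magic is involved, here is a minimal sketch of the time-based OTP algorithm (TOTP, RFC 6238) that many tokens and phone apps implement, using only the Python standard library; the secret is a demo value, and a real deployment would provision a per-user secret over a secure channel:

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, step=30):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step            # 30-second time window
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo secret only; never hard-code a real one.
print(totp("JBSWY3DPEHPK3PXP"))   # a fresh 6-digit code every 30 seconds
```

Because the code changes every 30 seconds, a password stolen today is useless tomorrow, which is exactly the property that makes OTP a meaningful second layer.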
An even stronger way to authenticate your identity online is through a solution that incorporates multi-factor authentication with smart card technology. With this method, you need both a password and a physical token, such as a smart card or encrypted USB token, before you can be logged in. Your physical device contains your unique identity credentials and presents them to the service provider with a high level of assurance that you are you. Even if your password is stolen, a criminal cannot access your online services without your physical token.
In the future, our best bet is a combination of the physical device, something the user knows, and "something we are." This would add a biometric, like a fingerprint, to the mix. This type of multi-factor authentication approach will provide the strongest verification that we are who we say we are.
There are many ways to move toward stronger online identities. We must move forward and promote strong authentication until we can finally pronounce the beloved password a security technology of the past. Then, we can all move on with more confidence in our online existence.
If you got this far then you might be interested in contributing a question to our CIO survey. What would you like to ask Heads of IT about security and authentication? When will they phase out static passwords, perhaps? If you have a suggestion, let us know.
While it might seem like the stuff of science fiction, the cutting edge of technology and science has us on the cusp of some very exciting developments: a real-world version of Harry Potter's invisibility cloak, 3D holograms that can dance above your mobile device, tiny blood-monitoring implants that can call your smartphone to warn you before a heart attack, and even attempts to make immortality a reality by transferring your brain into an avatar.
Mini version of Harry Potter’s invisibility cloak
Physicists have successfully created a small-scale invisibility cloak from a new material called metascreen. Unlike other attempts to make invisibility cloaks a reality, this one doesn’t try to bend light rays around an object. Instead, the researchers used ultra-thin strips of copper tape attached to a flexible polycarbonate film in a diagonal fishnet pattern; this “metascreen” enables a technique called “mantle cloaking.” They managed to “hide” an 18-centimeter-long (7-inch) cylindrical tube from view in microwave light.
“When the scattered fields from the cloak and the object interfere, they cancel each other out and the overall effect is transparency and invisibility at all angles of observation,” said Andrea Alu, a physicist at the University of Texas at Austin. “The advantages of the mantle cloaking over existing techniques are its conformability, ease of manufacturing and improved bandwidth.”
The researchers presented their study in the New Journal of Physics, writing, “Combined with the field penetration inside the cloak, these results pave the way to realizing not only 3D conformal camouflaging and invisibility, but also a practical scheme for non-invasive high-performance near-field sensors.”
Avatar robotic bodies to make you immortal
If Russian multi-millionaire Dmitry Itskov gets his way, immortality will be a reality by 2045. Itskov, the founder of Initiative 2045, wants to transfer the human consciousness into an avatar. But he is not interested in just one robotic body avatar controlled by the human brain; instead he hopes to create telepresence robotic systems and networks in which the human consciousness could be uploaded to different avatars in different locations.
Scientists, technologists and entrepreneurs will meet at the second annual Global Future 2045 to discuss topics such as immortality by 2045, how immortal minds—even repair and replacement brain parts—are simply a matter of time. Potential applications would allow human augmentation enhancements to the physical body and the “significant extension of the lives of individuals whose biological bodies have exhausted their resources.” In a letter seeking United Nations’ support, GF2045 lists the eight key components for study, such as: “1. The construction of anthropomorphic avatar robots—artificial bodies. 2. The creation of telepresence robotic systems for long-distance control of avatars. 3. The development of brain–computer interfaces for direct mental control of an avatar.”
As you see in the image above, one 2045 scenario would be hologram-like avatars.
3D Hologram paves the way for Princess Leia to hover above your smartphone
Speaking of holograms . . . while it’s not quite a moving hologram of Princess Leia in Star Wars, researchers have built a 'hologram-lite' prototype for 3D holograms that may eventually lead to holograms that can “dance above a tablet, mobile phone or wrist watch,” reported the journal Nature. Researchers at Hewlett-Packard Laboratories in Palo Alto said the prototype can send light in 14 different directions for a smooth 3D effect without the 3D glasses, but they are working on another prototype that would send light in 64 different directions.
Physicist David Fattal said, “In principle you would be able to move your head around the display, rotate your head in any direction, and still see a 3D image, much like what you see in Star Wars, with the famous hologram of Princess Leia.” This prototype technology might start off being used for inexpensive 3D digital signage, but research colleague Raymond Beausoleil added, “Perhaps in the not so distant future, it could make its way to smartphones, smart watches and tablets.” If you are interested, they have produced this "Glasses-free 3D display" video to show off their work.
Tiny blood-monitoring implant calls smartphone before heart attack
You may have received hundreds of important calls, but one of the most unique and urgent calls a person might receive could come from a tiny blood-monitoring implant, alerting you that you are about to have a heart attack. A team of Swiss scientists at Ecole Polytechnique Fédérale de Lausanne (EPFL) developed the world's smallest medical implant, 14mm by 2mm, to measure critical chemicals in the blood, reported ExtremeTech. This "tiny lab on a chip" is implanted under the skin and uses Bluetooth to transmit the data to a mobile device. The wireless implant is powered by a skin patch, which is about the size of a credit card.
The implant tracks five substances in the blood, such as troponin that the heart produces hours before a heart attack. The researchers have tested the tiny implant on lab animals, but hope to begin testing on intensive care patients soon. They expect this implantable device to be ready for the commercial market in about four years.
Focusing on the potential of these developments is exciting, but focusing on the potential hacks of such devices would be scary. Let's hope that the researchers build security and privacy in from the start, especially in embedded medical devices, instead of trying to bolt them on afterwards.
In order to build more resilient data centers, many Cumulus Networks customers are leveraging the Linux ecosystem to run routing protocols directly to their servers. This is often referred to as routing on the host. This means running layer 3 protocols like OSPF (Open Shortest Path First) or BGP (Border Gateway Protocol) directly down to the host level, and is done in a variety of ways, by running Quagga:
- Within Linux containers (such as Docker)
- Within a VM as a virtual router on the hypervisor
- Directly on the hypervisor
- Directly on the host (such as an Ubuntu server)
Why Route on the Host?
Why do customers do this? Why should you care?
Troubleshooting layer 2 network problems in the data center has been a persistent challenge in modern networks, so expanding the layer 3 footprint further into your data center by routing on the host alleviates many issues described below.
Consider a network where layer 2 MLAG is configured between all devices. Although this is a common data center design, and can be deployed on Cumulus Linux, it suffers from a number of shortcomings.
- Traceroute is not effective, since it only shows layer 3 hops in the network; this design uses layer 2 devices only. All traceroute outputs, regardless of the path taken, only show the layer 3 exit leafs. There is no way to determine which spine is forwarding traffic.
- MAC address tables become the only way to trace down hosts. For the diagram above, to hunt down a particular host you would need to run commands to show the MAC addresses on the exit leafs, the spine switches and the leaf switches. If a host or VM migrates while troubleshooting, or a loop occurs from a misconfiguration, you may have to show the addresses multiple times.
- Duplicate MAC addresses and MAC flaps become frustratingly hard to track down. Orphan ports and dealing with MLAG and non-MLAG pairs increase network complexity. The fastest way to find a specific MAC address is to check the MAC address table of every single network switch in the data center.
- Proving load balancing is working correctly can become cumbersome. With layer 2 solutions, LACP (Link Aggregation Control Protocol) is very prevalent, so you need to have multiple bonds/Etherchannels between the switches. Performing a simple ping doesn't help because the hash remains the same for layer 2 Etherchannels, which are most commonly hashed on SRC IP, DST IP, SRC port and DST port. In the end, you need multiple streams that hash evenly across the LACP bond. This often means you must buy test tools from companies like Spirent and Ixia.
With a layer 3 design, you can run "ip route show" and see all of the equal-cost routes. It's also possible to use tools like "scamper" to trace all possible ECMP paths, that is, to see which switches are actually load balancing the traffic.
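The flow-hashing behavior itself is easy to demonstrate. The illustrative sketch below hashes the flow 5-tuple to pick one of several uplinks; real switches use vendor-specific hardware hash functions, so this models the behavior rather than any particular ASIC:

```python
# Illustrative ECMP path selection: hash the flow 5-tuple, pick a link.
import hashlib

def ecmp_link(src_ip, dst_ip, src_port, dst_port, proto, n_links):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_links

# A single ping (one flow) always lands on the same uplink:
print(ecmp_link("10.0.0.1", "10.0.1.1", 33333, 80, "tcp", 4))
# Varying the source port spreads flows across all four uplinks:
print({ecmp_link("10.0.0.1", "10.0.1.1", p, 80, "tcp", 4)
       for p in range(33000, 33032)})   # typically {0, 1, 2, 3}
```

This is why a lone ping can never prove load balancing: its 5-tuple never changes, so it always hashes to the same path.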
Three or More Top of Rack Switches
With solutions like Cisco's vPC (virtual Port Channel), Juniper's MC-LAG (Multi-Chassis Link Aggregation) or Arista's MLAG (Multi-chassis Link Aggregation), you gain high availability by having two active connections. Cumulus Networks has feature parity with these solutions with its own MLAG implementation.
High availability means having two or more active connections. However, with high-density servers or hyper-converged infrastructure deployments, it is common to see more than two NICs per host. By routing on the host, three or more ToR (top of rack) switches can be configured, giving much more redundancy. If one ToR fails, you lose only 1/N of your bandwidth, where N is the total number of ToR switches, whereas with a layer 2 MLAG solution you lose 50% of your bandwidth.
Clear Upgrade Strategy
By routing on the host, you gain two huge bonuses:
- Ability to gracefully remove a ToR switch from the fabric for maintenance
- More redundancy by having multiple ToRs (3+)
Let's expand on these two points. With layer 2 only (like MLAG), there is no way to influence routes without being disruptive (that is, some traffic loss must occur). With OSPF and BGP, there are multiple load balanced routes via ECMP (Equal Cost Multipath) routing. Since there is routing, it is possible to change these routes dynamically.
For OSPF, you can increase the cost of all the links, making the network node less preferable.
With BGP, there are multiple ways to change the routes, but the most common is prepending your BGP AS to make the switch less preferable.
Both BGP and OSPF make the ToR switch less preferable, removing it as an ECMP choice for both protocols. However, the link doesn't get turned off. Unlike layer 2, where the link must be shut down and all traffic currently being transmitted is lost, a routing solution notifies the rest of the network to no longer send traffic to this switch. By watching interface counters you can determine when traffic is no longer being sent to the device under maintenance, so you can safely remove it from the network with no impact on traffic.
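Watching those counters can be scripted. The sketch below reads the standard Linux /proc/net/dev counters to confirm an interface has gone quiet before maintenance; the interface name and the "quiet" threshold are illustrative choices:

```python
# Confirm a drained interface has gone quiet by sampling the standard
# Linux /proc/net/dev counters.
import time

def tx_rx_bytes(ifname):
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(ifname + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[8]), int(fields[0])   # tx_bytes, rx_bytes
    raise ValueError(f"interface {ifname} not found")

def confirm_drained(ifname, quiet_seconds=30, max_bytes=10_000):
    before = tx_rx_bytes(ifname)
    time.sleep(quiet_seconds)
    after = tx_rx_bytes(ifname)
    # Allow a small trickle for routing-protocol hellos and keepalives.
    return all(b - a < max_bytes for a, b in zip(before, after))

print(confirm_drained("swp1"))
```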
Because routing on the host uses three or more ToRs, this reduces the impact of a ToR being removed from service, either due to expected maintenance or unexpected network failure. So, instead of losing 50% of bandwidth in a two ToR MLAG deployment, the bandwidth loss can be reduced to 33% with three ToRs or 25% with four.
The redundancy with layer 3 networks is tremendous. In the image above, the network on the left can still operate even if 3 out of 4 ToR switches are down. That is 4N redundancy. The best case for the network on the right is 2N redundancy, no matter what vendor you choose. Layer 3 allows applications to have much more uptime with no risk for outages.
Often when deploying a new application, server or service, there can be a delay between when the new device or service is available and when it is integrated with the network. This is typically a result of the additional configuration required to set up layer 2 high availability (HA) technologies on the upstream switches, which is often a manual process.
Using layer 3 and routing on the host eliminates this delay entirely. Tight prefix list control coupled with authentication can be leveraged on leaf and spine switches to protect the rest of the network from the downstream servers and what they are allowed to advertise into the network. Server admins can be in control of getting their service on the network within the bounds of a safe framework setup by the network team. This is similar to how service providers treat their customers today.
Similarly, when an application or service moves from one part of the network to another, the application team has the ability to advertise the newly moved application quickly to the rest of the network allowing for more agility in service location.
A service or application can be represented by a /32 IPv4 or /128 IPv6 host route. Since that application depends on that /32 or /128 being reachable, the application is dependent on the network. Usually this means the ToR or spine is advertising reachability. If the application is migrated or moved (for example, by VMware vMotion or KVM Migration), the network may need substantial reconfiguration to advertise it correctly. Usually this requires multiple steps:
- Removing the host route from the previous ToR, spine or pair of ToRs or spines so it is no longer advertised to the wrong location.
- Adding the host route to the new ToR, spine or pair of ToRs or spines so it is advertised into the routed fabric.
- Checking connectivity from the host to make sure it has reachability.
These steps are often done by different teams, which can also cause problems. When routing on the host, this happens automatically: Quagga advertises the host routes no matter where the host is plugged in.
One problem with layer 2, especially around MLAG environments, is interoperability. This means if you have one Cisco device and one Juniper device, they can't act as an MLAG pair. This causes a problem known as vendor lock-in, where the customer is locked into a vendor because of proprietary requirements. One huge benefit of doing layer 3 is that by using OSPF or BGP, the network is adhering to open standards that have been around a long time. OSPF and BGP interoperability is highly tested, very scalable and has a track record of success. Most networks are multi-vendor networks where they peer at layer 3. By designing the network down to the host level with layer 3, it is now possible to have multiple vendors everywhere in your network. The following diagram is perfectly acceptable in a layer 3 environment:
Host, VM and Container Mobility
When routing on the host, all VMs, containers, subnets and so forth are advertised into the fabric automatically. This means only the subnet on the connection between the ToR and the router on the host needs to be configured on the ToR. This greatly increases host mobility by allowing minimal configuration on the ToR switch. All the ToR switch has to do is peer with the server.
If security is a concern, the host can be forced to authenticate before BGP or OSPF adjacencies are allowed to form. Consider the following diagram:
In the above diagram, the Quagga configuration does not need to change no matter which ToR you plug the host into. The only configuration that needs to change is the subnet on swp1 and eth0 (configured under /etc/network/interfaces, which is not shown here). This greatly reduces configuration complexity and allows for easy host mobility.
BGP Unnumbered Interfaces
Cumulus Networks enhanced Quagga with the ability to implement RFC 5549, which means you can configure BGP unnumbered interfaces on the host. In addition to the benefit, described above, of not having to configure every subnet, you do not have to configure anything specific on the ToR switch at all; there is no IPv4 address to configure in /etc/network/interfaces for peering.
BGP unnumbered interfaces enable IPv6 link-local addresses to be used for IPv4 BGP adjacencies. Link-local addresses are automatically configured with SLAAC (StateLess Address AutoConfiguration). Each address is derived from the interface's MAC address and is unique to each layer 3 adjacency. DAD (Duplicate Address Detection) keeps duplicate addresses from being configured. This means the configuration remains the same no matter where the host resides; there is no specific subnet used on the Ethernet connection between the host and the switch.
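The address derivation is simple enough to show in a few lines. This sketch computes the modified EUI-64 link-local address from a MAC, which is exactly what makes the peering configuration-free; the MAC is an example value:

```python
# Modified EUI-64: derive the SLAAC link-local address from a MAC.
def mac_to_link_local(mac):
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert FF:FE in the middle
    groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(mac_to_link_local("44:38:39:00:00:01"))   # fe80::4638:39ff:fe00:1
```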
Along with implementation of RFC 5549, Quagga has a simpler configuration, allowing novice users the ability to quickly configure, understand and troubleshoot BGP configurations within the data center. The following illustration shows a single attached host using BGP unnumbered interfaces:
Why Have Networks not Done this in the Past?
If routing on the host has a lot of benefits, why has this not happened in the past?
Lack of a Fully-featured Host Routing Application
In the past, there were no enterprise grade open routing applications that could be installed easily on hosts. Cumulus Networks and many other organizations have made these open source projects robust enough to run in production for hundreds of customers. Now that applications like Quagga have reached a high level of maturity, it is only natural for them to run directly on the host as well.
Cost of Layer 3 Licensing
Many vendors have many license costs based on features. Unfortunately, vendors like Cisco, Arista and Juniper often want to charge more money for layer 3 features. This means that designing a layer 3-capable network is not as simple as just turning it on; the customer is forced to pay additional licenses to enable these features.
The licensing is often confusing (for example, "What is the upgrade path?" "Do I need additional licenses for BGP vs OSPF?" "Does scale affect my price?"), even when the cost is budgeted for. Routing is not something that should cost additional money for customers when buying a layer 3-capable switch. At Cumulus Networks our licensing model is simple, concise and publicly available. | <urn:uuid:c5350d2a-ae7b-4db3-9710-8a5625c64599> | CC-MAIN-2017-09 | https://support.cumulusnetworks.com/hc/en-us/articles/216805858-Routing-on-the-Host-An-Introduction | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00529-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929902 | 2,576 | 2.5625 | 3 |
Mapping the tiny mouse brain is pushing the limits of information technology capabilities.
By the end of the three-year project to map the mouse brain, about one petabyte of data will have been generated, pushing scientists up against a range of technological limitations, according to Mark Boguski, senior director of the Allen Brain Atlas Project, speaking at the recent Bio-IT World Conference.
The staff of 26 scientists and IT specialists went into the work knowing that it would require augmenting existing hardware and software as well as in-house development. This may result in a technology infrastructure that will be useful to other scientists when made publicly available. The goal of the project is to create a 3D molecular map of the mouse brain, constructed by painstakingly imaging slices of the brain.
Read the full story at IDG News Service
DELL EMC Glossary
Two-Factor Authentication (2FA), also called strong authentication, requires a second proof beyond just a password for a user to confirm their identity and gain access to a system, network, or application. Two-factor authentication technology usually requires that two of the following three proofs be met:
- Something the user knows, like a password,
- Something the user possesses, like an ATM card, or
- Something unique about the user, like a fingerprint.
Common two-factor authentication methods include Chip and PIN card readers, tokens, and TANs.
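To make the "something the user possesses" factor concrete, here is a minimal sketch of the time-based one-time password (TOTP) algorithm defined in RFC 6238, which many software tokens implement. It is illustrative only (the demo secret is hypothetical), not a description of any particular vendor's product:

# Minimal TOTP (RFC 6238) sketch: the "something the user possesses"
# factor as implemented by many software tokens. Illustrative only;
# real deployments should use a vetted library such as pyotp.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Derive a one-time code from a shared Base32 secret and the clock."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and token derive the same short-lived code from the shared
# secret, so a stolen password alone is no longer enough.
print(totp("JBSWY3DPEHPK3PXP"))                      # hypothetical demo secret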
When information is particularly sensitive or vulnerable, using a password alone may not be enough protection. A stronger means of authentication, something that’s harder to compromise, is necessary. For example, health care information on a shared computer can be both sensitive and vulnerable. It’s sensitive because its exposure could result in HIPAA violations and fines, not to mention the loss of patients’ confidence in the medical institution. And the information is vulnerable if the shared computer can be used by many people or if it is connected to the Internet. These are the kinds of situations that require two-factor authentication. While biometrics is sometimes used with a PIN or password, hardware authenticators or tokens have traditionally been more widely available and supported.
Large-scale, worldwide scientific initiatives, such as the one that found the Higgs Boson or the one currently probing the depths of proteomics, rely on cloud-based systems both to coordinate work and to absorb computational peaks that cannot be contained within their combined in-house HPC resources.
Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
On July 4 of last year, one of the largest physics experiments in history announced the finding of the Higgs Boson. The discovery was another step in the verification of the Standard Model of elementary particles, and it was largely a result of the data collected by the ATLAS detector that was later stored, analyzed, and used in simulations in computational centers around the world.
Naturally, CERN is equipped with significant computational capabilities as it sifts through the swaths of data created by the LHC. However, a great deal of that data was being sent out to scientists across the world in over a hundred computing centers located in over 40 countries.
As a result, Google stepped forward in August of last year to offer its Compute Engine services for overflow scientific computing periods. According to Panitkin, those spikes would occur before major conferences, overloading the existing computational framework. These overflow spikes represent an intriguing phenomenon, a macro-scale example of a problem that many mid-sized research institutions face on their own. Many of those institutions house their own HPC cluster that handles the majority of their heavy duty computational leg-work. When those resources are exhausted at peak times, they turn to the cloud.
When that problem manifests itself at key times across a research project that spans hundreds of facilities across the globe, that becomes a massive, worldwide HPC cloud computing challenge.
As such, the ATLAS project was invited by Google to test the Google Compute Engine in an effort to complete that challenge.
The experience has gone well so far, according to Panitkin. “All in all, we had a great experience with Google Computing. We tested several computational scenarios on that platform…we think that Google Compute Engine is a modern cloud infrastructure that can serve as a stable, high performance platform for scientific computing.”
The ATLAS detector, diagrammed below, was designed to take in and record 800 million proton-proton interactions per second. Of those 800 million collisions per second, only about 0.0002 Higgs signatures are detected per second, which translates to one signature roughly every 83 minutes. The computing systems have to sift through the huge dataset produced by those nearly one billion interactions per second to find that one distinct pattern.
Thankfully, much of the ATLAS data is instantly filtered and discarded by an automatic trigger system. Were this not the case, the collector would generate a slightly unsustainable petabyte of data per second.
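Some quick back-of-the-envelope arithmetic makes these rates concrete. The per-event size below is inferred from the petabyte-per-second figure rather than stated in the talk, so treat it as a rough illustration:

# Back-of-the-envelope rates behind the ATLAS data challenge. The
# per-event size is inferred from the ~1 PB/s figure, not an official
# ATLAS specification.
collisions_per_sec = 800e6        # proton-proton interactions per second
higgs_sigs_per_sec = 0.0002       # detected Higgs signatures per second

minutes_per_signature = (1 / higgs_sigs_per_sec) / 60
print("One Higgs signature every %.0f minutes" % minutes_per_signature)   # ~83

unfiltered_bytes_per_sec = 1e15   # ~1 PB/s without the trigger system
event_size_mb = unfiltered_bytes_per_sec / collisions_per_sec / 1e6
print("Implied raw event size: ~%.2f MB per collision" % event_size_mb)   # ~1.25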
Adding to the challenge that the enormous amount of data presents is the very particular signature the ATLAS project was looking for. According to Panitkin, sifting through that much data is akin to trying to find just one person across a thousand planets, each with the same population as Earth. To help visualize what that looks like, the above picture represents all the possible signatures while the diagram below shows the one specific indicator of the Higgs Boson.
CERN collects the data and initially distributes it to its 11 tier-one centers, as shown in the diagram below. The cloud, and specifically the Google Compute Engine, enters the picture at tier two, where about two hundred centers across the globe run simulations of their respective sections based on the tier-zero CERN data.
Combining all of those resources into a shared system is essential for scientific researchers, as they cull information from other tests and simulation runs. According to Andrew Hanushevsky, who presented alongside Panitkin at the Google I/O event, the system was aggregated using XRootD. XRootD, coupled with cmsd, was instrumental in combining and managing the thousand-core PROOF cluster made for ATLAS as well as the 4000-core HTCondor cluster for CERN’s collision analysis.
The important aspect was ensuring the system acted as one, as Hanushevsky explained. “This is a B tree, we can split it up anyway we want and this is great for doing cloud deployment. Part of that tree can be inside the GCE, another part can be in a private cloud, another part in a private cluster, and we can piece that all together to make it look like one big cluster.”
With that in place, the researchers could share information across the network at an impressive 57 Mbps to the Google Compute Engine.
Finally, according to Panitkin, the computations done over GCE were impressively accurate. The system reported, according to Panitkin, “no failures due to Google Compute Engine.”
The best science requires extensive collaboration. Global projects such as the one that found the Higgs Boson mark the pinnacle of that collaboration, and these efforts can only grow stronger with the betterment of large-scale cloud-based computing services like Google Compute Engine.
BALTIMORE, MD--(Marketwired - Feb 24, 2014) - Kiddie Academy® says that Dr. Seuss's birthday, March 2, 2014, is a great day for families to think about ways to give their children the gift of a lifelong love of reading.
"While learning to read is a gradual process, involving letter sounds, vocabulary, grammar and comprehension, learning to enjoy reading is something that children do best at home," said Richard Peterson, vice president of education for Kiddie Academy® Educational Child Care. "When children read only in school, they may see reading only as school 'work'; but when children see their parents and siblings reading at home for entertainment or sit down with a family member to share a favorite book, they get the message that reading can be something you do for fun. And that's an essential step in raising a lifelong reader."
Each year, on Dr. Seuss's birthday, the National Education Association (NEA) sponsors its "Read Across America Day" in honor of the well-known children's book author. The message of this celebration is clear: encourage children to read -- for enjoyment, for knowledge, for relaxation, for life. Kiddie Academy® shares three tips that families can use to bring the Read Across America celebration into their own homes:
- Be a reading role model. Make sure your child sees you reading to gain knowledge and for entertainment. Talk about your favorite childhood books and introduce them to your child.
- Make reading together interactive. Don't rush to "finish" the book. Ask your child questions about illustrations and characters; encourage your child to make predictions and observations about the story; invite your child to "retell" the story; and take turns reading out loud.
- Have your child read you a book. Even pre-readers will enjoy "reading" to you if you pick a book that they know well. Turn the pages as they tell you the story, prompted by their memory and the book's illustrations.
"The benefits of making reading a part of your child's everyday world go far beyond 'learning to read,'" said Peterson. "Activities like reading together, sharing favorite books, and discovering new interests and ideas together not only help boost your child's literacy skills; they help build treasured family memories."
Kiddie Academy® is a leader in education-based child care, offering full- and part-time care, before- and after-school care and summer camp programs to families and their children. For more information, visit www.kiddieacademy.com.
About Kiddie Academy®
Since 1981, Kiddie Academy® has been a leader in education-based child care. The company serves families and their children ages 6 weeks to 12 years old, offering full time care, before- and after-school care and summer camp programs. Kiddie Academy's proprietary Life Essentials® curriculum, supporting programs, methods, activities and techniques help prepare children for life. Kiddie Academy is using the globally recognized AdvancED accreditation system, signifying its commitment to quality education and the highest standards in child care. For more information, visit www.kiddieacademy.com.
About Kiddie Academy® Franchising
Kiddie Academy International, Inc. is based in Maryland and has nearly 120 academies located in 23 states, including two company-owned locations. Approximately 70 additional academies are in development, with 15 to 20 new locations slated to open each year. For more information, visit www.kafranchise.com.
The original article on RAID, entitled "The Quick Skinny on Raid," appeared on Ars back in October of 1998. We recently decided that we should beef this section up. Why? For one, after Panders' review of the Promise FastTrak66 IDE RAID controller, we quickly came to realize that there's a serious hunger out there for information on RAID, but simultaneously, there's a serious lack of reliable and easy-to-understand info ready to be consumed. So, without further ado, we remove the word 'quick' from the title, and present to you an almost completely reworked article, with new graphics, detailed ECC explanations, and a fresh new scent.

The basics behind RAID
For all you Quiz Bowl participants, let's get the fun acronymical knowledge out on the table first: RAID stands for Redundant Array of Inexpensive Disks.
Seriously. No, wait, I'm not joking. That's really what it means. (You wouldn't believe how many people e-mail in saying that it's some combination of random, really, real-time, redundant, array, assembly, interconnected, independent, inter-relation, devices, etc).
Despite what Techweb and other sites say, the original name as proposed by the original researchers is just that (just consult Patterson, Gibson and Katz). There's a darn good explanation for the name, and for the genesis of RAID per se. Since the inception of the hard drive, disk I/O performance has been a persistent performance bottleneck. Here's the abstract from the above mentioned source (Patterson, et al):
Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
Additionally, statistics indicate that the hard drives of yesterday were not quite as reliable as drives that are manufactured today (I'm not sure if that says a lot or not). So, if RAID was born from the desire for more speed and more data security over 12 years ago, it didn't really take the world by storm until recently, due to the fact that the third letter in RA*I*D is finally becoming true: disks really are inexpensive. As a result, interest in utilizing RAID in different scenarios, even on workstations, has risen greatly. Heck, right now, you can buy a server-class machine from Dell, with RAID, for under $3,000. Absolutely unthinkable just 3 years ago!
There are varying types of configurations for RAID, and each is assigned a different notation based on the cleverly conceived "RAID X" scheme (you know it sounds cool, even if it's not the most useful designation scheme). Now, I gotta let you know - I'm going to be thorough. Some of the implementations I'm going to mention exist solely on chalkboards, having been neglected by vendors, systems developers, and even the media. Some are out there floating around in obscurity, and others are the buzzwords that legends are made of (at least in the world of storage technology). First up we'll talk about the two forms of RAID you're most likely to see on a workstation near you: RAID 0 and 1.
RAID 0, the Striped Set
Ah, good ole' RAID 0. Also known as the Striped Set without parity, RAID 0 is a kind of RAID half-breed. It features one of the most prominent processes known to RAID devices--striping (as in, you have stripes on your shirt). Striping comes in myriad forms and flavors (as you'll see), but it can be understood loosely as this:
Using an array of disks, that is, several hard drives connected to a controller of some sort, one facilitates the rapid reading and writing of data by separating data into consecutive blocks that are each read or written to different physical drives (or spindles), in order.
Imagine writing the first 7 letters of the alphabet with 3 writing hands instead of one. If your brain was able to pre-sort the data so that hand one started on 'd' the second it finished 'a', and so on, you would theoretically have a write-rate three times faster than with one hand.
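The same round-robin bookkeeping can be sketched in a few lines of code. The three-disk layout and block-level striping granularity here are hypothetical (real controllers stripe in multi-sector chunks), but the mapping logic is the essence of RAID 0:

# Hypothetical RAID 0 address math: map a logical block onto a
# (disk, stripe) coordinate, the way a controller fans writes out
# across its spindles. Real controllers stripe in multi-sector
# chunks, but the round-robin idea is identical.
NUM_DISKS = 3

def stripe_map(logical_block, num_disks=NUM_DISKS):
    disk = logical_block % num_disks       # which spindle gets this block
    stripe = logical_block // num_disks    # how far down that spindle
    return disk, stripe

for block, letter in enumerate("abcdefg"):
    disk, stripe = stripe_map(block)
    print("'%s' -> disk %d, stripe %d" % (letter, disk, stripe))
# 'a', 'd', 'g' land on disk 0; 'b', 'e' on disk 1; 'c', 'f' on disk 2.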
As a result, system I/O performance improves greatly because the data is spread out over X spindles, on possibly more than one channel. Better performance will be observed as the physical disk-to-controller ratio approaches one. But then again, so will the cost. One SCSI RAID controller can support several disks efficiently, and as my Promise RAID controller review showed, IDE RAID 0 isn't too bad either.
It is important to note that this config is not truly a valid RAID implementation because it's not fault-tolerant (and thus does not have the complementary overhead work). This is really important to understand. If one drive fails in a RAID 0 array, all data on the array is completely lost! In that respect, RAID 0 can be dangerous. If you have a RAID 0 implementation with 3 10GB drives, you've got three different points of failure that could trash 30 GB of data.
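To put rough numbers on that risk: if each drive fails independently, the chance of losing the whole striped set grows quickly with the number of spindles. The 3% annual failure rate below is made up for illustration; check your drive's datasheet:

# Why RAID 0 is dangerous: the array is lost if ANY member drive
# fails. The 3% annual failure rate is a made-up illustration, not
# vendor data.
annual_failure_rate = 0.03

def array_loss_probability(n_drives, afr=annual_failure_rate):
    """Chance that at least one of n independent drives fails in a year."""
    return 1 - (1 - afr) ** n_drives

for n in (1, 3, 8):
    print("%d drive(s): %.1f%% yearly chance of losing the array"
          % (n, 100 * array_loss_probability(n)))
# 1 drive: 3.0%; 3 drives: 8.7%; 8 drives: 21.6%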
This doesn't stop most crazy performance freaks, of course (like me). RAID 0 is a good entry-level to small environment alternative to dumping huge monies on fancier tech, especially in instances where I/O performance is more important than file redundancy. You all do backups anyway, right?
The thing to note with RAID 0 is its postulation: striping data across disks is a performance boosting configuration. We'll see RAID 0 re-interpreted in a number of ways in the next few pages. But before that, we have to meet RAID 0's antithesis, RAID 1.
In a special IT Blogwatch Extra, Richi Jennings watches bloggers celebrate the Web's 20th birthday. Not to mention the movable type rap...
Carrie-Ann Skinner swells with patriotic pride:
While the internet was developed in the 50s, the web, or the network of content servers that allow information to be shared, was not created until March 1989 [when] its British employee Tim Berners-Lee first proposed a "universal linked information system" that formed the beginnings of the worldwide web.

Berners-Lee authored "Information Management: A proposal" which was the starting point for the world wide web. Berners-Lee was trying to find a solution for the hundreds of CERN employees that needed to communicate, share information, equipment and software.
Let Marshall Kirkpatrick now praise famous men:
On March 13th, 2009 the World Wide Web will turn 20 years old. Sir Tim Berners-Lee invented this world-changing layer on top of the Internet on this day in 1989. It's hard to overstate the impact this young technology has had already and it's even more exciting to think about where it's going in the future.

Berners-Lee has some great ideas about where the web should go next. His vision is of a major advance that could serve as the foundation for innovations that we can't even imagine today ... Thank you Tim, for what you've done for the world already.
Marie Boran quibbles:
Technically, if we're talking about the difference between birth and conception, then conception may be the word.
...Meanwhile the name World Wide Web and its acronym brings to mind a podcast with the author and television personality Stephen Fry who bemoans the fact that the acronym takes even longer to say than the phrase itself. But this need not have been the case ... early suggested names for the web were: Information Mesh, Mine of Information and The Information Mine. The last one would have been simply called TIM.
Charles Cooper 'spains:
[The 1989 proposal] is amazing to read with the benefit of 20-20 hindsight. But it would take Berners-Lee another couple of years before he could demo his idea. Even then, the realization of his theory had to wait until the middle of the 1990s when Jim Clark and Marc Andreessen popularized the notion of commercial Web browsing with Netscape.

And as prescient as the CERN document was, not even Berners-Lee could imagine where his basic design was about to lead.
Jon Silk offers photos, but makes the usual schoolboy error:
In celebration of the Internet's 20th birthday (always arguable as to date and technology but let's just go with it, shall we?), here are the pics I took on a trip to CERN last year.

Notable things: The pad where Tim drew the Internet, and the sticker on his PC that says 'Don't turn this off - it's running the Internet' (or words to that effect).
What's next, Paul Miller?
This Web of Documents has exceeded most people's wildest expectations since reaching mainstream awareness on the back of graphical tools such as the Mosaic web browser, but the data behind so many decisions, analyses and visualisations largely remains inaccessible even today.

Berners-Lee spoke at TED last month ... [He] describes the notion of Linked Data and attempts to illustrate the advances that could be made if we were all able to contribute and consume data in the same way that we do today with documents, images, and the like ... This is not some new Web; not a replacement for the Web of today. Rather, it's an evolution of today's Web that makes today's applications and interactions richer and more capable.
Previously in IT Blogwatch:
Richi Jennings is an independent analyst/adviser/consultant, specializing in blogging, email, and spam. A 23 year, cross-functional IT veteran, he is also an analyst at Ferris Research. You can follow him on Twitter, pretend to be Richi's friend on Facebook, or just use boring old email: firstname.lastname@example.org.
Research from Norton estimates the global price tag of consumer cybercrime at more than US$113 billion annually[4], enough to host the 2012 London Olympics nearly 10 times over. The cost per cybercrime victim has shot up to US$298: a 50% increase over 2012. In terms of the number of victims of such attacks, that's 378 million per year – averaging more than 1 million per day.
"Domain Validated" (DV) SSL certificates pose a direct threat to consumers on the Internet.
Cybercriminals frequently use DV SSL certificates to impersonate real ecommerce websites for the purpose of defrauding consumers.
This paper will explain SSL, the different types of certificates, how cybercriminals use DV certificates to steal personal and financial data, and what can be done to thwart this tactic.
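As a preview of one theme, the certificate types differ in what they vouch for: a DV certificate proves only control of a domain, so its subject typically lacks a vetted organizationName, while OV and EV certificates carry one. The following sketch uses Python's standard ssl module to apply that heuristic; the hostname is hypothetical, and a production check should inspect certificate-policy OIDs rather than rely on this shortcut:

# Rough heuristic sketch: DV certificates validate domain control
# only, so their subject usually lacks organizationName, while OV/EV
# certificates include it. Illustrative only; a robust check would
# inspect certificate-policy OIDs instead.
import socket
import ssl

def subject_has_org(hostname, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()   # parsed dict of the verified cert
    subject = dict(pair for rdn in cert["subject"] for pair in rdn)
    return "organizationName" in subject

host = "www.example.com"               # hypothetical host
label = "OV/EV (vetted organization)" if subject_has_org(host) else "likely DV"
print("%s: %s" % (host, label))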
Chapter 3: Common IPv6 Coexistence Mechanisms
As the name suggests, transition mechanisms help in the transition from one protocol to another. From the perspective of IPv6, transition basically means moving from IPv4 to IPv6. One day, IPv6 networks will completely replace today’s IPv4 networks. For the near term, however, a number of transition mechanisms are required to enable both protocols to operate simultaneously. Some of the most widely used transition mechanisms are discussed in the following sections.
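As a taste of the simplest coexistence mechanism, dual stack, the sketch below (not from the book) opens a single IPv6 listening socket that also accepts IPv4 clients, which appear as IPv4-mapped addresses; support for clearing IPV6_V6ONLY varies by platform:

# Minimal dual-stack sketch (not from the book excerpt): one IPv6
# listening socket that also serves IPv4 clients, which appear as
# IPv4-mapped addresses such as ::ffff:192.0.2.1. Platform support
# for clearing IPV6_V6ONLY varies, so treat this as illustrative.
import socket

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)   # allow v4-mapped
srv.bind(("::", 8080))
srv.listen(5)
print("Listening on [::]:8080 for both IPv4 and IPv6 clients")

conn, addr = srv.accept()
print("Client connected from %s" % addr[0])   # e.g. ::ffff:203.0.113.7
conn.close()
srv.close()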