If surveillance technology can help save the lives of American GI Joes and Janes, then that is a good thing. Privacy concerns seep in when that same technology moves from military-only use into the public domain for the "detection of safety and threats." A wooden match is small, a fact best appreciated when one is held in an adult's hand, and most folks don't look at something that tiny and think "ultimate sound probe" with a "sharp memory."
The surveillance capabilities of the "matchstick-sized sensor" are so impressive that militaries worldwide have deployed the acoustic vector systems developed by Microflown Technologies. It can "hear" and pinpoint the sound of a gunshot, a drone, or even pick out and record one specific conversation in a crowd. Back in 2011, when Col. Harold Jacobs of the Royal Netherlands Army first saw the system, he was "flabbergasted." He told Scientific American that he was "really surprised about the simplicity, the amazing accuracy, the size and all the possibilities."
One of those possibilities is the scenario described by NewScientist: the "matchstick-sized sensor that can pinpoint and record a target's conversations from a distance." At first, it was discovered "by chance" that "the device can hear, record or stream an ordinary conversation from as far away as 20 meters." Even though that's a distance of about 65.6 feet, the "hearing" capabilities were improved upon until it could pinpoint and record a specific conversation from 82 feet (25 meters) away. "Work is now underway to increase the range."
Hans-Elias de Bree, co-founder of Dutch firm Microflown and inventor of the sensors, told Scientific American that the sensor can pinpoint "a normal, 60-decibel conversation from up to 50 meters away;" for the metric-challenged, that means it can pinpoint a normal conversation from a whopping 164 feet away. The Engineering Toolbox describes a normal voice in social situations as about 55 to 60 decibels.
Furthermore, the Microflown system has a "sharp memory." It can identify "a sound source's unique signature after hearing it once. This information could be used, for example, to create a profile on an elusive sniper by plotting when and where attacks happen." Although that's a fine example for military purposes, there would not, hopefully, be any snipers in the average crowd. That sharp memory could also target the average John Doe if the surveillance device were mounted to a park bench to covertly record audio.
"Given a battery and a tiny antenna, the sensor could be attached to traffic lights, a shrub or park bench," explained NewScientist. "Such systems can be teamed with surveillance cameras."
In a non-military video about land-based detection of safety threats, Microflown AVISA showed an example of the system working alongside surveillance cameras. It was capable of pinpointing a person in a shot-panic scene, a robbery scene, a cheering scene later marked in the video as normal crowd behavior, and abnormal crowd behavior in a crowd-gathering scene. Videos six through 10 of the playlist are marked as private and can only be accessed with permission from Microflown.
Microflown used the below image in its leaflet [pdf] describing the product.
The match-sized sensors can be mounted on drones, helicopters, ships, vehicles, or "even on a soldier's epaulets." If the sensors were mounted on drones flying over America, on buildings, on surveillance cameras or on a park bench, it would raise concerns of potential surveillance mission creep. An expert from the University of Kansas told NewScientist, "It will be possible to record a parade of people on a busy sidewalk all day using a camera and acoustic sensor, and tune into each conversation or voice, live or via stored files."
Besides recording conversations, Microflown's match-sized sensors perform whether airborne or on the ground, pinpointing "a 155-millimeter howitzer" (175 decibels) from up to 40 kilometers (nearly 25 miles) away; "an 81-millimeter mortar" (180 decibels) from 25 kilometers (about 15.5 miles) away; and "5.56-millimeter small arms fire" (155 decibels) from five kilometers (3.1 miles) away.
When it comes to next-generation surveillance and “seeing,” one of the most impressive yet scary drones is DARPA’s ARGUS-IS. It has a 1.8-gigapixel camera and was described as the equivalent of having up to 100 Predator drones looking at an area the size of a medium-sized city at once. Zooming in allows 65 detailed windows within that area to be opened simultaneously. Within each, objects as small as six inches can be seen on the ground. Although there was no mention of ARGUS’ capabilities to “hear,” that doesn’t necessarily mean it can’t. And it’s not too hard to imagine a Microflown AVISA device mounted on ARGUS to see and to hear for the ultimate creepiness in surveillance.
Microflown's technology looks good overall when you think about it saving the lives of men and women in our military, but the possibility of invading our private lives with its "unwelcome" capabilities prompted security expert Bruce Schneier to add, "It's not just this one technology that's the problem. It's the mic plus the drones, plus the signal processing, plus voice recognition."
Rules of the Game
By Baselinemag | Posted 2008-04-30
Virtualization technology can deliver cost savings and improve IT performance, but it also introduces new security concerns. In this summary of a Burton Group report, security expert Pete Lindstrom examines the security considerations unique to virtualized IT environments.
There are five immutable laws of virtualization security. It’s essential to understand them and use them to drive security decisions. They are:
1. Attacking a virtual combination of operating systems and applications is exactly the same as attacking the physical system it replicates.
The beauty of a virtual machine is that it acts just like a physical system. However, in most environments, that means it can be attacked in the same way. Any data on the VM can be stolen, and if the VM has network access, it can be used as a stepping-stone to attack other systems.
2. A virtual machine poses a higher security risk than an identically configured physical system running the same operating system and applications.
This corollary to the first law accounts for the additional vulnerability of a virtual system’s controlling software, the hypervisor. Because the hypervisor monitors and responds to a VM, it is susceptible to attack. So it’s important to recognize the risks inherent in the virtual environment and to offset them in other ways.
3. Virtual machines can be made more secure than similar physical systems when they separate functionality and content.
When two processes share the same memory space, an attack against one process can impact the other. One way to benefit from virtualization is to separate functions and data into isolated operating environments. Such segregation helps reduce the risk added by the virtualization software that’s part of the second law.
4. A set of virtual machines aggregated on the same physical system can only be made more secure than separate physical systems by modifying the VM’s configurations to offset hypervisor risk.
While separating resources reduces risk, combining resources will initially increase risk (see #2). At this level of aggregation, VMs must be reconfigured to attain the same level of risk achieved through the third law. Turning off services, adding controls and separating content can help reduce overall risk.
5. A system containing a trusted virtual machine on an untrusted host poses a greater risk than a system containing a trusted host with an untrusted VM.
Attacks at lower levels pose greater risks than those at higher levels, because higher-level programs can be tricked into believing assertions about trust and authenticity. It is important for deployments of trusted VMs in untrusted environments to consider the implications and harden the VM image accordingly.
Traditional security tactics teach us to build our infrastructure like a castle: strong, high, impenetrable walls with a few points of securable entry, surrounded by a moat and sitting on top of a hill.
With elementary mechanisms in place — like antivirus software and a firewall — many businesses think they are well protected. Unfortunately, organizations are leaving themselves exposed to data loss, breach, user error, disaster and more.
Moving from a false sense of security to actual security requires action. While employing perimeter measures and accounting for the unpredictable human element in a security solution are large parts of a comprehensive plan, the best plan is one that assumes a breach, hack or disaster will happen.
By being prepared, IT security rests on actionable and tangible plans instead of just the hope that a breach won’t happen. Replacing that false sense of security with real solutions is a constant work in progress, but there are some basic things every IT team can do.
While it may seem obvious, the first line of defense is a properly configured firewall and antivirus software. But what is the definition of a perimeter for your organization? Does your network consist solely of on-premise servers and desktops? Or does it include cloud-based servers and applications? And what part do mobile devices and off-site computers play?
Perimeter security has expanded beyond basic and fundamental tools. Intrusion prevention systems and breach detection can help to prevent losses and alert an IT team when a breach has occurred. In today’s world of bring your own device (BYOD), access to a company’s network and data should be carefully managed. As walls and easily controlled network cables no longer define the workplace, MAC filtering or other methods can further control the perimeter.
Perimeter security is where many companies stop, and that is why they fail to provide a complete security solution. Human error has been found to be a leading cause of data breaches. It can be something as simple as unsafe password practices, such as reusing the same password on multiple systems (especially between personal and professional systems), or as common as accessing company networks and data from public WiFi.
Training employees to be cognizant of proper security protocols is paramount when moving beyond a false sense of IT security.
While we’ve been told for years to be aware when downloading email attachments or clicking on links, hackers have increased the sophistication of their attacks. For example, spearphishing has led to many high-profile security breaches. While phishing tries to lure individuals to click on links and provide credentials, in spearphishing a hacker goes to great lengths to impersonate co-workers and friends — perhaps by monitoring activity on social networks — in order to get employees to give up information and access to networks and data.
Despite all the efforts to prevent a breach, the reality is a breach is likely. Accepting this fact is the first step in making sure reaction time is swift in getting an organization running and back to business again.
The most important component in this effort is the backup system. When a breach occurs, perpetrators may not just steal information, but may also completely disrupt business operations. The goal should be to have systems up and running again in minutes or hours, not days.
Relying on IT teams to manually recover data, applications or servers means prolonged downtime and a much greater impact on the bottom line.
Quicker recovery times can be achieved using a backup system that supports point-in-time snapshots of entire systems, including the data and application state. The ability to go back in time allows for the resumption of operations in the wake of an attack. This is increasingly necessary, as ransomware attacks have become so commonplace. Rather than being held hostage by the attackers, one can go back to a point in time before the impacted server was infected and immediately resume operations.
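To make the point-in-time idea concrete, here is a minimal Python sketch of the selection step a restore tool performs. The snapshot timestamps and catalog structure are invented for illustration and do not correspond to any particular backup product:

```python
from datetime import datetime

# Hypothetical snapshot catalog; real products track far more metadata.
snapshots = [
    datetime(2017, 1, 10, 2, 0),
    datetime(2017, 1, 11, 2, 0),
    datetime(2017, 1, 12, 2, 0),
]

def latest_clean_snapshot(snapshots, infection_time):
    """Pick the newest point-in-time snapshot taken before the infection."""
    clean = [s for s in snapshots if s < infection_time]
    if not clean:
        raise ValueError("no snapshot predates the infection")
    return max(clean)

# If ransomware hit mid-day on Jan 11, restore the 2:00 AM Jan 11 snapshot.
print(latest_clean_snapshot(snapshots, datetime(2017, 1, 11, 14, 30)))
```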
(Note: This article was also posted on CIOdive.com)
States of Matter
Earth Science Intro Unit
I will be able to define and describe matter and the four states of matter.
What are the states of Matter?
All matter is made of atoms.
Atoms are in constant motion.
Particle movement determines the state of matter.
States of Matter.
Solid: particles very close together; vibrate in place.
Liquid: particles just far enough apart to allow movement; they move and have more kinetic energy.
Kinetic energy is the energy of motion.
Gas: particles are far apart, moving very fast, with the most kinetic energy.
Particles move faster when heat is applied and slower when heat is taken away.
What are the states of Matter?
Plasma: roughly 1,000°C to 1 billion °C
A good conductor of electricity
Is affected by magnetic fields
The most common state of matter in the universe
STATES OF MATTER
Solid: tightly packed, in a regular pattern; particles vibrate, but do not move from place to place.
Liquid: close together with no regular arrangement; particles vibrate, move about, and slide past each other.
Gas: well separated with no regular arrangement; particles vibrate and move freely at high speeds.
Plasma: has no definite volume or shape and is composed of electrically charged particles.
I will be able to explain what a change in state is and describe how matter changes state.
What is a change in State?
Changes in States
A change of a substance from one physical state to another.
Water vapor to liquid water.
Liquid water to ice.
Energy Gained or Lost
To change state, energy must be added or removed.
Gaining or losing energy changes the temperature of a substance.
What happens to matter when a change of state occurs?
Energy / Motion of Particles.
Solid: particles vibrate in place, close together.
Liquid: particles further apart.
Gas: particles very far apart.
Melting: solid becomes liquid; energy is added and particles speed up.
Melting point: the temperature at which a substance changes from a solid to a liquid.
Vaporization: the process by which liquids or solids change to a gas.
Sublimation: the change in state from a solid directly into a gas (skips the liquid phase); atoms gain energy and escape into the air as a gas.
Example: dry ice.
Evaporation: add heat to a liquid and atoms gain energy; particles can escape from the surface of the liquid if enough heat is added.
Boiling: a rapid change from a liquid to a gas (vapor).
Boiling point: the temperature at which this occurs in a liquid.
Condensation: the change in state from a gas to a liquid; cooling a gas makes its particles lose energy.
Deposition: the change in state from a gas directly to a solid; atoms lose energy and no liquid forms in the process.
Example: water vapor turns into ice crystals.
Freezing: liquid becomes solid through a loss of energy; particles slow down.
Freezing point: the temperature at which a substance changes into a solid.
Water freezes at 0°C (32°F).
Mass is Conserved
The amount of matter stays the same during a phase change.
The mass does not change when its state changes.
Each state contains the same amount of matter.
Plasma contains equal amounts of positive ions and negative electrons.
In a world where reliance on electronic data transmission and processing is becoming more prevalent every day, it is of critical importance for organizations to guarantee the integrity and confidentiality of mission-critical information exchanged over communication networks.
Contrary to a false perception, intercepting information transmitted over an optical fibre cable – an optical fibre is a thin glass fiber which transmits light to carry information – is not only possible but also not very difficult in practice. “Tapping a fibre-optic cable without being detected, and making sense of the information you collect isn’t trivial but has certainly been done by intelligence agencies for the past seven or eight years” explains John Pescatore, VP of Security at the Gartner Group and a former US National Security Agency analyst. “These days, it is within the range of a well-funded attacker, probably even a really curious college physics major with access to a fiber-optics lab and lots of time on his hands” adds Pescatore. Bending an optical fibre is indeed sufficient to extract light from it. Optical taps are readily available from a variety of manufacturers and inexpensive.
Optical fiber cables have replaced copper cables for all high-bandwidth links, and they are becoming more prevalent every day in telecommunication networks worldwide. Organizations almost certainly rely on optical fibers to transmit some, if not all, of their information. Because of this vulnerability, optical links carrying critical information must be identified and protected with appropriate countermeasures.
As telecommunication links are intrinsically vulnerable to eavesdropping, cryptography is routinely used to protect data transmission. Cryptography is a set of techniques that can be used to guarantee confidentiality and integrity of communications. Prior to its transmission, information is encrypted using a cryptographic algorithm and a key. After the information has been received, the recipient reverses the process and decrypts the information. Even if he intercepted the encrypted information, an eavesdropper would not be able to gain knowledge about it without knowing the cryptographic key.
Current cryptographic techniques are based on mathematical theories. In spite of the fact that they are very widespread, they do not offer foolproof security. They are in particular vulnerable to increasing computing power and theoretical advances in mathematics. These techniques are thus inappropriate in applications where long-term confidentiality is of paramount importance (financial services, banking industry, governments, etc.).
Quantum cryptography was invented about twenty years ago and complements conventional cryptographic techniques to raise the security of data transmission over optical fibre links to an unprecedented level. It exploits the laws of quantum physics to reveal the interception of the information exchanged between two stations. According to the Heisenberg Uncertainty Principle, it is not possible to observe a quantum object without modifying it. In quantum cryptography, single light particles – also known as photons – which are described by the laws of quantum physics, are used to carry information over an optical fibre cable. By checking for the presence of disturbance, it is possible to verify whether a transmission has been intercepted. Because of this, quantum cryptography was identified in 2002 by the MIT Technology Review and by Newsweek magazine as one of the ten technologies that will change the world.
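To make the disturbance-detection idea concrete, here is a toy Python simulation of the intercept-resend scenario in a BB84-style key exchange. It is a deliberately simplified sketch, not the actual protocol used in any product: with no eavesdropper, the sifted bits agree; with an eavesdropper measuring and re-sending each photon, roughly a quarter of the sifted bits disagree, which is how the interception is revealed.

```python
import random

def sifted_error_rate(n_photons=100_000, eavesdrop=True):
    """Toy intercept-resend model of BB84-style quantum key distribution."""
    kept = errors = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)            # sender's raw key bit
        basis_a = random.choice("+x")         # sender's encoding basis
        photon_bit, photon_basis = bit, basis_a
        if eavesdrop:                         # eavesdropper measures, then re-sends
            basis_e = random.choice("+x")
            if basis_e != photon_basis:       # wrong basis -> random outcome
                photon_bit = random.randint(0, 1)
            photon_basis = basis_e
        basis_b = random.choice("+x")         # receiver's measurement basis
        result = photon_bit if basis_b == photon_basis else random.randint(0, 1)
        if basis_a == basis_b:                # "sifting": keep matching-basis bits
            kept += 1
            errors += result != bit
    return errors / kept

print(f"error rate without eavesdropper: {sifted_error_rate(eavesdrop=False):.3f}")  # ~0.000
print(f"error rate with eavesdropper:    {sifted_error_rate(eavesdrop=True):.3f}")   # ~0.250
```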
This technology can be used to exchange keys between two remote sites connected by an optical fibre cable, and to confirm their secrecy. The keys are then used with secret-key algorithms to securely encrypt information. With such an approach it is possible to guarantee future-proof data confidentiality based on the laws of quantum physics. Deploying it on critical links thus raises the information security level of an organization.
Another Tool in War on Terror?
So for example, a Predator drone, allegedly belonging to the U.S. Air Force (but really belonging to the CIA or some other murky three-letter agency), records images of activity far below. Because of a variety of factors, these images aren't really all that clear. But because the drone can be fitted with a GPU-based computing module, the initial image processing can take place in real time before the image is transmitted to the drone's controllers and perhaps from there to members of the armed forces. This may have been why Osama bin Laden's courier, who was being tracked by a drone, could be identified from such a long distance. It was the courier who gave the Navy Seals the indication they needed of bin Laden's whereabouts.

Of course, this is just one example and perhaps only a theoretical example of what you can do with GPU-based parallel processing. Because the current Nvidia technology can put 512 processing cores on a single chip, the opportunities are substantial. NASA is using this technology for space science applications, and Nvidia has published some weather modeling modules on its Website. These are highly complex mathematical processes and could take a very long time on a traditional computer, even a supercomputer. One feature I was shown is the ability to create extremely complex flight simulations in a few minutes, versus nearly a month of computer time using Xeon processors. The simulation I saw was of a night landing by a jet fighter on an aircraft carrier in bad weather. This is a problem so complex that many pilots simply can't handle it, and simulators are a way to save lives.

But as the use of GPU computing modules spreads, so does the type of software that you'll find supporting it. Right now, most applications are based on Nvidia's CUDA programming. But that could change. There are rumors that Microsoft is looking at Nvidia's GPU modules for part of its high performance computing initiative, for example. That could mean, among other things, that you could see a really, really fast version of Windows.

Editor's Note: This story was corrected to state the correct number of processing cores that can be built into a single chip with current Nvidia technology.
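For readers who haven't seen data-parallel code, here is a toy Python illustration of the idea behind GPU acceleration: the same operation applied independently to every element at once. A real CUDA kernel would run this across hundreds of GPU cores rather than in NumPy on a CPU, and the frame contents and threshold value here are arbitrary stand-ins, not anything from an actual drone pipeline:

```python
import numpy as np

# A fake 8-bit "frame"; real drone imagery would be far larger.
frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)

# One data-parallel step: every pixel is compared against the threshold
# independently, which is exactly the shape of work a GPU core grid handles.
THRESHOLD = 128
mask = frame > THRESHOLD             # elementwise, no Python loop
highlights = np.where(mask, frame, 0)

print(mask.mean())                   # fraction of "bright" pixels, ~0.5 here
```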
So was Nvidia one of the reasons we found Osama bin Laden? As you can imagine, neither Nvidia, the CIA nor the military will say. But that courier had to be followed somehow and it needed to be done remotely. There are only so many ways that this can be accomplished.
According to a paper from the Center for Strategic and International Studies (CSIS), the vast majority of successful security breaches to corporate networks are made using basic techniques. By implementing just four critical security controls you could remove 85 to 90% of threats. In a previous article I talk through the benefits of patching applications and operating systems, which cover two of the controls. The control that I’ll be covering today is application whitelisting.
Whitelisting is a term for a list of approved items. In IT, the term usually refers to application whitelisting, where there is a list of known-good software titles that employees may use, and any software not on the whitelist is not allowed to run. There are many advantages to doing this; zero-day attacks, for example, are significantly reduced, if not eradicated altogether, because any unauthorised executable code placed on a system is rendered useless: it cannot run and is therefore incapable of delivering its intended payload. This is the method that most of today's malware, spyware and viruses use to spread themselves around.
There have been various articles over the years claiming that Anti-Virus (AV) is dead and the main reason for this is the reactive approach AV takes, i.e. AV software relies on a catalogue of virus definitions and the constant updating of that catalogue in order to stay effective. In short, the virus has to be created, deployed, and someone infected in order for a definition to be created to detect and remove it downstream – not the most proactive model – whereas whitelisting gives complete control over what applications are actually run; if it is not on the list, the software does not run. Simple, yes?
Well not really, let me explain:
Everything above is true; an Application Whitelist does give complete control and if the software isn’t explicitly on the list then that software will not run. However creating the list in the first place is not an easy task. Applications are not individual files that run, they are a collection of possibly hundreds of files that all need to be able to run in concert, and if just one of those files isn’t on the whitelist there is a strong chance that the application won’t start or will fail at some point during operational usage. It is imperative therefore that your whitelist is crafted very carefully in order to be effective. One other thing to bear in mind is that your operating system will come under the same rules that your applications will, so adding every file that your operating system needs is a must. There are few things more annoying than setting up a whitelist and testing it on your machine, only to find that the operating system won’t load because some of the files it requires are being blocked by your whitelisting application.
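To illustrate why the list is more than a set of application names, here is a minimal Python sketch of a hash-based whitelist. The directory paths are placeholders chosen for illustration, and real products (including the 1E tools discussed below) work quite differently; the point is simply that every file an application and the operating system need must be fingerprinted, and execution is allowed only when a file's hash is already known.

```python
import hashlib
from pathlib import Path

def fingerprint(path):
    """Hash file contents; keying the whitelist on hashes (not names)
    stops a renamed or tampered binary from slipping through."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_whitelist(approved_roots):
    """Walk every approved directory -- application files *and* the
    operating system's own files -- and record each file's hash."""
    allowed = set()
    for root in approved_roots:
        for f in Path(root).rglob("*"):
            if f.is_file():
                allowed.add(fingerprint(f))
    return allowed

def may_execute(path, allowed):
    """Deny by default: a file runs only if its hash is on the list."""
    return fingerprint(path) in allowed

# '/opt/approved-apps' and '/usr/bin' are hypothetical example roots.
allowed = build_whitelist(["/opt/approved-apps", "/usr/bin"])
# An update that adds even one unhashed file breaks its application until
# the new file is fingerprinted -- hence the maintenance burden discussed below.
```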
In short, setting up an Application Whitelist is not an easy task, and in a great deal of cases the effort of getting it right and the risk of getting it wrong is prohibitive, so many companies simply don’t bother.
However, by understanding your current risk and what applications are already installed within your environment, you have a big piece of the puzzle. Knowing what is actually being used and removing the waste gives you another big piece, as you can hone your whitelist on what is relevant. Technologies such as Security Benchmark and AppClarity from 1E give you information about every application that is installed within your environment. If an application runs from a temporary location or has no publisher information, for example, then there is a good chance that it is not something you want running in your environment. This kind of information is critical when trying to understand which applications must be added to your Application Whitelist.
Creating the whitelist is only part of the story; constant maintenance is required, because as applications are installed and updated, new files are introduced into the environment that must be added to your Application Whitelist. That's where 1E's enterprise app store, Shopping, can give you the flexibility that helps whitelisting projects succeed: it allows users to select the applications they need and have them installed by a known-good mechanism that can be added as a trusted source, rather than the user carrying out the installation manually, in which case the new files would be blocked from executing.
As I’ve said previously, Application Whitelisting is not an easy task. Knowing up front your risk, what applications exist within your environment, what is actually being used and removing the unused applications all significantly help you get your Application Whitelist in order. Controlling the applications that are allowed into your environment using an enterprise app store lowers the maintenance that a whitelist needs by making sure that apps are only introduced using a known mechanism.
The take home message is this: Application Whitelisting is a proven method in protecting your environment and has been highlighted by the CSIS as one of four key security controls. Let 1E technologies help you protect your environment, to find out how we can assist you please contact us at info@1E.com.
In the third part of this series I will look at the fourth control: Removing Unneeded Administrative Privileges.
There is no initialization needed to execute POS, INDEX, or FIND.
POS and INDEX work the same way; only the order of their arguments is reversed.
X = POS('XYZ', 'BCDEFXYZ')     /* looking for XYZ in BCDEFXYZ */
The value of X should be 6.
X = INDEX('BCDEFGXYZ', 'XYZ')  /* looking for XYZ in BCDEFGXYZ */
The value of X should be 7 (note this haystack is one character longer).
FIND locates words within a sentence:
X = FIND('A LOOK OUT THE WINDOW', 'WINDOW')
WINDOW is the 5th word of the phrase, so X will equal 5.
After you create a REXX statement (like any of the 3 above), you can enter SAY X on the next line. When you run the REXX, it will tell you the value it is coming up with so you can compare what REXX is finding against what you know to be true.
You can also use TRACE "I" in your REXX and step through the REXX commands one at a time, pressing Enter after each. TRACE OFF turns tracing back off.
Business Process Definition
A business process is a collection of linked tasks which find their end in the delivery of a service or product to a client. A business process has also been defined as a set of activities and tasks that, once completed, will accomplish an organizational goal. The process must involve clearly defined inputs and a single output. These inputs are made up of all of the factors which contribute (either directly or indirectly) to the added value of a service or product. These factors can be categorized into management processes, operational processes and supporting business processes.
Management processes govern the operation of a particular organization’s system of operation. Operational processes constitute the core business. Supporting processes such as human resources and accounting are put in place to support the core business processes.
The definition of the term business process and the development of this definition since its conception by Adam Smith in 1776 has led to such areas of study as Operations Development, Operations Management and to the development of various Business Management Systems. These systems, in turn, have created an industry for BPM Software which seeks to automate process management by connecting various process actors via technology.
A process requires a series of actions to achieve a certain objective. BPM processes are continuous but also allow for ad-hoc action. Processes can be simple or complex, based on the number of steps, the number of systems involved, and so on. They can be short- or long-running; longer processes tend to have multiple dependencies and a greater documentation requirement.
NOAA announced today that it has activated its newest weather and climate supercomputers, increasing the computational might used for the nation's climate and weather forecasts by 320 percent. The new IBM machines process 14 trillion calculations per second at maximum performance and ingest more than 240 million global observations daily. The primary and backup systems, ranked 36th and 37th in the world on the top 500 list of the world's fastest computers, will enable the NOAA National Weather Service to deliver more products, with greater accuracy, at longer lead times. These supercomputers will consume more data and generate highly advanced models that may enable meteorologists to begin making significant inroads in cracking hurricane intensity forecast challenges.

These machines also will process data from Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC) satellites, a series of six satellites launched in 2006 that will provide NOAA National Weather Service forecasters with better understanding of jet streams and related storm systems -- keys for the early prediction of storms like those that affected Denver and the Pacific Northwest in December and January.
"Better physics, better models, better data, and faster and more powerful supercomputing are the foundation for making better weather and climate forecasts," said retired Navy Vice Adm. Conrad C. Lautenbacher, Ph.D., undersecretary of commerce for oceans and atmosphere and NOAA administrator. "NOAA's partnership with IBM is a great case study of the public and private sectors working together to save lives."
The supercomputers will harness 160 IBM System p575 servers, each with 16 1.9-gigahertz Power5+ processors. The machines also will contain 160 terabytes of storage on DS4800 disk storage systems.
Google Cultural Institute Commemorates U.S. Civil Rights Movement
Google teamed with the National Archives to bring historic documents, photos and film clips on the civil rights movement to the online Google Cultural Institute.

The Google Cultural Institute is highlighting the U.S. civil rights movement through a fascinating collection of documents, photographs and film clips in commemoration of the 50th anniversary of the Civil Rights Act of 1964. The new online collection is being unveiled just as the LBJ Presidential Library holds a three-day Civil Rights Summit from April 8 through 10 in Austin, Texas, to mark and discuss some of the most monumental legislation of our nation's history. The new online exhibit and the Civil Rights Summit were announced by Susan Molinari, vice president of public policy for Google, in an April 8 post on the Google Public Policy Blog. Google is providing technology to help support the summit by live streaming panel discussions, presentations and other activities that are ongoing at the event, she wrote. The presentations will also feature comments from four former U.S. presidents. "Each day will also feature heroes from the civil rights movement, the sports arena and the music industry, as well as panels on new civil rights challenges around immigration rights, gay rights, women's rights and so much more," wrote Molinari. "We hope you can tune in, but if you miss the live stream, you can find all of the content on the LBJ Library's YouTube page."
For those who want to explore the history and eventual passage of the Civil Rights Act of 1964, the Google Cultural Institute's new collection is a great starting point. "Much of the content on the site is from the LBJ Presidential Library and features images, letters, telegrams, and video from January 1961 when President Kennedy first takes office to July 1964 when President Johnson signs the Civil Rights Act into law," wrote Molinari.
With an eye toward keeping satellites as sprightly as possible and their instruments functioning at optimum temperature levels, Lockheed Martin says it has developed a cooling system three times lighter than its predecessor.
The Lockheed microcryocooler weighs approximately 11 ounces, which the company says will help drive down satellite expenses, as it can cost up to ten thousand dollars a pound to put a satellite into orbit around Earth.
Lockheed says the microcryocooler operates like a refrigerator, drawing heat out of sensor systems and delivering highly efficient cooling to small science satellites orbiting the Earth and on missions to the outer planets.
"Temperatures as low as -320 F are required for infrared instruments and the coolers must operate with minimum power and long lifetimes," said Ted Nast, Lockheed Martin fellow at the Advanced Technology Center in Palo Alto in a statement. "That is why we constantly pursue a deeper understanding of the dynamic effects of temperature on cutting-edge technology and develop new systems, like our microcryocooler, that will perform successfully within the demands and constraints presented by severe, operational thermal environments."
Lockheed Martin said it has flown more than 25 cryocoolers in space over the past 40 years - most recently on NASA's WISE and Gravity Probe-B missions. In addition to space applications, the microcryocooler can be utilized in tactical systems, such as unmanned aerial vehicles and tanks.
NASA wrote in a white paper from 2006 that many of its space instruments require cryogenic refrigeration to improve dynamic range, extend wavelength coverage, or enable the use of advanced detectors to observe a wide range of phenomena, from crop dynamics to stellar birth. Reflecting the relative maturity of the technology at these temperatures, the largest utilization of coolers over the last fifteen years has been for instruments operating at medium to high cryogenic temperatures. For the future, important developments are focusing on the lower temperature range in support of studies of the origin of the Universe and the search for planets around distant stars.
The Mid-Infrared Instrument (MIRI) on the still-in-development James Webb Space Telescope for example is expected to operate in the neighborhood of -400 F.
In a number of ways, the "Internet of Things" is a perfect description of the phenomenon it aims to describe. The spectrum of applications that this radical technology incorporates is vast, and it is growing with every passing day. For some, the word "things" in IoT may sound like a bit of a lazy term, as though a better description wasn't available, but the term is clearly filled with great merit and potential. Over time, many have seized the opportunity to interpret, retell, and share individual perspectives on what the "Internet of Things" is. A big part of that story is what this phenomenon can do for business, our communities, and individuals. In a way, the term parallels the rise and usage of the terms "web," "internet," and "cloud" in its fluidity and importance. As many ways as there are to describe IoT, there are just as many ways to implement it, and as observed in many technological endeavors, there is a worthwhile road that is not always chosen because it is seemingly more difficult than others.
From its simple beginnings, IoT is growing into a massive industry that only now is hitting its stride. New applications, new utilities, new avenues of information, and a growing number of intelligent devices are emerging at a staggering rate. With each manifestation of improvement and entries into the market, IoT is helping to deliver answers to everyday problems, improve our quality of life, and increasingly connect us all regardless of our locations. As witnessed with the emergence of smartphones, the integration of practical technologies into our lives comes with a price that must be satisfied in order to sustain any practicality at all. The ultimate formula for a scalable IoT and big data bridge into the future is based on science and understanding the elements that go into the big picture.
This science bears the elements of maintaining sustainable performance, securing the integrity of the technology, as well as the information it is entrusted to carry. Without these fundamentals, the Internet of Things would be useless.
The Elements of Modern Data
Make no mistake about it: with little compromise, IoT technology is expected to always work. This expectation is produced by the growing presence of IoT applications and the increasing role of IoT in our lives. As this growing synergy between people and technology continues on its journey, the elements that make IoT dependable are clear.
There are many stacks of books and courses dedicated to data and application designs. Whether the case is utilitarian, minimalist, comprehensive, or anywhere along a wide spectrum, every IoT application makes choices about how to present itself and deliver data to the user in the best manner possible. Consistency is a key factor, and along with user experience, the approach that is taken in the design can make or break an IoT technology.
IoT applications have to be available at all times. The general expectation is that data can be collected, delivered, and shared regardless of what time one chooses to connect. In the world of IoT, the challenge of presence and availability is best answered by cloud-based hosting solutions. Ubiquitous, efficient, and scalable, the Internet of Things has come to depend on the presence, capabilities, and power of the cloud.
Capacity of Data
Cloud technologies can help ensure that applications are always available, but that is only part of the equation. Data transaction is also quite critical in that the delivery of data is what makes IoT work in the first place, whether the goal is to deliver, collect, process or receive information. The takeaway is that architectural design is a major building block of the right IoT infrastructure.
The term scale is often thrown about with little understanding of its true necessity and implications. Scale is an exercise that is facilitated by healthy architectural computing resource design and precision engineering. Many struggle with the fact that the capacity to effectively produce IoT services is capped by the realities of costs, complexity and application growth. Thus, an architect of IoT systems must strategically implement infrastructure capacity where and when it is needed. This is a task that must be made simple and it must be readily reversible. In other words, a properly built IoT infrastructure is able to expand and contract where and when it is needed by means of a flexible infrastructure, programming, and network capabilities. This precise maneuvering is made possible by constant measuring, monitoring, and reporting from the side of IoT that only a few every get to see.
It is clear that the speed of data makes for a positive user experience. IoT applications are designed with optimized connectivity in a number of targeted network conditions. Devices themselves may have to connect over technologies such as Bluetooth, 3G, 4G, and Wi-Fi networks. New standards for connections are constantly evolving, pushing the boundaries of range, speed, and reliability with each passing phase. On the architecture side, speed is also a factor. IoT applications are pushing and increasing amount of data through massive networks. In addition, IoT applications are evolving towards a more edge-centric model, meaning that because of latency and the size of data being pushed, an increasing amount of data is being processed in proximity to the end client rather than a central processing stage. In any example, the quality of network is a constant theme in the positive experience of IoT, whether it comes down to the nature of wireless carrier, wireless access device, or the network backbone that a hosting provider is utilizing.
Security is one of the most critical and enabling elements of IoT itself. Properly secured IoT applications are critical for the reliable and safe operation of devices that connect in the IoT world. It is easy to imagine that with such a wide variety of IoT applications in a still-nascent industry that security standards continue to vary quite widely and may never see a unified approach. Such is the nature of an open environment. The traffic that courses through the internet and the infrastructure however enjoys a solid history with decades of development and experience at its sails. That is because protecting data has been an issue since the earliest days of computing. Over the years, a growing number of concerns, threats, security technologies, devices, regulations, and more have evolved to produce a robust ecosystem of security that has served IoT well. From the endpoint to the cloud however, there are other concerns. Most IoT endpoints are designed to be either easy to use or have no human interaction at all. There is no opportunity to input passwords or credentials as most devices are designed to be low-power, single use by design. This footprint is both an advantage and a potential threat. The leading security designs and approaches are using elements such as data segmentation, end-to-end security, and network monitoring to deal with this new spectrum of threats.
IoT, Big Data, and Hybrid
There is only one answer in the industry that can handle all of these elements in one cohesive platform. It’s hybrid computing. Hybrid computing answers the questions and demands of a surging big data and IoT industry with a number of key technologies.
Architecture Meets Data Design – Whether it is a simplistic IoT interface, or complex arrays of sensor-based data, hybrid cloud technology delivers the resources of network, compute, and storage exactly where it needs to be, when it needs to be there. This is the ‘cloud layer’ of the hybrid formula and it was made to do this exact task efficiently. On the other hand, for applications that are constructed with heavy calculations and data processing, a multi-point or single-point layer of dedicated resources delivers high capacity where it is needed. This is the dedicated element in hybrid cloud computing.
Availability Answered – Thanks to the nature of hybrid cloud, there is no single network point of failure to be concerned with.
Data Unleashed – Hybrid cloud is multi-directional, meaning that wherever data needs to go, it can be structured as needed.
- Between nodes
- From endpoint to central repository (and back)
- Delivering status and control to an army of endpoint devices
- Variations and combinations of these mean the examples are endless.
Scalability – Hybrid cloud answers the IoT and big data challenge with the limitless capability to add resources as required.
Speed – An IoT and big data environment counts on speed to produce the best experience. With strategic implementation of cloud, network, and dedicated resources, hybrid cloud cannot be beat when it comes to structuring speed where it counts, minimizing hops to public-facing networks and target audiences.
Security – The days where cloud security was a strangling point of concern are long gone, but not forgotten. Hybrid cloud heeds the call for cutting edge security by allowing the enterprise to exercise control over data, implementing security as it needs fit over the most sensitive information, and to have complete ownership of the data it sees as critical.
Codero Pushing the Boundaries
Codero’s operational formula allows businesses to decide for themselves how to implement IoT and big data according to their needs. A business can start simple and grow as needed, which is the premise behind our hybrid approach and consultative sales approach. By providing uncompromised excellence in support, business can rely on a partner that is always there, ready to help when required from inception, to design, to test, and throughout production. Codero also boasts a unique “On-Demand” hybrid cloud technology that allows for rapid scale, rapid configuration, and the best experience possible for any IoT and big data application. Codero is also uniquely positioned for a vast network of localized network and data centers, so that as compute gets increasing closer to the edge, a variety of resources are available in order to provide optimal results. When IoT technology is tested and designed to work as expected, it is both fundamental and critical that the architecture be as powerful as can be.
The answer for a great deal of IoT and big data deployments is hosted, on-demand hybrid cloud technology. Do you agree? Let's start the conversation: visit our Facebook page and tell us what you think!
In the history of computing, one aspect that has eluded capture is a consistent encoding scheme.
There have been many attempts to standardize a character encoding scheme, but each has had strong downsides until about 20 years ago.
The invention of UTF-8 in 1992 by Ken Thompson based on Unicode was the solution to the original problem of universal encoding. It provided the ability to represent all possible characters in the smallest space possible. It also preserved compatibility with ASCII. Other solutions did not have full coverage or used too much storage.
|Encoding|Year|Bytes per char|Notes|
|---|---|---|---|
|UTF-8|1992|1-4|Most efficient and compatible|
|ASCII|1964|1|Early standard. Made popular by PC|
|UCS-2|1993|2|First Unicode encoding|
|UTF-16|1996|2,4|UCS-2 model with more chars|
|UTF-32|1996|4|Simple, but uses much memory|
Based on timing, Windows adopted UCS-2 first with Windows NT in 1993.
This was seen as an improvement over the traditional single byte character sets of DOS and Windows 3.x. Unfortunately, it added complexity to Windows development and support due to the dual model of Unicode and non-Unicode support. It led to the classic doubling of Windows APIs. One version of the API appended with ‘W’ (Wide/Unicode) and the other ‘A’ (ASCII). A brief explanation can be found here.
Citrix XenApp and XenDesktop are built on Windows models. Even Citrix Linux VDA has its roots in the Windows encoding schemes. Overall this makes sense because it is the history of what happened and it made sense to share code between platforms.
This philosophy has changed for Linux VDA version 1.1. Instead of trying to preserve the Microsoft encoding schemes, everything is now converted internally to UTF-8. Initially this might not seem to have much relevance to customers. However, this provides some immediate benefits and also some longer term improvements.
Because UTF-8 is now the core encoding in Linux VDA, it no longer has to convert strings internally. This improves performance slightly and also reduces the risk of losing something in translation. It also reduces the footprint of how much space the strings need.
Another benefit is that it allows for a native encoding on Linux. Messages coming from administrators are now allowed to be displayed using full Unicode support. Even though the message arrives in UTF-16 from Studio, the message is converted to UTF-8 and displayed using GTK+.
Another reason to use UTF-8 is that it is now possible to support full Unicode text transfers with the clipboard. Again, even though the clipboard is receiving UTF-16 text, it is automatically translated to UTF-8 for the sake of the Linux applications. This is very important for Asian languages that typically have large character sets.
A side benefit is improved username handling. It is now possible to support usernames that include non-ASCII characters. This was tried as part of the Linux VDA 1.1 tests. The username in that case had characters above the BMP range (>64K) which is considered fairly rare but valid.
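A quick Python illustration of why this matters follows; the sample string is arbitrary, chosen only because it mixes CJK characters with one code point above the BMP:

```python
s = "名前\U0001F600"           # two CJK characters plus U+1F600, which lies above the BMP

utf8 = s.encode("utf-8")       # the Linux VDA's internal form
utf16 = s.encode("utf-16-le")  # the form arriving from the Windows side

print(len(s))                  # 3 code points
print(len(utf8), len(utf16))   # 10 bytes of UTF-8, 8 bytes of UTF-16 (one surrogate pair)
print(utf16.decode("utf-16-le") == utf8.decode("utf-8"))  # True: the conversion is lossless
```

Everything in the BMP costs at most three UTF-8 bytes, and anything above it round-trips through a UTF-16 surrogate pair, which is exactly the case the username test above exercised.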
Beyond these changes, the logging and tracing components now use UTF-8. This allows for full character set usage for log messages and trace output. There is still more work needed to localize the log messages to non-English languages but at least it is enabled and will display UTF-8 content.
Internally it simplified the code in many places. This will allow for more consistent handling of the strings and less trouble with conversion.
As to the future, it provides a base for Linux VDA to better support any language. It also allows for the possibility of having a server that supports different languages at once with different users. The overall biggest potential is full integration with Linux related to text.
To read more from the Linux Virtual Desktop Team, check out all of our posts here.
Security has been top of mind for everyone in the last year. The most secure communication layer on the web (SSL/TLS) was torn apart by security researchers, and we figured out that most of what we were using was not as secure as it sounded. TLS 1.1 and 1.2 became the need of the hour, and NetScaler quickly responded by implementing support in MPX appliances. Now, the NetScaler MPX-FIPS platform also becomes stronger with support for the TLS 1.1/1.2 protocols. MPX-FIPS platforms are used in sensitive deployments where data security is critical, and all such deployments can take advantage of the new support now.
The rule is simple! Data confidentiality is directly proportional to the protocol and cipher used.
When I saw the 2014 biopic The Imitation Game which is set in World War II, I was astonished to see how Alan Turing successfully broke the super strong ciphers created by the Enigma machine, which the Nazis used to secure their wireless messages. His efforts were helpful in saving thousands of lives around the world.
In the modern context, sensitive data, from credit card numbers to patient health information to social networking details, need protection when transmitted across an insecure network. Data with defence and federal agencies are considered even more sensitive and thus they need stronger cryptographic infrastructure. National Institute of Standards and Technology (NIST) mandates such agencies to comply with Federal Information Processing Standard (FIPS) to meet their strict security requirements.
What is FIPS 140-2?
FIPS Publication 140-2 is a U.S. government computer security standard used to accredit cryptographic modules. NIST issued the FIPS 140 publication series to coordinate the requirements and standards for cryptographic modules, covering both hardware and software.

Citrix NetScaler is also available in a FIPS variant that complies with FIPS 140-2 Level 2. It gives organizations additional security by protecting cryptographic keys from unauthorized access; keys that are misappropriated could lead to a data security breach.

NetScaler FIPS appliances are available in four form factors with a Pay-as-You-Grow model: MPX 9700/10500/12500/15500.
Why TLS 1.2?
SSL 3.0 and TLS 1.0 both have implementation vulnerabilities, exposed by attacks such as POODLE and BEAST. SSL 3.0 is now obsolete, and TLS 1.0 is no longer secure enough given advances in processing and computing power; ciphers that were practically impossible to break earlier can be broken now.

TLS 1.1 and TLS 1.2 address the security issues of SSL 3.0 and TLS 1.0, and NIST recommends that government servers and clients move to them.

The U.S. government is taking proactive steps on data security, and NIST has asked federal agencies to move to TLS 1.2 before October 1, 2015.
What is the fuss about?
NetScaler FIPS appliances support the TLS 1.1 and TLS 1.2 protocols starting with 10.5.e MR build 55.8007.e, released in Q2 2015. This brings the product into compliance with the NIST mandate for federal agencies to use TLS 1.2 from October 2015 and enables NetScaler to be deployed in government organizations. Already-deployed NetScaler FIPS appliances should be upgraded to this build or later to support TLS 1.2 and continue their glorious run in the government sector.
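To verify that an upgraded appliance really negotiates TLS 1.2, a quick client-side probe can help. The sketch below is a generic check using Python's standard ssl module, not a NetScaler-specific tool; the host name is a placeholder for your virtual server:

```python
import socket
import ssl

HOST, PORT = "vserver.example.com", 443        # placeholder virtual server address

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older, per NIST guidance

with socket.create_connection((HOST, PORT)) as sock:
    # The handshake fails with an SSLError if the server cannot speak TLS 1.2+.
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version(), tls.cipher())     # e.g. TLSv1.2 and the negotiated cipher
```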
As always, NetScaler keeps evolving and keeps its customers in a win-win situation. So beat the fear, uncertainty and doubt, and start using TLS 1.1/1.2 with NetScaler FIPS appliances.
Important details once more –
TLS 1.1 and TLS 1.2 support on NetScaler FIPS appliances.
Build – 10.5 55.8007.e
Download link – https://www.citrix.com/downloads/netscaler-adc.html
P.S. Stay tuned – happiness spreads faster than you notice! | <urn:uuid:96bf830e-8607-4c32-b7eb-9314552f9211> | CC-MAIN-2017-04 | https://www.citrix.com/blogs/2015/05/13/beat-the-fud-tls-1-2-is-here-on-the-netscaler-mpx-fips-platform/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00191-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943997 | 864 | 2.625 | 3 |
The Federal Communications Commission should get more involved in environmental issues, particularly as the digital TV transition causes an "enormous influx" of potentially dangerous old analog TVs into landfills, FCC Commissioner Jonathan Adelstein said in an exclusive interview with the new Green Electronics Daily.
Adelstein called for an interagency digital television task force on environmental issues including the FCC, the Environmental Protection Agency, the Department of Commerce, and state and local governments. He said his call for the task force has gone unheeded, and currently "there is not a structure for us to work in a cooperative fashion across different agencies, except for ad hoc meetings."
The FCC's main role in dealing with discarded TVs is informational, focusing on educating consumers on environmentally friendly alternatives for disposing of them, Adelstein said.
But the FCC might play a bigger role in environmental matters by promoting telecommuting and teleconferencing, since they reduce travel, Adelstein said. He said the best approach is promoting high-speed Internet access. "I think the biggest issue before the FCC is the need for national broadband policy," Adelstein said. Other countries are "leapfrogging" the U.S. in terms of bandwidth, he said.
Consumer Electronics Firms Unlikely to Cooperate on E-Waste
The first issue of Green Electronics Daily also reported indications that the major consumer electronics companies won't merge their e-waste programs. According to executives, Sony shows no sign of willingness to add its recycling program to a joint venture of Panasonic, Sharp, Toshiba and 12 other companies. But the venture is moving ahead with its expansion plans.
The same issue reports on improved fuel cell technology from Sharp, efforts to reduce toxic materials and power consumption for TVs, a new e-waste consultant for Seattle, federal legislation on accelerated depreciation for smart grids, and G-8 and International Telecommunication Union efforts to internationalize green electronics issues. | <urn:uuid:6c1c3bcb-2f33-4532-bb4b-cf94cd709e78> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/FCC-Official-Presses-for-Bigger-Agency.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00457-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940968 | 398 | 2.53125 | 3 |
Improving Soil Fertility through Cover Cropping, Green Manure, and Intercropping
Modern industrial agriculture as we know it was ushered in by Haber, Bosch, and BASF (Haber Process, 2016) in the 1930s, and by John Deere's moldboard plow a century earlier. A little over 80 years later, time-tested farming practices for maintaining soil fertility have been replaced with synthetic fertilizers. This change allowed unprecedented growth in farm productivity and, along with other factors, led to the demise of the diversified family farm: farmers were no longer constrained by nutrient requirements and could retool their farms to focus on the most profitable cash crops.

"Between 1930 and 2000 U.S. agricultural productivity (output divided by all inputs) rose by an average of about 2 percent annually causing food prices to decrease. The percentage of U.S. disposable income spent on food prepared at home decreased, from 22 percent as late as 1950 to 7 percent by the end of the century." (Intensive farming, n.d.)

These changes have had a profound impact on humanity: we have reduced hunger and starvation, increased nutrition, and improved life expectancy, among other things. However, the tenets of modern agriculture have also had a negative impact on our soils and the environment in general. Nationally, soil has been lost to erosion, excess nutrients are damaging our lakes, rivers, and oceans, and the structure and composition of our soil have degraded (Intensive farming, n.d.; Dabney, 2001).

Our soils are a finite and perishable resource and a legacy we leave to future generations; it is up to us to ensure we enhance them. The approaches we take must not diminish the productivity gains we have seen, or we will be unable to feed an ever-increasing human population. By reintroducing the core practices of cover cropping, green manures, and intercropping, we can improve soil fertility while reducing or even eliminating our dependency on synthetic fertilizers.
Cover crops are crops grown for purposes other than harvest and may include many traditional cash crops such as rye, oats, alfalfa, and winter wheat (Unknown, n.d.). They are grown to improve soil fertility and quality, reduce erosion, improve the soil's water-holding capacity, and manage pests, diseases, and weeds (Cover crop, n.d.). The choice of cover crop is often based on soil needs, time of year, rotational limitations, the termination process, and cost, so it must be made carefully. Marianne Sarrantonio (Greg Bowman, 2007) recommends the following process for selecting a cover crop: 1) clarify your primary needs, 2) identify the best time and place for your cover crop, and 3) test a few options. Creating a targeted goal will help narrow the selection; for example, do you want to provide nitrogen, add organic matter, improve soil structure, or reduce erosion? Once you have established a goal, you need to determine when and where to plant your cover crop; this will help narrow the species and varieties that will meet your goal. A list of common cover crops and their uses is detailed in Table 1 and Chart 1.

Soil quality is often measured by salinity, pH, organic matter, texture, nutrient levels, and micro flora and fauna, among other characteristics. For a crop to flourish, soil quality and fertility should be well aligned with the needs of the crop; if they are not, yields will suffer. Compacted soils, which limit water and air penetration, can be addressed by crops that loosen soils, and pans can be remediated by deep-rooted plants that penetrate and break them up. Salinity, water-holding capacity, pH, and soils low in flora and fauna (macro and micro) can all be helped by cover crops that add organic matter to the soil (Patrick, 1957). For example, a cover crop composed of oats, daikon radish, and field peas can improve soil fertility by adding 2,000 to 4,000 lbs of dry matter per acre (DM/A) from the oats (unknown, n.d.), 5,000 DM/A plus 90-300 lbs/A of nitrogen from the field peas (Field Peas, 2007), and 7,000 DM/A from the radishes, with the added benefit of reduced compaction from the radishes' root penetration (Gruver, 2016). Taken together, the dry matter, the nutrients scavenged and fixed, and the loosening of the soil improve overall soil fertility; when side benefits such as habitat for beneficial organisms and weed suppression are added in, the improvements are compounded.
Green manure is a crop that is grown, terminated before maturity, and incorporated into the soil to improve soil conditions and/or fertility for a subsequent cash crop; it may also be classified as a cover crop. The crops chosen often fix nitrogen, add organic matter, and/or scavenge soil nutrients. When they are terminated and incorporated into the soil, the nutrients they fixed or scavenged are released through decomposition by heterotrophic bacteria (Green manure, n.d.), and additional organic matter is added to the soil. Since natural processes mineralize the nutrients and break down the organic matter, care must be taken to ensure that decomposition has progressed to the point where nutrients are available rather than bound, allelopathic compounds have degraded, and decomposers will not interfere with the subsequent crop.

A green manure should be chosen in much the same way as a cover crop, with special consideration paid to termination characteristics. Crops that are difficult to terminate present special challenges when used as green manures because of the time, effort, and processes their termination requires; green manures that are not fully terminated will not provide the desired fertility boost and may compete with subsequent crops. Note that a single crop acting as a green manure may not provide all the necessary soil improvements. Just as nature prefers diversity, green manures can be composed of multiple species that, working together, improve multiple soil characteristics.

Composite green manures are composed of plant varieties chosen for how they improve the soil and work with each other. For example, an oat/pea/vetch green manure will add nitrogen to the soil (peas and vetch), add organic matter and weed control (oats and vetch), and scavenge soil nutrients (vetch). The oats in this combination provide scaffolding for the peas and vetch. The vetch decomposes quickly when terminated and provides nitrogen and nutrients (potassium and phosphorus), followed by the peas (nitrogen) and lastly the oats. This combination has been shown to fix 90-150 lb/A of nitrogen (Eric Sideman, 2013). Other common composite green manures are radish/oats/pea, where the radish helps with soil compaction, or buckwheat substituted for the oats in a quick-turnover rotation with fast decomposition (Eric Sideman, 2013). Common green manures can be found in Table 1.
Intercropping is the practice of growing more than one crop in close proximity to increase the productivity of the land or of the primary crop; in this paper we focus on the latter. Cover crops and green manures can provide substantial benefits to cash crops, but some of those benefits are limited in duration. The nutritional needs of long-duration cash crops (fruits, berries, biennials, perennials, long-season annuals, etc.) will likely extend beyond the period during which decomposing cover crops and green manures provide their maximum benefit. Intercropping addresses this need by allowing farmers to maximize the productivity and yield of their primary crop with supporting crops that provide direct benefits such as nutrients, disease and pest management, and support for soil flora and fauna, among others.

Consider a traditional cash crop like corn and its need for nitrogen. Green manures or cover crops planted in the fall and incorporated into the soil provide a ready source of nitrogen for the seeds after germination and into the early life cycle of the corn. Over time, however, those nitrogen stores are used up by the corn and other soil microbes. While the corn has had a great start, reaching its full potential requires additional accessible nitrogen. The corn will scavenge some nitrogen, but it is unlikely to find enough for optimal growth. Corn following an alfalfa rotation has been shown to have nitrogen reserves of ~170 kg/ha, which still leaves a deficit of up to 34 kg/ha (Sawyer, 2010). Intercropping corn with kura clover can fix between 74 and 334 kg/ha of nitrogen, effectively meeting the corn's needs (Sawyer, 2010). This approach can also reduce overall costs: kura clover seeded at 4 lbs/acre at a cost of ~$4/lb (welterseeds, 2015) is well below the average fertilizer cost of $200/acre (Fertilizer Costs Move Down, 2013), and can be achieved with the same number of passes over the field.
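Making the nitrogen budget and the cost comparison explicit (all figures from the sources cited above):

```latex
N_{\text{corn requirement}} \approx \underbrace{170~\text{kg/ha}}_{\text{after alfalfa}} + \underbrace{34~\text{kg/ha}}_{\text{deficit}} = 204~\text{kg/ha}

\text{Kura clover fixation: } 74\text{--}334~\text{kg/ha} \;\Rightarrow\; \text{covers the deficit}

\text{Seed cost: } 4~\text{lb/acre} \times \$4/\text{lb} = \$16/\text{acre} \quad \text{vs.} \quad \approx \$200/\text{acre for fertilizer}
```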
Selecting a crop to intercrop requires the farmer to consider allelopathy and potential competition along with the other factors discussed in this paper. Intercropping need not be implemented only for nutritional needs; it can also be used to improve overall soil conditions. Intercropping grasses between rows of trees in an orchard can reduce soil erosion, improve soil texture, increase the soil's water-holding capacity, and provide habitat for beneficial insects.

Cover crops, green manures, and intercropping have been in use since the earliest days of agriculture and have a proven record of success. They can give modern agriculture the yields it is accustomed to, potentially at reduced cost, but they will require farmers to learn or reacquire lost practices. These practices have the potential to slow, stop, and even reverse the decline in the fertility of our soils, and to move us from an era of soil mining to one of building soil fertility.
Table 1 – Common Cover Crops
Hardy through zone
Seeding rate² (lb/A)
Seeding depth (inches)
N-capture/ fertilizer equivalency (lbs/A)
Grasses (Cool season)
Cereal rye (Secale cereale L.)
112 (2 bu)
Excellent nutrient scavenger (esp. N)
Most cold tolerant of commonly used cover crops; provides living cover in winter and spring, erosion control, weed suppression, nutrient recycling, organic matter improvement, soil tilth improvement; earliest small grain to mature
Regrowth may occur if not completely controlled; explosive growth in spring poses termination challenges; possible following crop suppression due to allelopathy or nutrient tie-up; may attract some insect pests.
Winter wheat (Triticum aestivum L.)
120 (2 bu)
Cold tolerant in most of PA; rapid growth; common varieties not as tall as rye and therefore easier to manage; provides living cover in winter and spring, erosion control, weed suppression, nutrient recycling, organic matter improvement, soil tilth improvement
Accumulates lower amounts of biomass than rye; possible following crop suppression due to nutrient tie-up; may attract some insect pests; matures after triticale
Intermediate between wheat and rye
Intermediate between wheat and rye; matures after barley
Winter barley (Hordeum vulgare L.)
120 (2.5 bu)
Good nutrient scavenger
Cold tolerant in southern parts of PA; common varieties not as tall as rye and therefore easier to manage in spring; provides living cover in winter and spring, erosion control, weed suppression, nutrient recycling, organic matter improvement, soil tilth improvement
Winterkill is possible; accumulates lower amounts of biomass than wheat; possible crop suppression due to nutrient tie-up; matures after cereal rye
Spring oats (Avena sativa L.)
100 (3 bu)
Average nutrient scavenger
Very easy to manage because winterkills; provides erosion control, weed suppression, nutrient recycling, organic matter improvement, soil tilth improvement in fall, rapid growth in cool weather; ideal for quick fall cover or nurse crop with legumes; may produce more biomass in fall than other winter small grains if planted early
Winterkills in most of PA, no living root system in winter and spring; erosion control may be limited in spring; high lodging potential; susceptible to disease and insect pests
Annual ryegrass (Lolium spp.)
Cold tolerant in southern parts of PA; varieties not as tall as rye; provides living cover in winter and spring, erosion control, weed suppression, nutrient recycling, organic matter improvement, soil tilth improvement, high-quality fodder
May winterkill; may be difficult to control; low heat tolerance; may harbor insects; may reseed and become weed
Pearl millet (Pennisetum glaucum)
Heavy tillering, drought-tolerant, and adapted to low-fertility, sandy soils; can grow to 12 feet tall depending on variety; matures in 60–70 days; good forage
Very large biomass production can be challenging to manage
Sudan-grass (Sorghum bicolor)
Quick growth; scavenges nitrogen; competes with weeds; large biomass producer; alleviates compaction; can grow 12 feet tall
Prussic acid production when young, drought stressed, or frosted—do not graze then; large biomass production can be challenging to manage
Japanese millet (Echinochloa frumentacea)
Very fast growth, mature and up to 4 feet tall in 45 days; resembles barnyard grass; suppresses weeds
Can become a weed if let go to seed; grows poorly on sandy soils.
Hairy vetch (Vicia villosa Roth)
Most cold tolerant and high biomass production; above-average drought tolerance; adapted to wide range of soil types; combines well with small grains
Requires early fall establishment; slow to establish; matures in late spring; high P and K requirement for maximum growth; can harbor pests; potential weed problem in winter grain; glyphosate not foolproof for control
Crimson clover (Trifolium incarnatum L.)
Fairly cold tolerant; rapid fall growth; high biomass production; matures midspring; above-average shade tolerance; forage use (no bloat); good nematode resistance
May winterkill; requires early fall establishment; poor heat and drought tolerance; residue has tough stems, difficult to no-till plant into
Red clover (Trifolium pratense L.)
SLP (2–3 yrs)
Survives winter; deep taproot; tolerates wet soil conditions and shade; forage use, especially if mixed with grass
Needs to be established before midsummer because initial growth slow; high P and K requirements for maximum growth; hard seed can persist creating volunteer problems; pure stand forage causes bloat; vulnerable to some pathogens, insects
Field peas (Pisum spp.) (e.g. Austrian winter pea)
Rapid growth in cool weather; versatile legume; interseed with cereal and Brassica spp.; used as food or feed
May winterkill; shallow root system; sensitive to heat and humidity; susceptible to diseases, insect pests
Cowpea (Vigna uncuiculata)
Also known as black-eyed peas; adapted to wide range of soil conditions; deep taproot can extract moisture from deep in profile
Performance has been erratic in PA trials
Sunnhemp (Crotelaria juncea)
Fast growing, up to 9 feet tall in 60 days; competitive with weeds; drought tolerant; can reduce root knot nematode populations
No seed production in contiguous U.S.; stems fibrous with age
Buckwheat (Fagopyrum esculentum Moench)
Spring or late summer
Fair to good nutrient scavenger (esp. P, Ca)
Grows on wide variety of soils (infertile, poorly tilled, low pH); rapid growth; quick smother crop and good soil conditioner
Limited growing season; not winter hardy; limited biomass accumulation; frost sensitive; poor growth on heavy limestone soils; occasional pests
Brassicas (Cruciferae family) (e.g. rape, radish)
Spring or fall
Good nutrient scavenger (esp. N, P, Ca)
Quick establishment in cool weather; prevent erosion in fall (radish) and spring (canola, rape); radish easy to manage because winterkills; deep, thick root systems; compaction alleviation; nutrient cycling; weed suppression
Radish winterkills in all of PA, while canola/rapeseed may winterkill in northern parts of PA; radish leaves soil bare in spring, therefore mix with a small grain
Source (Unknown, Characteristics of Common Cover Crops, n.d.)
Chart 1 – Performance and Roles of Cover Crops
(Greg Bowman, 2007)
Cover crop. (n.d.). Retrieved 11 27, 2016, from Wikipedia: The Free Encyclopedia: http://en.wikipedia.org/wiki/Cover_crop
Dabney, S. M. (2001). Using winter cover crops to improve soil quality and water quality. Communications in Soil Sciences and Plant Analysis, 1221-1250.
Eric Sideman, P. (2013, July). Maine Organic Farmers and Gardeners Association. Retrieved from Maine Ogranic Farmers and Gardeners Association: http://www.mofga.org/Publications/MaineOrganicFarmerGardener/Summer2013/GreenManures/tabid/2621/Default.aspx
Fertilizer Cost move Down. (2013, August 8). Retrieved from AgWeb: http://www.agweb.com/article/fertilizer_costs_move_down/
Field Peas. (2007). Retrieved from SARE: http://www.sare.org/Learning-Center/Books/Managing-Cover-Crops-Profitably-3rd-Edition/Text-Version/Legume-Cover-Crops/Field-Peas
Green manure. (n.d.). Retrieved 11 27, 2016, from Wikipedia: The Free Encyclopedia: http://en.wikipedia.org/wiki/Green_manure
Greg Bowman, C. C. (2007). Managing Cover Crops Profitably. Beltsville, MD: Sustainable Agriculture Network.
Gruver, D. J. (2016, February 26). Radishes – A New Cover Crop for Organic Farming Systems. Retrieved from Extension: http://articles.extension.org/pages/64400/radishes-a-new-cover-crop-for-organic-farming-systems
Haber Process. (2016, Oct 30). Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Haber_process
Intensive farming. (n.d.). Retrieved 11 21, 2016, from Wikipedia: The Free Encyclopedia: http://en.wikipedia.org/wiki/Intensive_farming
Patrick, W. (1957). The effect of longtime use of winter cover crops on certain physical properties of commerce loam. Soil Science Society of America, 366-368.
Sawyer, J. E. (2010). Intercropping Corn and Kura Clover. Agronomy Journal, 568 - 574.
Unknown. (n.d.). Characteristics of Common Cover Crops. Retrieved from PennState Extension: http://extension.psu.edu/agronomy-guide/cm/tables/table-1-10-6
Unknown. (n.d.). Cover Crop Plants. Retrieved from USDA: http://plants.usda.gov/java/coverCrops
unknown. (n.d.). Oats. Retrieved from SARE : http://www.sare.org/Learning-Center/Books/Managing-Cover-Crops-Profitably-3rd-Edition/Text-Version/Nonlegume-Cover-Crops/Oats
welterseeds. (2015, june). Retrieved from welterseeds.com: http://welterseed.com/wp-content/uploads/2015/06/Retail-Price-List-2015.pdf
Net Neutrality is one of the biggest tech-related issues currently making its way through the American Government. In mid-November President Obama made his stance on the issue known, while also introducing a plan for it and thereby bringing the subject to worldwide attention. Here is an overview of what Net Neutrality is and how it can affect you.
In order to define Net Neutrality, we should first look at the main idea behind what the Internet is: a free and open medium where individuals can express and house thoughts, ideas, and more. It was founded on one principal, and one principal alone: All information and Internet traffic MUST be treated equally.
This free, open, and fair principle is what we call Net Neutrality. In practice, this idea prevents Internet providers, and even governments, from blocking legal sites with messages they disagree with, and restricting access to services and sites that don’t meet their business needs.
At this time, major telecommunications companies providing Internet access are pushing, through the US court system, to make it legal for them to throttle Internet speeds, to charge other companies fees in order to speed up access to sites, and even to block some sites.
There are laws currently in place, set by the FCC (Federal Communications Commission), that prohibit providers from collecting, analyzing, and manipulating user traffic. In other words, according to the FCC, the role of the Internet providers should be to simply ensure traffic and data gets from one end of the network to the other.
Last year, it was uncovered that US telecommunications giant and Internet service provider Comcast demanded that Netflix pay millions of dollars or it would limit the Internet speed of Comcast users accessing the streaming service. Netflix tried to negotiate, but Comcast did indeed cut user speeds, and Netflix paid to prevent it from happening again. This act is an obvious breach of the main tenet of Net Neutrality: equal access for everyone.

Combine this with the January 2014 court ruling that the FCC had overstepped its bounds on this topic, and with increased lobbying against Net Neutrality by telecommunications giants, and you quickly realize that the Internet as we know it is under threat.

If nothing is done, there is a very high chance you will pay higher rates for Internet-based services (because providers will ask companies to pay to guarantee speedy access, a cost that will be passed along to you via higher rates). You may even be pushed toward services you don't want to use because they offer better access speeds on your network.

Beyond this, because so many businesses rely on websites and the hosting companies that let us access them, there is a very real risk that these hosts may have their access speeds cut. That in turn could mean some users take longer to reach your website and services. Think of how you react when you can't access a website: you probably just search for a similar site that loads easily. Now imagine this happening to your site. In other words, you could see a decrease in overall traffic and therefore profits.
First off, we highly recommend you visit the White House's site on Net Neutrality and read the message President Obama recently posted there. To sum it up, he believes that Net Neutrality should be protected and the Internet should remain open and free. He has even laid out a plan with four rules that the FCC should enact and enforce: no blocking of legal content, no throttling of Internet speeds, increased transparency, and no paid prioritization.
You can bet that this plan will meet stiff resistance both in government and from the telecommunications companies themselves. The FCC is an independent organization, and it is up to the commission to decide whether to enact President Obama's plan. One thing you can do is publicly submit your comments to the FCC via this website. Any comments made will be seen by the FCC and are publicly viewable. In the past, enough public pressure has been able to sway FCC decisions, so share this article and the links in it with everyone you know, asking them to take action as well.
For now, the Net Neutrality battle is largely US based. The vast majority of Internet traffic starts or at least passes through the US. This means that if the telecommunications providers (many of whom own international subsidiary providers) can limit access to sites in the US it could very quickly become a world issue. Beyond this, other countries often follow laws that the US enacts, so it could only be a matter of time before we see similar bills passed in other countries.
In short, this is a major issue that could see the end of the Internet as we know it. If you would like to learn more about Net Neutrality and how you can help ensure the Internet remains free and open, contact us today. | <urn:uuid:d8c1cf9f-d3f5-452d-9e14-18457c1b2876> | CC-MAIN-2017-04 | https://www.apex.com/define-net-neutrality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00301-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965034 | 977 | 3.421875 | 3 |
Those dire warnings that worldwide warming was having an incendiary effect on hurricanes and that ever-more powerful, deadly, and costly tropical storms were inevitable were part of the legacy of Katrina's almost unimaginable devastation.
But what followed was shocking for other reasons: After that 2005 season, not a single major hurricane struck U.S. shores, constituting a period of record quiet. Technically, Sandy, as bad as it was, was not a hurricane at landfall.
If forecasters can be believed — and last year they whiffed badly — this could be yet another relatively tranquil season in the Atlantic basin, which includes the Caribbean Sea and Gulf of Mexico.
Experts say the lull represents the calm between the storms, and a bleak-or-bleaker debate continues over whether the Atlantic basin will be a busy tropical-storm brewery for perhaps the next two decades — or in perpetuity.
With an estimated $10 trillion worth of insured property in hurricane-target areas, the outcome is of importance not only for coastal residents and property owners, but for every U.S. taxpayer.
From fiscal 2005 through 2013, hurricanes consumed more than $60 billion in federal disaster money, 75 percent of all Federal Emergency Management Agency aid, or about $500 per U.S. household. That doesn't capture the full tally for the hybrid storm Sandy, or the estimated $24 billion in tax dollars all but lost to the U.S. Treasury by the National Flood Insurance Program.
Historically, active hurricane periods have alternated with quieter ones in 25- to 40-year cycles, based on government records. From 1970 to 1994, hurricane traffic generally was slow. But according to the government's Hurricane Research Division, an active period began in 1995 and could last perhaps 20 more years.
The last several years notwithstanding, more tropical storms mean more opportunities for damage, even if not all those storms qualify as "major" hurricanes. Besides Sandy, there was Irene, in 2011, which made landfall in North Carolina as a minimal Category 1 hurricane but cut a destructive path; Allison, in 2001, one of the deadliest storms on record, never reached hurricane status.
The research division attributes the uptick to the "positive phase" of something called the Atlantic Multidecadal Oscillation, or AMO, characterized by generally warmer waters in the Atlantic.
That analysis, however, "is wrong," in the view of Pennsylvania State University's Michael Mann, a respected — and controversial — figure in the climate-research community. It was Mann who gained unwanted fame when he was implicated in the "climategate" scandal in which leaked e-mails suggested data tampering. The university cleared him of wrongdoing.
Mann, along with other researchers, holds that the AMO, a complicated phenomenon, actually entered a cool or "negative" phase in the 1990s and that, in essence, warming from greenhouse emissions has masked it.
"The warming is extremely unlikely to reverse," he said recently; in short, this is not a "phase," and things are likely to get worse.
"It's difficult to say for sure whether we are currently in a positive or negative phase of the AMO," said Gregory Foltz, at the Atlantic Oceanographic and Meteorological Laboratory. He noted that other experts had documented that the oscillation indeed has been in its warm phase.
Although the rate of global warming evidently has slowed in recent years — and Mann thinks that, also, is related to the AMO — climate scientists have calculated that overall, Earth's temperature has risen at the rate of about 0.25 degrees Fahrenheit in the last 30 years.
But if that warming is somehow increasing hurricane intensity, the evidence has been wanting in recent years.
No major hurricane, one with winds of at least 111 mph, has made landfall on a U.S. coast since Wilma, in 2005. No hurricane of any intensity has hit Florida in that time. Both are records, according to Dennis Feltgen, meteorologist and official spokesman at the National Hurricane Center in Miami.
Last year, despite forecasts of a brisk season, only two hurricanes formed in the Atlantic basin, the fewest in 31 years, and for the first time since 1994, not a single major hurricane developed.
The criterion for naming a tropical storm is a peak wind of at least 39 mph; the threshold for a hurricane is 74 mph. The normal tallies for a season are 12 named storms, six hurricanes, and three major ones.
Forecasters say it's possible that the tranquility will lap into this season, which began June 1 and ends Nov. 30. The key might be thousands of miles away in the tropical Pacific. An El Niño, in which warmer-than-normal sea-surface temperatures cover a continent-size area of the Pacific, might be unfolding. The warm waters' interactions with the atmosphere generate powerful upper-air winds from the west that can snuff out incipient storms in the Atlantic.
Millions of people along the Gulf and Atlantic coasts will be rooting hard for El Niño.
©2014 The Philadelphia Inquirer | <urn:uuid:bb809509-c149-4457-acb0-2d59b49d5968> | CC-MAIN-2017-04 | http://www.govtech.com/em/disaster/Future-Hurricanes-10-Trillion-Question.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00209-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960567 | 1,047 | 3.046875 | 3 |
Illegal online activities such as phishing and typosquatting are growing at an alarming rate. To understand the issue in detail, High-Tech Bridge analyzed 946 domains that either visually resemble a legitimate domain (for example, replacing the character "t" with "l", or mutated domain names such as "kasperski.com" or "mcaffee.com") or contain typos (e.g., "symanrec.com" or "dymantec.com").
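To illustrate the kinds of mutations the researchers looked for, here is a small Python sketch that generates common typo variants of a domain label (character omission, replacement, and adjacent transposition). It is a simplified illustration, not High-Tech Bridge's actual methodology:

```python
import string

def typo_variants(label: str) -> set:
    """Generate simple typosquatting candidates for a domain label."""
    variants = set()
    for i in range(len(label)):
        # Omission: "symantec" -> "symntec"
        variants.add(label[:i] + label[i + 1:])
        # Replacement: "symantec" -> "symanrec"
        for c in string.ascii_lowercase:
            variants.add(label[:i] + c + label[i + 1:])
        # Transposition of adjacent characters: "symantec" -> "symnatec"
        if i < len(label) - 1:
            variants.add(label[:i] + label[i + 1] + label[i] + label[i + 2:])
    variants.discard(label)          # the original spelling is not a typo
    return variants

print(sorted(typo_variants("symantec"))[:5])   # a few sample candidates
```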
Across ten antivirus companies, 385 such domains were detected, falling into the following categories:

164 fraudulent domains (42.5%). Domains registered by third parties to make money from users who erroneously visit websites hosted on these domains (due to a typo in the URL or a phishing campaign), for example by displaying ads or redirecting users to questionable websites selling illegal or semi-legal products and services.

107 corporate domains (27.7%). Domains registered by the antivirus companies themselves to prevent potential typosquatting and illegal use of the names.

73 squatted domains (18.9%). Domains registered by cybersquatters in the hope that the antivirus companies or third parties will buy them at some point in the future; websites on these domains are not active.

41 other domains (10.6%). Domains registered by third-party businesses that may have a legitimate reason to register the name (e.g., a similar trademark or company name), with no intention of spoofing an identity or profiting from user typos.
Detailed statistics are provided in the table below:
Corporations, governments and law-enforcement agencies go to great lengths to prevent abusive or illegal domain-name registration and use. Even so, the research revealed that the average age of a fraudulent domain is as high as 1,181 days, and the average age of a squatted domain is 431 days.

The results show that some antivirus companies pay attention to such illegal activities and try to prevent them; for example, Kaspersky and McAfee purchased more than 70% of the domains that could potentially be used for illegal purposes if registered by third parties. Others should probably pay more attention to domain squatting, and should monitor and block illegal activities.
Researchers wanted to understand which domain registrars are used by cyber crooks to register fraudulent and squatted domains. The most popular domain registrars for fraudulent or squatted domains are listed in a table below:
During the research the company also collected statistics about the most popular countries in which to host websites with fraudulent content. The list is provided below:
Marsel Nizamutdinov, Chief Research Officer at High-Tech Bridge, comments on the research: “Our research clearly demonstrates that cyber criminals do not hesitate to use any opportunity to make money on domain squatting and subsequent illegal practices. There are many ways to make money from these domains: they can be resold at a profit to the legitimate owner of the Trade Mark, used to display annoying ads, redirect users to pornographic or underground pharmaceutical websites, or even to infect with malware user machines who accidentally made a typo in the URL or clicked a phishing URL.” | <urn:uuid:b8526c7e-a3f1-4d3a-b989-e8036611a6e7> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/12/12/how-cyber-squatters-and-phishers-target-antivirus-vendors/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00117-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92767 | 697 | 2.578125 | 3 |
Harvard scientists develop a self-organizing thousand-robot swarm
The Harvard School of Engineering and Applied Sciences (SEAS) has created the "first thousand-robot flash mob". The swarm consists of 1,024 "kilobots" that collaborate and provide "a simple platform for the enactment of complex behaviors".

Michael Rubenstein, a research associate at SEAS, said "Biological collectives involve enormous numbers of cooperating entities -- whether you think of cells or insects or animals -- that together accomplish a single task that is a magnitude beyond the scale of any individual".

After a basic instruction from a computer scientist, the kilobots use a simple algorithm to carry it out, "similar to a flock of birds". Once given an instruction, the kilobots require no further human input to complete the task.

A blog post on the SEAS website explains how the kilobots complete their goal "once an initial set of instructions has been delivered. Four robots mark the origin of a coordinate system, all the other robots receive a 2D image that they should mimic, and then using very primitive behaviors -- following the edge of a group, tracking a distance from the origin, and maintaining a sense of relative location -- they take turns moving towards an acceptable position".
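The published algorithm is more sophisticated, but one of its primitive behaviors, tracking a distance from the origin, can be sketched in a few lines. The following Python toy (not the actual kilobot firmware) shows distributed hop-count gradient formation, where four seed robots mark the origin and every other robot derives its distance from its neighbors' values:

```python
# Minimal, runnable toy: each robot repeatedly sets its value to
# 1 + the minimum value among its radio neighbors, yielding a hop-count
# distance from the four seed robots that mark the origin.
import random

NUM_BOTS, RADIO_RANGE = 30, 1.5
positions = [(random.uniform(0, 10), random.uniform(0, 2)) for _ in range(NUM_BOTS)]
gradient = [0 if i < 4 else None for i in range(NUM_BOTS)]   # robots 0-3 are seeds

def neighbors(i):
    xi, yi = positions[i]
    return [j for j, (xj, yj) in enumerate(positions)
            if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= RADIO_RANGE ** 2]

changed = True
while changed:                       # iterate until the gradient stabilizes
    changed = False
    for i in range(4, NUM_BOTS):     # seeds keep their value of 0
        vals = [gradient[j] for j in neighbors(i) if gradient[j] is not None]
        best = 1 + min(vals) if vals else None
        if best is not None and best != gradient[i]:
            gradient[i], changed = best, True

print(gradient)                      # hop distances; None = out of radio contact
```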
Although the swarm can currently only produce 2D shapes, Radhika Nagpal, professor of computer science at SEAS and creator of the swarm, outlined her hopes for the project, envisioning "hundreds of robots cooperating to achieve environmental cleanup or a quick disaster response, or millions of self-driving cars on our highways".
Published under license from ITProPortal.com, a Net Communities Ltd Publication. All rights reserved. | <urn:uuid:e634c73b-5303-4741-8ed8-c86ca6165c4f> | CC-MAIN-2017-04 | http://betanews.com/2014/08/18/harvard-scientists-develop-a-self-organizing-thousand-robot-swarm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00511-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.863428 | 345 | 3.078125 | 3 |
Mixing supercomputing, social networking for a better view of Earth
- By Kevin McCaney
- Jul 24, 2012
NASA broke some new computing ground two years ago when it launched its NASA Earth Exchange (NEX), a supercomputing-powered, social networking-linked virtual laboratory designed to speed the study of Earth sciences.
Now, it’s taking its collaborative approach further, opening up the facility to other researchers.
The idea behind NEX is to give researchers a comprehensive look at the Earth, through high-resolution Landsat satellite images. The Landsat program has been collecting data since the first of its satellites was launched in 1972 — and that was part of the problem. With that much data, it could take months for researchers to gather and analyze data sets, and they had to develop high-end computational methods in the process, NASA said on its website.
Now, they can do that in a matter of hours with NEX, which combines NASA’s supercomputing capacity and an array of analytics and visualization tools with an internal social networking platform for sharing data and results.
"NEX greatly simplifies researchers' access to and analysis of high-resolution data like Landsat," Tsengdar Lee, high-end computing program manager at NASA, said in the announcement.
In the virtual lab, for instance, scientists can piece together high-resolution images of global vegetation patterns totaling more than a half-trillion pixels in about 10 hours, NASA said. They can combine sensor data from NASA and other agencies, share their results instantly via NEX’s social media platform, and develop interdisciplinary studies of what’s going on with the planet.
Rama Nemani, a senior Earth scientist at NASA's Ames Research Center in Moffett Field, Calif., where NEX was developed, said the science community is being urged to not only study changes in climate and project what might happen next, but also to develop ways of dealing with the impact. NEX offers a way "to change the research paradigm by bringing large data holdings and supercomputing capabilities together, so researchers have everything they need in one place," he said.
By allowing other researchers into the NEX environment, scientists can save time and costs, while improving their results, NASA said.
Landsat was launched 40 years ago by NASA and the U.S. Geological Survey, and has accumulated millions of images. (This NASA page, for example, features a time-lapse video of Landsat images showing Maricopa County, Ariz., during its 1972-2011 population boom.)
And here’s a collection of Landsat images depicting the sprawl in Las Vegas since 1972.
The program currently relies on Landsat 7; the next launch, called the Landsat Data Continuity Mission, is scheduled for February 2013.
Basic Rates & Supported Rates
The IEEE 802.11-2007 standard defines required rates as basic rates. For a client station to successfully associate with an AP, the station must be capable of communicating using the basic rates that the access point requires.

In addition to the basic rates, the access point defines a set of supported rates. This set is advertised by the access point in the beacon frame and in some other management frames. After a station associates with an access point, it uses one of the advertised supported rates to communicate with the access point.
Dynamic Rate Selection
Also known as dynamic rate shifting, adaptive rate selection, or automatic rate selection. If you watch this 5-minute CWNP video you will understand the concept well.

In simple terms, as client station radios move away from an access point, they shift down to lower data rates using a process known as dynamic rate switching (DRS). Access points can support multiple data rates depending on the spread-spectrum technology used by the AP's radio card. The diagram below shows how an HR-DSSS (802.11b, Clause 18) client dynamically shifts rates based on the signal quality (RSSI, SNR) it receives from the AP.

The objective of DRS is up-shifting and down-shifting for rate optimization and improved performance. The algorithms used for dynamic rate switching are proprietary and are defined by the radio card manufacturers. Most vendors base DRS on receive signal strength indicator (RSSI) thresholds, packet error rates, and retransmissions. Because vendors implement DRS differently, two different vendors' client cards at the same location may behave differently: one might communicate with the access point at 11 Mbps while the other communicates at 2 Mbps. Below is a sample chart showing the RSSI and SNR required to achieve particular data rates for a given WLAN vendor (page 227, CWAP Official Study Guide).

Here is an example of the receive sensitivity details listed in the specs of the Cisco 1262 AP.
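Because the algorithms are proprietary, the thresholds in the sketch below are purely hypothetical; it only illustrates the general shape of RSSI-based rate selection for an 802.11b radio:

```python
from typing import Optional

# Hypothetical RSSI thresholds (dBm) for an HR-DSSS (802.11b) client.
# Real vendors combine RSSI, SNR, retransmissions and packet error rate
# in proprietary ways; these numbers are for illustration only.
RATE_THRESHOLDS = [
    (-70, 11.0),   # strong signal -> 11 Mbps
    (-75, 5.5),
    (-80, 2.0),
    (-85, 1.0),    # weak signal -> 1 Mbps
]

def select_rate(rssi_dbm: int) -> Optional[float]:
    """Return the highest data rate (Mbps) usable at the given RSSI."""
    for threshold, rate_mbps in RATE_THRESHOLDS:
        if rssi_dbm >= threshold:
            return rate_mbps
    return None   # below receive sensitivity: the station will likely roam

print(select_rate(-72))   # -> 5.5: the client has down-shifted from 11 Mbps
```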
For years satellites were the workhorses of international communications: they carried virtually all intercontinental calls and TV programmes. On most of those high-traffic routes they have been displaced by optical fibre, and satellites have taken on a new role, delivering TV channels direct to viewers' homes.

But the universal demand for broadband is giving satellite operators a new market opportunity — something that many telecommunications companies are now learning about.

As those in urban areas get access to faster and faster speeds, service providers face demands from rural communities to provide something comparable. Satellites can provide a route, says Patrick Brant, the president of satellite operator Loral Skynet.

He spoke at a Global Telecoms Business conference in the middle of 2006, and it was intriguing to hear senior executives from a number of operators explore his ideas as a potential solution to a long-standing problem.

"Satellites provide an opportunity to extend terrestrial networks," says Brant. "Satellites become extenders or extensions to their current terrestrial networks." He is suggesting a mixture of satellites for the backhaul to a base station at the centre of small communities, with fixed wireless access to local subscribers.

Mobile phone networks commonly use satellite distribution in emerging markets where there is an inadequate terrestrial backhaul network. "If you have a carrier with a vast territory to cover, a satellite can be used to extend or back up the network."

An operator can use the satellite for the connections as the network is being built. When the fibre reaches the tower, "you can seamlessly switch the backhaul", says Brant.

Like many in the industry, he is an inveterate enthusiast about satellites. He has worked in the business for many years: he served as COO of Loral CyberStar, Loral's former data services company, and helped to integrate CyberStar with Skynet.

| The first Telstar satellite was launched in 1962. Loral Skynet plans to launch the huge Telstar 11N in early 2008.

The merged company, Loral Skynet, has a bit of unrivalled heritage: the name Telstar. The brand dates back more than 40 years, to when the old AT&T proposed a network of 50-120 active satellites orbiting about 10,000 kilometres above the Earth. The world's first active telecommunications satellite, Telstar 1, was launched on July 10 1962.

Today Loral Skynet's Telstar satellites orbit at the conventional geostationary altitude of 35,700 kilometres above the equator. The next in the series, Telstar 11N, is due for launch in 2008.

Telstar 10 is already in orbit, providing video services in Asia: "About 80 million people every day are viewing content on Telstar 10, from China to east Africa and Korea to almost Australia," says Brant.

And he sees an opportunity emerging over the next few years as telcos build out IPTV services, especially once they face the demand for high-definition channels. "Satellite will be able to provide content to the head ends," says Brant. With HDTV and with IP platforms, networks will be able to serve mobile TV terminals too, he says.

Are carriers doing this already? "I can't say, because it's a competitive advantage for the early adopters. They're in test right now." Watch out in the first quarter of 2007, he says: "There'll be a big splash. The testing will be over and there'll be introductions next year."

Satellites provide "instant bandwidth availability" to broadband IP networks, he adds. "You can have speeds up to 45 megabits a second. Satellite has a vast capability to provide broadband capacity and you don't have to buy more and more fibre. It will cost a little more until you get scale."

Loral Skynet was "the first company to provide digital IP services by satellite", says Brant, "and we were the first to connect with the internet by satellite. Our network allows customers in any part of the globe to connect data or video or voice with any part of the world." GTB
The driver configuration file naming convention is:
<base name>[-<type>]-IDM<min. engine version>-V<config version>.xml
Base Name: The name of the connected system or service the driver supports. For example, Active Directory or Delimited Text.

Type: An additional descriptor for the driver configuration file. If there are multiple configuration files, the type distinguishes among them.
Minimum Engine Version: Lists the minimum engine version that the driver can run against. The elements to date are:
Configuration Version: Specifies the version of the particular driver configuration file. It is a number that is incremented with each release of a new driver configuration file.
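A filename that follows this convention can be decomposed mechanically. The sketch below (illustrative, not part of the product) parses the pieces with a Python regular expression; the example filename is hypothetical:

```python
import re

# <base name>[-<type>]-IDM<min. engine version>-V<config version>.xml
PATTERN = re.compile(
    r"^(?P<base>.+?)"                  # base name (non-greedy)
    r"(?:-(?P<type>[^-]+))?"           # optional type descriptor
    r"-IDM(?P<engine>[\d.]+)"          # minimum engine version
    r"-V(?P<version>\d+)\.xml$"        # configuration version
)

m = PATTERN.match("ActiveDirectory-Base-IDM3.6-V4.xml")   # hypothetical filename
if m:
    print(m.groupdict())
    # {'base': 'ActiveDirectory', 'type': 'Base', 'engine': '3.6', 'version': '4'}
```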
Building a better mousetrap: The science of certification exams
Certification can help enhance your skills and increase your earning potential. If you want to understand networking, or security, or a specific vendor’s software more completely than you already do, or if you’re in line for a promotion or considering a new job (hopefully with a better salary), then certification can make a dramatic difference.
Before any of that can happen, however, you have to study for, and pass, a certification exam. And certification exams probably aren't quite like any other test you've taken. You'll probably be better prepared, and perform better, if you understand how certification exams are created and what makes them work.
If you think back to your days in school (for some of you, that might not be too far back), you undoubtedly have fond (or not so fond) memories of endless tests — quizzes, midterms, finals. How do tests that a teacher might write for a class differ from a certification test?
Certification tests differ from tests created by teachers in three main ways:
● Development process
● Statistical analysis
● Intended use
Certification tests differ from other tests commonly encountered in classrooms by the way they are made. In a classroom setting, the teacher is typically in charge of determining what to test, then sits down to write questions. In some cases, a group of teachers who have similar classes might work together to come up with a standard test used for all classes.
Certification tests are typically developed following a more rigorous process. Individual test questions are written, reviewed and quite often reviewed again. Processes are followed not only in the writing of questions, but also in early stages of test construction, where it is decided what needs to be tested. An individual teacher working alone might sometimes construct a test of very high quality. Following the test development process, however, provides consistently high-caliber results, relying less on the skill of a single individual to determine the overall exam quality.
Certification tests undergo extensive statistical analysis. This analysis looks at not only the quality of individual questions, but also the entire test as a measurement instrument. Statistical analysis for tests typically focuses on looking for evidence that the exam has two important qualities:
The exam is valid: Test validity means that the test actually measures what it is supposed to measure. Poorly written questions, questions that are not applicable to the content area, or questions that score incorrectly negatively affect the validity of a test. Statistical analysis helps to identify questions that do not accurately assess the target skills.
The exam is reliable: Reliability means that the test will yield consistent results. For example, test scores should be accurate regardless of the time of day or where the test is taken. Results should be consistent for all individuals who possess the target skills, regardless of race, gender, or other non-essential characteristics.
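As a concrete example, two statistics commonly computed for every question during this analysis are item difficulty (the proportion of candidates answering correctly) and item discrimination (how well a question separates strong candidates from weak ones). A minimal sketch, assuming a simple 0/1 response matrix:

```python
from statistics import mean, pstdev

# responses[c][q] = 1 if candidate c answered question q correctly, else 0.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in responses]   # each candidate's total score

for q in range(len(responses[0])):
    item = [row[q] for row in responses]
    difficulty = mean(item)                # high value = easy question
    # Point-biserial (uncorrected): correlation between item score and
    # total score. Questions near zero or negative fail to separate strong
    # and weak candidates and get flagged for review.
    cov = mean(x * t for x, t in zip(item, totals)) - mean(item) * mean(totals)
    disc = cov / (pstdev(item) * pstdev(totals)) if pstdev(item) else 0.0
    print(f"Q{q + 1}: difficulty={difficulty:.2f} discrimination={disc:.2f}")
```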
Statistics used by teachers on exams might include scoring the exam, looking at the average score on the exam in the class, or possibly looking at individual questions upon occasion to throw out a badly written question. At times, teachers might “grade on a curve”, meaning they adjust the passing rate based on how well the entire class did on a test. The need to adjust the passing score on the test based on class performance suggests that the test itself might not be an accurate measure of what was taught or what should have been learned, or that the content covered in class did not match well what was being tested.
Tests used in a traditional classroom are typically given to measure how well an individual remembers or can use what was taught in class. These tests are often content mastery checks, evaluating retention, familiarity, or mastery of specific content. Tests apply directly to what was taught in class, with many teachers often adjusting tests in subsequent semesters based on what content was covered.
The purpose of a certification test is to evaluate individual performance on a specific set of job skills, concepts and tasks. While classroom tests typically measure how well someone memorized the material, certification tests compare skills and abilities to a predefined standard or a consistent set of objectives. Classroom tests are meant to apply only to members of the class, but certification tests can measure anyone's fluency in the target skills and abilities. Certification tests are often used as national or international measurements.
We’ve compared certification tests to tests that might be taken in a school setting; now let’s define a certification in the context of two things a certification is not — certificates and badges.
In an educational setting, a certificate is something that provides evidence of successful completion of a course of study. The requirements for obtaining the certificate vary depending on the organization that grants the certificate — the only requirement might be as simple as attending all sessions of the training, or it might include doing assignments or tests with scores at a “passing” level.
Certificates might be granted for a single class or course, or might require completion of multiple classes that together make up a program of study. In higher education, certificates are similar to diplomas or degrees, typically requiring a year or less of study to complete.
Certifications differ from certificates in that the certification is independent from the training. Certificates are tied to a specific course of study offered by a school or an organization. By contrast, a certification is only concerned with measuring knowledge, skill and ability, regardless of the source of training (if any). Based on international standards, certifications are not to include any specific course of study as a requirement to obtaining the certification.
Badges are recognitions of skill, achievement, or accomplishment. Badges might be awarded for having passed a class or course of study, for having mastered a skill, or even for having passed a test. But badges can also be awarded for reasons as varied as having an interest in a specific subject, for attendance and effort, or even as a recognition of attitudes or commitment. The use of badges has become more popular not only as a means of recognizing accomplishment but also as a method of motivation (i.e., as students are awarded badges for small accomplishments, they might be more motivated to continue their efforts in order to attain more recognition).
Badges can be used to identify competency on one or multiple skills. For example, instead of granting a certification, an organization could simply give a badge that identifies a similar range of skills. Badges can also be useful in identifying abilities on a more granular level than a certification is capable of doing. For example, rather than certifying that an individual is a qualified network administrator, individual badges can identify skill levels in specific tasks such as cable management, switch administration, DHCP configuration, etc.
The purpose of a certification is to validate knowledge, skills, and abilities, typically in a work-related set of tasks. Certification exams are carefully designed to be reliable and valid measures of these skills. While tests, certificates, and even badges can give some indication of one’s educational or work background, certifications are meant to be a higher measurement of specific skills used on the job. | <urn:uuid:3df80c3e-a3e6-4644-9440-682abe76a9c6> | CC-MAIN-2017-04 | http://certmag.com/building-better-mousetrap-science-certification-exams/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00126-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958746 | 1,468 | 2.890625 | 3 |
The health care industry is digitizing electronic health records (EHRs) not only to improve organizations' efficiency, but also to improve how they administer care to patients. The ability to share data digitally holds immense potential for doctors, nurses, clinicians and other professionals in the field to send and receive content quickly and effectively.
Although this development is great on one hand, it is also dangerous because patient data and other sensitive information are even more at risk of being stolen, exposed or accessed by malicious parties. As a result, security must be a top priority for any medical organization today and for the foreseeable future.
Secure file sharing tools are one way to make sure staff members keep patient records and organizational data safe from harm. These solutions ensure that information being sent between staff members or at rest will be safeguarded so organizations do not inadvertently experience a security breach.
Such incidents are especially dangerous for health care providers because the medical field includes strict compliance guidelines. Providers that expose information can receive hefty fines and public distrust if they cannot keep data protected at all times.
The cost of a data breach
Since September 2009, nearly 500 data breaches, each affecting more than 500 people, have been reported to the Department of Health and Human Services (HHS). Overall, 21.1 million records were exposed in these incidents, with the average exceeding 40,000 per event. The average cost of each totaled approximately $8.3 million.
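A quick sanity check of those averages, using only the figures quoted above:

    breaches = 500            # incidents reported to HHS since September 2009
    records = 21_100_000      # total records exposed
    print(records / breaches) # 42200.0 - an average "exceeding 40,000 per event"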
What is perhaps even more troubling is the fact that HHS said it took providers an average of 85 days just to identify that a breach had taken place. Organizations then spent another 68 days notifying those who were affected by the incident.
HHS also found that a quarter of all security breaches measured occurred through laptops, followed by paper records, mobile media, desktops, network servers and system applications. In terms of the number of records exposed, 51 percent happened because of mobile media.
This last point is especially important for health care providers to think about, since many employees are using tablets and other mobile devices in the workplace. If staff members accidentally send information to the wrong recipient or leave these products in an unsafe location, the likelihood of exposure goes up exponentially.
Rather than spend more than five months identifying a breach and notifying victims of said events, health care providers should do all they can to prevent incidents in the first place. The best offense for safeguarding patient records and medical data is a strong defense. Secure file sharing tools can help providers minimize risks while ensuring that personnel have access to user-friendly solutions.
Once such options are in place, health care organizations must regularly check the reliability and accuracy of these tools to keep pace with the ever-changing threat landscape. Failing to do so may not only result in millions of dollars in losses, but also uneasy and skeptical patients. | <urn:uuid:83d9aeec-ff5a-489a-8683-76041b30d9fd> | CC-MAIN-2017-04 | https://www.globalscape.com/blog/2013/10/24/security-of-the-utmost-importance-for-health-care | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00522-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956905 | 567 | 2.90625 | 3 |
Security researchers Andrew “bunnie” Huang and Sean “xobs” Cross have demonstrated that the only way to be absolutely sure that no one will be able to extract data from an SD memory card you used is to physically destroy it.
Various versions of SD flash memory cards are usually used in portable devices such as digital cameras, recorders, tablets and mobile phones, but also in some PCs, video game consoles and embedded systems.
Unfortunately, as the two researchers showed to the crowd at the 30th Chaos Communication Congress held last week in Hamburg, some of these cards contain vulnerabilities that can be exploited to remotely execute malicious code on the cards themselves, allowing attackers to mount a Man-in-the-Middle type of attack.
To be able to explain how the attack works, they first had to explain how an SD flash card is structured:
Flash memory is really cheap. So cheap, in fact, that it’s too good to be true. In reality, all flash memory is riddled with defects — without exception. The illusion of a contiguous, reliable storage media is crafted through sophisticated error correction and bad block management functions. This is the result of a constant arms race between the engineers and mother nature; with every fabrication process shrink, memory becomes cheaper but more unreliable. Likewise, with every generation, the engineers come up with more sophisticated and complicated algorithms to compensate for mother nature’s propensity for entropy and randomness at the atomic scale.
These algorithms are too complicated and too device-specific to be run at the application or OS level, and so it turns out that every flash memory disk ships with a reasonably powerful microcontroller to run a custom set of disk abstraction algorithms. Even the diminutive microSD card contains not one, but at least two chips — a controller, and at least one flash chip (high density cards will stack multiple flash die).
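A toy sketch can make the point about that hidden controller concrete. The class below is an invented, drastically simplified flash translation layer - real firmware also does wear leveling, error correction and much more - but it shows why the "contiguous, reliable" disk is an illusion maintained by software, and why overwritten data can linger:

    # Invented toy model of a flash controller's logical-to-physical remapping.
    class ToyFTL:
        def __init__(self, physical_blocks, bad_blocks):
            self.data = [None] * physical_blocks
            self.mapping = {}                    # logical block -> physical block
            self.free = [b for b in range(physical_blocks) if b not in bad_blocks]

        def write(self, logical, value):
            physical = self.free.pop(0)          # out-of-place write to a good block
            self.data[physical] = value
            old = self.mapping.get(logical)
            if old is not None:
                self.free.append(old)            # stale copy is reusable, not erased
            self.mapping[logical] = physical

        def read(self, logical):
            return self.data[self.mapping[logical]]

    ftl = ToyFTL(physical_blocks=8, bad_blocks={2, 5})
    ftl.write(0, "secret")
    ftl.write(0, "overwritten")
    print(ftl.read(0))        # "overwritten" - yet "secret" still sits in a
    print(ftl.data)           # physical block the host can no longer address

This is the same reason a logical "secure erase" can leave recoverable copies behind, as the researchers conclude below.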
Unfortunately, the combinations and the quality of these chips vary widely, and the complexity of the implementation process guarantees that firmware bugs will keep popping up.
“The crux is that a firmware loading and update mechanism is virtually mandatory, especially for third-party controllers. End users are rarely exposed to this process, since it all happens in the factory, but this doesn’t make the mechanism any less real,” Huang explained.
“In my explorations of the electronics markets in China, I’ve seen shop keepers burning firmware on cards that ‘expand’ the capacity of the card — in other words, they load a firmware that reports the capacity of a card is much larger than the actual available storage. The fact that this is possible at the point of sale means that most likely, the update mechanism is not secured.”
The two researchers proved that the unsecured firmware updating sequences can be exploited to add new applications to the controller, and make it do something that it was initially not intended to do.
They tested this approach on several cards equipped with a microcontroller by Appotech, “a relatively minor player in the SD controller world,” but say that there are many more out there, and research should be made into their offerings as well.
“From the security perspective, our findings indicate that even though memory cards look inert, they run a body of code that can be modified to perform a class of MITM attacks that could be difficult to detect; there is no standard protocol or method to inspect and attest to the contents of the code running on the memory card’s microcontroller,” Huang pointed out.
“Those in high-risk, high-sensitivity situations should assume that a ‘secure-erase’ of a card is insufficient to guarantee the complete erasure of sensitive data. Therefore, it’s recommended to dispose of memory cards through total physical destruction.”
For those interested in more details, the video of the researchers’ presentation can be found here. | <urn:uuid:32b6aeee-f74a-475c-9920-64b52948167c> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/01/02/researchers-demonstrate-sd-memory-card-hacking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00338-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943595 | 813 | 2.671875 | 3 |
Most of us would be uncomfortable carrying a few thousand dollars in cash, but at least you would know how to avoid risks. Are you equally confident online?
Online banking can be safe, but just like in the physical world that depends on you. You must pay careful attention to what you do, where you go and most of all recognize that a whole lot of unsavory characters can cross your path in the online world. Even on your own PC.
You may have heard of risks like phishing and keyboard logging, but do you really understand how to avoid them? Can you tell if you are really connected to your bank, or if you have a secure connection?
Here are a few insights into the risks you are facing when banking online, and some do’s and don’ts to keep you safe.
1. Stolen Passwords
At the root of Internet security problems is over-reliance on passwords to protect your identity, accounts and assets online. Imagine you could withdraw money at an ATM by just entering a PIN code. No ATM card required. Would you trust that? That would mean that if someone could get your PIN code, say with an overhead Web-cam at a grocery store checkout, they could clean out your bank account.
You know that makes no sense in the physical world. Yet you are willing to trust passwords to protect your online bank account?
Stolen passwords are the real problem to watch out for. If someone steals your username and password, they can become you at that account. The bank cannot distinguish the crook from you if they have the right login and password. They can steal your money or perhaps your identity.
The most important element of your online banking security is you! You need to take responsibility for your own online safety by learning how to do it.
2. Phishing

Phishing is a scam to steal your online username and password. Phishers can target your banking account, your credit cards or even your employer.
It works by tricking you with an email that looks like it’s from your bank, your broker or employer. A common example is a security warning, alerting you that there may be a problem with your account. “Please click here to check your security,” the email offers helpfully.
The problem is the link in the email goes to a fake site operated by criminals. It looks like the real thing, so you are fooled into entering your bank account login or other personal identity information.
3. Spyware and Malware

Spyware and malware (malicious software) are nasty programs that someone sneaks onto your computer to do bad things to you or your PC. Spyware and malware often get installed along with something else you (or your children) are getting for “free” on the Web: music, a funny cat video that requires a special program to watch, game cheats, an MP3 editing utility. You get the idea.
It works by installing a bad program (the “payload”) in addition to what you really wanted. Two very dangerous malware examples are presented below: a keylogger and an Internet address redirection attack.
4. Keyloggers

A keylogger, or keyboard logger, is a type of malware program that monitors every stroke you type on the keyboard to gather information used for identity theft, including account logins and passwords, which it sends to the hacker. Unlike malware that spams you with incessant Internet advertising popups so you know you have a real problem, keyloggers work invisibly. You won't even know it's there.
5. Internet Address Redirects
Internet addresses are friendly for people, like www.justaskgemalto.com. Underneath that though, real Internet addresses are all numbers, such as 126.96.36.199. The friendly version is called a Domain Name, and there is something called a Domain Name Server (DNS) that acts like a White Pages lookup and maps the friendly name to the actual address.
Internet address redirect attacks, also called DNS poisoning, work by putting a bad Domain Name lookup up list on your PC. You enter www.mybank.com but you get redirected to a copy site operated by criminals, who trick you into revealing passwords.
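The mechanics are easy to see in miniature. In the invented example below (the addresses come from the reserved documentation ranges), the browser does exactly what it is told; only the lookup table has changed:

    # Toy illustration of DNS redirection: whoever controls the lookup
    # controls where the friendly name actually goes.
    legitimate_dns = {"www.mybank.com": "203.0.113.10"}   # the real server
    poisoned_dns   = {"www.mybank.com": "198.51.100.66"}  # criminals' copy site

    def browse(lookup, name):
        return f"connecting to {name} at {lookup[name]}"

    print(browse(legitimate_dns, "www.mybank.com"))
    print(browse(poisoned_dns, "www.mybank.com"))  # same name, wrong server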
6. WiFi snooping and Hotspots
Think about this: When you use a public hotspot, even one you pay for, why do you always get a warning that anything on this network can be seen by others? Well, because it’s true.
To drive the point home, a big hacker convention features the “Wall of Shame.” It displays a steady stream of usernames and passwords gathered at the event as people enter them while using the free WiFi public network.
Is there any way to be safe using a public WiFi? There are ways, but if you are not tech savvy we recommend you avoid accessing your banking or other confidential accounts from hotspots.
7. WiFi snooping of your home or business network
WiFi is a little like radio. What are you broadcasting? If you are not careful, someone nearby could monitor your communications or access your wireless network. They can attack your PCs and sniff out passwords for example. Weak wireless network security contributed to the largest identity theft fraud ever in the United States.
8. Who else can use the PC you bank from?
Ever do your banking at work? If your PC memorized your bank account passwords, anyone who can access that desktop can enter your bank account. Or someone may have installed a keylogger on your PC. If you do banking from a PC that others can access, you are trusting that computer, and everyone who uses it, with your confidential information.
Top 10 Dos and Don’ts
1. Use some kind of one-time password (OTP) or smart card-based personal security device in addition to your password for login security. Using two forms of authentication for online banking keeps you safe, much like your ATM card and PIN, but far stronger.
2. Install anti-virus/anti-spyware software and keep it up-to-date. Also keep your operating system and browser software current. Don't think of anti-virus software as a cure-all, however; remember you are the most important part of your digital security. You need to learn about the threats to online banking and how to avoid them and stay safe.
3. Never connect directly to the Internet through a cable or DSL modem without a hardware firewall or at least a hardware router or switch. Criminals can scan direct cable modem connections and take over your PC, install malware or make your computer a zombie.
4. Install and use a software firewall on your desktops or laptops.
5. Never click on links in emails to get to your banking or other confidential accounts. Remember phishing works by tricking you into clicking on a link to a fake site. Train yourself never to click on email links and stay safe online. Type in the URL yourself or use a shortcut you created.
6. Make sure you have a secure Internet connection (https:// and padlock) when going to a bank or other confidential Web site. (A programmatic version of this check is sketched after this list.)
7. Don’t login to your bank account from hotspots or other insecure wireless networks.
8. Don’t login from your work PC or any other desktop others can also access.
9. Setup your home wireless network security with the built-in authentication. Use the newer WPA wireless security standard, not the older WEP standard. If your network equipment doesn’t support WPA, time to upgrade.
10. Use a browser toolbar with anti-phishing/site verification capability.
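For the technically inclined, here is roughly what the padlock check in tip 6 amounts to, expressed as a Python sketch (the host name is a placeholder; the standard library refuses the connection if the certificate chain or the name does not check out):

    import socket
    import ssl

    host = "www.example.com"                    # placeholder host
    context = ssl.create_default_context()      # verifies chain + host name
    with socket.create_connection((host, 443), timeout=5) as tcp:
        with context.wrap_socket(tcp, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("TLS version:", tls.version())
            print("issued to:", dict(item[0] for item in cert["subject"]))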
Stanford researchers said this week they had used a supercomputer with more than one million computing cores to predict the noise generated by a supersonic jet engine.
The researchers used the 1,572,864-processor Sequoia IBM BlueGene/Q system at Lawrence Livermore National Laboratory to run complex simulations that determine the physics of jet noise, something often impossible to measure in the harsh exhaust environment of massive and powerful jet engines.
"The exhausts of high-performance aircraft at takeoff and landing are among the most powerful human-made sources of noise. For ground crews, even for those wearing the most advanced hearing protection available, this creates an acoustically hazardous environment. To the communities surrounding airports, such noise is a major annoyance and a drag on property values. Understandably, engineers are keen to design new and better aircraft engines that are quieter than their predecessors. New nozzle shapes, for instance, can reduce jet noise at its source, resulting in quieter aircraft," Stanford stated.
The researchers noted that with the advent of massive supercomputers boasting hundreds of thousands of computing cores, engineers have been able to model jet engines and the noise they produce with accuracy and speed. Such fluid dynamics simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be, the researchers said.
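The divide-and-compute idea itself is simple; the hard part at a million cores is the orchestration. The sketch below (invented, one-dimensional, no real physics) shows only the first step - carving the problem into per-core pieces that must then exchange boundary data every time step:

    def split(n_cells, n_cores):
        """Deal n_cells out to n_cores as evenly as possible."""
        base, extra = divmod(n_cells, n_cores)
        chunks, start = [], 0
        for rank in range(n_cores):
            size = base + (1 if rank < extra else 0)
            chunks.append(range(start, start + size))
            start += size
        return chunks

    for rank, cells in enumerate(split(n_cells=20, n_cores=4)):
        # A real solver would advance these cells one time step, then
        # swap halo values with neighboring cores before the next step.
        print(f"core {rank} owns cells {cells.start}..{cells.stop - 1}")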
"And yet, despite the additional computing horsepower, the difficulty of the calculations only becomes more challenging with more cores. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks," the researchers stated.
German researchers are helping to push back the goalposts on large-scale simulation. Using the IBM “SuperMUC” high performance computer at the Leibniz Supercomputing Center (LRZ), a cross-disciplinary team of computer scientists, mathematicians and geophysicists successfully scaled an earthquake simulation to more than one petaflop/s, i.e., one quadrillion floating point operations per second.
The collaboration included participants from Technische Universitaet Muenchen (TUM) and Ludwig-Maximilians-Universitaet Muenchen (LMU), working in partnership with the Leibniz Supercomputing Center of the Bavarian Academy of Sciences and Humanities.
The effort hinged on retooling the SeisSol earthquake simulation code to harness more than one hundred thousand cores and over one petaflops of computing power. The popular simulation software is used in the study of rupture processes and seismic waves beneath the Earth’s surface. The goal of this geophysics project was to simulate earthquakes as accurately as possible, paving the way for improved predictive efforts. The project faced a limiting factor, however, in that the computational element was challenging even for a leadership-class system like the 3-petaflops SuperMUC, one of the world’s fastest.
To push beyond this barrier, Dr. Christian Pelties at the Department of Geo and Environmental Sciences at LMU teamed up with Professor Michael Bader at the Department of Informatics at TUM. The duo formed workgroups focused on optimizing the SeisSol program, tuning it for the parallel architecture of “SuperMUC.” The result was an impressive five-fold speedup and a new record on the SuperMUC.
In a virtual experiment, the team simulated the vibrations inside the geometrically complex Merapi volcano, located on the island of Java. The supercomputer chewed through the problem at 1.09 quadrillion floating point operations per second. And this wasn’t just a momentary peak, SeisSol maintained this high performance level during the entire three hour simulation run, incorporating all of SuperMUC’s 147,456 “Sandy Bridge” processor cores.
The official news release from LRZ asserts that this was only possible due to the extensive optimization and the complete parallelization of the 70,000 lines of SeisSol code, enabling peak performance of up to 1.42 petaflops. This corresponds to 44.5 percent of Super MUC’s theoretically available capacity (3.185 petaflops), making “SeisSol one of the most efficient simulation programs of its kind worldwide,” according to the institution.
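The arithmetic behind that efficiency claim is straightforward (the tiny difference from the quoted 44.5 percent comes from rounding in the published inputs):

    peak = 3.185        # SuperMUC theoretical peak, petaflops
    sustained = 1.42    # SeisSol peak performance, petaflops
    print(f"{sustained / peak:.1%}")   # -> 44.6%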
“Thanks to the extreme performance now achievable, we can run five times as many models or models that are five times as large to achieve significantly more accurate results. Our simulations are thus inching ever closer to reality,” observes project lead Dr. Christian Pelties. “This will allow us to better understand many fundamental mechanisms of earthquakes and hopefully be better prepared for future events.”
“Speeding up the simulation software by a factor of five is not only an important step for geophysical research,” notes co-lead Professor Michael Bader of the Department of Informatics at TUM. “We are, at the same time, preparing the applied methodologies and software packages for the next generation of supercomputers that will routinely host the respective simulations for diverse geoscience applications.”
The researchers are planning to extend the project to simulate rupture processes at the meter scale as well as the seismic waves that propagate for hundreds of kilometers. The work has the potential to help humanity better prepare for these often damaging, and even deadly, natural forces.
SuperMUC, which was the world’s fourth fastest when it debuted in 2012, employs Intel Xeon processors running in IBM System x iDataPlex servers and has a LINPACK performance of 2.897 petaflops. It touts an innovative warm-water cooling system developed by IBM called Aquasar. The system currently holds the tenth spot on the TOP500 list and LRZ expects to double its performance in 2015.
The US Sequoia supercomputer (with 20 petaflops peak capability) may have trail-blazed the sustained petascale application front, but this level of scale is essential if exascale timelines are to be met. The earthquake simulation project will be highlighted at the International Supercomputing Conference in Leipzig, Germany this June with the session title: “Sustained Petascale Performance of Seismic Simulation with SeisSol on SuperMUC.” | <urn:uuid:9f312222-26df-4618-ae1a-7e98a102a44a> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/04/15/earthquake-simulation-hits-petascale-milestone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00576-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915152 | 981 | 3.171875 | 3 |
Rummel S., BSPG Bayerische Staatssammlung fur Palaontologie und Geologie | Dekant C.H., BSPG Bayerische Staatssammlung fur Palaontologie und Geologie | Holzl S., BSPG Bayerische Staatssammlung fur Palaontologie und Geologie | Holzl S., GeoBio Center | and 10 more authors.
Analytical and Bioanalytical Chemistry | Year: 2012
The strontium isotope ratio (87Sr/86Sr) in beef, derived from 206 European cattle, has been measured. These cattle were located in 12 different European regions within France, Germany, Greece, Ireland, Italy, Spain and the UK. As animal protein is known to be a difficult material on which to conduct Sr isotope analysis, several investigations were undertaken to develop and improve the sample preparation procedure. For example, Sr isotope analysis was performed directly on freeze-dried meat and defatted dry mass from the same samples. It was found that enormous differences - sometimes exceeding the measurement uncertainty - could occur between the fractions and also within one sample even if treated in the same manner. These variations cannot be definitely allocated to one cause but are most likely due to inhomogeneities caused by physiological and biochemical processes in the animals, as post mortem contamination during analytical processing could be excluded. For further Sr isotope measurements in meat, careful data handling is recommended, and for the authentic beef samples within this project, it was decided to use only freeze-dried material. It can be demonstrated, however, that Sr isotope measurements in beef proteins are a valuable tool for authentication of geographic origin. © 2012 Springer-Verlag.
Goitom Asfaha D., European Commission | Quetel C.R., European Commission | Thomas F., Eurofins | Horacek M., AIT Austrian Institute of Technology | and 24 more authors.
Journal of Cereal Science | Year: 2011
The aim of this work (from the FP6 project TRACE) was to develop methods based on the use of geochemical markers for the authentication of the geographical origin of cereal samples in Europe (cf. EC regulations 2081/92 and 1898/06). For the first time, the potential usefulness of combining n(87Sr)/n(86Sr) and δ13C, δ15N, δ18O and δ34S isotopic signatures, alone or with key element concentrations ([Na], [K], [Ca], [Cu] and [Rb], progressively identified out of 31 sets of results), was investigated through multiple step multivariate statistics for more than 500 cereal samples collected over 2 years from 17 sampling sites across Europe representing an extensive range of geographical and environmental characteristics. From the classification categories compared (north/south; proximity to the Atlantic Ocean/to the Mediterranean Sea/to else; bed rock geologies) the first two were the most efficient (particularly with the ten variables selected together). In some instances element concentrations made a greater impact than the isotopic tracers. Validation of models included external prediction tests on 20% of the data randomly selected and, rarely done, a study on the robustness of these multivariate data treatments to uncertainties on measurement results. With the models tested it was possible to individualise 15 of the sampling sites. © 2010 Elsevier Ltd.
Agency: Cordis | Branch: FP7 | Program: CP | Phase: SPA.2010.2.1-04;SPA.2010.2.3-1 | Award Amount: 2.41M | Year: 2010
The Electric Solar Wind Sail (E-sail) is a recent invention of ultra-efficient propellantless in-space propulsion technology. It uses the solar wind charged ions as natural source for producing spacecraft thrust. The E-sail is composed of a set of long, thin, conducting and positively charged tethers which are centrifugally stretched from the main spacecraft and kept electrically charged by an onboard electron gun powered by solar panels. The E-sail concept is an enabling technology for reducing significantly the time, cost and mass required for spacecraft to reach their destinations. It has been estimated that it has the potential to improve the state of the art of propulsion systems by 2 to 3 orders of magnitude if using the lifetime integrated total impulse versus propulsion system mass as the figure of merit. Furthermore, the E-sail propulsion technology is truly a green propellantless method reducing significantly the mission launch masses and the amount of chemical propellant burnt in the atmosphere. As an electromechanical device it does not need any poisonous, explosive or radioactive substances or dangerous construction procedures. In the proposed project, we develop the key E-sail technologies (tethers, tether reels, spinup and guidance/control method based on gas and FEEP thrusters) to prototype level. The goal is that after the project, the decision to build and fly the first E-sail demonstration mission in the solar wind can be made. As a secondary technological goal, the project will raise the FEEP and gas thruster readiness level for general-purpose satellite attitude control purposes.
Alta Spa | Date: 1989-05-30
Agency: Cordis | Branch: FP7 | Program: CP | Phase: SPA-2007-2.2-01;SPA-2007-2.2-02 | Award Amount: 5.36M | Year: 2008
The main objective of the HiPER project is to initiate technological and programmatic consolidation in the development of innovative electric propulsion technologies (and of the related power generation) to fulfill future European space transportation needs. The objective will be pursued by conceiving and substantiating a long term vision for European space transportation and exploration, considering realistic developments in the state-of-the-art, and by performing basic research and proof-of-concept experiments on the key technologies identified by such a vision.
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: SPA.2012.2.2-02 | Award Amount: 2.60M | Year: 2013
PulCheR (Pulsed Chemical Rocket with Green High Performance Propellants) is a new propulsion concept in which the propellants are fed into the combustion chamber at low pressure and the thrust is generated by means of high frequency pulses, reproducing the defence mechanism of a notable insect: the bombardier beetle. The radical innovation introduced by PulCheR is the elimination of any external pressurizing system even if the thruster works at high pressure inside the combustion chamber. At each pulse, pressurization of the combustion chamber gases takes place due to the decomposition or combustion reaction, and the final pressure is much higher than the one at which the propellants are stored. The weight of the feeding system is significantly reduced because the propellants are fed at low pressure, and there is no need for turbopumps, high pressure propellant tanks or gas vessels. The feed pressure becomes independent of the chamber pressure and the performance degradation typical of the blow down mode in monopropellant thrusters can be avoided. The PulCheR concept is able to substitute many currently used propulsion systems for accessing space. It can be employed for low orbital flight and beyond and subsequent re-entry (allowing also for re-usable vehicles), and can be used in space vehicles for typical manoeuvres around a planet or during interplanetary missions. The feasibility of this new propulsion concept will be investigated at breadboard level in both mono and bipropellant configurations through the design, realization and testing of a platform of the overall propulsion system including all its main components. In addition, the concept will be investigated using green propellants with potential similar performance to the current state-of-the-art for monopropellant and bipropellant thrusters. The test campaign will experimentally investigate the propulsive performance of the system in terms of specific impulse, minimum impulse bit and thrust modulation.
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: AAT.2010.1.1-2.;AAT.2010.6.2-1. | Award Amount: 6.54M | Year: 2011
ATLLAS II is a logical follow-up of a recently finalized FP6 project which has as objectives the identification and assessment of advanced light-weight and high-temperature resistant materials for high-speed vehicles up to Mach 6. The material requirements are first defined through an in-depth feasibility study of a Mach 5-6 vehicle. The consortium has now this capability at hand as they can rely on a first set of validated tools, material databases and valuable experience acquired during ATLLAS-I. Starting with a preliminary aero-thermal-structural high-speed vehicle design process, further multi-disciplinary optimization and testing will follow to result into a detailed layout of an independently European defined and assessed high-speed vehicle. Special attention will be given to alleviate sonic boom and emissions at high altitudes. Throughout the design process, the aero-thermal loads will define the requirements for the proposed materials and cooling techniques needed for both the airframe and propulsion components. The former will focus on sharp leading edges, intakes and skin materials each coping with different external aero-thermal loads. The latter will be exposed to internal combustion driven loads. Both metallic (Titanium Matrix Composites and Ni-based Hollow Sphere Stackings) and non-metallic materials (Ceramic Matrix Composites and Ultra High Temperature Composites) will be evaluated. Combined aero-thermal-structural experiments will test various materials as specimens and realistic shapes at extreme conditions representative for high flight Mach numbers. Both static and cyclic tests at low and high temperatures are planned including the evaluation of their durability in terms of long duration exposure to the harsh flight conditions. The materials assigned to dedicated engine components will be exposed to realistic combustion environments. These will be combined with passive or active cooling technologies developed in ATLLAS-I. | <urn:uuid:f651d032-fe9a-41cd-8b89-5c1460d01cce> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/alta-spa-177722/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00292-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912673 | 1,371 | 2.703125 | 3 |
Analyzing binary executables can be a very boring activity, especially when you get used to the regular patterns. You see the same things again and again. A tool to automate the analysis or diminish the amount of text to browse quickly becomes a dream.
On the other hand, if you are new to binary analysis, the assembly language is very unusual at first. You need to learn not only the processor instructions but also the standard code sequences, calling conventions, and lots of other stuff. It is understandable that you might nostalgically remember the high level languages you know.
Here is the tool that can help you in both cases: a decompiler that can handle real world code. Take a binary file, analyze it, and get a nicely formatted C text as the output. I’m tempted to say that you could even recompile it, but no: recompilation is not the goal – the program analysis is.
Let’s see how it works. Take a file, say, a virus – these days many of them are written in a high level language. One could say that the virus writers have a more comfortable environment than the virus analysts. Hopefully the decompiler will change the situation a bit.
Go to the WinMain and check the code. Here we call several functions; apparently the worm works with the Internet (see WSAStartup). We could switch to the graph view to display the logic more clearly:
As we see, the first block checks for a condition; if it is not satisfied, we go to the end of the function, otherwise there are some checks and some actions. We have to zoom in and check each block to understand how the worm works.
Or we could use the decompiler to get this nice text:
Watch the demo to see the decompiler at work (5.5MB flash with audio):
The decompiler as it is today can handle compiler generated code. While it produces nice results, it lacks many features, like floating point support, exception handling, and proper type derivation (and I’m sure there are hundreds of bugs in there), but eventually these things will be implemented and fixed.
The beta testing will be open soon. If you want to participate, please apply at (only professional email addresses, please; keep in mind that the decompiler works only with IDA v5.1)
For more news, subscribe to the mailing list. It is a read only list.
I’d like to say a couple of words about the decompiler internals. As with anything else in reverse engineering, the decompiler uses many probabilistic methods and heuristics. This means that its output cannot be made 100% reliable. This is just a caveat to anyone who wants a decompiler to recover lost source code. If you want an automatic recovery, try something else; this decompiler won’t work for you.
It heavily uses data-flow analysis methods to analyze the program. In fact the decompiler consists of two parts: the first part is an engine which works with a microcode. This engine can reason about the microcode and optimize it with the goal of making it as concise as possible. The second part converts the microcode to a human readable form, to a C text.
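To give a flavor of what such microcode optimization means, here is a toy sketch in Python. It is nothing like the real engine or its microcode - the three-address IR and the single constant-folding rule are invented for illustration only:

    # Toy IR optimizer: repeatedly propagate constants and fold additions
    # until the code stops changing (a fixed point).
    def optimize(ir):
        changed = True
        while changed:
            changed = False
            # Constants known from "mov dst, <int>" instructions.
            consts = {dst: a for op, dst, a, b in ir
                      if op == "mov" and isinstance(a, int)}
            out = []
            for op, dst, a, b in ir:
                a = consts.get(a, a)           # constant propagation
                b = consts.get(b, b)
                if op == "add" and isinstance(a, int) and isinstance(b, int):
                    op, a, b = "mov", a + b, None   # constant folding
                    changed = True
                out.append((op, dst, a, b))
            ir = out
        return ir

    ir = [("mov", "t1", 2, None),
          ("mov", "t2", 3, None),
          ("add", "t3", "t1", "t2")]
    print(optimize(ir))   # the add collapses into a plain constant 5

A real engine applies dozens of such rules (dead code elimination, expression propagation, and so on) over a far richer representation, but the iterate-until-nothing-changes shape is the same.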
The second part is quite simple: it just displays a nicely formatted text on the screen. The first part, the optimization engine (I haven’t come up with a nice name for it yet, nor for the decompiler), is much more interesting. It can be developed into something bigger: something which is capable of answering questions about variable ranges, code and data coverage (it will need to use inter-function analysis for that), and other things. It could check if some invariants hold at the given program locations. A program verification tool can be built on top of such an engine quite easily. Imagine an automatically generated nice report about not only trivial buffer overflows but also other logic flaws in the program. This engine can evolve into such a platform. This is how I see its bright and promising future 🙂
With the advent of smart electricity meters and websites that are updated in real time, getting accurate information on how much energy your home consumes is becoming more convenient. The White House hopes that data soon will only be one click away for millions of Americans.
The Barack Obama administration announced this week that the federal government and several utility companies have agreed to participate in the Green Button initiative. The program is designed to help consumers access their household’s energy consumption data more easily and in a standardized format.
Easy Being Green
The federal government and utility companies say these so-called “green buttons” will be available in 2012 and 2013 on utility companies’ websites. Customers who want to access their data — information collected from smart meters — will be able to click a green button on their utility company’s website and securely access their household data.
The utilities and electricity suppliers making new commitments include:
American Electric Power, serving 5.3 million customers in 11 states (Arkansas, Indiana, Kentucky, Louisiana, Michigan, Ohio, Oklahoma, Tennessee, Texas, Virginia and West Virginia);
Austin Energy, serving 400,000 customers in Texas;
Baltimore Gas and Electric, serving 1.2 million customers in Maryland;
CenterPoint Energy, serving 1.8 million households in Texas;
Commonwealth Edison, serving 3.4 million households in Illinois;
NSTAR, serving 1.1 million households in Massachusetts;
PECO, serving 1.4 million households in Pennsylvania;
Reliant, serving 500,000 households in Texas; and
Virginia Dominion Power, serving 2.4 million customers in Virginia and North Carolina.
“Green Button will arm millions of Americans with information they can use to lower their energy bills,” said Nancy Sutley, chair of the White House Council on Environmental Quality. “Innovative tools like these are good for our economy, good for the health of our communities, and an essential part of our approach toward a secure and clean energy future that works for Americans.”
Nine utilities and electricity suppliers nationwide have already committed to move forward with the initiative, the White House said. Collectively these utility companies account for more than 15 million households. The companies have said they will roll out the initiative on their own timelines.
The suppliers have agreed to base each of their Green Buttons on a common standard developed through a public-private partnership supported by the U.S. National Institute of Standards and Technology. What the Green Button actually will look like is to be determined.
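For a sense of what a standardized download means to a programmer, here is a hedged Python sketch. It assumes the Atom-wrapped ESPI XML format associated with the NIST-supported standard (the naesb.org/espi element names and the file name are assumptions, not taken from the announcement):

    import xml.etree.ElementTree as ET

    ESPI = "{http://naesb.org/espi}"

    tree = ET.parse("greenbutton_download.xml")   # hypothetical export file
    for reading in tree.iter(ESPI + "IntervalReading"):
        start = reading.find(f"{ESPI}timePeriod/{ESPI}start").text
        value = reading.find(ESPI + "value").text  # typically watt-hours
        print(start, value)

The point of the common format is exactly this: one small parser works against any participating utility's export.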
Val Jensen, senior vice president of customer operations at Commonwealth Edison — one of the participating companies — said that data can give customers a better idea of how much energy they’re using, when they’re using it and how much it’s costing them. Research has shown that by having this information in an accessible way, customers become better at managing their energy and therefore can reduce their energy costs.
Tammy Ridout, a spokeswoman for American Electric Power, said a goal of the initiative is to have the same button appear on any utility company’s website. The idea is that having the button prominently displayed should help customers access their data more easily.
“On some utility websites, the information may not always be as consumer friendly as it could be,” Ridout said.
Last year, former federal CTO Aneesh Chopra introduced the idea of the Green Button and said the concept would be modeled after the Blue Button initiative, an effort originally launched by the U.S. Department of Veterans Affairs to provide people with a one-stop access point to their health information.
Roadmap to Apps
Technology companies have said they will take part in Green Button by developing applications and Web tools that utilize energy data, since that information should soon be available in a more accessible format, according to the announcement.
“So what Green Button is going to be able to do when it’s widespread and fully mature is allow an ecosystem of third-party software to take this data and bring a lot of value to the utility customers,” said Andy Frank, vice president of business development for Efficiency 2.0, a technology company that’s participating in the initiative. | <urn:uuid:4c432535-5371-49f5-a334-e267455ef56d> | CC-MAIN-2017-04 | http://www.govtech.com/innovationnation/What-Is-the-Green-Button.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00072-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94188 | 868 | 2.59375 | 3 |
The U.S. Department of Education and the American Institutes for Research released a report on what STEM education will look like in 10 years.
The components of STEM in 2026 are networked communities of practice, accessible learning activities that invite play and risk, interdisciplinary approaches to problem-solving, inclusive learning spaces, accessible measures of learning, and societal beliefs that promote diversity in STEM.
The report said that schools should foster groups that share a common concern and passion for STEM learning.
“These collaborative networks of STEM learning foster the skills and growth mind-sets among all students that lead to lifelong learning and opportunities for postsecondary and career success, while expanding access to rigorous STEM courses,” the report said.
STEM activities should incorporate intentional play that’s appropriate for any age level that encourages curiosity and the ability to think in different ways. That way, students could learn that they have the necessary talent to contribute to the STEM field and to work in team-based environments.
The report said that students should be encouraged to tackle “grand challenges” that haven’t yet been solved in local, national, or global communities, such as building technology that will improve health care systems. STEM students should also attempt to solve these problems using various methods instead of strictly from a technological standpoint.
Learning spaces could include a well-resourced classroom, the natural world, makerspaces, and environments induced by virtual reality equipment depending on the lesson.
“Flexible learning spaces are adaptable to the learning activity and invite creativity, collaboration, co-discovery, and experimentation in accessible and unintimidating instructor-guided environments,” the report said.
President Obama said the nation’s education system needs to be updated so that students are taking “fewer, smarter, and better” assessments. Tests should be designed to ensure that they are not redundant, do not take up too much time, and do not offer unreliable data.
The report said that other forms of assessments could be more demonstrative of student learning than tests, including portfolios, presentations, and observations.
The report said that toy manufacturers, retailers, and popular media could be more conscious of the persona they portray as the typical STEM professional to include a more diverse profile.
“These images counter historical biases that have prevented the full participation of certain groups of individuals in STEM education and career pathways,” the report said. “Communities and youth in all neighborhoods and geographic locations around the country are equally exposed to social and popular media outlets that focus on STEM, and a wide diversity of STEM-themed toys and games that are accessible and inclusive and effectively promote a belief among all students that they are empowered to understand and shape the world through the STEM disciplines.”
The report said that as the STEM field continues to grow, the projection of what STEM education will look like in 2026 will change.
“The project team and contributors expect and trust that the STEM 2026 vision described here will be revised and refined as new knowledge, evidence, and experiences are gained in the process of achieving it,” the report said. | <urn:uuid:615c7bf4-8850-4df8-b539-9dba59c9677c> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/what-stem-education-will-look-like-in-2026/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00072-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952831 | 635 | 3.453125 | 3 |
Technical Terms Glossary
Projection System Terms
8514/A: IBM high-resolution video standard of 1024 x 768 (interlaced).
Active matrix: Each pixel is actively controlled by a diode or a transistor. Advantage: allows each pixel to be independently controlled.
ANSI lumens: A scale to measure the overall brightness value for projectors. The measurement represents the average value of 9 points on the projected image.
Aspect ratio: The relationship between the width and height of the output (whether it is a monitor, LCD projection panel, overhead or slide).
ANSI: American National Standards Institute.
CGA: Color Graphics Adapter. This is the card added to an IBM® PC & XT that gives the computer the ability to handle graphics and color. Resolution of this card is 640 x 200 pixels.
Composite video: A signal that combines all the color and timing components of the image into a single input line.
Compression: A method of displaying images in a reduced size format. A compressed image usually has part of the image information discarded. The result is a projected image that has light and dark lines and text characters with thick and thin line widths.
Contrast ratio: The ratio of the brightest and darkest images a display can reproduce.
Convergence: The alignment of the red, green, and blue video image signal on a projected display system.
Diagonal size: The diagonal length of the LCD plate. Typical sizes are 8.4" and up.
DSTN: Double Super Twist Nematic. Two separate LCD plates are combined to form a single panel.
Distribution amplifier: A device that amplifies and transmits a video signal over a distance using shielded cable.
EGA: Enhanced Graphics Adapter. This card is the second generation of the CGA card in that it gives IBM PCs, XTs and ATs greater resolution (640 x 350 in all models).
Hz, KHz or MHz (Hertz, Kilohertz or Megahertz): Cycles per second. (Kilo = 1,000, Mega = 1 million.) These terms are used to express the frequency of an electrical signal or event.
Intelligent compression: Compresses higher resolution images into 640 x 480.
Interlaced: Every other line is scanned during each total vertical (full) screen refresh.
Infrared (IR): A method of wireless transmission using infrared light waves.
Non-interlaced: Every line is scanned during each total vertical (full) screen refresh.
NTSC: National Television Standards Committee. The standard for broadcast color television and other video equipment signals in North America, established in 1953. 525 lines/60 Hz.
PAL: Phase Alternate Line. The phase of the color carrier alternates from line to line. PAL is used extensively in Western Europe. 625 lines/50 Hz.
Palette: The range of colors available for use in creating an image. The use of a standardized palette in a presentation allows the user to create a consistent look.
Panel: Same as liquid crystal display (LCD).
Passive matrix: The use of simple driver electronics in an LCD projection panel where the pixels are turned on and off using a row-and-column format. The amount of control on each pixel is limited, which results in lower contrast ratios and a slower response time than active-matrix LCD projection panels.
Pixel: A position on a display that consists of a single dot or group of three dots (red, green and blue). Total pixels are usually expressed in horizontal x vertical dimensions (e.g., 640 x 480).
Polysilicon TFT: A type of LCD technology that allows more light through the LCD at high temperatures.
Refresh rate: The number of times the screen image is "painted" or refreshed per second, expressed in Hertz.
Resolution: The ability of an imaging system to faithfully reproduce fine detail information and transitions between dark and light parts of an image. The more pixels the display system can address (e.g., 800 x 600), the higher the quality of the image and the more detail it shows.
Response time: The time it takes for a pixel to turn on and off. Typically measured in milliseconds; an active-matrix LCD projection panel's response time is fast enough to display full-motion video and rapid mouse cursor movements.
RGB: Red, Green, Blue. The basic signal components of the color video system.
SECAM: Sequentiel Couleur Avec Memoire. The color television standard developed in France. SECAM is used mostly in France and Eastern European countries. 625 lines/50 Hz.
Serial port: An I/O port on the computer enabling other devices or computers to link with the computer. Also referred to as RS-232C or COM port.
SVGA: Super VGA, a resolution of 800 x 600. This standard has versions with different vertical frequencies.
S-Video: A signal that separates the luminance (Y) and chrominance (C) signals.
TFT: Thin Film Transistor. This is a developing technology that attempts to place the controller of the panel directly on the surface of the glass.
Throughput: The percentage of the light transmitted off the stage of the overhead projector that reaches the screen at a given distance. Typically, LCD projection panels are able to use less than 10% of the total light available.
TSTN: Triple Super Twist Nematic. Three separate LCD plates are combined to form a single panel.
VESA: Video Electronics Standards Association. A non-profit group of companies organized to define and improve computer graphics standards. VESA standards usually achieve a higher display quality by increasing the resolution (e.g., 1024 x 768) while maintaining a high vertical refresh rate (e.g., 72 Hz) to reduce flicker.
VESA standard: A set of display specifications agreed upon by the VESA organization, usually referred to by resolution and vertical refresh rate.
VGA: Video Graphics Array. This is the standard interface for the IBM PS/2®. It is the only analog graphics card IBM has used (other cards handle digital information): 720 x 400 resolution in text mode, 640 x 480 in graphics mode.
Video capability: The ability to project images from a VCR, laser disc, or PC with CD-ROM drive.
Wireless remote: A lightweight remote control that offers all of the functionality of a computer-compatible mouse.
XGA: Extended Graphics Adapter. IBM's graphics standard that includes VGA and extended resolution up to 1024 x 768.
Y-cable: On many computers there is only one monitor output, so a cable is necessary that will split the monitor signal so it will work simultaneously with both a monitor and an LCD projection panel.
AC power outlets for connecting accessory items such as notebook computers.
of the stage that is available for projecting an image. Usually comes in
the following sizes: 10" X 10", 10.5" X 10.5" and 11.25" X 11.25" (A4).
projector head, where the mirror will move one-half the distance when the
image is tilted up on the screen.
coating on a reflective optical stage (Fresnel lens) that resists scratching,
so that the projected visuals stay sharp and clear.
the edge-to-edge uniformity of the projected light to eliminate yellow
or blue corners. This gives an optimized image, no matter what the projector-to-screen
distance, within the prescribed area.
head where the mirror and lens is enclosed.
of the projector lamp by the upward flow of heated air, without the use
of a fan. Usually used in reflective-type projectors that have the lamp
installed into the head assembly.
lens that has two elements contained in a single assembly.
that the projected image can be tilted to, and still project a full image
onto the screen.
enlargement lens used to enlarge LCD projection panel images.
given to a lens, stated in inches or millimeters. The smaller the focal
length, the wider the angle of the image. Focal length is the distance
between the lens and its focal point.
that is used when it is necessary to project an image onto a vertical surface
(such as a wall) with a high tilt angle of the head, making it possible
to obtain uniform focus from top to bottom of the image. This feature does
not eliminate keystoning.
lens that is composed of a series of closely spaced grooves that control
the refraction of light. It is usually part of the stage.
switch that decreases (LOW) the lamp output by 10% and doubles the lamp
life. The HIGH setting should be used with LCD projection panels.
of light projected onto a screen or other surface. Stated in lumens.
is caused when the projected image is not perpendicular to the screen.
Correct keystoning by tilting the screen until it is perpendicular to the
light beam axis.
mechanism that allows easy exchange of replacement lamps by rotating a
replacement lamp into operation position after the primary lamp has failed.
Liquid Crystal Display (LCD) projection panel. A panel that sits on top of the projector stage, which creates an image that
is generated by a computer.
Lumen. A measurement of illumination on a screen or other surface. One lumen is the light of
one candle power on each square foot of a surface of a sphere at a radius
of one foot from the light source.
Open head. A projection head where the mirror and lens are not enclosed. The image is raised on
the screen by tilting the mirror up.
A safety feature that turns the lamp off if the projector reaches an unsafe temperature.
A feature that allows the projector fan to continue to run after the lamp has been
turned off, which reduces the temperature of the unit.
Reflective projector. A projector where the light source is located in the head assembly and shines
down onto the stage. The light is then reflected from the Fresnel lens,
back through the head and onto the screen. Usually used in lightweight, portable models.
Single-element lens. A lens that has only one element.
Stage. The area of the projector where the transparency film or LCD projection panel is placed.
Transmissive projector. A projector where the light source is under the stage and light is transmitted
through the transparency film to the head and onto the screen.
Triplet. A lens that has three elements contained in a single assembly.
Variable focus lens. A lens containing movable elements to permit focusing of the image by varying
the focal length. (Not a zoom.)
Wide-angle lens. A lens that will project a larger image on a screen at a closer distance than
a standard lens will project. Usually has a focal length of 11.5" (293
mm) or smaller.
IBM and PS/2 are registered trademarks of IBM Corporation.
| <urn:uuid:397af202-3ae3-4ec3-8b45-bda50d31ceb1> | CC-MAIN-2017-04 | https://kintronics.com/3m/601_gloss.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00466-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.830391 | 2,075 | 2.578125 | 3 |
NTP vs SNTP - it's more than this skewing issue, there are RFC documents
that describe them both. At one time, these were separate documents. Now
I think I just saw that SNTP is covered as a subset in the NTP RFC
(request for comment, the mechanism used to get all manner of internet
standards established and/or worked on - a very casual and probably
skewed description of it all!!)
The last RFC that was only SNTP was 4330 - that has been obsoleted by
RFC 5905; http://tools.ietf.org/html/rfc5905 is a place to look at it.
So according to the latest RFC 5905, SNTP just doesn't have all the
stuff that is in NTP - from 5905 we have this in the SNTP section -
Primary servers and clients complying with a subset of NTP,
called the Simple Network Time Protocol (SNTPv4)
[RFC4330 <http://tools.ietf.org/html/rfc4330>], do not need to
implement the mitigation algorithms described in
Section 9 <http://tools.ietf.org/html/rfc5905#section-9> and following sections.
As I barely understand things, mitigation is working out time data from
multiple servers - SNTP uses only 1 server as its time source.
The RFC, if I read it correctly, says that the only thing different is
the mitigation - so skewing is certainly available to SNTP, as Chuck described.
Anyhow, NTP on IBM i is SNTP according to RFC 2030, I think - a
predecessor of 4330. Same with the AIX version. Not sure about any Java implementations.
On 5/20/2014 5:48 PM, CRPence wrote:
On 20-May-2014 14:23 -0500, Buck Calabro wrote:
<<SNIP>> NTP will speed up the clock (or slow it down) a bit until
the incoming packets match the system clock. In other words, the
clock will gradually slew onto the new time. With SNTP, the
algorithm is simple, and doesn't slew the clock, it makes up the
difference in one jump, so there's a lurch when SNTP needs to make a correction.
This is why the actual requirement matters. If the requirement is
to avoid large clock changes, you really need to be running an NTP
that can gradually slew the IBM i clock. <<SNIP>>
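To make the slew-versus-step behavior described above concrete, here is a minimal Python sketch; the per-tick rate and the loop are purely illustrative assumptions, not the actual ntpd or IBM i algorithm.

```python
def step_clock(clock, offset):
    """SNTP-style: replace the time in one jump -- the 'lurch'."""
    return clock + offset

def slew_clock(clock, offset, max_rate=0.0005):
    """NTP-style: stretch or shrink each one-second tick slightly
    until the accumulated adjustment equals the offset."""
    while offset != 0.0:
        adj = max(-max_rate, min(max_rate, offset))  # cap the per-tick change
        clock += 1.0 + adj    # time keeps moving forward, never jumps
        offset -= adj
    return clock
```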
As I understood the SNTP client for IBM i, there are thresholds that
can be set to decide if\how to adjust the time. And if the hardware
supports the capability to slow-down or speed-up the system clock,
then that throttle feature will be used [within specified thresholds]
rather than resetting the time. Notably, for Change SNTP Attributes
(CHGNTPA) is this old help-text\documentation for a particular parameter:
_Client adjustment threshold_ (ADJTHLD)
"Specifies the threshold which determines whether changing the clock
will require setting or adjusting the clock.
• Setting the clock means replacing the current clock value with a
new value. ...
• Adjusting the clock means to incrementally speed up or slow down
the clock so that time is gradually synchronized with the NTP or SNTP
time server. Adjusting does not cause the large jumps in time that
can be experienced with setting the clock. ...
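To picture the set-versus-adjust decision this help text describes, here is a loose Python sketch; the function, the decision policy, and the default threshold value are invented for illustration and are not IBM i code.

```python
def correct_clock(offset_seconds, adjthld_seconds=2.0):
    # ADJTHLD is the parameter named in the CHGNTPA help text above;
    # this logic and default value are illustrative assumptions only.
    if abs(offset_seconds) > adjthld_seconds:
        return "set"      # replace the clock value in one jump
    return "adjust"       # incrementally speed up or slow down the clock
```

| <urn:uuid:f2941c04-3799-4b80-aa89-84b265f4330e> | CC-MAIN-2017-04 | http://archive.midrange.com/midrange-l/201405/msg00811.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00374-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905144 | 724 | 2.71875 | 3 |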
Binary search trees, dynamic arrays, matrix multiplication — these are some of the reasons that more than 50 students traveled to San Diego in July as part of the 2nd Annual XSEDE (Extreme Science and Engineering Discovery Environment) conference.
The students also enjoyed plenty of pool time and sight-seeing after competing in the XSEDE13 Student Programming Competition, which was a primary focus of the overall Student Program that included a poster session, papers, a mentor program, tutorials, and a job fair.
The Student Programming Competition started in 2011 when the National Science Foundation (NSF) cyberinfrastructure initiative was known as the TeraGrid. This year it has matured under the direction of Ange Mason and a handful of committed teachers and outreach representatives from Shodor, Contra Costa Community College, the University of Washington, Louisiana State University, and the Louisiana School of Math, Science and the Arts.
“We want students to realize their potential with analytical thinking in joining the competition,” said Mason, who chaired the XSEDE13 student program and works as the Education and Outreach Coordinator for the San Diego Supercomputer Center. “I’m hoping this pushes them in the right direction to challenge themselves and to go outside their comfort zone.”
A total of 10 teams participated in the all-day competition, which challenged the students on a number of computational problems such as calculus-based approaches for calculating the area under simple curves; generating prime numbers that are large enough to be used for computationally-intensive tasks; and using ensemble-based approaches to produce a probability distribution of expected outcomes. In the process, students document their work in an engineering journal, create algorithms, write code, and measure how long it takes the code to run. They can use a variety of programming languages including C, C++, Fortran, Perl, Python and Java.
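To give a flavor of the first problem type, the area under a simple curve can be estimated with a short trapezoidal-rule routine. The Python sketch below is illustrative only; it is not taken from the actual contest materials.

```python
def trapezoid_area(f, a, b, n=100000):
    """Approximate the area under f between a and b using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Example: area under y = x**2 from 0 to 1; the exact answer is 1/3.
print(trapezoid_area(lambda x: x * x, 0.0, 1.0))
```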
According to Mason, the students who participate are not necessarily invested in pursuing a STEM career, which is why formalizing the competition into an annual event, listening to student feedback on how to improve it, and using social media to keep the students engaged were crucial this year. More than 58 institutions were represented, with an impressive mix of traditionally underrepresented students, including women, Hispanic, African American and Native American students.
The students who are the most marginalized in computing have voices in their head that say “I can’t,” according to Tom Murphy, a math teacher and computer scientist at Contra Costa College in California.
“This is why the competition is a big win for what we’re doing in terms of outreach,” Murphy said. “The students are walking away with way more than how they place in this competition. There’s a lot of hitting your head against the wall, but there is also a lot of success. Competing allows them to focus on their education differently. If I can get them to own their education the way they own solving these problems, and take this back to the classroom, they own a larger part of their life.”
Genaro Hernandez Jr, a graduate student in Computer Science at the University of Maryland, Baltimore County and an XSEDE Scholar, was a first time participant. “Our team had numerous strengths depending on peoples’ background,” Hernandez said. “One person could parallelize code to make it run faster, another person was adept at math, most people were eager to solve the problems, and some of the team members were good at generating code.”
Hernandez’s team was able to solve 7 out of 10 problems ― not bad. However, in Murphy’s many years of organizing programming contests (he has been involved since 2005), no team to his knowledge has completed all 10 problems in the eight-hour timeframe.
“Our philosophy is – let’s give them way more than they can do,” Murphy said. “It would be a boring competition if they could answer every problem. You always want students to extend their reach.”
Participating in the programming competition has enormous value for the students. Can they code in the presence of others? Can they generate solutions and write code under time pressure? Can they work as a team?
Brad Burkman, an XSEDE Campus Champion and a math teacher from the Louisiana School for Math, Science and the Arts, is leading by example. “I want to make people aware that advanced computing resources can be brought into high school classrooms,” he said.
The school Burkman represents was one of the first high schools to have a Little Fe computer, and this amazing “supercomputer in a box” has lived in his classroom ever since. He teaches weeklong courses in parallel programming using Little Fe, and the students have access to XSEDE resources via Burkman’s Campus Champion allocation. High school students can develop a “righteously powerful program” that they can then move to an XSEDE resource, according to Burkman. The high school team he mentored and chaperoned won the “Most Creative Solution” category in the competition.
“XSEDE is an amazing resource for making science happen,” Burkman concluded.
Ultimately, the XSEDE Student Programming Competition is about helping students develop problem-solving skills. Memorizing one way to solve a problem isn’t the goal. “Justifying how you solve a problem and why is a skill set I want my students to have. They get credit for their reasoning and extra points for applying parallel solutions.”
Why should XSEDE continue hosting a student programming competition?
“It changes students,” Murphy said. “You want students to use supercomputers, but they need to change how they look at their education. They have the potential to become impactful contributors whether they are scientists, artists or economists, and computation affects all fields.” | <urn:uuid:7e12ca66-8028-4e20-a6bc-80adae8deeb3> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/08/15/programming_competition_allows_students_to_geek_out_and_gain_crucial_skillsets/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00193-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969003 | 1,243 | 2.875 | 3 |
All 30 users on a single floor of a building are complaining about network slowness. After investigating the access switch, the network administrator notices that the MAC address table is full (10,000 entries) and all traffic is being flooded out of every port. Which action can the […]
A network printer has a DHCP server service that cannot be disabled. How can a layer 2 switch be configured to prevent the printer from causing network issues?
A switch is being configured at a new location that uses statically assigned IP addresses. Which will ensure that ARP inspection works as expected?
Which of the following would need to be created to configure an application-layer inspection of SMTP traffic operating on port 2525?
Which command is used to nest objects in a pre-existing group?
Which threat-detection feature is used to keep track of suspected attackers who create connectionsto too many hosts or ports?
What is the default behavior of an access list on the Cisco ASA security appliance?
What is the default behavior of NAT control on Cisco ASA Software Version 8.3?
Which three options are hardening techniques for Cisco IOS routers? (Choose three.)
Which three commands can be used to harden a switch? (Choose three.) | <urn:uuid:1ad85f81-14bd-4995-a88b-3071cb04b4e2> | CC-MAIN-2017-04 | http://www.aiotestking.com/cisco/category/exam-300-206-implementing-cisco-edge-network-security-solutions-update-january-4th-2016/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00311-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925705 | 257 | 2.578125 | 3 |
To fully comprehend the importance of data normalization in an Intrusion Prevention System, it is first necessary to understand what data normalization is and what it does, how it accomplishes its goal, and why it is so integral to maintaining security against the advanced evasion techniques used today.
The critical importance of data normalization can also be seen while reviewing security failures and fundamental design flaws in many IPS devices that lack such normalization.
Data normalization explained
Data normalization is the process of intercepting and storing incoming data so it exists in one form only. This eliminates redundant data and protects the data’s integrity. The stored, normalized data is protected while any appearance of the data elsewhere is only making a reference to the data that is being stored and protected in the data normalizer.
The normalizer's job is to patch up the incoming data stream to eliminate the risk of evasion as well as ambiguities. The monitor then views the data in its pure, protected and normalized form. Varying forms of normalization exist at levels of increasing complexity, reflecting the set of requirements that must be met to achieve normalization. The most basic is known as First Normal Form, often abbreviated 1NF. It is followed by Second Normal Form (2NF) and Third Normal Form (3NF), and the forms can continue increasing in complexity as required or desired.
Normalization plays a key role in the security of a network, provided that normalization extends to every protocol layer. One of the major benefits is the forced integrity of the data as data normalization process tends to enhance the overall cleanliness and structure of the data. Normalization significantly contributes to the fortification of a network, especially in light of typical networks’ three main weak points: traffic handling, inspection and detection.
Where many IPS devices go wrong
When it comes to traffic handling, many IPS devices focus on throughput orientation for the most rapid and optimal inline performance. This process, while attractive for its rapidity, makes it impossible for full normalization to take place. The data traffic is then inspected without normalization, offering prime opportunities for infiltration to take place. One may agree that a rapid and optimal output performance is useless if the payload is riddled with malicious invaders.
When many IPS devices do employ normalization, they often rely on shortcuts that only implement partial normalization as well as partial inspection. This leaves gaps in the security and provides optimal opportunity for evasions. TCP segmentation handling is one example of such a process, as it is only executed in chosen protocols or ports and is drastically limited in its execution. Shortcut exploitation is a familiar evasion method and, with the proliferation of IPS devices that fail to perform full normalization, it is likely to remain that way due to its ease of execution.
Many IPS devices fall short in other areas, as well. They often perform only a single layer of analysis, execute traffic modifications and interpretations and rely on inspection of individual segments or pseudo-packets. Their detection methods are based on vulnerability and exploit signatures, banner matching or shellcode detection. Their updates are generally delayed and their evasion coverage is extremely limited. Evasions can easily exploit the limited inspection scope by spreading attacks over segments or pseudo-packet boundaries.
Packet-oriented pattern matching is insufficient as a means of invasion detection due to the need for a 100% pattern match for blocking or detection. Advanced Evasion Techniques (AETs) possess the ability to utilize a vast multitude of combinations to infiltrate a system, rendering the likelihood of a 100% pattern match for every possible combination nonexistent. It is simply impossible to create enough signatures to be effective.
AETs exploit the weaknesses in the system, often being delivered in a highly liberal manner that a conservatively designed security device is incapable of detecting. In addition to using unusual combinations, AETs also focus on rarely used protocol properties or even create network traffic that disregards strict protocol specifications.
A large number of standard IPS devices fail to detect and block AETs, which can therefore effectively disguise a cyber attack that infiltrates or even decimates the network. Standard methods used to detect and block attacks generally rely on protocol anomalies or violations, which is no longer adequate to match the rapidly changing and adaptable AETs. In fact, the greatest number of anomalies occurs not from attacks, but rather from flawed implementation in regularly used Internet applications.
An additional issue that arises with many IPS devices is the environment in which they are optimized. Optimization typically takes place in a clean or simulated network that has never suffered a complex and highly elusive attack.
Resistance to normalization
Resistance to data normalization does not typically arise from the advanced security it promises, but rather the impact it may have on a network. When the security design flaw is found in hardware-based products, network administrators may resist the upgraded security measure due to the necessity of significant research and development for redesign. Additional memory and CPU capacity are also required to properly implement a data stream inspection that comprehensively protects against AETs.
When vendors decide that the required changes are impossible to implement, they leave their networks highly vulnerable for exploits and attacks. Focusing on the cost of the cleanup required for all infected computers in the network, and the even higher cost of network downtime, can help change the minds of vendors who continue to resist the necessary adaptations.
How the most effective IPS devices use data normalization
Instead of analyzing data as single or combined packets, effective IPS devices analyze data as a normalized stream. Once normalized, the data is sent through multiple parallel and sequential machines. All data traffic should be systematically analyzed by default, regardless of its origins or destination.
The most effective way to detect infiltration is to systematically analyze and decode the data, layer by layer. Normalization must occur at every layer simply because attacks can be hidden at many different layers. In the lower protocol layers, the data stream must be reconstructed in a unique manner. Modifications should generally be very slight or nonexistent, although any fragments or segments containing conflicting and overlapping data should be dropped.
Normalizing traffic in this manner ensures there is a unique way to interpret network traffic passing through the IPS. The data stream is then reassembled for inspection in the upper layers. Inspection of constant data stream in this manner is a must for correcting the flaws and vulnerabilities left open by many IPS devices. This process also removes the possibility of evasion of attacks that span over segment boundaries.
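To make that reassembly rule concrete, here is a minimal Python sketch (plain illustration, not any vendor's engine): segments are accepted in arrival order, and any segment whose bytes conflict with data already accepted at the same offsets is dropped, so every layer above sees one canonical stream.

```python
def normalize_stream(segments):
    """segments: iterable of (offset, payload bytes), possibly overlapping.
    Returns one unique byte stream; conflicting overlaps are dropped."""
    accepted = {}                                   # offset -> byte value
    for offset, data in segments:
        conflict = any(accepted.get(offset + i, b) != b
                       for i, b in enumerate(data))
        if conflict:
            continue                                # drop the whole segment
        for i, b in enumerate(data):
            accepted[offset + i] = b
    # Gaps in the stream are ignored here for brevity.
    return bytes(accepted[k] for k in sorted(accepted))
```

Because every inspection layer then works from the same canonical bytes, an attack split across overlapping segments can no longer show one byte sequence to the IPS and a different one to the end host.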
Higher levels are subjected to inspection of separate data streams that are normalized based on the protocol. In compressed HTTP, for instance, the data can be decompressed for inspection. In another example, MSRPC-named pipes using the same SMB connection would be demultiplexed and inspected separately.
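The compressed-HTTP example can be pictured as decompress-then-inspect. A minimal sketch, assuming a gzip-encoded body and simple substring signatures:

```python
import zlib

def inspect_http_body(body, signatures, gzip_encoded=False):
    # Normalize first (decompress), then run detection on the real bytes.
    data = zlib.decompress(body, 16 + zlib.MAX_WBITS) if gzip_encoded else body
    return [sig for sig in signatures if sig in data]
```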
Such a thorough and comprehensive data normalization process is the most effective way to protect networks from AETs and other threats that may otherwise disguise themselves to go undetected through standard IPS. The most effective IPS devices will ensure evasions are removed through the normalization process before the data stream is even inspected. This normalization is so successful because it combines a data stream based approach, layered protocol analysis and protocol specific normalization at different levels. It therefore helps fortify a network’s three weakest points and keeps malicious invader’s attacks at bay. | <urn:uuid:d20f350a-3adb-4f56-92d3-b1a7afe24e78> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/01/07/the-importance-of-data-normalization-in-ips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00127-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933159 | 1,491 | 2.703125 | 3 |
Sun Library Flaw Opens Door to Remote Attacks
A vulnerability in a Sun code library enables a remote attacker to execute code on a user's machine. There is a vulnerability in a Sun Microsystems Inc. code library that enables a remote attacker to execute code on a user's machine. The flaw also affects libraries derived from the Sun library, including any BSD-derived libraries with XDR/RPC routines and the GNU C Library with sunrpc. The vulnerability is located in the Sun Network Services Library, which enables developers to incorporate XDR (External Data Representation) into their applications. XDR is a standard for the description and encoding of data and is used to transfer data between computers with different architectures.
Researchers at eEye Digital Security Inc. discovered an integer overflow in the xdrmem_getbytes() function. Depending on the location and use of the vulnerable routine, an attacker may be able to exploit this vulnerability remotely.
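The advisory does not include exploit details, so the snippet below is only a hypothetical illustration of the general bug class: a bounds check whose 32-bit addition wraps around. Python integers do not overflow, so the wrap is simulated with a mask.

```python
MASK32 = 0xFFFFFFFF  # simulate 32-bit unsigned arithmetic

def flawed_bounds_check(position, requested_len, buffer_size):
    # A huge requested_len wraps the sum, so 'end' looks small and the
    # check passes even though the read would run far past the buffer.
    end = (position + requested_len) & MASK32
    return end <= buffer_size

print(flawed_bounds_check(8, 0xFFFFFFFC, 1024))   # True: check bypassed
```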
| <urn:uuid:b54c8ed0-1052-47dc-95ce-c39fdbf57a2a> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Application-Development/Sun-Library-Flaw-Opens-Door-to-Remote-Attacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00431-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.868527 | 216 | 2.6875 | 3 |
Most of us remember when femtocells were the “next big thing” in wireless services. In the early days, small-cell solutions emerged as tools for addressing gaps in wireless coverage. Today, small-cell technology is (rightfully) being repurposed to address the ever-pressing issue of mobile-bandwidth shortage. It certainly has great potential, but even with carriers aggressively pursuing small-cell deployments, it remains unclear which models and strategies will be successful.
The business case for exploiting small-cell technologies for scale-based infrastructure enhancement is simple: Growth in mobile-data consumption threatens to outpace the rate at which carriers can add capacity in the form of traditional cell towers. The small-cell solution, which today exists in several versions, with effective ranges between 50 and 5,000 feet, allows carriers to replicate the connectivity of cell towers on a much smaller scale. While these smaller cells can’t handle the capacity of a full-scale (or macrosite) cell tower, they can be deployed in greater numbers, creating antenna arrays that provide substantial capacity. Additionally, unlike macrocell cites, small cells are relatively discreet and can be mounted in densely populated locations and urban environments.
Carriers are counting on small cells to deliver, with nearly all (98 percent) mobile operators viewing small-cell applications as essential to the future of their networks, according to a recent study by Informa Telecoms & Media. Further, nine of the world’s 10 largest wireless operators have deployed small cells. Just last month, AT&T announced that it will spend $8 billion on wireless initiatives to blanket 300 million people with coverage by year-end 2014 through a small-cell-enhanced LTE network strategy. The company’s strategy calls for the deployment of more than 1,000 distributed antenna systems as well as leveraging 40,000 small cells to move traffic to AT&T’s fiber networks. In other words, with no clearly successful model in the market yet, AT&T is adopting the heterogeneous network (HetNet) model, in which it will use a variety of radio and hardware technologies to achieve maximum network capacity and density.
Sprint has similar plans, anchored by a large-scale rollout of picocells next year, in highly trafficked environments (think airports and stadiums). Verizon has not yet detailed its small-cell plans, but reportedly sees small-cell solutions as being more applicable in some settings (e.g., dense urban) than others.
Still, the drive forward by two major carriers, which comes as the carrier world at large continues to address the practical challenges of small-cell deployments, including security, interference, synchronization and backhaul, underscores the pressure carriers face with respect to mobile data demand. Even though challenges remain with respect to small cells, demand is great enough for forward-thinking carriers to pursue deployment strategies while kinks are being worked out. In this column, we previously addressed the reality that, with market-leading devices and platforms reaching across all carriers and networks, competition is moving from device availability to network performance. Major small-cell builds starting next year for AT&T and Sprint are clear signals this shift has already occurred.
This analysis originally appeared in B/OSS Magazine. | <urn:uuid:f232dfd7-752e-45a3-a8ab-000e9fb26f8a> | CC-MAIN-2017-04 | http://www.atlantic-acm.com/big-hopes-for-small-cells-but-no-clear-path-to-success/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00247-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955762 | 662 | 2.515625 | 3 |
In a move reminiscent of medieval times – when alchemists attempted to synthesize noble metals like gold and silver from base metals such as lead and iron – modern engineers are on the hunt for less expensive alternatives to platinum. The search has Duke University engineers using brute force computing to uncover new materials that have the same chemical, physical and/or structural properties of platinum minus the hefty price tag.
A new study published in the American Physical Society journal Physics describes how researchers from Duke University’s Pratt School of Engineering used computational methods to identify dozens of promising platinum-group alloys. Although previously unknown to science, these compounds could be suitable for a wide range of applications in the same way that platinum-group metals are used now.
With a unique atomic makeup, platinum is a super-material of sorts. As a catalytic converter, it can transform toxic engine exhaust into carbon dioxide and water. To a lesser extent, this rare metal is also used in the production of high-octane gasoline, plastics and synthetic rubbers. Its helpfulness even extends to medicine, where it is effective against certain types of cancer.
The only problem with this super-material is its super-high price tag. Developing a more affordable, knock-off compound would benefit a multitude of industries and would encourage greater adoption of catalytic converter technology as well.
“We’re looking at the properties of ‘expensium’ and trying to develop ‘cheapium,’” said Stefano Curtarolo, director of Duke’s Center for Materials Genomics. “We’re trying to automate the discovery of new materials and use our system to go further faster.”
The research is part of the Materials Genome Initiative launched by President Barack Obama in 2011, “to help businesses discover, develop, and deploy new materials twice as fast.” The initiative sees innovative materials as crucial to achieving global competitiveness in the 21st century.
Curtarolo and his team have spent years laying the groundwork for the identification of the new platinum-group compounds. The databases and algorithms they’ve developed are based on an understanding of how atoms interact with model chemical structures. The researchers screened thousands of potential compounds with a priority given to the most stable candidates. A survey of nearly 40,000 calculations resulted in 37 new binary alloys in the platinum-group metals: osmium, iridium, ruthenium, rhodium, platinum and palladium – all rare and in-demand.
The newfound compounds have a number of desirable traits, e.g., catalytic properties, resistance to chemical corrosion and performance at high-temperature environments. These properties recommend their use in a number of commercial applications, such as electrical components, corrosion-resistance apparatus, fuel cells, chemotherapy and dentistry.
The next step is for experimentalists to continue the development process by creating these materials and identifying their physical properties. Historically, Curtarolo’s methods have turned out stable new compounds, but it’s difficult to know how such compounds will behave in the wild.
“The compounds that we find are almost always possible to create,” said Curtarolo. “However, we don’t always know if they are useful. In other words, there are plenty of needles in the haystack; a few of those needles are gold, but most are worthless iron.” | <urn:uuid:21d65e75-27e9-4c19-9733-df92d2da9942> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/01/27/modern-alchemy-quest-cheapium/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00549-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919652 | 708 | 3.21875 | 3 |
Have you ever dropped your brand new razor or a full bottle of hand soap on a tiled bathroom floor and wondered why it didn’t simply shatter into a dozen pieces or split apart and create a gooey mess? Maybe next time that happens, you’ll thank computer modeling and simulations, not just your lucky stars.
“What most people don’t know, is behind each one of those everyday mishaps, as well as the routine use of all those household products that help get us through each day, is an amazing amount of science, engineering, and high-performance computing,” said Tom Lange, Director of R&D, Modeling & Simulation for the Procter & Gamble Company (P&G), who addressed attendees at XSEDE14, this year’s conference of the National Science Foundation’s (NSF) Extreme Science and Engineering Discovery Environment (XSEDE) program in Atlanta this month.
Lange’s responsibilities at P&G – founded 177 years ago and now doing almost $85 billion annually in global sales – spans consumer modeling; computational chemistry and biology; computer-aided engineering in structures, fluids, chemicals and controls; and production system throughput and reliability. From studying the micelles, or an aggregate of molecules in a solution such as detergents, to modeling the stratum corneum to better understand the physical properties of skin, Lange has spent his 36-year career modeling and simulating product formulations as well as their packaging. He and his colleagues have even optimized how these products are mass-produced – often to the tune of one billion items in just a matter of days – enabling P&G to achieve volumes that dwarf those of automobiles or even now-ubiquitous electronic devices such as laptops and mobile phones.
Lange has studied aspects of these household products that most of us simply take for granted: exactly how household cleaners must remove stains while protecting the fabric as well as one’s skin, how the varied sizes of infants directly correlate to fit from a diaper – and its urine leakage risk. He and his team at P&G have been using various XSEDE resources and expertise, such as those at the Ohio Supercomputer Center and the National Center for Supercomputing Applications (NCSA), in addition to other resources including the Department of Energy’s national labs at Los Alamos, Oak Ridge, and Sandia to help solve a wide range of challenges, primarily by jointly developing detailed simulations that predict performance, durability, and other metrics long before these consumer products hit the store shelves.
“High-performance computing is the theme that made all of this possible,” Lange told XSEDE14 attendees, noting that while advances during the last decade alone have enabled much more accurate simulations demanded by what he calls “the relentless pursuit of realism.” As a society, he noted, we have been on this learning curve for at least the last 60 years or more.
“I like to say that computing and modeling and simulation have changed science and engineering the way aviation changed travel,” Lange said, noting that at some point predictive modeling became an integral part of engineering and analysis, replacing the crash-testing of expensive prototypes – be it an aircraft or a new line of aerosol cans – demonstrations that bore no resemblance to the final production versions in the first place.
“I’m in the business of shaping decisions,” Lange said. “I make money with modeling and simulations, by either doing something, or not doing something that we would have done by experiment. I would much rather do stuff that tells you those things before they happen, not after.”
Contradictions and Scale
From an engineering perspective, many consumers may still think that everyday consumable goods such as detergents or diapers are ‘low tech’, when in fact the challenges faced by scientists and engineers working on P&G’s extensive product portfolio are in principle similar to those in rocket science. “High tech is not just for rockets, airplanes, cars, drugs, or smart phones” said Lange.
In describing P&G’s product development challenges as ‘contradictions and scale’, Lange explained that often, even the most common household product must have characteristics that are scientifically opposed to each other to work as flawlessly and effectively as possible. That’s where HPC resources and expertise come in, by making it possible to model many thousands of iterations of a single product characteristic with less time and less cost, but with consistent results – and no unpleasant surprises for consumers.
“Paper towels must be absorbent, but be very strong when wet,” he said. “Diapers need to be absorbent, but not leak and fit and comfort babies like cloth. Laundry treatments need to remove stains but protect fabrics, yet be concentrated and still be easy to use. Containers should never leak, but open easily. When dropped, they shouldn’t break, but they should use a bare minimum of plastic that also recycles well. Most importantly, all these products must represent a good value for improving daily life, not just affordable for use once in a while.”
As for scale, using computational science to improve products is also done to meet the demands of high-volume production. Taken together, contradiction and scale require a systematic process: first, a business challenge must be translated into a science challenge and expressed in science equations. Then all relevant data must be collected, such as material properties, production capabilities, even consumer ratings. Then simulations must be conducted and then effectively communicated at a non-expert level before final decisions are made and actual output begins.
Such computational modeling is used across the entire product spectrum and involves computer-aided engineering (CAE) skills in just about every area. For example, kinetic simulations are done to measure muscle activity and joint angles of the arm and hand when a person lifts a full jug of laundry detergent and twists off the measuring cap time after time. Others focus on how to produce billions of diapers each year that are absorbent, comfortable, and leak-proof. Many of these simulations require cross-disciplinary CAE skills.
“Here’s one – free surface flow on and through a compressible, partially saturated porous media with non-Newtonian behavior. That’s a kid going potty in his diaper,” said Lange.
A key challenge for companies such as P&G will be to ensure that computational analysis is eventually democratized, Lange told XSEDE participants. “High-performance computing and modeling and simulation skills need to be a base for all scientists and engineers. We need to be replacing the hand calculations of our education – most of them but not all of them – with computation and calculation.”
Another challenge will be for companies to develop an archive of its many thousands of simulations, and to be able to have the capability to reproduce an analysis done maybe five years ago by another researcher, said Lange. “We need a ‘library science’ to emerge for how we record and manage our simulations.” | <urn:uuid:8466d9cf-01c8-4666-8aeb-570e3f2b9506> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/07/22/micelles-machines-hpc-simulations-transform-everyday-household-products/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00549-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948873 | 1,483 | 2.578125 | 3 |
States have lost control of territories and environments, including the encrypted parts of the internet, creating a world where no single state has the power to guarantee its own security.
Rapid advances in information technologies and biotechnologies are creating vulnerabilities for national and international security, according to the report. "Cyber-crime and cyber-terrorism are already realities," it said.
The report quoted earlier research, saying "the grid of connections between terrorism and criminal networks has been highly crisscrossed. The increasing sophistication of both information and communications technologies and criminal gangs and terrorist groups themselves makes the scale of the challenge considerable."
The researchers found online fraud and theft rising. "National governments and global cyber-governing bodies have been overwhelmed by the ingenuity and pervasive online presence of organised criminal gangs in recent years," they said.
They said Google had found 450,000 out of 4.5 million "suspicious" websites were capable of downloading malicious software. More than two-thirds of programs were spyware that collected data on banking transactions and sent it to criminals.
This helped to fuel a financial fraud and money laundering industry worth as much as $1.5 trillion, about the size of the Spanish economy or up to 5% of the world's GDP, they said.
The commission said the report was published against the backdrop of a significantly worsening international situation. "We believe we are witnessing a downgrading of the ability of state institutions to control the security environment and to provide public protection," it said.
It said governments now owned less of their critical national infrastructures. As a result, private sector organisations had become more important to delivering security and social resilience.
"Governments cannot take sole responsibility for making people secure. Governments must devolve, and businesses and individuals must accept, more responsibility for national security and the costs will have to be shared," the commission said. | <urn:uuid:fe5cb0b1-5389-49dc-b14e-2386781fdfa0> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240087602/States-have-lost-control-of-the-internet-says-think-tank | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00485-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965514 | 417 | 2.53125 | 3 |
Windows gives you the ability to take a snapshot of what is shown on your computer screen and save it as a file. You can then view this image at a later date to see what your screen looked like or share this image with other people to view. You may be asking why this is important and why you would want to share screen shots of your computer. Two reasons come to mind, though there are many others: you can show a friend or a person helping you exactly what is happening on your screen, such as an error message, and you can share something fun or interesting that you have found on your computer.
When you take a screen shot, the image must be saved on your computer. We will discuss this as well in this tutorial.
Windows gives you the ability to take a screen shot of the entire screen. To do this you simply have to press the PrtSc or PrintScreen button on your keyboard. When you press this key a copy of your current screen will be placed in the Clipboard of Windows. When you save that screen shot to an image file you will see an image of your entire screen at the time you pressed the key. An example screen is below:
Figure 1: screen shot of the entire screen
Notice that the image above is a screen shot of the entire screen at the time of the key press. Instructions how to save the image to an image file will be described below.
Windows also gives you the ability to take a screen shot of only the current active window. Each program that is running on your computer runs inside a window. The active window is the program that you are currently using. To take a screen shot of only the active window you would press the ALT and PrtSc or ALT and PrintScreen keys at the same time. This will create a screen shot of the current window that you are using. An example can be seen in figure 2 below of the active program I was using at the time I pressed those keys.
Figure 2: screen shot of the active window
It is important to note that any window, whether it is a dialog box, message box, or program window, is considered an active window and will be what is placed in the screen shot. Instructions of how to save this screen shot as an image file on your computer can be found below.
In order to save the screen shot that you have just created you will need some sort of image manipulation software. Popular software is Paint Shop Pro, Photoshop, and IrfanView. For the purposes of this tutorial we will cover how to save your screen shot with IrfanView because it is a free download and works the same in every Windows version. If you are using Windows XP or higher, then you do not need to download anything, and can instead use the Paint program that comes with Windows. The instructions below should work with Paint as well.
IrfanView can be downloaded from the following link:
IrfanView Download Link [Download Link]
Download and install IrfanView on to your computer. When it has finished installing, double-click on the icon found in your Start Menu under the IrfanView program group. When you open the program you will see a black window with nothing in it. Now take a screen shot of a window or your screen by pressing the appropriate keys explained above.
Now that a screen shot has been made, we want to paste that screen shot into IrfanView. If it is not opened already, open the program and click on the Edit menu and then select the Paste menu option. Once you click on the Paste button you will see your screen shot that you took previously now inside the IrfanView program. To save this screen shot, you would click on the File menu and then select Save As. When the Save Picture As ... dialog box opens type in the name you would like to save the image as in the File Name field. Then change the Save as type: menu to either a GIF or a JPEG. Change the Save in selection to the directory you would like to save the image and then press the Save button.
Now that the image has been saved onto your computer in the location you specified, you can close IrfanView. The next part of this tutorial will explain how to share this image for others to see.
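As a side note for readers comfortable with a little scripting, the capture-and-save steps can also be automated. Here is a minimal sketch that assumes the free third-party Pillow library is installed; it is not part of Windows or IrfanView.

```python
from PIL import ImageGrab  # install with: pip install Pillow

screenshot = ImageGrab.grab()          # grabs the full screen, like PrtSc
screenshot.save("my_screenshot.png")   # saves straight to an image file
```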
Now that the file is saved on your hard drive, you want to be able to share that image for others to view. One free service that we recommend is Photobucket. Photobucket is a free service that allows you to upload and share images on your computer so that other people on the Internet can view them. To use Photobucket you must first register at their site. Simply fill out the form and follow the instructions for becoming a free member of their site. Once you are a member you should login and you will be presented with the Add Pictures screen.
Browse to the picture you would like to upload and then press the Submit button. Once you submit the picture you will be presented with a screen that shows all the pictures you have uploaded with your account on to Photobucket. Below each picture you will see the words Url, Tag, and Img. Url is a direct link to the image that you can paste into an email or instant message for someone to click on. Tag is HTML code that you can use to display the image on a web page. Img is forum (BBCode) markup that you can paste into most forum and message board posts to display the image.
Now that you know how to save and share a screen shot you should have no problem sharing images of your desktop or images of problems that you may be having. We hope you find this information fun and informative. Have some fun and start sending your friends and family pictures that you may find on your computer. Very soon you will be the one teaching them these tricks!
As always, if you have any questions, please do not hesitate to ask them in our computer help forums.
If you have ever worked with computer graphic images, whether they be from digital cameras, found on the web, or you create them yourself, then you know there are a lot of image file formats that are available. This is because each format stores the image in a certain way that makes it the best choice for a given situation. This tutorial will cover the most common image formats that you will find ... | <urn:uuid:38783bce-8a68-4d18-adc4-2a3de49bbaf3> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/tutorials/how-to-take-and-share-a-screen-shot-in-windows/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00485-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950366 | 1,564 | 3.125 | 3 |
Originally published on Family Online Safety Institute.
All of the rules and requirements around creating a password can get exhausting. Sometimes you need a symbol other times just a mix of upper and lowercase letters. Why can’t we just make it whatever we want, something easy like abc123? Ideally, the freedom to create simple passwords would be nice but in a world where we literally do everything online, safety and confidentiality hinge on a secure password.
What makes a strong password
A strong password is one that is unique to you, one that wouldn't be easy for anyone else to guess. More importantly, it isn't easy for hackers to crack. This requires a combination of lowercase and uppercase characters, symbols and numbers. It is suggested that your password be at least eight characters long; contain a combination of letters, numbers and symbols; include a mixture of upper and lowercase letters; and not be associated with your name or username. If you really want to set the bar high, try not to include any complete words in your passwords.
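For readers who like to tinker, these guidelines translate directly into a small checker. The Python sketch below is a rough illustration only; real password-strength meters test far more than this.

```python
import string

def looks_strong(password, username=""):
    """Rough check of the guidelines above; not a guarantee of safety."""
    has_classes = (
        any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )
    not_personal = not username or username.lower() not in password.lower()
    return len(password) >= 8 and has_classes and not_personal
```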
Tips for avoiding having your password hacked
- Don't log in using your password on a public wifi... even at Starbucks. These networks are in place for our convenience but are not secure. Using any site that requires you to type in a password while connected to an unsecure network puts you at risk. You might be thinking that you don't really care if your Twitter account gets hacked, but your Twitter password may provide clues to other passwords you have protecting more serious accounts, such as banking. Which leads perfectly to the next tip...
- Do not repeat the same password across multiple sites. I understand it is hard to juggle multiple passwords and as tempting as it is to just stick with one password for all sites it just isn’t smart. Also do not think a password is different just because you juggle some numbers around, add a character or change what letter is uppercase. aBC132 is the same as abC123.
- Avoid using names in your passwords or any information that can be found on your social sites. It might be tempting to you use your spouse's name with your wedding date as your password but this information is easy for anyone to guess viewing your social media profile. Same goes for pets, kids etc. Avoid using any information that can be found online attached to your identity.
- Try not to create any record of your passwords. After learning that you need to have multiple and different passwords, it might be hard to fathom how to remember which password is tied to which account without some sort of record. If you can't commit this information to memory (I have trouble with this) and need to have some sort of record, then physically write it down, I suggest in a notebook, and keep it in a safe place. Never make a digital copy of your passwords. Even an Excel spreadsheet saved on your hard drive, and not to a cloud database, is a risk. Also, if you choose to write down your passwords, try to create some sort of shorthand that would be hard for anyone other than yourself to understand.
- Be careful who you share your passwords with. In most cases share your passwords with no one, but if for any reason you need to share your passwords with a spouse or financial advisor, make sure you ask how they plan on storing the password. It is your right to request that your password not be saved in any sort of database. Ask as many questions as you need to feel safe before ever sharing your password. Again, your password is safest when you keep it to yourself. If you ever have to divulge your password to a sales associate or customer service rep, change your password after they are done working within your account.
Also remember to change your passwords about twice a year or at the first sign of any unusual behavior within any of your accounts. If your email account was hacked it would most likely be wise to do a total password overhaul on all platforms.
Being able to do everything online is so convenient. But along with the luxury comes responsibility and risk. Take responsibility to protect yourself and create passwords that are strong. There are tools available to help you test the strength of your passwords. | <urn:uuid:99ddef4c-c058-4abe-8192-c02f966efaa8> | CC-MAIN-2017-04 | https://www.intego.com/mac-security-blog/make-your-passwords-hard-to-hack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00421-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941167 | 868 | 2.671875 | 3 |
As attacks on the United Kingdom’s financial institutions have become increasingly commonplace, the Bank of England has now developed a cyber-attack simulation to test the bank’s preparedness for an attack.
Banks across the UK will be exposed to a variety of cyber-attacks designed to spot vulnerabilities within their networks, putting an enormous amount of pressure on these financial institutions' CIOs.
This operation titled, “Resilient Shield”, will be presented by the UK’s Computer Emergency Response Team, CERT-UK, the organization dedicated to managing major cyber security incidents throughout the UK.
This simulation will also include many United States banks and is set to reveal the communication between financial institutions and the governments.
With the UK being one of the world’s capitals for finance, it should come as no surprise that nearly 90 percent of large UK companies experienced a breach within the last year. Governor of the Bank of England, Mark Carney, even warned the finance industry earlier this year that cybercrime would be a major threat to the City’s financial stability.
While many financial institutions fear this issue is too big to fix, it's important to take the right steps toward implementing best-in-class security solutions to help ward off any potential cyber-attacks. If organizations want to maintain security and minimize the likelihood of a financial fallout from these cyber-attacks, they need to accept that stopping every breach is unlikely and that a preventative approach is the best way to limit the damage. NNT Change Tracker Gen7 provides organizations with non-stop, continuous visibility of what is going on in their IT environment, allowing them to spot the unusual changes that represent a breach in real time and take action before any damage is done.
Learn more about NNT’s Change Tracker Gen7
Read the article on SC Magazine | <urn:uuid:421b02e8-6099-40d4-816c-651371e082b0> | CC-MAIN-2017-04 | https://www.newnettechnologies.com/operation-resilient-shield-to-test-the-uk-financial-sector.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00541-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940886 | 388 | 2.546875 | 3 |
New tool adds geospatial capabilities to federal data website
- By Alice Lipowicz
- Sep 21, 2010
The White House’s Data.gov website has added a geospatial interactive mapping tool that enables users to view geographic data and overlay it on other data.
The GeoViewer tool offers a preview of geographic data with limited functionality rather than serving as a full-fledged geospatial information system. It allows users to visualize data on a map and to quickly determine which data within Data.gov contains geographic data.
Now, the GeoViewer is able to preview geographic data for selected datasets only, such as for the Environmental Protection Agency’s data on environmentally sensitive areas.
The GeoViewer was developed by ESRI, a geographic information systems company. ESRI officials recently announced the company received a contract to merge geographic data from Data.gov with GeoData.gov. ESRI has been under contract for GeoData.gov development since 2004.
Jack Dangermond, chief executive of ESRI, spoke about use of geographic information on government data at the Gov 2.0 Summit on Sept. 8.
“The question is, can government geographic information system databases be integrated?” Dangermond asked. The answer is yes, they can, and if that occurs it would provide great benefit to the public, he said.
However, open data policies and data sharing are not enough to integrate geographic databases, he added. Public maps should be made available as shared services along with templates for integrations, free open-source geographic software and tools for developers, Dangermond said at the summit.
Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week. | <urn:uuid:6edfde5c-25f1-432b-b1bc-6fad7e6f7bb6> | CC-MAIN-2017-04 | https://fcw.com/articles/2010/09/21/data-dot-gov-geo-viewer-feature-shows-data-on-maps.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00201-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911662 | 373 | 2.5625 | 3 |
As technology continues to transform our society, those responsible for our current systems of learning and education are facing overwhelming pressure to adapt. Education technology, connected learning, and the rise of the Networked Society are transforming the established concept of learning, teachers’ roles, and even the nature of knowledge itself.
Formalized education is only one of many sources for the knowledge and skills we need to be able to participate in and contribute to society. Education technology and connected learning provide almost unlimited possibilities for the continuous development of skills and knowledge throughout our lives.
Progressive schools are already exploring the revolutionary possibilities that education technology offers. Outside many of the major educational institutions and formal educational systems, a new generation of creative individuals and entrepreneurs is emerging.
An ecosystem for learning
To fully tap the creative potential of each person, we need to move beyond mass production of knowledge and toward mass individualization of personal development. What is lacking, however, is a true educational ecosystem – one in which government, academia, teachers, businesses, and researchers work together to fundamentally rethink how education technology and connected learning can meet the future demands of our society.
Ideas can change the world
Can your big ideas change the world? The annual Ericsson Innovation Awards highlight talented student innovators from around the world. Congratulations to the 2015 winners, Team Blendee, for their blended-learning self-development platform.
Learn more here.
Mixing schoolwork and leisure
According to a ConsumerLab study, almost half of Estonian pupils use school computers for leisure activities. Many pupils also bring their own mobile phones and tablets to school to use for study purposes. This bring-your-own-device behavior blurs the boundary between leisure and schoolwork.
Download the report
Learning and education in the Networked Society
Students and progressive teachers, empowered by technology, are turning established models of learning and education on their heads.
Download the report
How ICT is changing the classroom
As we journey toward the Networked Society, ICT is unlocking the full potential of learning and education by redefining existing classroom models.
Download the infographic
Related blog posts
Why every school needs an organizational mindset (Huffington Post)
What ‘connected education’ looks like (NY Times)
Engineering students at Oklahoma State University designed drones that may someday collect new data about tornadoes, helping public safety agencies more accurately predict and plan for disaster. A giant tornado, at least one mile wide, wiped out neighborhoods as it moved through Oklahoma on Monday, May 20. While the student designs are only in the preliminary planning stage, with no firm schedule to move forward, the university’s Department of Mechanical and Aerospace Engineering is negotiating with its partners to settle on a possible multi-year project that could change tornado science and ultimately save lives.
The program, whose partners include the University of Colorado at Boulder, the University of Kentucky, Virginia Tech and the University of Oklahoma, now has several active drone projects in addition to the tornado effort. Much of that drone research is funded by the Department of Defense, said Oklahoma State University Professor Jamey Jacob, but there are many applications for drones in civilian airspace, too.
His school hopes to collaborate with the National Weather Center at the University of Oklahoma to continue development of these types of projects.
Before the giant tornado struck, storm chasers estimated the starting location of the tornado, but their estimates were off by tens of miles, Jacob said. Although their predictions were wrong, the storm chasers didn’t make a mistake. There’s just a lot of missing information when it comes to tornadoes. In fact, about 70 percent of tornado warnings are false alarms, Jacob said. “If you look at the tornado genesis, there’s not a lot of understanding,” he said. “We have the initial rotation, but what really snaps the tornado into place?”
Traditional methods of gathering data from tornadoes include things like the use of Doppler radar, which provides data on moisture levels and some other pieces of the equation, but other measurements, such as temperature gradients and pressure levels inside tornadoes remain unknown. Exactly why and when tornadoes form is still a big question, which is why this drone project could be useful, he said. “Whether there’s a magic key in there, in what would be present in meteorological or thermodynamic data, we don’t know yet simply because we don’t have those type of measurements,” he said.
Photo: This drone, named Talos, is currently being tested for search and rescue, and border patrol for the Department of Homeland Security Borders competition. The aircraft was designed and built by a team of five aerospace engineering graduate students from the Oklahoma State University Department of Mechanical and Aerospace Engineering, led by PhD candidate Thomas Hays.
The student designs were required to meet several needs that would make the designs practical for use in a storm. The drones needed the capability to take off from a typical road, fit inside a standard flat-bed trailer, lift off and fly in 22 mph winds, withstand gusts of 28 mph, fly for at least four hours at 5,000 feet without needing to refuel, and carry at least one deployable data-gathering device into a weather system. The data-gathering devices, called dropsondes, are cylinders filled with sensors, that are designed to be dropped into storms and begin collecting data. It’s that data that could fill the gaps in knowledge when it comes to tornadoes, Jacob said.
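Those requirements are concrete enough to encode directly. A quick sanity check of a hypothetical student design against the article’s numbers (the candidate values are made up):

```python
# Requirements quoted above, in the article's imperial units.
REQUIREMENTS = {
    "takeoff_distance_ft": lambda d: d["takeoff_distance_ft"] <= 2000,
    "operating_wind_mph":  lambda d: d["operating_wind_mph"] >= 22,
    "gust_tolerance_mph":  lambda d: d["gust_tolerance_mph"] >= 28,
    "endurance_hours":     lambda d: d["endurance_hours"] >= 4,
    "dropsondes_carried":  lambda d: d["dropsondes_carried"] >= 1,
}

# A hypothetical candidate design.
candidate = {
    "takeoff_distance_ft": 1800,
    "operating_wind_mph": 25,
    "gust_tolerance_mph": 30,
    "endurance_hours": 4.5,
    "dropsondes_carried": 2,
}

for name, check in REQUIREMENTS.items():
    print(f"{name:22s} {'PASS' if check(candidate) else 'FAIL'}")
```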
The student designs are the first step, Jacob said, and the next step would be to redesign the drone so a prototype could be manufactured. While proposals around the future of the project are still being evaluated, one possibility, Jacob said, is to retrofit an existing drone used for fire surveillance to collect storm data. A storm drone could also be equipped with other equipment, such as a thermal imaging camera, and be flown before, during and after a storm to gather data and assist with search and rescue efforts.
The cost to build a prototype storm drone is between $25,000 and $100,000, Jacob said, which is comparable to many existing drones on the market today. But cost is typically not the main concern when it comes to drones. Drone functionality and regulation are still both nascent. Drone regulation requires that a person or agency using a drone as a tool first obtain written authorization to launch their craft, a process that can be tedious and time-consuming.
Ultimately the goal behind using drones to collect data, Jacob said, is to fill gaps in meteorological knowledge that could give scientists better information about storms and allow people to prepare for disaster. It’s not guaranteed, but this technology could save lives when the next tornado hits.
“Our best-case scenario would be to put vehicles in the hands of end-users – forecasters, meteorologists, weather service, those doing research in storm systems and also first responders,” Jacob said. “It’s really important that you’re able to have something in the hands of the police and firefighters who are doing the search and rescue missions post-disaster.”
Definition: A model of computation whose memory consists of an unbounded sequence of registers, each of which may hold an integer. In this model, arithmetic operations are allowed to compute the address of a memory register.
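A toy interpreter makes the definition concrete. The three-instruction set below is invented for illustration; what makes it a RAM are the unbounded integer registers and the LOADI instruction, which uses a computed value as a register address:

```python
from collections import defaultdict

def run(program):
    """Execute a tiny random-access-machine program."""
    reg = defaultdict(int)   # unbounded registers, all initially 0
    for op, a, b in program:
        if op == "SET":      # reg[a] <- constant b
            reg[a] = b
        elif op == "ADD":    # reg[a] <- reg[a] + reg[b]
            reg[a] += reg[b]
        elif op == "LOADI":  # reg[a] <- reg[reg[b]]  (indirect addressing)
            reg[a] = reg[reg[b]]
    return reg

# reg[5] = 42; reg[1] = 5; reg[0] = reg[reg[1]] -> 42
print(run([("SET", 5, 42), ("SET", 1, 5), ("LOADI", 0, 1)])[0])
```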
See also cell probe model, pointer machine, Turing machine, big-O notation.
Note: From Algorithms and Theory of Computation Handbook, page 5-24, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
Entry modified 17 December 2004.
Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "random access machine", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. Available from: http://www.nist.gov/dads/HTML/randomaccess.html
Yesterday I wrote about the effect that the digital divide in America is having on millions of students who don’t have high speed Internet access at home. As I mentioned, there was no simple solution to the problem on the horizon. The costs and complications of bringing affordable broadband access to every person in every corner of this country are great.
Turns out, as the Washington Post wrote yesterday, that the FCC has a proposal for just such a solution. Well, sort of.
The Post wrote about an FCC plan that was proposed last fall to encourage TV stations to sell back some of their spectrum to the government. The government, in turn, would then sell some of that spectrum to wireless companies, so they can expand and bolster their networks, and to designate a portion of the reclaimed spectrum for unlicensed use. Unlicensed spectrum can be used by anyone for free; it’s what’s used for current WiFi networks, though at different frequencies than what’s being proposed.
What’s most interesting about the proposed FCC plan - and relevant to the digital divide discussion - is that the FCC is also suggesting that the newly unlicensed spectrum could be used to create powerful free WiFi networks, giving, potentially, everyone free Internet access. Since the spectrum used by TV stations is much stronger than current unlicensed spectrum, such WiFi networks would, in theory, be accessible for much longer distances than current free WiFi networks, and even through buildings and walls. The FCC envisions this newly free portion of the spectrum supporting powerful, free WiFi across the country.
Sounds good, right? Could this be the solution to the problem of kids from low income families or in rural areas not having access to broadband at home? Would this solve the problem of kids going to McDonald’s at night to do homework that requires Internet access?
Normally, I don’t cover vulnerabilities about which the user can do little or nothing to prevent, but two newly detailed flaws affecting hundreds of millions of Android, iOS and Apple products probably deserve special exceptions.
The first is a zero-day bug in iOS and OS X that allows the theft of both Keychain (Apple’s password management system) and app passwords. The flaw, first revealed in an academic paper (PDF) released by researchers from Indiana University, Peking University and the Georgia Institute of Technology, involves a vulnerability in Apple’s latest operating system versions that enables an app approved for download by the App Store to gain unauthorized access to other apps’ sensitive data.
“More specifically, we found that the inter-app interaction services, including the keychain…can be exploited…to steal such confidential information as the passwords for iCloud, email and bank, and the secret token of Evernote,” the researchers wrote.
The team said they tested their findings by circumventing the restrictive security checks of the App Store, and that their attack apps were approved in January 2015. According to the researchers, more than 88 percent of apps tested were “completely exposed” to the attack.
News of the research was first reported by The Register, which said that Apple was initially notified in October 2014 and that in February 2015 the company asked researchers to hold off disclosure for six months.
“The team was able to raid banking credentials from Google Chrome on the latest Mac OS X 10.10.3, using a sandboxed app to steal the system’s keychain and secret iCloud tokens, and passwords from password vaults,” The Register wrote. “Google’s Chromium security team was more responsive and removed Keychain integration for Chrome noting that it could likely not be solved at the application level. AgileBits, owner of popular software 1Password, said it could not find a way to ward off the attacks or make the malware ‘work harder’ some four months after disclosure.”
A story at 9to5mac.com suggests the malware the researchers created to run their experiments can’t directly access existing keychain entries, but instead does so indirectly by forcing users to log in manually and then capturing those credentials in a newly-created entry.
“For now, the best advice would appear to be cautious in downloading apps from unknown developers – even from the iOS and Mac App Stores – and to be alert to any occasion where you are asked to login manually when that login is usually done by Keychain,” 9to5’s Ben Lovejoy writes.
SAMSUNG KEYBOARD FLAW
Separately, researchers at mobile security firm NowSecure disclosed they’d found a serious vulnerability in a third-party keyboard app that is pre-installed on more than 600 million Samsung mobile devices — including the recently released Galaxy S6 — that allows attackers to remotely access resources like GPS, camera and microphone, secretly install malicious apps, eavesdrop on incoming/outgoing messages or voice calls, and access pictures and text messages on vulnerable devices.
The vulnerability in this case resides with an app called Swift keyboard, which according to researcher Ryan Welton runs from a privileged account on Samsung devices. The flaw can be exploited if the attacker can control or compromise the network to which the device is connected, such as a wireless hotspot or local network.
“This means that the keyboard was signed with Samsung’s private signing key and runs in one of the most privileged contexts on the device, system user, which is a notch short of being root,” Welton wrote in a blog post about the flaw, which was first disclosed at Black Hat London on Tuesday, alongside the release of proof-of-concept code.
Welton said NowSecure alerted Samsung in November 2014, and that at the end of March Samsung reported a patch released to mobile carriers for Android 4.2 and newer, but requested an additional three months deferral for public disclosure. Google’s Android security team was alerted in December 2014.
“While Samsung began providing a patch to mobile network operators in early 2015, it is unknown if the carriers have provided the patch to the devices on their network,” Welton said. “In addition, it is difficult to determine how many mobile device users remain vulnerable, given the device models and number of network operators globally.” NowSecure has released a list of Samsung devices indexed by carrier and their individual patch status.
Samsung issued a statement saying it takes emerging security threats very seriously.
“Samsung KNOX has the capability to update the security policy of the phones, over-the-air, to invalidate any potential vulnerabilities caused by this issue. The security policy updates will begin rolling out in a few days,” the company said. “In addition to the security policy update, we are also working with SwiftKey to address potential risks going forward.”
A spokesperson for Google said the company took steps to mitigate the issue with the release of Android 5.0 in November 2014.
“Although these are most accurately characterized as application level issues, back with Android 5.0, we took proactive measures to reduce the risk of the issues being exploited,” Google said in a statement emailed to KrebsOnSecurity. “For the longer term, we are also in the process of reaching out to developers to ensure they follow best practices for secure application development.”
SwiftKey released a statement emphasizing that the company only became aware of the problem this week, and that it does not affect its keyboard applications available on Google Play or Apple App Store. “We are doing everything we can to support our long-time partner Samsung in their efforts to resolve this important security issue,” SwiftKey said in a blog post.
Update: SwiftKey’s Jennifer Kutz suggests that it’s incorrect to use the phrase “pre-installed app” to describe the component that Samsung ships with its devices: “A pre-installed app is definitely different from how we work with Samsung, who licenses/white-labels our technology – or prediction engine – to power their devices’ default/stock keyboards,” Kutz said. “The keyboard is not branded as SwiftKey, and the functionality between our Google Play app, or pre-installed SwiftKey app, is different from what Samsung users have (in short, the official SwiftKey app has a much more robust feature set). The SwiftKey SDK powers the word predictions – it’s a core part of our technology but it is not our full app.”
An entity bean's class contains the following method signature: public Integer ejbCreate(int partNum, String partDescription, float partCost, String partSupplier) throws CreateException. Which statement is true of the bean's ejbCreate() method?
Which choice defines the term isolation when used to describe the properties of a transaction?
An EJB client invokes a create() method. An EJB container instantiates an enterprise bean as theresult of this method call. The bean is then held in a pool awaiting a method invocation. To whichtype of enterprise bean does this process refer?
A finder method in an entity bean is written to find more than one primary key. Which statementcorrectly describes the invocation of this type of ejbFind…() method?
The ejbRemove() method for an enterprise bean contains the following line of code: prepStmt = dbConn.prepareStatement("DELETE FROM MyTable WHERE MyKey = ?"); What type of enterprise bean might this be?
Which statement correctly describes CMP entity beans and finder methods?
Which client application could benefit from the use of a callback object?
Which statement correctly describes the EJBContext interface?
Which type of factory object returns references to objects that reside within the same process asthe factory object?
Which statement correctly describes the EJBHome interface?
NASA today awarded what it called the largest prize in aviation history to a company that flew their aircraft 200 miles in less than two hours on less than one gallon of fuel or electric equivalent.
Their aircraft is the Taurus G4 by Pipistrel-USA.com. The twin fuselage motor glider features a 145 kW electric motor, lithium-ion batteries, and retractable landing gear. The team, using another Pipistrel aircraft has won NASA aircraft challenges before -- the Personal Air Vehicle Challenge in 2007 and the General Aviation Technology Challenge in 2008.
Fourteen teams originally registered for the CAFE (Comparative Aircraft Flight Efficiency Foundation) Green Flight Challenge competition. Three teams successfully met all requirements and competed in the skies over the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif., this past weekend. CAFE manages the competition for NASA. The second place prize of $120,000 went to team eGenius, which is backed by European aircraft conglomerate Airbus.
"Two years ago the thought of flying 200 miles at 100 mph in an electric aircraft was pure science fiction," said Jack Langelaan, team leader of Team Pipistrel-USA.com in a statement. "Now, we are all looking forward to the future of electric aviation."
More on flying, sort of: 10 wicked off-the-cuff uses for retired NASA space shuttles
According to CAFE, to win the Green Flight Challenge, an aircraft must exceed an equivalent fuel efficiency of 200 passenger miles per gallon (mpge). Typical general-aviation aircraft have fuel efficiencies in the range of 5-50 mpge. Large passenger aircraft are in the 50-100 mpge range, depending on passenger/cargo load. Green Flight Challenge aircraft also must have an average speed of at least 100 mph over a 200-mile race circuit; achieve a takeoff distance of less than 2,000 feet to clear a 50-foot obstacle; and deliver a decibel rating of less than 78 dBA at full-power takeoff, as recorded from 250 feet away.
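The scoring arithmetic is easy to reproduce. The occupant count, fuel figure and elapsed time below are placeholders, not official race results:

```python
def passenger_mpge(occupants, distance_miles, gallons_equivalent):
    """Passenger miles per gallon-equivalent, the contest's efficiency metric."""
    return occupants * distance_miles / gallons_equivalent

# Hypothetical flight: 4 occupants over the 200-mile course on
# 1.9 gallons of (electric-equivalent) fuel, finishing in 1.94 hours.
print(f"{passenger_mpge(4, 200, 1.9):.0f} passenger-MPGe")  # ~421, above the 200 floor
print(f"{200 / 1.94:.0f} mph average")                      # ~103, above the 100 mph minimum
```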
NASA and Google hope that the technologies demonstrated by the CAFE Green Flight Challenge competitors may end up in general aviation aircraft, spawning new jobs and new industries for the 21st century.
NASA noted that there is great evidence such aviation awards can change history. From a NASA white paper on the topic: "Raymond Orteig offered $25,000 in 1919 to the first successful non-stop flight from New York, New York to Paris, France or vice versa. The Orteig Prize was won by Charles Lindbergh in 1927 with the Spirit of St. Louis. Henry Kremer established the Kremer Prizes in 1959. He offered $100,000 for the first prize competition which was for the first human-powered aircraft to fly one mile. The prize was won by Paul McCready in 1977 with the Gossamer Condor flown by Bryan Allen. Other notable prizes are the $250,000 Sikorsky human powered helicopter prize, $2 Million Defense Advanced Research Projects Agency (DARPA) Grand Challenge: Urban Challenge, $2 Million DARPA UAV Prize, $10 Million Ansari X-Prize, $60,000 Experimental Aircraft Association (EAA) Electric Flight Prize, and the NASA Centennial Aviation Challenges. NASA's first challenge was the $250,000 Personal Air Vehicle Challenge in 2007. Four competitors participated and $250,000 was awarded. It was followed by the $350,000 General Aviation Challenge in 2008 which only awarded $97,000 to its three participants.
The Orteig Prize changed the public's expectations of flying. Charles Lindbergh's flight across the Atlantic created the expectation that anyone could fly. Three major impacts were observed: in 1927, U.S. pilot license applicants increased by 300% and the number of U.S. licensed aircraft increased by 400%; and there were 30 times the number of U.S. airline passengers in 1929 as there were in 1926. The new technologies and innovations from the Kremer prize led to new niche markets. Efficiently using power from batteries contributed to the development of the electric powered car and solar powered aircraft. The lightweight soaring capability contributed to high altitude projects such as the NASA Pathfinder aircraft. Air races like the Schneider Trophy in the 1920s and 1930s pushed airplane and airplane engine performance. New liquid cooled engines were introduced to provide more powerful engines. The airframes were clean and efficient. These races accelerated aircraft speeds from 150 mph to 400 mph in 13 years."
The CAFE Green Flight Challenge is one of NASA's Centennial Challenges, a program of prize contests to stimulate innovation and competition in solar system exploration, the agency said. In the past it has held such challenges to build lunar landers, personal aircraft and astronaut gloves.
In 2010 NASA significantly expanded its Centennial Challenges program to include $5 million worth of new competitions. Those challenges included:
- The Sample Return Robot Challenge is to demonstrate a robot that can locate and retrieve geologic samples from wide and varied terrain without human control. This challenge has a prize purse of $1.5 million. The objectives are to encourage innovations in automatic navigation and robotic manipulator technologies.
- The Nano-Satellite Launch Challenge is to place a small satellite into Earth orbit, twice in one week, with a prize of $2 million. The goals of this challenge are to stimulate innovations in low-cost launch technology and encourage creation of commercial nano-satellite delivery services.
- The Night Rover Challenge is to demonstrate a solar-powered exploration vehicle that can operate in darkness using its own stored energy. The prize purse is $1.5 million. The objective is to stimulate innovations in energy storage technologies of value in extreme space environments, such as the surface of the moon, or for electric vehicles and renewable energy systems on Earth.
A computer vision system has been developed to enable biologists to identify and monitor large numbers of endangered animals, from butterflies to whales.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
Scientists at the University of Bristol, working on Robben Island in South Africa, have devised a surveillance system to capture detailed and reliable data on populations of endangered species.
The project, called the "Penguin Recognition Project" - supported by Earthwatch, the international environmental charity, and the Leverhulme Trust - is being used to monitor the African penguin. The system captures a real-time image of the pattern of spots on each penguin's chest, which uniquely identifies the animal.
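The Bristol team's software is not public. As a rough illustration of the first step - isolating dark chest spots in an image - here is a sketch using OpenCV; the parameters and image path are guesses, not the project's values:

```python
import cv2  # third-party: pip install opencv-python

img = cv2.imread("penguin_chest.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image

# Tune a blob detector for small dark spots on a lighter chest.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 0          # look for dark blobs
params.filterByArea = True
params.minArea = 10
params.maxArea = 400

detector = cv2.SimpleBlobDetector_create(params)
spots = detector.detect(img)

# The sorted spot centres form a crude "fingerprint" for matching animals.
fingerprint = sorted((round(k.pt[0]), round(k.pt[1])) for k in spots)
print(len(fingerprint), "spots found:", fingerprint[:5])
```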
The Penguin Recognition Project is being shown at this year's Royal Society Summer Science exhibition (30 June to 3 July).
Invisible wonder-material is next IP battleground.
Apple and Samsung may be squaring up for another patent fight. Or so the news agenda would have you believe. Our two favorite protagonists are amassing patents about graphene in sheet form.
Ain't the future brilliant?
In IT Blogwatch, bloggers rub their crystal balls.
Your humble blogwatcher curated these bloggy bits for your entertainment.
Here's Jungah Lee, reporting from Seoul:
The main battleground between Samsung...and Apple...is moving from courtrooms to the laboratory, amid a race for patents on atom-thick technology.
…Graphene is...a transparent material that conducts electricity so it can be stretched across glass surfaces...to make them into touch screens. ... It’s ideal for futuristic gadgets like bendable smartwatches or tablets that fold up. ... Samsung, which this month lost a $120 million verdict to Apple, looks like the early leader in the race for intellectual property rights.
…Graphene can be used in all three product categories where...Samsung holds the largest global market share: smartphones, memory chips and TVs. ... Graphene’s ability to conduct electricity about 100 times faster than silicon makes it valuable in other ways too. MORE
While others zig, Ben Zigterman zags:
Graphene is stronger than diamond, more conductive than copper, more flexible than rubber and...you can barely see it.
…Samsung...appears to have the most graphene-related patents, with 405 total and 38 in the U.S. Apple has at least two. IBM and Foxconn also have graphene-related patents. MORE
But Kelly Hodgkins hedges her bets:
Graphene may be the wonder material of the future. [It] could initiate a new wave of innovation in hardware design and manufacturing. ... It also may become the next battlefield for Apple and Samsung.
…The arrangement of the carbon molecules makes the material strong...flexible, conductive and so transparent. ... Apple has been silent on its own research into the use of graphene...unlike Samsung, Apple's own publicly available patents and applications addressing graphene are scant. MORE
Surely there's more to graphene than bendy touchscreens? Let's have something more hyperbolic, please. Katherine Noyes obliges:
It is an innovation with the potential to change the world.
…Aviation components, broadband photodetectors, radiation-resistant coatings, sensors, and energy storage are among numerous other areas of active research. ... The ultimate application is graphene-based transistors, the building blocks of modern electronics. ... Looking a little further out, graphene can be employed in membranes used for water desalination.
…The possibilities that graphene holds for the nearly $2 trillion global electronics industry are difficult to ignore. MORE
Imagine living somewhere where traffic jams were a thing of the past, where ambulance crews started on-route to an emergency before anyone had dialled 999 and where critical infrastructure operated at maximum efficiency. This utopian city may not currently be possible, but the Internet of Things could change that in the not too distant future. By connecting various aspects of the urban landscape to the Internet, local administrators will gain access to countless extra datasets that could help them provide better services for their citizens. These smart cities not only promise to improve the lives of their inhabitants, but also provide potentially lucrative commercial opportunities for a number of businesses.
Part of the wider Internet of Things revolution
Although the concept of a “smart city” has existed for some time now, the Internet of Things (IoT) is finally making it a reality. IoT devices are expected to revolutionise a broad spectrum of industries, from manufacturing to healthcare, so it is hardly surprising that urban planners, transport managers and infrastructure designers are getting on board. In fact, Gartner predicts that approximately 1.6 billion connected things will be used by smart cities this year, representing a growth rate of 39 per cent when compared to 2015.
"Smart commercial buildings will be the highest user of Internet of Things until 2017, after which smart homes will take the lead with just over 1 billion connected things in 2018," confirmed research vice president at Gartner, Bettina Tratz-Ryan.
Although consumers may not quite be ready to embrace connected devices in the home, outside of connected TVs and thermostats, in the meantime, smart cities could help demonstrate the potential benefits of the Internet of Things. And although connected traffic lights and parking spaces may sound futuristic, some forward-thinking companies are already making inroads in the smart city market.
Present day examples
Smart cities could provide a significant commercial opportunity to many businesses. Not only could companies secure huge contracts to deliver a multitude of IoT services to the city, but they could also generate revenue by selling the data that they collect, providing that they have permission to do so. IoT data, as opposed to the connected devices themselves, is often viewed as being the real game-changer for the market and it is this information that will enable cities to deliver more efficient, targeted services. Here are a few cities around the world that are already using IoT technology to their advantage.
- Barcelona – The Spanish city has been quick to embrace IoT technologies, with smart traffic lights and a host of other sensors now a common feature of the urban landscape. The Internet of Things has been put to use in the irrigation system at the Parc del Centre de Poblenou and in the planning stages of new bus routes. In addition, Barcelona recently announced that it would be extending its partnership with Cisco to create an IoT platform that helps to simplify, accelerate and reduce the cost of deploying new services.
- Amsterdam – The Smart City initiative was launched in Amsterdam back in 2009 and has so far led to the creation of almost 100 IoT projects by businesses, the government and members of the public. Project examples include flexible street lighting, a connected traffic management system to reduce congestion and a dedicated IoT testing area, dubbed the “smartest street in the Netherlands.”
- Milton Keynes – Slightly less well-known on the global stage than both Barcelona and Amsterdam, Milton Keynes is trying to make a name for itself as a pioneering smart city. Its MK: Smart project, which attempts to use energy and water consumption data, transport data, data acquired through satellite technology, social and economic datasets, and crowdsourced data to support economic growth, was a finalist at last year’s Smart City Awards. The project also places a great deal of emphasis on sustainability issues, such as meeting carbon reduction targets.
These are just three examples of IoT and data analytics being applied in cities all over the world. In addition, India’s Prime Minister, Narendra Modi, wants to create more than 100 smart cities in the country, even though this will reportedly require investment in excess of $150 billion. The reason why governments are so keen to create their own smart cities is that they can benefit their inhabitants and the wider economy.
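None of these deployments are described at the code level here, but the pattern underneath many of them – a street-level sensor publishing readings to a central broker for analysis – can be sketched with the paho-mqtt client. The broker address, topic and sensor are invented:

```python
import json
import random
import time

import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also wants a CallbackAPIVersion
client.connect("broker.example-city.gov", 1883)  # hypothetical city broker

while True:
    reading = {
        "sensor_id": "parking-bay-17",
        "occupied": random.random() < 0.6,  # stand-in for a real occupancy sensor
        "ts": int(time.time()),
    }
    client.publish("city/parking/bay17", json.dumps(reading))
    time.sleep(30)  # report every half minute
```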
“The technology embedded in smart cities has the potential to transform our lives for the better,” explains Jason Hart, CTO, Data Protection at Gemalto. “We’ve already seen new technologies such as NFC contactless payments or mHealth benefitting end users, saving time, bringing more efficiency to businesses and to society in general. There’s no reason to doubt as the technology improves, this will continue to improve the quality of life for future generations to come.
“Smart cities have the potential to draw in people from all over the world, providing services that span the digital divide, allowing citizens to interact and access things like government services at the touch of a button from healthcare and education to smart transportation and tourism.”
As mentioned above, cities looking to embrace IoT technology, data analytics and smarter services will need to invest significantly to meet this aim. Firstly, the transfer, storage and protection of such vast amounts of data requires extremely well developed network infrastructure. Security will also likely need to be improved, particularly if some of the data associated with smart cities is of a sensitive nature. Building and installing sensors into existing urban infrastructure could prove costly and face resistance from local administrators. Why should they spend money on smart traffic lights, when the unconnected ones work fine?
Cultural challenges must also be overcome for cities to work more intelligently. For example, business silos must be broken down if all the information collected from the urban environment is to be used to its full potential. There could be multiple sensor manufacturers, data owners and stakeholders involved in the creation of a smart city. All these interests must work together holistically to create a city that delivers more for its inhabitants.
With all the data being collected, smart cities have also come under fire from privacy advocates. If nearly every piece of the urban landscape is collecting data, will this ultimately amount to another form of government surveillance?
Smart cities are already on the way and despite some challenges, they offer many potential benefits for citizens and corporations. Whether you’re an IoT hardware manufacturer or an organisation looking to generate insights from public data, smart cities provide a huge number of new business opportunities.
The ESET survey shows that children today are becoming more technologically independent at a younger age. They tend to have a mobile phone and their own email account by the time they are ten years old – and will have opened a social networking account such as Facebook, Bebo or Twitter, by the time they are eleven years old. The problem is that parents are not keeping up with the issues. For example, 36% of parents don’t believe that mobile phones can be affected – or infected – by viruses; and a further 34% are unsure.
ESET senior research fellow David Harley believes it is symptomatic of a general unawareness of security at home – most home computers are delivered with anti-virus pre-installed. “It doesn't seem to me that adults are necessarily sufficiently aware of their own vulnerability in online contexts, so it's not surprising if they don't have sufficient understanding of the devices their children use and often own.”
The whole problem is exacerbated by today’s children often being more tech-savvy but less security-savvy than their parents. “Young people,” he explained to Infosecurity, “don’t have the life experiences – many of them bad – that tend to instill a certain amount of skepticism and resistance to social engineering in much older generations.” That resilience, he added, “is unfortunately, by no means universal. I've become all too aware in recent years that even the simple landline is a happy hunting ground for scammers who are delighted to make use of the naivete of the very young and, sometimes, the very mature.”
Harley also suspects that part of the problem is that current security thinking focuses more on adults “since they're the ones with the credit cards, or the access to workplace data of interest to criminals”; and not sufficiently on the attacks targeting youngsters – cyber-bullying, misuse of social media by pedophiles and others, scams based on stealing phone credit and so on.
However, Harley does not believe that the solution to the problem is in new legislation (whether primary such as the Cybersecurity Act and Communications Bill, or secondary via executive orders in the US or pressure on ISPs to impose parental controls in the UK). “In the end,” he said, “teaching young people online 'hygiene' is going to be at least as effective as trying to legislate bad actors out of existence by imposing impossible requirements on service providers.”
Nevertheless, lack of security on children’s phones and tablets can have security implications beyond the children themselves. “In many instances,” Harley told Infosecurity, “the corporate perimeter doesn't stop at the firewall, but extends right into the employee's home. Shared home devices and networks have a potential for theft and damage outside the home that I'm sure even corporate security administrators often don't think about.” | <urn:uuid:d07802c7-2b58-46b3-8ddf-f2f6ee5ee63f> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/parents-just-dont-understand-mobile-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00322-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973883 | 607 | 2.984375 | 3 |
NASA needs a better plan to help satellites and spacecraft avoid space junk, which has reached a "tipping point," a National Research Council committee reported on Thursday.
There are enough pieces of smashed satellites, meteoroids, and trash orbiting the planet to pose a real risk, the report says, and NASA lacks the staffing to deal with it properly.
Not only does NASA need a formal strategic plan for managing the threat, but the space agency may need to launch a project to clean up some of the clutter.
"The current space environment is growing increasingly hazardous to spacecraft and astronauts," Donald Kessler, chairman of the committee that wrote the report and retired head of NASA's Orbital Debris Program Office, said in a statement.
The last time NASA looked at the issue, at the end of the 1990s, things did not look so bad.
But in 2009 two communications satellites--one owned by Iridium Communications and the other owned by the Russian Space Forces, collided and exploded over northern Siberia. It made a big mess and added 2,000 more bits to the flotsam floating in orbit. In 2007, the Chinese government destroyed a weather satellite in an anti-satellite test. Both greatly added to the space-junk problem.
At 17,000 mph, even a tiny piece of metal can penetrate a spacecraft's protective skin or tear a hole in an astronaut's space suit. And at that speed, a collision can blow apart a valuable satellite.
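The danger is easy to quantify, because kinetic energy grows with the square of speed. A back-of-the-envelope calculation for an arbitrarily chosen 10-gram fragment:

```python
mass_kg = 0.010                 # a 10-gram bolt
speed_ms = 17000 * 0.44704      # 17,000 mph in metres per second (~7,600 m/s)

energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"{energy_j / 1000:.0f} kJ")  # ~289 kJ, roughly a hand grenade's charge
```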
In June, a piece of space debris drifted to within 850 feet of the International Space Station, and the crew took shelter in Russian space capsules as a precaution.
NASA is tracking more than 500,000 of these bits and pieces--and 20,000 of them are larger than a softball.
The report says NASA has to re-think the management structure that deals with the problem.
"Nearly all of NASA's meteoroid and orbital-debris programs are only one person deep in staffing. This shortage of staffing makes the programs highly vulnerable to budget reductions or changes in personnel," the report notes. | <urn:uuid:1eed381f-265d-461e-8492-69da8530bc14> | CC-MAIN-2017-04 | http://www.nextgov.com/technology-news/2011/09/nasa-needs-to-clean-up-some-space-junk-report-finds/49710/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00230-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942695 | 421 | 2.953125 | 3 |
ActiveX: COM object creation.
Common Dialogs: Dialogs included in comdlg32.dll.
Drivers: IOCtls and Driver communication.
Environment: environment variables.
Exceptions: handled and unhandled user-mode exceptions thrown.
Files: File and directory access.
Handles: Windows handles query functions.
Internet: Wininet.dll functions. Set of high level functions that Internet Explorer use to browse.
Internet helpers: Urlmon.dll functions that are used by Internet Explorer to access Wininet.dll in a multi-threaded context. They also include Zone implementation and some Internet Explorer options.
Localization: Functions used to get current language settings.
Module Handle: GetModuleHandle function.
Ntdll Strings: Ntdll string initialization functions. Useful when you know that a specific string appears after a certain event (e.g.: a crash or an issue).
Procedure Address: GetProcAddress function.
Process: Library loads and Process creation.
Registry: Registry activity.
Resources: Load and find resources.
Shell: Shell32 functions that are used to find programs, open file with default program, convert file paths and tons of shell related help functions.
Windows Creation: Window creation and destruction.
Windows Hooks: SetWindowsHook functions used to install message filters.
Windows Messages: SendMessage, PostMessage and show window.
Windows Properties: Windows properties such as Title, Visible, etc.
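To make two of the categories above concrete - Module Handle and Procedure Address - the sketch below issues the monitored Win32 calls from Python's ctypes. The API names are the real kernel32 exports; the script itself is only an illustration of what a hooked process might do:

```python
import ctypes  # standard library; the calls below are Windows-only

kernel32 = ctypes.windll.kernel32
kernel32.GetModuleHandleW.restype = ctypes.c_void_p
kernel32.GetModuleHandleW.argtypes = [ctypes.c_wchar_p]
kernel32.GetProcAddress.restype = ctypes.c_void_p
kernel32.GetProcAddress.argtypes = [ctypes.c_void_p, ctypes.c_char_p]

# "Module Handle": base address of a module already loaded in this process.
hmod = kernel32.GetModuleHandleW("kernel32.dll")

# "Procedure Address": resolve an exported function inside that module.
addr = kernel32.GetProcAddress(hmod, b"CreateFileW")

print(hex(hmod), hex(addr))
```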
If you go to Google today, you will see a Google Doodle in honor of Grace Hopper, the computer scientist whose work led to the development of COBOL and who is credited with coming up with the term debugging. What you might not notice is a small line underneath the search bar reading, “Be a maker, a creator, an innovator. Get started now with an Hour of Code.” So what is an Hour of Code? You can see it here.
Code.org, a non-profit dedicated to expanding computer science education, created this video to encourage students to try coding as part of Computer Science Education Week. And the video includes endorsements from some big names, including Angela Bassett, Ashton Kutcher, Bill Gates, Mark Zuckerberg, and Steve Jobs. Oh yeah, and President Obama.
The goal is to get 10 million students to try coding this week, something even young kids can do. As of this writing, 2.8 million had tried it. So go ahead, encourage the student in your life to try an Hour of Code. Grace Hopper would approve.
Protect your customers from phishing attacks that impersonate your organization
What Is Consumer Phishing?
In a phishing attack, a criminal sends a large number of consumers a deceptive email appearing to come from a respected brand — typically a financial service provider or an email service provider.
The email uses social engineering techniques to attempt to mislead the recipients to visit a web page appearing to belong to the impersonated brand, where the user will be asked to enter her username and password — and sometimes other information as well. Having stolen this information, the criminal now controls the victim’s account.
A good example of a large-scale consumer phishing attack is the recent attack targeting customers of GoDaddy.
How Consumer Phishing Works
Most phishing campaigns involve an attacker masquerading as a trusted brand, both in an email sent to the intended victims and using a website looking much like the website of the impersonated brand. The phisher commonly uses email spoofing to assume the identity of the brand he wishes to impersonate. In terms of the email and website content, phishers use copied logos and phrases associated with the brand to look credible.
Most consumers think that phishing is limited to impersonation of financial institutions, but as the black market value of stolen email credentials is going up, attackers are targeting more industries.
Cyber criminals abuse brand trust, using your brand name as a disguise to trick your customers into opening their malicious emails.
Traditional Defenses Identify Bad URLs
Traditional phishing countermeasures are based on rapidly identifying malicious websites — the phishing websites — and then scanning emails for hyperlinks pointing to these pages. To circumvent these countermeasures, phishers use smaller batches of phishing emails, each one of which uses distinct hyperlinks.
Often, legitimate services, such as link shortening services, are used by the phishers to make detection more difficult. The fact that the attacks constantly change makes it difficult for traditional filters to do a good job.
The Solution: Agari Customer Protect
Agari Customer Protect stops phishing attacks by ensuring that every email your customers receive claiming to be from you will actually be from you.
Agari Customer Protect analyzes email sent claiming to be from your domains to 3 billion mailboxes across the world’s largest cloud email providers including Google, Microsoft and Yahoo. Based on that data, Agari creates a model of legitimate email behavior for your organization. Then, that model is published via the DMARC standard and used to block all unauthorized email from reaching your customers’ inboxes.
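Under the hood, a DMARC policy is just a DNS TXT record published at _dmarc.<domain>, so anyone can inspect one. A quick lookup using the third-party dnspython package (the domain is a placeholder, and dnspython 2.x is assumed for resolve()):

```python
import dns.resolver  # third-party: pip install dnspython

def dmarc_record(domain):
    """Return the published DMARC policy for a domain, if any."""
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=DMARC1"):
            return txt
    return None

print(dmarc_record("example.com"))
# e.g. 'v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com'
```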
To observe intruders using session encryption, researchers needed to find a way to break the session encryption. For many organizations this has proven extremely difficult. In an attempt to circumvent session encryption rather than break it, the Honeynet Project began experimenting with using kernel-based rootkits for the purpose of capturing the data of interest from within the honeypot’s kernel.
These experiments led to the development of a tool called Sebek. This tool is a piece of code that lives entirely in kernel space and records either some or all data accessed by users on the system. It provides capabilities to: record keystrokes of a session that is using encryption, recover files copied with SCP, capture passwords used to log in to remote systems, recover passwords used to enable Burneye-protected binaries and accomplish many other forensics-related tasks. What follows is a detailed discussion of Sebek, how it works and its value.
Download the paper in PDF format here.
Last Monday would have been the 107th birthday of computer scientist and US Navy Rear Admiral Grace Hopper. To mark the occasion, Google featured her in its daily “doodle”, celebrating a lifetime of achievements that include creating the COBOL programming language.
In 1943, Hopper walked away from a promising academic career to join the Navy; it was wartime and she wanted to do her part. While serving in the Navy, Hopper came up with the idea for COBOL (common business-oriented language), the most widely used programming language in history, and still in use more than two decades after her death. Hopper was also the first to apply the term “debugging” to a computer, when a moth short-circuited a relay in the Mark II she was programming in 1947. (Her notebook containing the moth is now in the Smithsonian Institution.)
Scientist, inventor, and trailblazer: what would Hopper have to say about the state of today’s professional women?
This last year saw Sheryl Sandberg, Facebook’s 44-year-old chief operating officer, urging other professional women to “lean in” to their careers. Newly appointed Yahoo! Chief Executive Marissa Mayer served as a prime example, giving birth to her first child while trying to engineer a turnaround at the Internet pioneer. And General Motors just appointed its first woman chief executive, Mary Barra, a 51-year-old mother of two, who started at GM as an 18-year-old co-op student and meticulously climbed the ladder to the executive suite.
But for all those high-profile appointments, Hopper would be disappointed to see how few women hold technical positions in the computer industry, the field closest to her heart.
The reasons are complex: few women study engineering and computer science in college (a subject for another blog). And Silicon Valley, for all its self-congratulatory talk about being progressive, has often proven less than supportive of women in the workplace.
Nimble Storage has taken an important step toward correcting this imbalance with the Nimble Women’s Network, started by Doris Lim, a member of our human resources team, and chaired by Stacey Cornelius, vice-president of operations. Among its charter members are three female vice-presidents, two of them engineers, who will serve as mentors to other Nimble Storage women.
“A lot companies in the Valley say that its most important assets walk out the door at the end of each day,” says Radhika Krishnan, Nimble Storage’s vice-president of product marketing. “If a company doesn’t help its women to maximize their talents, then it’s allowing a valuable resource to languish. And that makes absolutely no sense.”
“At Nimble we’re determined not to let that happen,” Krishnan says.
The network meets once a month, and provides women with an opportunity to network, as well as to hear from women leaders in the technology industry. The idea is not only to inspire, but also to give Nimble’s women real tools and resources so that they, too, can be successful in their careers.
Grace Hopper believed in the importance of teachers and the power of mentoring. Hopper had taught at Vassar before joining the Navy, and upon retiring she became a goodwill ambassador for Digital Equipment, lecturing audiences on the early days of the computer and her career.
As a reporter at Toronto’s Globe & Mail newspaper in 1983, Michael Kieran, Nimble’s Social Marketing Manager, got the chance to interview Hopper and see her in action. A tiny woman who stood ramrod straight in her full-dress Navy uniform, she captivated an audience of technical neophytes with nothing more than a clutch of salvaged phone cables – a prop to illustrate how far light travels in a nanosecond. Once her audience had left, Hopper let loose with a burst of profanity that left the cub reporter red-faced. Yep, she cursed like a sailor.
She finished the interview with a prediction: “My job is to look ahead into the future and to get started on it. The world of computers is just beginning. We’re driving around in Model T’s, and the jet planes and space ships are still to come.”
At Nimble Storage, we couldn’t agree more. We’re committed to helping identify, nurture and empower talent, regardless of gender. After all, the next Amazing Grace might already be working here, quashing a new generation of bugs.
- Nimble Storage Marketing
Rootkits: What They Are and How to Cope with Them
22 Aug 2005
Kaspersky Lab, a leading developer of secure content management solutions that protect against viruses, spyware, Trojans, hacker attacks and spam, releases a new report on the rapid development of rootkits. In April 2005 the number of rootkits detected monthly nearly doubled. This latest report from Kaspersky Lab's analysts looks at what makes rootkits dangerous, how they are detected, why they have become more numerous in 2005, and why they are increasingly being used for malicious purposes.
To read the full report – “Rootkits and how to combat them” – visit www.viruslist.com.
The term “rootkit” refers to a set of programs that allow a hacker to maintain access to a computer after cracking it and that prevent the hacker being detected. Both writers of illegal viruses and developers of so-called “legal” spyware programs openly advertise that programs concealed using rootkits are invisible to the user and undetectable by antivirus programs.
The report states that, “The increased popularity of rootkits is partly due to the fact that the source code of many rootkits is now openly available on the Internet. It's relatively easy for virus writers to make small modifications to such code. Another factor which influences the increased use of rootkits is the fact that most Windows users use the administrator's account, rather than creating a separate user account. This makes it much easier to install a rootkit on the victim machine.” | <urn:uuid:78acd6f2-edcf-4130-92be-4f16e132e478> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2005/Rootkits_What_They_Are_and_How_to_Cope_with_Them | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00157-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93215 | 329 | 3.03125 | 3 |
In universities around the world, a question has arisen about computer studies syllabuses: should students be taught how to create viruses and malicious code?
There are several opinions on this issue. Those who think it should not be taught, have many arguments in their favor. To teach a student destructive techniques implies the possibility that, eventually, they could use them not to improve the security systems they might design, but rather to create new, dangerous viruses.
This does not mean computer science students fit the stereotypical image of a hacker: untidy youngsters locked in rooms full of computing materials, programming between pizzas and almost in the dark. A computer science student does not differ much from a law student, and they don’t learn to commit crimes and avoid the law simply by studying it. Students of any discipline simply try to put it technique into practice, and although it is not easy to build a bridge in the first few years of your degree, it is easy to develop an experimental virus.
Even if universities decided not to teach students about how viruses work and how to create them, it is extremely difficult, almost impossible, that a computer science student cannot find out this information for themselves. They do not even need to be college students, as many secondary school students already have the knowledge needed to modify the code of an existing virus and create a new one that can pose an additional threat.
Every occupation in the world has a series of techniques that can be used either to destroy things or create them. Do police departments around the world not know what the most harmful ammunition is? Does any medical student not know what the most dangerous poisons are and how to use them? We can even go a little bit further: do children not know what stones will have the strongest effect on the heads of their enemies?
Everybody could use their knowledge to do harm, but the police knows very well how and when to use their ammunition, and doctors take an ancient oath, the Hippocratic oath, promising, among other things, that “to please no one will I prescribe a deadly drug, nor give advice which may cause death.” In the case of children throwing stones at each other in kids’ squabbles, you can only expect that, after the second stone hits any of them on the head, they will learn that the throwing stones at someone’s head is dangerous.
Going back to the subject of this article, teaching students how to create malicious code can be beneficial for the training of an IT systems student. However, rather than teach them how to create malicious code, classes should focus on the techniques hackers use to create their malicious creations. An engineer must know the destructive power of explosives, not to use them against people but to use them for the benefit of society: in demolition work or to test the resistance of a certain building to an explosion.
A computer science student who knows how a virus works or the dangers posed by a Trojan will know how to defend against them and how to protect the computer networks they will work with the future. But teaching students how to create malicious code cannot be beneficial for the training of an IT systems student. An engineer must know the destructive power of explosives, not to use them against people but to use them for the benefit of society: in demolition work or to test the resistance of a certain building to an explosion. And, of course, in Law Faculties never is taught how to make a bank robbery.
The problem is that, on many occasions, writers of malicious code have been regarded almost as heroes, modern-day revolutionaries fighting the establishment from the IT field. But let’s face reality: in the same ways as Robin Hood’s generosity is just a nice tale, virus writers are just a new type of criminal.
If the time comes when virus creation is implemented in university syllabuses, then there should be a subject dealing with professional ethics. In this way, this technique could be regarded as a useful trade for society. | <urn:uuid:79341a8d-5150-4b5a-9035-56fd62e64c0a> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2003/06/10/teaching-how-to-create-malicious-code/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00185-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957417 | 810 | 3.0625 | 3 |
Researchers at Kapersky Lab recently concluded that Apple is an entire decade behind Microsoft at malware protection, pointing to the spate of Flashback attacks and Apple’s slow response. Kapersky predicting a coming wave of malware as OSX viruses, once rare, grow to be as common as those aimed at Windows. Microsoft, by contrast, has been struggling with malware since its inception and has developed a system of responses and initiatives known as Trustworthy Computing, which, not coincidentally, celebrated its ten year anniversary earlier this January.
In January, 2002, Bill Gates sent an email to all full-time Microsoft employees with grand ambitions for computing, envisioning a future with a pervasive internet, continued growth in personal computers, and a wider range of connected devices, which turned out to be accurate. In a world where so many people are connected to the Internet in so many ways that affect their life, Gates saw that information technology must be “as available, reliable and secure as standard services such as electricity, water services and telephony.” To achieve these immense goals he announced Trustworthy Computing, a drive to improve security, privacy and reliability for all of Microsoft’s offerings.
Over the years, Trustworthy Computing’s most notable contribution was the Security Development Lifecycle, a series of 16 practices to ensure that security is incorporated into every part of the software development process rather than added as an afterthought. Every engineer at Microsoft gets some security training each time they begin a new project, and everyone in the enterprise is expected to know something about security. Tools, software, and personnel are continuously tested or audited for security, privacy, and reliability and, in case anything slips by, compensating controls are built-in to correct flaws elsewhere. Microsoft also brought in Windows Error Reporting, which drastically reduced crashes, and was among the first to publish privacy standards for developers or offer users layered privacy notices.
While Microsoft has come a long way, there is plenty of work left before we can fully trust out computers. Part of the problem, however, comes from the user. One big hurdle in Trustworthy Computing has been users not updating their old software as new and improved solutions become available. For example, Microsoft’s Internet Explorer 9 is much more secure than Internet Explorer 6, but ten years after its launch, IE6 has only just fallen under 1% of the American market despite Microsoft’s best efforts. Another obstacle is that, as Microsoft’s operating system has become hardened, attackers are finding new ways to breach a computer, and now 75% of all attacks are aimed at applications. While compensating controls mitigate some of the risk, the ever-growing IT ecosystem means that Microsoft can’t make computing trustworthy on its own. Still, over the last decade, Microsoft has come a long way and has plenty to celebrate.
Just as Bill Gates said ten years ago about Microsoft, users are connecting a wide array of Apple products, from computers to phones, tablets, and accessories, to the internet and expect privacy, reliability, and security. If versions of Flashback as well as novel malware for Apple operating systems continue to proliferate, Apple will need to implement its own version of the Trustworthy Computing drive. For the sake of anyone with a Macintosh or an iPhone, let’s hope it doesn’t take them ten years to get it right. | <urn:uuid:c1771a15-84a7-467c-a490-46161aa201a6> | CC-MAIN-2017-04 | http://www.fedcyber.com/2012/04/27/does-apple-need-ten-years-of-trustworthy-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00029-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959013 | 689 | 2.71875 | 3 |
State legislatures are installing computers on members' desktops to bring quicker access to information.
Level of govt: State
Problem/Situation: Legislators need quick access to information while considering legislation.
Solution: Some states have outfitted chambers with laptops and established networks.
Jurisdiction: California, Texas, Michigan.
Vendors: IBM, NEC, Dell.
Contact: Texas Legislative Council 512/463-1160.
The wheels of lawmaking sometimes spin faster than a legislature's photocopy machines can churn out bill amendments or rewrites. And for legislators embroiled in floor debate, manual distribution strangles the flow of information even further. Amendments -- used by lawmakers to adjust legislation to gain legislative support or negate the potential law's effectiveness -- are often introduced during floor debate and just before a vote. Knowing the amendment language -- which is sometimes pages long -- and how it fits with the entire bill in question, is a challenge legislators are beginning to address with the help of networks and laptop computers installed in statehouse chambers. The Michigan Senate has been wired for about five years, while Texas and California installed laptops as the current session began in January.
Texas, which is finishing a Capitol restoration project, wired the House of Representatives and issued laptop computers for use at legislator desks in the chambers. Amendment text is scanned and bar coded by a floor clerk, and the information is stored on two file servers connected to both the chamber network and the rest of the Capitol. Dell Latitude laptops running OS/2 are attached to members | <urn:uuid:bf745a76-6ce2-444d-b6c4-cf9b2141a637> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Laptop-Legislators.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00359-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936106 | 313 | 2.6875 | 3 |
Is a gun better than a knife?
I've been trying hard for an analogy, but this one kind of works. Which is better? A gun or a knife?
Both will help defend you against an attacker. A gun may be better than a knife if you are under attack from a big group of attackers running at you, but without ammunition, you are left defenseless. The knife works without ammunition and always provides a consistent deterrent, so in some respects, gives better protection than a gun.
Which is not a bad way to try and introduce the concept of FIM versus Anti-Virus technology. Anti-Virus technology will automatically eliminate malware from a computer, usually before it has done any damage. Both at the point at which malware is introduced to a computer, thorough email, download or USB, and at the instant at which a malware file is accessed, the AV will scan for known malware. If identified as a known virus, or even if the file exhibits characteristics that are associated with malware, the infected files can be removed from the computer.
However, if the AV system doesn't have a definition for the malware at hand, then like a gun with an empty magazine, it can't do anything to help.
File Integrity Monitoring by contrast may not be quite so 'active' in wiping out known malware, but - like a knife - it never needs ammo to maintain its role as a defense against malware. A FIM system will always report potentially unsafe filesystem activity, albeit with intelligence and rules to ignore certain activities that are always defined safe, regular or normal.
AV and FIM versus the Zero Day Threat
The key points to note from the previous description of AV operation is that the virus must either be 'known' i.e. the virus has been identified and categorized by the AV vendor, or that the malware must 'exhibit characteristics associated with malware' i.e. it looks, feels and acts like a virus. Anti-virus technology works on the principle that it has a regularly updated 'signature' or 'definition' list containing details of known malware. Any time a new file is introduced to the computer, the AV system has a look at the file and if it matches anything on its list, the file gets quarantined.
In other words, if a brand new, never-been-seen-before virus or Trojan is introduced to your computer, it is far from guaranteed that your AV system will do anything to stop it. Ask yourself - if AV technology was perfect, why would anybody still be concerned about malware?
The lifecycle of malware can be anything from 1 day to 2 years. The malware must first be seen - usually a victim will notice symptoms of the infection and investigate before reporting it to their AV vendor. At that point the AV vendor will work out how to counteract the malware in the future, and update their AV system definitions/signature files with details of this new malware strain. Finally the definition update is made available to the world, individual servers and workstations around the world will update themselves and will thereafter be rendered immune to this virus. Even if this process takes a day to conclude then that is a pretty good turnaround - after just one day the world is safe from the threat.
However, up until this time the malware is a problem. Hence the term 'Zero Day Threat' - the dangerous time is between 'Day Zero' and whichever day the inoculating definition update is provided.
By contrast, a FIM system will detect the unusual filesystem activity - either at the point at which the malware is introduced or when the malware becomes active, creating files or changing server settings to allow it to report back the stolen data.
Where is FIM better than AV?
As outlined previously, FIM needs no signatures or definitions to try and second guess whether a file is malware or not and it is therefore less fallible than AV.
Where FIM provides some distinct advantage over and above AV is in that it offers far better preventative measures than AV. Anti-Virus systems are based on a reactive model, a 'try and stop the threat once the malware has hit the server' approach to defense.
An Enterprise FIM system will not only keep watch over the core system and program files of the server, watching for malware introductions, but will also audit all the server's built-in defense mechanisms. The process of hardening a server is still the number one means of providing a secure computing environment and prevention, as we all know, is better than cure. Why try and hope your AV software will identify and quarantine threats when you can render your server fundamentally secure via a hardened configuration?
Add to this that Enterprise FIM can be used to harden and protect all components of your IT Estate, including Windows, Linux, Solaris, Oracle, SQL Server, Firewalls, Routers, Workstations, POS systems etc. etc. etc. and you are now looking at an absolutely essential IT Security defense system.
This article was never going to be about whether you should implement FIM or AV protection for your systems. Of course, you need both, plus some good firewalling, IDS and IPS defenses, all wrapped up with solid best practices in change and configuration management, all scrutinized for compliance via comprehensive audit trails and procedural guidelines.
Unfortunately there is no real 'making do' or cutting corners when it comes to IT Security. Trying to compromise on one component or another is a false economy and every single security standard and best practice guide in the world agrees on this.
FIM, AV, auditing and change management should be mandatory components in your security defenses. | <urn:uuid:7864ba50-da3e-4708-9882-50f68239a78f> | CC-MAIN-2017-04 | https://www.newnettechnologies.com/is-fim-better-than-av.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00175-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936152 | 1,158 | 2.59375 | 3 |
Definition: A theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items. Informally, saying some equation f(n) = Θ (g(n)) means it is within a constant multiple of g(n). The equation is read, "f of n is theta g of n".
Formal Definition: f(n) = Θ (g(n)) means there are positive constants c1, c2, and k, such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ k. The values of c1, c2, and k must be fixed for the function f and must not depend on n.
Also known as theta.
Generalization (I am a kind of ...)
See also ~, asymptotically tight bound.
Note: This is the upper-case Greek letter Theta.
Donald E. Knuth, Big Omicron and Big Omega and Big Theta, SIGACT News, 8(2):18-24, April-June 1976.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 14 August 2008.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "Θ", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 14 August 2008. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/theta.html | <urn:uuid:c466a749-f8b3-4a08-a32a-ec8097a7bd58> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/theta.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00387-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.863325 | 358 | 3.390625 | 3 |
Should passwords die in a fire?
Michael Daniel, the US Cyber Security Czar thinks so. His replacement solution? Selfies (the pictures people take of themselves). While the notion of killing the password is shared by many, suggestions -- even those not as laughable as selfies -- have a tendency to fall short.
Now Twitter suggests they need to do away with passwords because it won’t work for people in developing countries. Their suggestion? Use your phone number and a service they developed to have a code sent to your phone. It’s what we used to call a one-time password. Except in this case, what do we know about the wireless network(s) in which the password is delivered? Do you trust them?
Seems a lot of solutions targeting the demise of the password end up relying on… passwords. Sure they wrap complexity around them, or they add factors (see below). What these attempts reveal is clear: passwords aren’t the problem. The friction, agony, and disdain are just symptoms.
The real challenge? The ability to clearly define the problem we’re trying to solve.
Building on the basics of authentication
The basics of authentication draw on a few key concepts:
Identity proofing: the methods used and confidence in associating the identity of a person with an account, device, or other construct
Factors of authentication, classically explained as
something you know (like a password)
something you have (like a token)
something you are (biometrics)
Level of assurance: the strength of the identity proofing that is required. Typically, the higher the assurance level, the more factors of authentication are required
Sometimes people conflate identity with authentication. Allowing for some confusion, we have an increasing need for more assurance/confidence in the authentication.
By focusing on the password, here’s what we miss
The password, itself, is but a part of a larger system. That means the desire to bolster, abandon, or replace passwords needs to address three critical elements (read more here). As a system, authentication has at least three parts:
Design and implementation
Operation and maintenance
Most of the outrage over passwords is hyper-focused on individual usage. As such, the more critical components of the solution are largely ignored. And that creates opportunity for attackers.
While some high-profile attacks are suggested (or confirmed) to take advantage of single compromised user accounts, the broader trend is attacks on password stores and exploits that take advantage of weaknesses in system design.
In the blind rush to end the password, it is essential to keep focus on the expected outcome and necessary parts of the solution design.
Defining the problem we need to solve
The first step to design a better authentication system means forgetting about passwords. It also means setting aside dreams of selfies and other headline-grabbing methods. Instead, go back to the basics and focus on functional outcomes to define the problem before advancing a solution.
Complaints about passwords suggest the problem we need to solve is authentication. The need to design, implement, and offer methods for authentication that are easy-to-use, hard to break, and provide the appropriate level of assurance.
The high-level criteria for a solution include:
Easy to use
Easy to implement
Strong/easy to protect
Allows for the appropriate confidence in identity proofing
Allows for the desired level of assurance
The criteria are both subjective and variable. Designing a solution that allows that sort of flexibility requires more time to clearly define and explain each of the requirements in a way that is easily understood.
Developing a better solution
When people write about using existing networks to handle authentication -- in the name of convenience and ending the password -- we have to evaluate the entire solution design, then compare it against known and anticipated problems.
A good way to get started? Use a concept central to many practices -- put people in the center. Learn how they work, then design and build solutions that address their needs. We never really did that with passwords. Or the real reason for passwords - authentication.
Advancing the discussion
For the nearly two decades of my security career, people routinely call for the demise of the humble password. If we’re going to end the password - which I’d support -- then the answer isn’t as simple as two factor (which typically still uses a password), or biometrics (too many questions to ask, too many left unanswered). Admittedly, Apple’s Touch ID seems to be paving a potential pathway for biometrics, and it merits more scrutiny.
In the meantime, if we stop bashing the password, starting discussing the problem, defining requirements, and sharing our knowledge and experience, our children might actually experience a different, better, and more usable solution to the challenge of authentication. | <urn:uuid:07ff92f9-1dc7-4e98-933c-7288d45442d2> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2840298/security-leadership/the-real-problem-with-passwords-we-only-treat-symptoms.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00415-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929701 | 997 | 2.515625 | 3 |
In order to get an unadulterated view of what is on your machine, I recommend making two setting changes.
First, make hidden files and folders visible. When you are troubleshooting and looking for files, you don’t want something to go unnoticed.
Secondly, show extensions for known file types. Instead of relying upon icons to tell you the file extension, see it for yourself.
Open a folder or go to My Computer. From the Menu bar, select Tools, Folder Options…
Click on the View tab.
Set the Radio Button under Advanced Settings to Show hidden files and folders.
Uncheck Hide extensions for known file types.
Now, when you look at files you will see their full name like example.txt instead of just example. | <urn:uuid:3c2e3d4a-1bd4-4036-8a5a-741301ee7d22> | CC-MAIN-2017-04 | https://www.404techsupport.com/2008/10/simple-tip-hidden-folders-and-file-extensions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00415-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.885439 | 159 | 2.515625 | 3 |
Big data has the power to change scientific research from a hypothesis-driven field to one that’s data-driven, Farnam Jahanian, chief of the National Science Foundation’s Computer and Information Science and Engineering Directorate, said Wednesday.
Reaching that point, however, will require upfront investment from government and the private sector to build infrastructure for data analysis and new collaboration tools, Jahanian said. He was speaking at a big data briefing for congressional staff hosted by the industry group TechAmerica.
The term big data refers generally to the mass of new information created by the Internet and by scientific tools such as the Hubble Telescope and the Large Hadron Collider. The emerging field of big data analysis is aimed at sorting through the massive volume of that data -- whether it’s social media posts, video clips, satellite feeds or the reaction of accelerated particles -- to gather intelligence and spot new patterns.
Federal officials announced in March that the government will invest $200 million in research grants and infrastructure building for big data.
The investment was spawned by a June 2011 report from the President's Council of Advisors on Science and Technology, which found a gap in the private sector's investment in basic research and development for big data.
The research firm Gartner predicted in December 2011 that 85 percent of Fortune 500 firms will be unprepared to leverage big data for a competitive advantage by 2015.
Big data analytics also has the potential to improve government efficiency, panelists at the TechAmerica event said.
The Centers for Medicare and Medicaid Services, for example, could pull data from insurance reports, hospital forms and anonymized data from electronic medical records to get a much better understanding of which medications and procedures are most effective, said Caron Kogan, a strategic planning director at Lockheed Martin Corp.
In addition, the Defense Department could gather better data on the expected life cycle of its equipment so that it could replace equipment before it fails, drastically cutting down its supply chain costs.
“[Some of] these are old concepts,” Kogan said, “but now you have more data so predictability is increased. You’re not working with a sample size, you’re working with all this data to assess which parts might fail.”
Big data also has the potential to point out new patterns or opportunities for efficiency that officials may never have imagined, said Bill Perlowitz, chief technology officer of Wyle science, technology and engineering group.
“In hypothetical science, you propose a hypothesis, you go out and gather data and you see if your hypothesis is supported,” Perlowitz said. “That limits your exploration to what you can imagine. It also limits the number of relationships you can explore because the human mind can only go so far.
“The shift with data-driven science and big data,” he said, “is that first we collect the data and then we see what it tells us. We don’t have a pretense that we understand what those relationships are, or what information we may find.” | <urn:uuid:32fd197d-35bc-487d-a8d4-f727a7049524> | CC-MAIN-2017-04 | http://www.nextgov.com/big-data/2012/05/big-data-could-remake-science-and-government/55549/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00377-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942791 | 633 | 2.53125 | 3 |
Neurosynaptic is among aa basket of research terms you will have get familiar with.
IBM has announced that it will invest $3bn over the coming five years in research semiconductor programmes aimed at creating chips that push the limits of silicon technology to 7 nanometers and below and create a ‘post-silicon future.’
First programme will focus on "7 nanometer and beyond" silicon technology addressing the physical challenges threatening current semiconductor scaling techniques and impede the ability to manufacture such chips, the company said.
IBM Research senior vice president John Kelly, "The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?"
"IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems," Kelly said.
"This new investment will ensure that we produce the necessary innovations to meet these challenges."
The second programme is called Bridge to a "Post-Silicon" Era, which is aimed at developing alternative technology to create post-silicon era chips to overcome the limitations of the present silicon based semiconductors.
The money will be also spent on creating a new programming language and application named semiconductor Neurosynaptic Computing which intends to boost computing speed with less energy consumption.
IBM researchers will also focus on Silicon Photonics, III-V technologies, Carbon Nanotubes, Graphene and the next generation of low power transistors.
IBM Systems and Technology Groups senior vice president, Tom Rosamilia said: "In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future."
"IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems." | <urn:uuid:7351282f-fe53-4991-aca5-22c56a15f548> | CC-MAIN-2017-04 | http://www.cbronline.com/news/enterprise-it/ibm-to-invest-3bn-to-push-the-limit-of-chip-technology-100714-4315478 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00196-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920477 | 403 | 2.53125 | 3 |
CompTIA 220-601 – A+ PC components
This post is outdated. For an updated guide see Jed Reisner’s A+ 220-801 and 220-802 guide.
These questions are based on: 220-601 – A+ Essentials CompTIA Self-Test Software Practice Test.
Objective: Personal computer components
SubObjective: Identify the fundamental principles of using personal computers
Single Answer, Multiple Choice
What is the minimum number of DIMMs that can be installed in a typical system?
DIMMs are 64 bits wide. This width matches the memory bus of modern computer systems, which means DIMMs can be installed one at a time.
The DIMM modules are eight bytes wide and transfer eight bytes (64 bits) of data at a time. With DIMMs made of older SDRAM (synchronous DRAM) memory chips, the data is transferred once per clock cycle. But with DDR SDRAM and DDR2 SDRAM, the data is transferred two times per clock cycle — once on the leading edge of the clock signal and again on the falling edge. This allows a tremendous amount of data to be transferred per clock cycle. For example, in a Pentium 4 system with a front-side bus operating at 400 MHz, up to 6,400 MB (megabytes) of data is transferred to and from the DDR2-DIMMs per second.
The total speed is affected by the clock and bus speed supported by the module. For example, DDR 1600 uses a clock speed of 100 MHz with a bus speed of 200, resulting in a total transfer rate of 1600 Mbps.
SIMMs come in two flavors: 30-pin and 72-pin. The 30-pin SIMMs are used in x486 systems, which have a 32-bit memory bus. Each 30-pin SIMM is only eight bits wide, which is why four 30-pin SIMMs are needed to form a single bank of memory. The larger 72-pin SIMMs expand the chip bus to 32-bits wide. They were developed for later x486 systems and the original Pentium systems, which have a 64-bit memory bus. This means that only one 72-pin SIMM can be installed in an x486 system, but two are required for Pentium systems because of their larger bus. SIMMs were phased out in favor of DIMMs because of performance limitations. Their performance is dramatically inferior to DIMM and RIMMs because of the slower speeds at which they operate. For example, the maximum data transfer of an EDO SIMMs operating on a 33-MHz clock is about 266 MBps.
RIMMs come in 64-, 32- and 16-bit packages. The 64-bit modules have an eight-byte bus, and the 32-bit and 16-bit modules have four- and two-byte buses, respectively. The 64-bit RIMMs can operate at very high speeds and can transfer an amazing amount of data per second. For example, RIMM modules operating on a clock speed of 600 MHz can transfer eight bytes of data twice per clock cycle, for a total transfer rate of up to 9,600 MBps.
- Upgrading and Repairing PCs, 17th edition, Scott Mueller, pp. 492-495.
- Upgrading and Repairing PCs, 17th edition, Scott Mueller, p. 487. | <urn:uuid:9875d06f-d4d2-4333-b8f9-0dbbd312ed62> | CC-MAIN-2017-04 | http://certmag.com/personal-computer-components/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00104-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912622 | 707 | 3.40625 | 3 |
System and network architects beware – putting that IoT network in place isn’t as straightforward as it looks.
The Internet of Things has been hugely talked up, from the astronomical figures of devices and connections, to the promise of increased efficiency and new revenue streams.
Yet, as many IT teams would attest, putting in the infrastructure to support an IoT project is not easy. Deploying an IoT network requires everything from a proper tendering process for acquiring these devices, and making sure they are secure, to also choosing over what connection and protocol (over Wi-Fi, radio or fibre) these devices connect with one another.
Then there comes the conversation of monitoring the gateway/network, and establishing where the data should be stored, managed and secured. There is too the question of how and where data should be analysed – and by whom.
All of this represents a huge challenge for CTOs and network architects, not to mention application developers, in managing the device, the network and the back-end services.
With that in mind, Internet of Business looks at the five top technical challenges to building and deploying an IoT network:
There are various layers of information security as far as IoT is concerned, from the device and gateway to the network and the cloud, where most of the data will eventually reside.
Security is typically undone by vulnerabilities at the end point, or in the software itself. End-to-end encryption is vital at all times, both when the data is at rest and in transit.
The threats are almost limitless: IoT devices are small, inexpensive devices with no or little physical security, while some of the computing platforms (which are often constrained in memory and computing power resources), sometimes don’t support complex security algorithms owing to weak encryption or low CPU cycles.
There’s also the physical danger of the devices being tampered with, or stolen, and of the device software not being updated – thus making it more susceptible to cyber-attack.
Some say that there must be a stronger focus on identity with the IoT, and then there needs to be a strengthening on network-centric methods like the Domain Name System (DNS) with DNSSEC and the DHCP to prevent attacks.
Experts say that simply monitoring and controlling the flow of data packets to and from the IoT device is not going to be enough to guarantee security, with network management and data logging key to preventing and stopping attacks.
IoT platforms, whether designed internally or externally, need to be scalable and robust. Enterprises want control over their data, and to protect their intellectual property (IP), whilst ensuring that the platform can handle huge amounts of data and connect with existing legacy systems.
It’s important to carefully consider what suppliers you want to work with, as well as the costs involved, and to work out how these platforms will connect with your existing IT architecture.
“The main challenge facing network architects when it comes to IoT is future-proofing the network, and having confidence that they’re making the right choice when it comes to sensor hardware, radio technologies or cloud platforms,” says Adam Leach, director of R&D at Nominet.
“Network architects want to avoid being locked into one vendor, and finding themselves tied to an outdated piece of hardware or software when something better comes to market.
Jan Maciejewski, industry analyst and contributing editor at the Internet of Business, explained that there is simply too much choice.
“Everyone claims to have a platform so the debate is now about a choosing the right one…and getting backwards compatibility. The scalability of platform and the integration with legacy systems is vital to choosing the right one.
“The issue is there is far too many out there. It’s about choosing the right platform for the problem you’re trying to revolve.”
Interoperability and standardisation
Networks can connect over a wide variety of communication protocols, and experts say that standardisation is going to be crucial here if IoT is to survive and thrive.
“The first challenge is planning the physical design of the network, and working out exactly where you’re going to place the sensors. After that, you have to plan a connectivity strategy for these sensors, and then implement it,” says Leach.
Yet, he admits interoperability is essential in the long term. “In the short term, it’s often more important to just get a system deployed. However, in the long-term interoperability is definitely a good thing. It’s the key to a sustainable IoT.”
Bas Geerdink, IT manager at ING, adds: “There is no clear understanding and wide-accepted set of standard protocols yet, although various attempts have been made through Bluetooth, ZigBee, NFC, Wi-Fi, LoRA.”
“Interoperability is really important. It is essential to get a good understanding of the protocols and other network standards to make the IoT efficient. If not, every new project will have to go through the same hassle again of setting up connections, programming interfaces, etc. We’ve seen before that things settle after a while even though there are a lot of vendors and standards, so I have confidence in that the market finds a way.”
Finally, network architects need to address the integration of all these new systems with existing, legacy platforms and analysis tools. This is not an insignificant challenge, so proper planning is vital to ensure new and old systems are properly interconnected, integrated and providing seamless operation.
Data storage and analytics
IoT devices will generate in huge amounts of data, and businesses subsequently have to decide on storage, analysis and deriving insight on that data. For example, how much data can be kept at the edge and how much in the cloud to be monitored by a PaaS or data analytics provider? And how can this data be transferred and stored while adhering to industry governance and international laws?
Analysis of this data can help firms to make proactive, data-driven decisions, while delivering an insight into their network operations. Meanwhile, predictive network analytics tools, offered alongside network management systems, can provide reporting utilities that offer detailed network performance indicators.
This can be as simple as automatically prioritising data traffic, determining whether a new service or application being rolling out will exceed current network capacity, or agreeing a certain time when extra bandwidth will be provided for data-sapping processes.
IoT sensors and devices
Wireless sensor networks are a collection of distributed sensors that monitor physical or environmental conditions, such as temperature, sound, and pressure. Data from each sensor passes through the network.
The network engineers will need to measure and maintain these, while from an IT standpoint there is also the issue of procurement. Data management, security and accessibility all need to be considered in that initial tendering process.
“Network architects should be aware of the grand upscale that the IoT brings,” adds Geerdink. “Where currently an architect has to deal with only a few connections points (servers, computers/laptops), the IoT brings an enormous increase in endpoints.”
IoT Build is the only summit exploring the technical challenges of building the next generation of enterprise IoT networks. Tailored for CTOs, CIOs and Directors of Enterprise Architecture, you will learn deployment lessons from early adopter IoT “builders”. Featuring case studies as diverse as Hive, Bristol is Open, ING and Stanley Black & Decker. | <urn:uuid:9fd81013-3425-4bc3-9b7f-5bf3108eb639> | CC-MAIN-2017-04 | https://internetofbusiness.com/5-challenges-building-iot-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93791 | 1,556 | 2.75 | 3 |
The experts at Kaspersky Lab present their monthly report about malicious activity on users’ computers and on the Internet.
March in figures. The following statistics were compiled in March using data from computers running Kaspersky Lab products:
- 241 mln network attacks blocked;
- 85,8 mln attempted web-borne infections prevented;
- 219,8 mln malicious programs detected and neutralized on users’ computers;
- 96,7 mln heuristic verdicts registered.
Intrusion techniques. Cybercriminals obviously have a soft spot for Java exploits – of the five exploits to appear in the Top 20 malicious programs on the Internet in March, three of them were for vulnerabilities in Java.
Malware writers are also surprisingly quick to react to announcements of new vulnerabilities. A good example of this is a vulnerability in Adobe Flash Player that allowed cybercriminals to gain control of a user’s computer. The vulnerability was announced by Adobe on 14 March and by the next day Kaspersky Lab had already detected an exploit for it.
Social engineering also remains a popular tool for the cybercriminals, who have no qualms about exploiting tragic events for their own benefit. The Japanese earthquake and tsunami, plus the death of Elizabeth Taylor, did nothing to buck this trend. Scammers and malware writers spread malicious links to their own versions of the “latest news”, created malicious websites with content connected in some way to the disaster in Japan and sent out ‘Nigerian’ letters making emotional requests for money to be transferred to the message sender in order to help those who have suffered.
Protection against antivirus programs. The malevolent users behind HTML pages that are used in scams or to spread malware are constantly coming up with new ways to hide their creations from antivirus programs. In February cybercriminals were using Cascading Style Sheets (CSS) to protect scripts from being detected. Now, instead of CSS, they are using <textarea> tags on their malicious HTML pages. Cybercriminals use the tag as a container to store data that will later be used by the main script. For example, Trojan-Downloader.JS.Agent.fun at 9th position in the Top 20 rating of malicious programs on the Internet uses the data in the <textarea> tag to run other exploits.
In addition, according to Kaspersky Security Network (KSN) statistics, malware writers are actively modifying the exploits they use in drive-by attacks in order to avoid detection.
Mobile threats. At the beginning of March, Kaspersky Lab’s experts detected infected versions of legitimate apps on Android Market. They contained root exploits that allow a malicious program to obtain root access on Android smartphones, giving full administrator-level access to the device’s operating system. As well as a root exploit, the malicious APK archive contained two other malicious components. One of them sent an XML file containing IMEI, IMSI and other device information to a remote server and awaited further instructions. The other component had Trojan-downloader functionality.
More detailed information about the IT threats detected by Kaspersky Lab on the Internet and on users' computers in March 2011 is available at: www.securelist.com/en. | <urn:uuid:3828ff03-8175-4dfc-98d2-c9d98cc7b159> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2011/Malware_in_March_Cybercriminals_Extend_Repertoire_of_Tricks_to_Avoid_Detection | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00406-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91712 | 671 | 2.71875 | 3 |
Large-scale, worldwide scientific initiatives, such as the one that found the Higgs Boson or the one that is currently researching the depths of proteomics, rely on some cloud-based system to both coordinate efforts and manage computational efforts at peak times that cannot be contained within the combined in-house HPC resources.
Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
On July 4 of last year, one of the largest physics experiments in history announced the finding of the Higgs Boson. The discovery was another step in the verification of the Standard Model of elementary particles, and it was largely a result of the data collected by the ATLAS detector that was later stored, analyzed, and used in simulations in computational centers around the world.
Naturally, CERN is equipped with significant computational capabilities as it sifts through the swaths of data created by the LHC. However, a great deal of that data was being sent out to scientists across the world in over a hundred computing centers located in over 40 countries.
As a result, Google stepped forward in August of last year to offer its Compute Engine services for overflow scientific computing periods. According to Panitkin, those spikes would occur before major conferences, overloading the existing computational framework. These overflow spikes represent an intriguing phenomenon, a macro-scale example of a problem that many mid-sized research institutions face on their own. Many of those institutions house their own HPC cluster that handles the majority of their heavy duty computational leg-work. When those resources are exhausted at peak times, they turn to the cloud.
When that problem manifests itself at key times across a research project that spans hundreds of facilities across the globe, that becomes a massive, worldwide HPC cloud computing challenge.
As such, the ATLAS project was invited by Google to test the Google Compute Engine in an effort to complete that challenge.
The experience has gone well so far, according to Panitkin. “All in all, we had a great experience with Google Computing. We tested several computational scenarios on that platform…we think that Google Compute Engine is a modern cloud infrastructure that can serve as a stable, high performance platform for scientific computing.”
The ATLAS collector, diagrammed below, was designed to intake and record 800 million proton-proton interactions per second. Of those 800 million collisions per second, only about 0.0002 Higgs signatures are detected per second. That translates to one signature for every 83 minutes or so. The computing systems have to sift through that huge dataset containing information from each of those almost billion interactions a second to find that one distinct pattern.
Thankfully, much of the ATLAS data is instantly filtered and discarded by an automatic trigger system. Were this not the case, the collector would generate a slightly unsustainable petabyte of data per second.
Adding to the challenge that the enormous amount of data presents is the very particular signature the ATLAS project was looking for. According to Panitkin, sifting through that much is akin to trying to find just one person in a system of a thousand planets of the same population as Earth. To help visualize what that looks like, the above picture represents all the possible signatures while the diagram below shows the one specific indicator of the Higgs Boson.
CERN collects the data and initially distributes it to its 11 tier-one centers, as shown in the diagram below. The cloud and specifically the Google Compute Engine enter the picture in tier two, where about two hundred centers across the globe simulate their respective sections based on the tier-naught CERN data.
Combining all of those resources into a shared system is essential for scientific researchers, as they cull information from other tests and simulations run. According to Andrew Hanushevsky, who presented alongside Panitkin at the Google I/O event, the system was aggregated using the XRootD system. XRootD, coupled with cmsd, was instrumental and combining and managing the thousand-core PROOF cluster made for ATLAS as well as the 4000-core HTCondor cluster for CERN’s collision analysis.
The important aspect was ensuring the system acted as one, as Hanushevsky explained. “This is a B tree, we can split it up anyway we want and this is great for doing cloud deployment. Part of that tree can be inside the GCE, another part can be in a private cloud, another part in a private cluster, and we can piece that all together to make it look like one big cluster.”
With that in place, the researchers could share information across the network at an impressive transfer rate of 57 Mbps transfer rate to the Google Compute Engine.
Finally, according to Panitkin, the computations done over GCE were impressively accurate. The system reported, according to Panitkin, “no failures due to Google Compute Engine.”
The best science requires extensive collaboration. Global projects such as the one that found the Higgs Boson mark the pinnacle of that collaboration, and these efforts can only grow stronger with the betterment of large-scale cloud-based computing services like Google Compute Engine. | <urn:uuid:6e838ad9-064d-4a19-8ba1-d8f6488a2bb7> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/05/21/cern_google_and_the_future_of_global_science_initiatives/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00130-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949776 | 1,105 | 2.75 | 3 |
Data center networking topology has improved significantly for a few years. With the developments of high speed switching device and multilayer network architecture, we have more powerful data centers with low latency, scalability and higher bandwidth. But due to the exponential increase of internet traffic and emerging web application, still data centers performance is not up to the market to meet the requirement. In order to face the increasing bandwidth demand and power requirement in the data centers, new connection scheme must be developed that can provide high throughput, low latency and less power consumption. So the optimal solution would be using optical fiber link between server to access switch.
Figure 1 shows a block diagram of a typical data center with four layers hierarchical network architecture from the bottom to top. The first layer is the servers which are connected to upper access layer switch. The second layer is access switch where many servers are connected with ToR switch. The third layer is aggregation layer which is connected with bottom access layer switch. Forth or highest layer is core layer where core switches are connected with router at top and aggregation layer at bottom to form a data center block. When a request is generated by a user, the request is forwarded through the internet to the top layer of the data center. The core switch devices are used to route the ingress traffic to the appropriate server. The main advantage of this architecture is that it can be scaled easily and has a good fault tolerance and quick failover. But the main drawback is high power consumption due to the several layer architecture switches and latency introduce due to multiple store-and-forwarding processing.
Figure 1. Traditional data center architecture
As the amount and size of traffic increase exponentially, the architecture of traditional data center is not sufficient to handle traffic and to meet the future challenges. An electrical-optical hybrid network has been proposed. It connects servers with upper layer switch with high availability and low latency. Connecting servers directly with upper layer using both electrical and optical link would fulfill the requirement for the propose scheme. Figure 2 depicts the proposed architecture. The blue lines indicate electrical connectivity and black dashed lines indicate optical connectivity.
Figure 2. Proposed data center architecture
Due to the emerging demand, servers require high bandwidth and low latency communication. Optical connectivity consumes less power at the same bandwidth. Connecting server directly using optical and electrical links will meet the demand and at the same time better load balancing is achieved.
Data center interconnection could be provided by several schemes. Interconnection could be at layer 1, 2 or 3. Considering transportation option and layered architecture of data center, interconnection is recommended at layer 2 aggregation layer. Figure 3 shows a data center interconnectivity solution. Each data center is connected at its aggregation layer using high speed optical fiber.
Figure 3. Data center interconnects architecture
Each server has both optical and electrical connectivity. The electrical connectivity is usually useful for communication between servers and handling short data transfer, whereas optical connectivity is used for long bulked data transfer. This is necessary when transferring data between data centers that require large bandwidth and low latency. At the same time because of layered architecture server can communicate with each other via access layer and don’t need to go the upper layer. Load balancing is achieved due to both hybrid electrical and optical links. Due to the optical connectivity virtual Ethernet port aggregator switching situation enhanced, it becomes easier to move server virtually not only within the data center but also among the data centers. The hybrid links intra and inter data center networking scenarios have significantly improved.
As the number of optical switches is introduced in every layer, the load sharing capability of the network is increased and power consumption for data center is reduced. In each layer, the electrical switch is reduced and replaced with optical switch. So the overall cost and power consumption associated with electrical switches have reduced.
A electrical and optical architecture is presented. Use of optical connectivity has some advantages such as less power consumption, higher bandwidth and low latency. Introducing both types of switches in each layer helps in load balancing. This also helps us when we interconnect our data centers where we require large bandwidth between servers. Because of higher bandwidth, it is easier to move server virtually not only within the data center but also among the data centers. So a hybrid electrical and optical networking topology improves the overall scenarios for data center and also for big data network. | <urn:uuid:15a722af-890e-4e98-8d78-17e73a4dae36> | CC-MAIN-2017-04 | http://www.fs.com/blog/deploying-hybrid-electrical-and-optical-network-to-achieve-big-data-transfer.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00462-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.908771 | 876 | 2.671875 | 3 |
ETL/ELT and Big Data
What’s the difference between ETL and ELT and what works best with big data?
ETL stands for “extract, transform, and load”. It’s the traditional set of functions that lets organizations extract data from numerous databases, applications, and systems, transform it as appropriate, and load it into another database, a data mart, or a data warehouse for analysis, or send it along to another operational system to support a business process.
As you may have suspected, ELT stands for “extract, load, and transform”. This is a process whereby the data is first loaded and then transformed, used primarily for big data and working with data lakes. Data lakes are storage repositories that hold large quantities of raw data in their native format. This can include structured or unstructured information, or virtually any kind of data.
For big data scenarios, using the ELT process allows you to create copies of the source data and move them into Hadoop. This is not as resource-intensive as ETL, where the transporting and transforming of data can be cumbersome. In ELT, because the data is in Hadoop and takes advantage of large-scale parallel processing, there is less stress on source systems, which shortens the time frame for transformation.
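The difference is easiest to see side by side. The sketch below is a deliberately tiny, hypothetical illustration, with Python lists standing in for the warehouse and the data lake; in a real deployment the ELT transform would run inside Hadoop or the target platform:

```python
raw_rows = [{"amount": "12.50"}, {"amount": "7.25"}]  # the extract step

# ETL: transform first, then load only the cleaned rows.
warehouse = [{"amount": float(r["amount"])} for r in raw_rows]

# ELT: load the raw rows as-is (cheap, minimal load on the source),
# then transform later inside the target, where compute scales out.
lake = list(raw_rows)
lake = [{"amount": float(r["amount"])} for r in lake]

print(warehouse)
print(lake)
```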
Few solutions can do it all. Rather than piecing together separate tools for extracting, loading, and transforming, look for a comprehensive big data integration platform that can execute all three efficiently and that works with your existing applications and processes.
[ABOVE: Narrated by Apple CEO Tim Cook, this is a commitment to corporate environmental responsibility.]
Switch it off
Apple's newly updated environmental responsibility pages tell us iMacs use 0.9 watts of electricity in sleep mode, in contrast to 35 watts used by the original iMac.
That's significant, but millions of Mac users can make a big impact by simply turning their computer off when it isn't being used.
If you leave your iMac in sleep mode for ten hours a day, switching it off instead saves 9 watt-hours of electricity per day -- that's 3,285 watt-hours per year. If we assume 70 million active Mac users (and, for the purposes of this illustration, pretend they all run modern iMacs, which they don't), that's a potential saving of 229,950 megawatt-hours of electricity each year.
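The arithmetic behind those figures is a short chain of multiplications. This is a minimal sketch; the ten-hour figure and the 70-million-user base are the illustration's assumptions:

```python
SLEEP_W = 0.9          # iMac draw in sleep mode, in watts
HOURS_PER_DAY = 10     # assumed time per day the machine would otherwise sleep
USERS = 70_000_000     # assumed active Mac users, all on modern iMacs

wh_per_user_per_year = SLEEP_W * HOURS_PER_DAY * 365   # 3,285 Wh
mwh_total = wh_per_user_per_year * USERS / 1e6         # Wh -> MWh
print(f"{wh_per_user_per_year:,.0f} Wh per user per year")
print(f"{mwh_total:,.0f} MWh saved in total")
print(f"about {mwh_total / 8760:,.1f} MW of continuous generation")
```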
The Palo Verde plant in Arizona (the biggest such plant in the US) generates 3,937 megawatts.
Averaged across the year, that saving works out to roughly 26 megawatts of continuous generation. That is only a small fraction of a single plant like Palo Verde, but it is a meaningful dent in global energy demand from nothing more than switching machines off when they aren't being used.
Battery-powered mobile devices use much less power than PCs, but they need recharging. Many iPhone, iPad and other device users leave battery chargers plugged in when they aren't in use. A charger draws little power in this state, but with perhaps 1.1 billion mobile devices in use today, that small amount of wasted electricity is significant. The cost saving to you is minimal, but multiply that small drain by a billion users and the figures add up: the electricity used annually by 170 million iPhone 5s would power all the homes in Cedar Rapids, Iowa, for a year. Think, too, of the money being handed over to greedy electricity firms for the convenience of leaving your charger plugged in. It's free money for them at little cost to you, but what's the global cost?
Just how long does it take to walk to your printer, television, USB hub or external hard drive and switch it on or off? These devices consume power in standby mode -- not always a lot, but multiply that waste by millions of computer users and the numbers add up. Switching off electrical devices when they're not in use may only shave a dollar or two off your utility bill, which is nice, but the cumulative effect on global energy supply is substantial.
- When purchasing electrical equipment check the label. Does it tell you how much electricity the device requires in normal use in a clear and intelligible way?
- Does the manufacturer offer any public statement explaining its environmental commitment?
- Does the manufacturer of the device you're considering offer recycling support?
Recycling schemes exist. Some even pay you for your old electrical devices. Don't just throw these things in the trash for an inevitable journey into landfill -- check with retailers, manufacturers and local services for recycling facilities.
[ABOVE: A partial scan of a full page Apple pro-environment ad on the back page of a newspaper this morning. "Some ideas we want every company to copy".]
Each of these steps makes little difference in isolation, but there are many millions of computer users on the planet, and the potential difference to global energy demand if millions take these few steps is significant. Doing so will also encourage other consumer electronics firms to do what Apple wants them to do and the Earth needs them to do: "benchmark" its commitment to greener IT.
The unceasing arms race between cyber attackers and cyber defenders has reached unprecedented levels of sophistication and complexity. As defenders adopt new detection and response tools, attackers develop new techniques and methods to bypass those mechanisms. And deception is one of the most effective weapons on both sides of the game.
Deception techniques have traditionally been among the favorite methods in the attackers’ arsenal. Surprise and uncertainty provide the attacker with an inherent advantage over the defender, who cannot predict the attacker’s next move. Rather surprisingly, however, the broken symmetry can also be utilized by the defender.
Moving Target Defense (MTD) aims at creating asymmetric uncertainty on the attacker’s side, by changing the attack surface. The US Department of Homeland Security (DHS) defines MTD as, "the concept of controlling change across multiple system dimensions in order to increase uncertainty and apparent complexity for attackers, reduce their window of opportunity and increase the costs of their probing and attack efforts."
This point of view comes from the understanding that absolute security is not an achievable goal; there is an asymmetry between the attackers' and the defenders' costs and efforts. Therefore, there is a need to implement a new paradigm for changing the costs and efforts in this adversarial game.
Moving Target Attacks
Over the years, numerous techniques have been developed to enable recurring modification of cyber attacks. The table below lists the more common moving target attack techniques, each of which is explained afterwards:
Deception techniques used by the attackers
| Technique | Purpose |
| --- | --- |
| Polymorphism | Change malware signature |
| Metamorphism / self-modification | Change malware code on the fly |
| Obfuscation | Conceal code and logic |
| Self-encryption | Change malware signature and hide malicious code and data |
| Anti-VM/sandboxes | Evade forensic analysis by changing behavior in forensic environments |
| Anti-debugging | Evade automated/manual investigation by changing behavior in forensic environments |
| Encrypted exploits | Evade automated/manual investigation by changing parameters and signatures |
Polymorphism is commonly used by malware authors in order to evade AV detection. By encrypting the malware’s payload, including its code and data, the attackers gain two main advantages. First, they can easily generate different instances of the same malware by using multiple encryption keys. Obviously, this renders the signature-based anti-malware facilities ineffective, as new instances have a new and unknown static signature. Secondly, the malware can bypass even deeper static analysis since its code and data are encrypted, so not exposed to scanners. Using metamorphism techniques, the malware’s author complicates the detection further by changing the in-memory code at every execution.
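A harmless sketch shows why static signatures fail against this technique: encrypting the same payload under three random keys yields three entirely different byte streams, and therefore three different hashes. The payload and key size here are arbitrary stand-ins:

```python
import hashlib
import os

payload = b"one fixed payload, identical in every variant"

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Simple repeating-key XOR, standing in for a packer's cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

for _ in range(3):
    key = os.urandom(16)
    variant = key + xor_encrypt(payload, key)   # the key travels with the body
    print(hashlib.sha256(variant).hexdigest())  # a different "signature" every time
```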
While polymorphism and metamorphism aim at evading automatic file and memory scanning, obfuscation is also effective against manual inspection of the code. Using obfuscation, the malware’s author creates code which is extremely difficult for a human analyst to understand. This is achieved by creating payload with obscured strings, dummy code and complicated function call graph which can be re-generated randomly with each instance of the malware.
Sandboxes and virtual machines are essential tools for malware analysts. Consequentially, modern malware can employ anti-VM and anti-sandbox mechanisms to detect if they are running within a virtualized sandboxed environment. If a VM or sandbox is detected, the malware alters its behavior and avoids any malicious activity. Once executing on real systems, after being tagged as benign, the malware starts its malicious activities. In the same manner, malware can use anti-debugging techniques to void debugging and run-time analysis.
Encrypted and targeted exploits have been used recently as part of exploits delivered through web pages ('exploit kits'). To avoid detection, URL patterns, host server, encryption keys, and file names are being changed on every delivery. These exploits can also evade honeypots by limiting the number of accesses to the exploit from the same IP.
Finally, some attacks begin the exploitation phase only after some real user interaction (e.g., web-page scrolling). By doing this, the attacker ensures execution on a real machine rather than in automated dynamic analysis.
These effective deception methods have rendered defensive mechanisms ineffective over the years and have given attackers a position of superiority. The defender endlessly chases the attacker, investing massive resources and effort merely to detect and prevent previous kinds of attacks. Consequently, the traditional symmetry between defenders and attackers is broken: the attacker knows whom he is going to attack, when, where and with which weapons, while the defender is in a state of constant uncertainty.
Moving Target Defense
There are three main categories of MTD security: (1) network level MTD, (2) host level MTD, and (3) application level MTD.
Network-level MTD includes several mechanisms developed over the years. IP-hopping, for example, was used to change the host's IP address, thus increasing the network's complexity as seen by the attacker. Transparency is achieved by keeping the real host's IP address and associating each host with a virtual random IP address.
Some techniques aim at deceiving the attacker at the phase of network mapping and reconnaissance. The techniques include using random port numbers, extra open or closed ports, fake listening hosts, and obfuscated port traffic. Other techniques aim to provide the attacker with fake information about the host and OS type and version by, say, generating random network services responses which prevent OS identification.
Host-level MTD includes changing the hosts and OS level resources, naming and configurations to trick the attacker.
Application-level MTD involves changing the application environment in order to trick the attacker. For example, Address Space Layout Randomization (ASLR), now standard in mainstream operating systems, randomly arranges the memory layout of a process's address space to make it harder for an adversary to execute its shellcode.
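ASLR is easy to observe from user space. In CPython, id() returns an object's memory address, so running this sketch twice on an ASLR-enabled system typically prints different addresses each run -- exactly the uncertainty the defense imposes on shellcode:

```python
buf = bytearray(4096)   # a freshly allocated heap object
print(hex(id(buf)))     # CPython: id() is the object's address; re-run and it moves
```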
Other techniques involve changing the application type and versioning and rotating them between different hosts, or using different settings and programming languages to compile the source-code, generating different code in every compilation. Table 2 lists the common techniques used in the different categories of MTD.
Deception techniques used by the defenders
| Information system part | Deception method |
| --- | --- |
| Network | Route change; random addresses, names and ports |
| Host | Change host address; replace host image |
| OS | Change version and release; change host ID; change memory addresses, structures, resource names |
The “Moving Target Defense” paradigm promises to break this asymmetry between the attacker and the defender: now the attacker must also operate under uncertainty and unpredictability, where previously these were the concerns of the defender alone.
While network-level MTD is an interesting concept, randomizing IP addresses, network topology and configuration is not sufficient. The final destinations for attackers are the hosts, servers and endpoints located behind the networks, firewalls and routers. The operating system and applications are the lucrative targets for zero-day exploits, malware and Advanced Persistent Threats (APTs), and hence they serve as the main playground in the attacker-defender game.
Admittedly, the MTD paradigm is still in its infancy, yet it is safe to predict that it's best focused on applications and operating systems.
Some new technologies are taking the MTD paradigm to the next level by making environmental modifications to the application and the operating system in a manner unknown to the attacker. Consequently, the elementary presuppositions the attacker relies on when planning and deploying an offensive are made irrelevant. Each function call, jump address or resource access entails potential failure, along with full exposure of the attack. Under these conditions the costs of an attack rise steeply while its probability of success sharply declines, making attacks practically and economically far less feasible.
In the near future we are going to witness the adoption of MTD in the seemingly endless cyberwar between defenders and attackers. Will it bring this war to an end? It is still too early to tell, but MTD stands out as a new factor that forces new rules in this old adversarial game.
Mordechai Guri is the Chief Science Officer of Morphisec, an innovator in moving target defense. He is also a security researcher, project manager and lecturer at the Ben Gurion University of the Negev, in the cybersecurity labs division.
This story, "Moving target defense vs. moving target attacks: The two faces of deception" was originally published by Network World. | <urn:uuid:0879cbee-387f-4c35-b0de-e30208308c2f> | CC-MAIN-2017-04 | http://www.itnews.com/article/3018881/tech-primers/moving-target-defense-vs-moving-target-attacks-the-two-faces-of-deception.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00360-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92423 | 1,766 | 2.609375 | 3 |
The Reliability of Flash Drives
By Umesh Maheshwari – Co-founder and CTO
Despite the increasing popularity of flash drives in the data center, very little has been published on their failure characteristics. So, it was a welcome relief to see a paper on Flash Reliability in Production based on flash drives in Google’s data centers.
On the one hand, the paper confirmed what many of us have expected and some of us have observed in practice. For instance, flash drives fail differently from disk drives for each of the two major failure modes:
- Whole drive failure (requiring drive replacement): Flash drives have a lower annual failure rate than disk drives.
- Partial data loss: Flash drives have a higher rate of uncorrectable errors than disk drives. (Each sector, typically from 0.5KB to 4KB, is protected by an error correction code, or ECC. When there are too many bit errors within a sector, ECC is unable to correct the errors, resulting in the loss of that sector.)
Storage system vendors need to understand these failure modes because, when drives fail or lose data, the system must compensate for the failures so that it does not lose data.
On the other hand, the paper uncovered a mystery: that a high raw bit error rate (RBER) is not predictive of uncorrectable errors. Here, “raw” means before applying error correction. A high RBER is generally considered a harbinger of uncorrectable errors. Unfortunately, the paper did not offer much explanation for the apparent lack of correlation between RBER and uncorrectable errors.
Below I explain how Nimble Storage systems are designed to avoid data loss. I also offer a plausible explanation for the mysterious lack of correlation between RBER and data loss.
Whole Drive Failures
It is not surprising that flash drives have a lower replacement rate than disk drives. Disk drives have mechanical and magnetic components that are more likely to result in a whole-drive failure than the mostly solid-state components within a flash drive.
The paper reports the following replacement rates:
- Flash drives: 4% to 10% over four years, or 1% to 2.5% annually on average;
- Disk drives: 2% to 9% annually, based on a previous study.
Nimble Storage has been selling and monitoring systems with flash and disk drives for over 5 years. Our observed failure rates are lower than those reported by the paper, but the relative ratio is consistent with the reported studies: the replacement rate of flash drives is about a third of the replacement rate of disk drives.
But, there is a catch. While disk drives have more complex hardware with mechanical and magnetic components, flash drives have more complex firmware to conduct address translation and garbage collection. Google’s flash drives run proprietary firmware that is likely streamlined and ruggedized for their file system. Off-the-shelf flash drives need to support general-purpose applications, requiring more complex firmware.
So, while the average replacement rate of flash drives is low, it can vary greatly by the make and firmware version, despite rigorous testing by drive and system vendors.
Another factor that introduces risk for flash drives is that the industry is still in its infancy. Disk drives were invented 60 years ago, yet we are still learning new facts about their failure characteristics! For instance, another paper published in the same conference points to how relative humidity plays a big role in disk failures. In contrast, flash drives became popular only about 10 years ago, so we should expect to run into some surprises and road bumps over the next few years or even decades.
How can we deal with this uncertainty? At Nimble Storage, we are paranoid about reliability. While other systems employ dual parity RAID or triple mirroring (each of which is able to tolerate the failure of any two drives), Nimble systems employ triple parity RAID (which is able to tolerate the failure of any three drives).
In addition to triple parity, our All Flash arrays include a reserved spare. When a drive fails, the array is able to rebuild the failed drive without needing to wait for a replacement. This shrinks the window when reliability and performance are degraded. The spare is reserved so that it is always available, even when the system is full. (In some other systems, the spare space disappears as the system is filled with data. That is like tossing the spare tire when the truck is loaded.)
One might think that all this parity and sparing would hurt the usable capacity relative to a system that uses only dual parity without reserved spares. That would indeed be true if the RAID group in the two systems had the same number of drives. But we leverage the higher degree of protection from triple parity and reserved sparing to support a wider RAID group with more drives, which reduces the relative overhead. The net effect is a win-win: the system has higher reliability as well as higher usable capacity.
Partial Data Loss
The biggest concern with flash drives is partial data loss from uncorrectable errors. The paper reports that a whopping 20% to 63% of flash drives lost some data in a four-year period, compared to only 3.5% of disk drives in a 2.5-year period based on a previous study.
An uncorrectable error happens when there are so many bit errors within a sector that ECC cannot recover it. Flash drives include a healthy dose of ECC within each sector, thus correcting many more bit errors than traditional ECC in disk drives. But the relentless drive towards smaller flash cell size and degradation from erase cycles and age result in a net increase in uncorrectable errors.
How can we compensate for uncorrectable errors? The same RAID parities that protect against whole drive failures also protect against partial data loss, because the lost data can be reconstructed from the other drives in the RAID group including the parities.
However, if we truly want to support the failure of up to K drives and support uncorrectable errors at the same time, the system would need K+1 parities. To see why, consider what happens when K drives have failed and the system has only K parities. The system rebuilds the failed drives by reading all of the data in the surviving drives. If any of the surviving drives harbors even a single uncorrectable error, the odds of which are high, the system cannot reconstruct the corresponding stripe.
An additional parity can solve the problem, but a RAID parity is an expensive way to reconstruct a few damaged sectors. Depending on the number of drives in the RAID group, each parity can reduce usable capacity by 5% to 15%.
A more efficient mechanism is to introduce what may be called “intra-drive” or “vertical” parity as opposed to the traditional “inter-drive” or “horizontal” parity. The system divides each drive into chunks, where each chunk includes say a hundred sectors. The system appends one or more parity sectors to each chunk, which is able to recover the loss of as many sectors within the chunk. This intra-drive parity uses much less space than inter-drive parity, typically about 1% of the raw capacity. The figure below shows a RAID stripe with both inter-drive and intra-drive parities.
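A toy example makes the reconstruction mechanics concrete. Real triple-parity RAID uses Reed-Solomon-style codes rather than plain XOR, and this sketch shows only a single XOR parity, but the principle -- lost data is recomputed from the survivors plus parity -- is the same for both inter-drive and intra-drive parity:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe across four data "drives" plus one XOR parity drive.
stripe = [b"\x11\x22", b"\x33\x44", b"\x55\x66", b"\x77\x88"]
parity = reduce(xor_bytes, stripe)

# Drive 2 fails (or one of its sectors is uncorrectable):
survivors = [chunk for i, chunk in enumerate(stripe) if i != 2]
rebuilt = reduce(xor_bytes, survivors + [parity])
assert rebuilt == stripe[2]
print("rebuilt:", rebuilt.hex())
```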
This is why we call our parity “triple plus”: there are three inter-drive parities and some intra-drive parity. And, there is an additional reserved spare, which is not reflected in the name. Our marketing folks say we should call it “triple plus plus”. Really, we need industry-standard terminology for quantifying attributes such as inter-drive parities, intra-drive parities, and spares.
All this talk of parities presupposes that the loss of data is detectable in the first place. When a whole drive fails, the failure is relatively evident. When a sector is uncorrectable, the drive normally returns an error. But we know not to trust drives to always catch the problem.
At Nimble, we have added system-level checksums to detect silent corruptions within drives. The system protects every block, whether data or metadata, whether on disk or flash, with a checksum and a self-id. Every time the system reads a block, it validates the checksum and the self-id. If either does not match, the system reconstructs the block from inter-drive and intra-drive parities. (The self-id catches a class of drive errors where reads or writes might be misdirected to a false address.)
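In outline, the read path looks something like the following sketch. The block format and field sizes here are invented for illustration; the actual on-disk layout is not public in this detail:

```python
import zlib

def pack_block(addr: int, payload: bytes) -> bytes:
    body = addr.to_bytes(8, "big") + payload            # self-id = the block's own address
    return zlib.crc32(body).to_bytes(4, "big") + body   # checksum over id + data

def read_block(addr: int, raw: bytes) -> bytes:
    stored, body = raw[:4], raw[4:]
    if zlib.crc32(body).to_bytes(4, "big") != stored:
        raise IOError("checksum mismatch: reconstruct from parity")
    if int.from_bytes(body[:8], "big") != addr:
        raise IOError("self-id mismatch: misdirected read or write")
    return body[8:]

blk = pack_block(42, b"user data")
assert read_block(42, blk) == b"user data"
```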
Together, the checksums and multiple parities provide strong reliability against whole drive failures and partial data loss.
Raw Bit Error Rates and Uncorrectable Errors
One would expect RBER and uncorrectable errors to be correlated. After all, an uncorrectable error happens when there are so many bit errors within a sector that ECC cannot recover it.
Yet, the study on Google drives found no correlation between the two! The authors sliced and diced the information in ten different ways and still found no correlation.
A plausible explanation lies in the way RBER is calculated, which is also how the study calculated it: sum up the number of bit errors across all sectors read and divide by the total number of bits read. The problem is that this gives the average number of bit errors across all sectors, when what matters more is how those bit errors are distributed. If the bit errors are distributed somewhat uniformly across all sectors, ECC has a good chance of correcting every sector. But if some sectors have a higher concentration of bit errors, ECC might not be able to correct them.
Specifically, the paper reports that the 99 percentile RBER across all drives is on the order of 1/10^7 to 1/10^5. Even at the high end of the range, the average number of bit errors in a 1KB sector is only 1/10, which means that if the bit errors were evenly spread, only one in 10 sectors will have a single bit error. On the other hand, the ECC in modern flash drives can correct many 10s of bit errors within each sector. No wonder then that the average number of bit errors is not correlated with the rate of uncorrectable errors.
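The point is easy to demonstrate with a toy model: two drives with the identical average RBER but different error distributions produce wildly different uncorrectable-error counts. Sector size, ECC strength and the error patterns below are illustrative:

```python
SECTORS = 1_000_000
SECTOR_BITS = 8 * 1024   # 1 KB sectors
ECC_LIMIT = 40           # bit errors per sector the ECC can correct

# Drive A: errors thinly spread, at most one per sector.
drive_a = [1 if i % 12 == 0 else 0 for i in range(SECTORS)]

# Drive B: exactly the same total error count, concentrated in a few bad sectors.
total = sum(drive_a)
hot, rem = divmod(total, 1000)
drive_b = [1000] * hot + [rem] + [0] * (SECTORS - hot - 1)

for name, drive in (("A (spread)", drive_a), ("B (clustered)", drive_b)):
    rber = sum(drive) / (SECTORS * SECTOR_BITS)
    bad = sum(e > ECC_LIMIT for e in drive)
    print(f"drive {name}: RBER = {rber:.1e}, uncorrectable sectors = {bad}")
```

Both drives report the same RBER, yet drive A loses nothing while drive B loses dozens of sectors.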
Here is an analogy. We expect places at higher altitude to have more snow. Now, imagine a world where each continent is mostly low and flat at some fixed altitude between say 0 to 100 meters (varying by the continent). The snowline is at about 1000 meters. The continents have some high mountains that rise above the snowline, but the mountains are so narrow and the continents so expansive that the mountains do not much alter the average altitude of the continent. One would find that there is not much correlation between the average altitude of the continent and the amount of snow it has. To uncover the correlation, one could measure the number of acres in different bands of altitude (e.g., 1—100m, 101—200m, 201—300m, etc.).
Similarly, in the world of flash drives, one could measure the number of sectors in different bands of bit errors (e.g., 1—10, 11—20, 21—30, etc.). Ideally, the drive should provide such a histogram as part of its S.M.A.R.T. interface. At the least, it should provide the number of sectors in the “danger zone”, the band just below the threshold number of bit errors that ECC is unable to recover from. Then we will be able to study how the distribution of bit errors changes with factors such as erase cycles and age, and thereby predict the incidence of uncorrectable errors. I discussed this possibility with the authors of the paper after it was presented at USENIX FAST 2016, and they seemed to agree.
The paper is a huge step forward in understanding the failure characteristics of flash drives. Based on the history of disk drives, we should expect to learn new facts about flash for many years to come.
- Umesh Maheshwari | <urn:uuid:9dcccda9-b601-48ca-8129-a36fe1159570> | CC-MAIN-2017-04 | https://www.nimblestorage.com/blog/the-reliability-of-flash-drives/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00268-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944404 | 2,495 | 2.578125 | 3 |
ENERGY USE & CLIMATE CHANGE
Energy & Climate Change Strategy
EMC’s primary GHG emissions arise from the electricity needed to run our business—including our supply chain—and power our products. Therefore, our energy and climate change strategy focuses on the following key areas:

I. Reducing emissions from our own operations by:
- Decreasing the demand for energy
- Maintaining a highly efficient infrastructure
- Designing and operating data centers for energy efficiency
- Identifying opportunities to adopt renewable energy sources that are economically and environmentally sound
II. Reducing emissions across our supply chain by:
- Engaging suppliers in measuring and reporting
- Collaborating with suppliers to reduce their emissions
- Working with the IT industry to develop standards for reporting supply chain emissions
III. Helping customers reduce their emissions by:
- Supplying energy-efficient products
- Developing innovative approaches to manage the exponential growth of data in their operations
- Delivering services to help customers implement the most energy-efficient solutions for their businesses
- Supplying information solutions to optimize business functions, accelerate research, leverage data assets, and enhance public infrastructure
We began measuring our GHG emissions in 2005. Since then, our energy intensity by revenue—the amount of global GHG we emit per $1 million we earn—has declined by over 40 percent, from 32.47 to 19.09 metric tons. Based on 2012 data, we achieved our 2015 goal of reducing emissions per revenue by 40 percent from 2005. We’ve also made progress toward our other key performance indicators, including U.S. GHG emissions and global absolute GHG emissions. We did not meet our 2012 goal of reducing energy consumption per employee by 40 percent from the base year of 2005; however, we made significant progress by achieving a 36 percent reduction. In 2012, we gained efficiencies through investments in projects to reduce energy use and through incentives offered by local utility companies, including National Grid and NSTAR in Massachusetts, and Duke Energy in North Carolina. To learn more, visit the Data Dashboard and Efficient Facilities.
While we are pleased to have met our 2012 and 2015 emissions per revenue goals, we recognize there is more we can do to reduce emissions on a global scale. The following is a snapshot of our goal setting and revision process during the past seven years. As we think forward to 2013, we plan to further review our goals and targets to make sure they still closely align with our priorities and material issues.
DETERMINING OUR GOALS
To set our emissions targets, we began with the imperative to achieve an absolute reduction of at least 80 percent by 2050 in accordance with the Intergovernmental Panel on Climate Change’s (IPCC’s) Fourth Assessment Report recommendations. We then modeled various reduction trajectories to help us identify a solution that would be elastic enough to adjust to changes in our business while achieving a peak in absolute emissions by 2015, in accordance with recommendations from the 2007 Bali Climate Declaration.
Our model was based on the Corporate Finance Approach to Climate-stabilizing Targets (C-FACT) proposal presented by Autodesk in 2009. The model calculates the annual percentage reduction in intensity required to achieve an absolute goal. We selected this approach because intensity targets better accommodate growth through acquisitions (in which net emissions have not changed but accountability for them has shifted), and aligns business performance with emissions reductions performance rather than forcing tradeoffs between them. Setting an intensity trajectory also drives investment beyond one-time reductions to those that can be sustained into the future.
The C-FACT system, however, is “front-loaded” as it requires a declining absolute reduction in intensity each year. EMC developed a variant of the model that requires reductions to be more aggressive than the previous year. This makes better economic sense for the company as it leverages the learning curve for alternative fuels as they become more efficient and cost-effective. Please see the figure at the left for more information about the trajectories studied.
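In its simplest constant-rate form, the required annual intensity reduction can be derived directly from the absolute target and an assumed growth rate. This sketch uses a purely illustrative growth figure, and EMC's actual variant escalates the cuts over time rather than holding them constant:

```python
BASE_YEAR, TARGET_YEAR = 2005, 2050
ABS_CUT = 0.80   # absolute emissions reduction required by 2050
GROWTH = 0.07    # assumed average annual revenue growth (illustrative)

years = TARGET_YEAR - BASE_YEAR
# Emissions = intensity * revenue, so intensity must shrink enough to
# offset revenue growth *and* deliver the absolute cut.
intensity_factor = (1 - ABS_CUT) / (1 + GROWTH) ** years
annual_cut = 1 - intensity_factor ** (1 / years)
print(f"required intensity reduction: {annual_cut:.1%} per year")
```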
REPORTING AND ACCOUNTABILITY
We are committed to reporting our progress transparently and disclosing our GHG emissions annually to CDP. To learn more, see the link in the sidebar for our 2012 Response to CDP.
Our Ireland Center of Excellence (COE) also continues to participate in the European Emissions Trading Schemes, which is managed by the Ireland Environmental Protection Agency (EPA). While we have been significantly below our emissions allowances the past several years, the next period from 2013 to 2020 will be particularly challenging as it is expected that our allowance will be cut by an additional 30 percent. The Ireland EPA will issue allowances in 2013.
EMC’s reduction targets cannot be achieved through operational energy efficiency alone. Our corporate goal is to obtain 50 percent of electrical needs from renewable sources by 2040. We have continued working toward this goal by seeking renewable energy sources that are economically and environmentally sound. In 2012, our efforts included:
- Continued investigation of a combined heat and power plant for a large U.S. site. Though we completed a feasibility study in 2011, we are still analyzing the cost-benefit analysis conducted in 2012 to determine the appropriate next steps to bring this project online.
- Continued evaluation of the use of fuel cell technology for one of our U.S. locations. Findings showed that this technology will be most feasible for locations in the western region, but we continue to analyze the information to determine potential timing for introducing the technology.
- Continued compilation of information from the meteorological tower we installed in 2011 to collect wind data at our headquarters in Hopkinton, Massachusetts. The 18 months of data has determined that wind conditions favor installation of one or more wind turbines. Thinking forward to 2013, we’ll be further exploring how to bring this project online.
- Conducting additional research on solar energy options and evaluating the expected payback period and return on investment.
EMC purchased 175,000 MWh of Renewable Energy Certificates (RECs) in support of renewable energy generated in the U.S. during 2012. The RECs purchased supported renewable electricity delivered to the national power grid by alternative energy sources. The RECs are third-party verified by Green-e Energy to meet strict environmental and consumer protection standards. The 175,000 MWh represents over 30 percent of the grid electricity consumed at all U.S. EMC facilities including all divisions during 2012.
SCOPE 3 EMISSIONS
At EMC, we strive to increase the breadth and depth of our GHG reporting. In 2012, we reported on five of the 15 categories of Scope 3 emissions based on the WRI Greenhouse Gas Protocol Corporate Value Chain (Scope 3) Accounting and Reporting Standard. These reported categories, listed as follows, represent the greatest opportunity to drive improvement through our own actions and influence.
Business Travel
We track global corporate business travel miles from commercial flight and rail via our corporate American Express accounts. Beginning in 2012, we also accounted for the emissions associated with global business travel car rentals in our Scope 3 accounting efforts. The methodology for calculating the emissions associated with business travel is aligned with the GHG Protocol Corporate Accounting and Reporting Standard.
We are undertaking specific actions to reduce GHG emissions associated with employee business travel by implementing changes in technology, business processes, and resource management. We continue to expand technology to perform changes to customer technical environments from remote support centers in lieu of sending an engineer to the customer’s site resulting in reduced travel emissions. A substantial amount of work that previously required travel to a customer location is now being performed remotely. We have implemented other initiatives that will impact Scope 3 business travel emissions over time including increased use of high-definition video conferencing and job role/skill redesign to reduce the number of different individuals required to perform common services. To learn more, visit Employee Travel & Commuting.
Employee Commuting
EMC maintains a comprehensive employee commuter services program focused on minimizing single-occupancy vehicles and unnecessary local employee travel. In 2012, we expanded these efforts by launching a work-from-home pilot program at the Cork COE and introducing a carpooling tool called iPOOL at the India COE. EMC was again named as one of the Best Workplaces for Commuters by the Center for Urban Transportation Research, a program formerly administered by the U.S. EPA. For the second year in a row, EMC received the Massachusetts Excellence in Commuter Options (ECO) award at the highest Pinnacle level. The ECO Awards celebrate Massachusetts employers and their efforts to reduce congestion and greenhouse gas emissions by encouraging employees to utilize green transportation options. To learn more about our employee commuting programs, visit Employee Travel & Commuting.
Purchased Goods & Services
In 2012, we collected Scope 1 and 2 emissions data from direct Tier 1 suppliers comprising 98 percent of annual spend. Using economic allocation, we then approximated our share of these emissions. This involves determining the ratio of our spend to each company’s revenue, and applying that ratio to their reported emissions. While approximate at best, this methodology follows the WRI GHG Protocol Corporate Value Chain (Scope 3) Accounting and Reporting Standard and is currently the best available option given the level of data reported. Because this allocation approach requires access to supplier revenues, a small number of private companies were excluded from the analysis. To learn more, visit Supply Chain Social and Environmental Responsibility.
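Concretely, the economic allocation works like this; all figures below are invented for illustration:

```python
supplier_scope12_t = 500_000       # supplier's reported Scope 1 + 2, metric tons CO2e
supplier_revenue = 2_000_000_000   # supplier's annual revenue, USD
our_spend = 50_000_000             # annual spend with that supplier, USD

share = our_spend / supplier_revenue   # ratio of our spend to their revenue
allocated_t = supplier_scope12_t * share
print(f"allocated to our Scope 3: {allocated_t:,.0f} t CO2e ({share:.1%} share)")
```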
Transportation & Logistics
EMC’s Global Logistics Operations generated approximately 167,362 metric tons CO2e in 2012. This number is estimated using the GHG emissions reports from our logistics partners and covers inbound, outbound, interplant, and customer service transportation and logistics. In 2012, we collected emissions reports from carriers representing 75 percent of our logistics spend and extrapolated total emissions based on the reports we received. To learn more, visit Transportation & Logistics.
Use of Sold Products
EMC estimates that the lifetime GHG emissions from use of EMC products shipped to customers during 2012 will be approximately 3,683,725 metric tons CO2e. This value represents our customers’ Scope 2 emissions from powering our equipment. It is based on an estimated product lifespan of five years, and includes overhead for power distribution and cooling with an average Power Usage Efficiency (PUE) of 1.8. EMC’s configurations vary substantially from customer to customer as well as over time within a single customer. As such, it is not possible to sum the expected emissions from each and every system shipped in 2012. Rather, this estimate is based on the measured power consumption of disk drives, the inventory of disk drives shipped in 2012, an engineering estimate that 80 percent of system power is attributable to the disk subsystems, and an extremely conservative average system utilization of 90 percent. EMC used GHG Protocol methodology and a global average emissions factor of 569.3309 g CO2e per kWh. The IEA 2010 World CO2 emissions factor published in 2012 and IEA 1999-2002 CH4 and N2O emissions factors were applied. The global warming potentials, which were obtained from the IPCC SAR-100, are 1 for carbon dioxide, 21 for methane, and 310 for nitrous oxide. We believe the total is conservative (i.e., that the directly measured value, if feasible to obtain, would be lower) as our calculation takes into consideration neither the reduction over time in carbon-intensity of fuel used by our customers, nor improvements in data center power and cooling efficiency.
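The methodology above reduces to a short chain of multiplications. In the sketch below, the per-drive power and the shipment count are placeholders, since the report does not disclose them, but the structure -- the 80 percent disk-power share, 90 percent utilization, PUE of 1.8, five-year life and the IEA emission factor -- follows the description:

```python
DRIVE_W = 10.0              # assumed average power per disk drive, watts (placeholder)
DRIVES_SHIPPED = 1_000_000  # assumed 2012 drive shipments (placeholder)
DISK_SHARE = 0.80           # disk subsystems draw 80% of system power
UTILIZATION = 0.90
PUE = 1.8                   # overhead for power distribution and cooling
HOURS = 5 * 365 * 24        # five-year product lifespan
EF_G_PER_KWH = 569.3309     # IEA 2010 world average, g CO2e per kWh

system_kw = DRIVE_W * DRIVES_SHIPPED / DISK_SHARE / 1000
lifetime_kwh = system_kw * UTILIZATION * PUE * HOURS
print(f"{lifetime_kwh * EF_G_PER_KWH / 1e6:,.0f} metric tons CO2e over the lifetime")
```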
Environmental Lifecycle Analyses conducted prior to 2012 confirmed our expectations that more than 90 percent of lifecycle impacts are due to electricity consumed during the product use phase. Armed with this insight, in 2012 we continued efforts to design more efficient products and communicated more frequently with our end-use customers about using products more efficiently. This included drafting and distributing white papers on product energy attributes, both built-in (e.g., efficient power supplies, adaptive cooling, solid state drives, high-capacity hard disk drives) and operational (Fully Automated Storage Tiering, Virtual Provisioning, compression, and data de-duplication), to help our customers realize more energy efficiencies during product use. To learn more, visit Efficient Products. | <urn:uuid:5e4629d1-b7b8-4b11-83b6-b2fcd5dcdce3> | CC-MAIN-2017-04 | https://www.emc.com/corporate/sustainability/sustaining-ecosystems/strategy.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00176-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939116 | 2,479 | 2.5625 | 3 |
Installing Linux on the PlayStation 3
Dec 2, 2008 4:00 AM PT
One of the most exciting aspects of the PlayStation 3 is that it allows users to install an alternative operating system.
You can't overwrite Sony's GameOS or access privileged resources, but you can run your own applications on the new Cell Broadband Engine processor (called the "CBE processor" or the "Cell" for short).
The Cell is the mighty brain of the PlayStation 3, and this article explains how to access it by installing Linux on the console.
Brief Introduction to the Cell Processor
Before you start the installation, it helps to have a basic understanding of the target system. The Cell is composed of nine processing cores -- eight Synergistic Processor Units (SPUs) and one Power Processor Unit (PPU).
The SPUs were designed for high-speed number crunching, and each operates on multiple values at once. When you read about the Cell's performance in Folding@Home or in the Roadrunner supercomputer, the extraordinary speed is provided by the SPUs.
The PPU, on the other hand, was designed for general-purpose processing. It's not particularly fast, but it's well suited for running an operating system and managing the SPUs. You can think of the PPU as the coachman in an eight-horse carriage; it makes high-level decisions and keeps the horses in line.
An Overview of the Linux-PS3 Installation Process
When installing Linux, the first task is to choose a distribution. Fixstars Solutions (which recently acquired Terra Soft) provides Yellow Dog Linux specifically for the Cell processor. Many users have also had success with Ubuntu and Debian.
However, IBM's Cell Software Development Kit (SDK) is only supported on Fedora Core 9 and Red Hat Enterprise Linux 5.2. For this reason, this discussion focuses on installing Fedora Core 9 on the PS3. The process consists of four main steps:
- Obtain the Fedora Core 9 ISO for the PowerPC and burn it to a DVD.
- Download the PS3 Add-on Tools ISO and burn it to a CD.
- Reformat the PS3 hard drive to support Linux.
- Install Linux using the add-on tools.
The rest of this article explains these steps in detail.
Part I: Obtain the Linux ISO for the PowerPC
The PPU's architecture is based on IBM's PowerPC specification, so you'll need the distribution of Fedora Core 9 that targets the PowerPC. The following steps show how to obtain it.
- Open a Web browser and go here.
- Find a mirror site for your location. In the column labeled Content, click on one of the transfer protocols (http, ftp, or rsync).
- In the mirror's directory hierarchy, open the releases folder, then 9, then Fedora, then ppc, then iso.
- Save Fedora-9-ppc-DVD.iso to your computer and burn it to a DVD.
Part II: Download the PS3 Bootloader

To boot an alternative OS on the PS3, you need a PS3-compatible bootloader. The following steps explain how to acquire it.
- Open a Web browser and go here.
- Save the CELL-Linux-CL_date-ADDON.iso file to your computer.
- Burn this file onto a CD.
This ISO file contains many Linux-related utilities for the PS3, but for our purposes, two are particularly important: otheros.bld and kboot. The first file, located in the PS3/otheros directory, is the bootloader called by the PS3 when it starts in Other OS mode. The second file provides a miniature Linux environment that makes it possible to install the full kernel on the PlayStation.
Part III: Reformat the PlayStation 3 Hard Drive
The following steps explain how to set aside memory on the console's hard drive for the installation:
- Turn on the PlayStation 3 and navigate to the Settings option in the main menu. If you haven't already, update your firmware with Settings->System Update. The console will restart.
- Select Settings->System Settings and choose the Format Utility option. Select Format Hard Disk, then Yes, then Custom. You can allocate memory in three ways: assign all of the memory for the PS3, assign 10 GB to Linux and the rest to the PS3, or assign 10 GB to the PS3 and the rest to Linux. I recommend the last option.
- Choose between Quick Format and Full Format. I recommend the quick version, which only takes seconds. Select Yes to delete all data on the formatted memory. Press Enter to restart the PS3.
Part IV: Install Linux Using the Add-on Tools
At this point, you should have a Linux DVD, an Add-on Tools CD, and a PlayStation with memory set aside for Linux. If everything's in order, you're ready to start installing Linux. The procedure is as follows:
- Connect a USB keyboard and mouse to the console. You can navigate through the menu with arrow keys and select options with Enter.
- Insert the add-on CD into the console. Go to Settings->System Settings, and select Install Other OS. The PS3 will search for a suitable bootloader and find otheros.bld on the CD. Select Start and the PS3 will install the bootloader.
- When the installation finishes, eject the CD and insert the Linux DVD. Go back to the main menu and select Settings->System Settings->Default System. You'll see options for PS3 and Other OS, and your choice determines which operating system will start when you turn on the console. Choose Other OS and Yes to restart the console.
- When the PS3 restarts, two penguins will appear above a series of startup messages.
- Enter the following command at the kboot prompt:
kboot: linux64 xdriver=fbdev video=720p
The video parameter is optional and identifies your display (720p, 1080i, or 1080p). The 720p setting works well for most displays.
Note: If your keyboard sends gibberish to the command line, it means it was designed for Windows. You'll need a different keyboard to continue the installation.
- After a brief startup check, a Welcome to Fedora screen will appear. Choose whether you want to test the DVD or not. When the graphical installer appears, choose your language, your keyboard locale, and click Yes to initialize the hard drive.
- The rest of the Linux installation is standard across all distributions of Fedora Core 9. Configure your network settings, your location, and the drive partitioning. I recommend that you check Review and modify partitioning layout, remove at least 1 GB from the ext3 partition, and add the memory to the swap partition. Once you're finished, the installer will format the partition.
- The Cell's PPU won't run office applications quickly, so I recommend deselecting Office and Productivity. Click Next to start the full installation.
When the installation finishes, Linux will automatically load when you turn on the PS3. To go back to GameOS, restart the console and press the front power button until you hear a beep. Then, to return to Linux from GameOS, go to Settings->System Settings->Default System, select Other OS, and restart the console.
Congratulations! Installing a foreign operating system on a game console is no small task, and you have every right to be proud. If you'd like to pursue Cell development further, I recommend that you download IBM's free Software Development Kit.
Matthew Scarpino is the author of Programming the Cell Processor: For Games, Graphics and Computation. He lives in the San Francisco Bay Area and works as a software developer. | <urn:uuid:82e0e64a-3c55-496f-ac12-cd11d85d960f> | CC-MAIN-2017-04 | http://www.linuxinsider.com/story/open-source-software/65329.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00197-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.839061 | 1,625 | 2.59375 | 3 |
Wireless LAN Security and Analysis
Learn WLAN protocols and security mechanisms in order to neutralize hackers.
Tackle Wireless LAN security in this course that teaches the essential concepts and protocols from the inside out. Learn about 802.11 frame formats and transmission protocols in order to gain an understanding of where vulnerabilities might lie, and then apply that knowledge to WLAN security design concepts that make life difficult for hackers at every turn.
In addition to learning the intricacies of the 802.11 standard, WPA/WPA2, and 802.11i, you will build a secure WLAN from the ground up. You will configure and crack a series of security methods during hands-on lab exercises before a robust WPA2 Enterprise network emerges at the end of the week. You will learn to use a variety of professional grade analysis tools and open source attack tools as you test different wireless security protocols. | <urn:uuid:aad6d896-10f1-46d5-a869-eea93caeba0b> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/114684/wireless-lan-security-and-analysis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00197-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.901923 | 182 | 2.96875 | 3 |
According to the 2011 Annual Security Report published by PandaLabs, malware creation hit an all-time high in 2011.
Researchers at the security company detected 26 million new strains or variants of malicious code. Actual levels of malware created in 2011 may be much higher, as the report is based on data from samples the company collected from monitoring software deployed with customers.
The company attributes the increase to the wider use of automation techniques in the creation of malware variants - strains with slightly altered signatures, developed to foil signature-based anti-malware software.
The report notes a sharp increase in the proliferation of Trojans after levels had decreased the year before.
"In 2011, Trojans dominated the threat landscape more than ever before. Whereas in 2009 Trojans made up 60 percent of all malware, the percentage dropped to 56 percent in 2010. Last year, however, the percentage jumped up to 73 percent, so that three out of every four new malware strains created were Trojans," the report states.
The malware-type distribution for 2011 outlined in the report is as follows:
- Trojans: 73.31%
- Virus: 14.24%
- Worms: 8.13%
- Adware and Spyware: 2.9%
- Other: 1.43%
Asian countries again lead the world in the number of infections detected, with three nations seeing levels far greater than average.
"The countries leading the list of most infections are once again Thailand, China and Taiwan, with 60, 56 and 52 percent of infected computers respectively. These are actually the only countries that exceed the worldwide average of 38.49 percent," the report says.
The report also notes the record number of data records compromised in breaches, particularly the exposure of over 100 million Sony accounts, as well as those of over 35 million Steam customers.
While protocols like MGCP, SIP, H.323 (H.225/H.225 RAS/H.245), and SCCP provide signalling in VoIP/IP telephony/unified communications networks, the Real Time Transport Protocol (RTP) is used to transport the voice and video media packets.
RTP has a number of important characteristics, including the following:
- RTP is transported over UDP rather than TCP for a number of reasons including the fact that TCP retransmission is not appropriate - any retransmitted voice or video packets would arrive too late to be useful.
- RTP includes timestamps and sequencing information in order to detect packet loss and ensure correct playout timing.
- QoS statistics for RTP flows, including packet loss, jitter, and round-trip time are provided using the Real Time Transport Control Protocol (RTCP).
As previously mentioned, RTP is carried over UDP, and the IP, UDP, and RTP headers that encapsulate voice/video media traffic total 40 bytes. These 40 bytes of overhead are significant because the media payload itself is often considerably smaller - if you are using the G.729 codec, for example, then the media payload may only be 20 bytes.
On slow links, it may be advantageous to compress the IP/UDP/RTP headers using Compressed RTP (cRTP). If you use cRTP then the 40 bytes of overhead incurred by the IP/UDP/RTP headers can typically be compressed down to 2 to 4 bytes (2 bytes when no UDP checksums are sent, and 4 bytes when checksums are sent).
The bandwidth savings when using cRTP can be considerable. If, for example, you use the G.711 codec (default payload 160 bytes) with MLP (Multilink PPP) at 50 pps, then without cRTP you'd require bandwidth of 82.8 kbps, but with cRTP you'd only require 67.6 kbps -- a bandwidth saving of 18.36%.
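Those numbers are easy to reproduce. The sketch below assumes 7 bytes of Multilink PPP layer-2 overhead per packet, the figure that makes the arithmetic match the totals quoted above:

```python
PAYLOAD = 160     # G.711 payload bytes per packet
IP_UDP_RTP = 40   # uncompressed header bytes
CRTP = 2          # compressed header, no UDP checksums
L2 = 7            # assumed MLP layer-2 overhead per packet
PPS = 50          # packets per second

def kbps(header_bytes: int) -> float:
    return (PAYLOAD + header_bytes + L2) * 8 * PPS / 1000

plain, compressed = kbps(IP_UDP_RTP), kbps(CRTP)
print(f"{plain} kbps without cRTP, {compressed} kbps with cRTP")
print(f"bandwidth saving: {1 - compressed / plain:.2%}")   # 18.36%
```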
SOA, or service-oriented architecture, offers a promising vision: rather than massively complex, monolithic business applications, an SOA consists of a series of small applications that each perform a limited function or service. For example, rather than a single application providing online banking services, an SOA version of the same system might have one application that manages customer logins, another that pulls account balances, yet another that creates funds transfers, and so on.
If you have ever taken an introductory programming course, this concept probably sounds familiar. Students are admonished to make their code "modular," with each module performing a single, discrete function so that the module may be used again. Even to the non-programmer, this seems like simply good thinking. Why recreate the same functionality over and over again?
Superficially, SOA seems like modular programming on a larger scale. The big difference is that SOA puts a strong emphasis on modular business-process functionality, rather than just isolating repeated technical functions.
At its core, a process is about doing something with data. Forget the technical notion of data as bits and bytes; think of it as something like an invoice. To create an invoice we need to gather various other pieces of information, like a customer name or account number. We also need to manipulate the data a bit, perhaps calculating a late fee or a discount on the invoice. We take the data, manipulate it, and spit out new data -- in this case, our completed invoice.
SOA, done well, treats these data-gathering and manipulation steps as services. You need not worry about the underlying technical aspects of each service, as long as you can reliably send data to it and get back the data you were expecting. An SOA-based application strings these services together in novel ways, and in theory lets you create new applications simply by changing how the services interact.
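A toy sketch of the invoice example shows the idea: each service is a small, self-contained unit that exchanges plain data, and the application is just the way the services are wired together. Every name here is invented, and real services would sit behind network interfaces rather than function calls:

```python
def customer_lookup(account_id: str) -> dict:
    # Service 1: gather customer data (stubbed for the example).
    return {"account_id": account_id, "name": "Acme Corp"}

def late_fee(balance: float, days_overdue: int) -> float:
    # Service 2: one business rule, reusable by any application.
    return round(balance * 0.015, 2) if days_overdue > 30 else 0.0

def create_invoice(account_id: str, balance: float, days_overdue: int) -> dict:
    # The "application" merely orchestrates existing services.
    customer = customer_lookup(account_id)
    fee = late_fee(balance, days_overdue)
    return {**customer, "late_fee": fee, "amount_due": balance + fee}

print(create_invoice("A-1001", 1200.00, 45))
```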
Imagine your company acquires another. In theory, if you can rearrange each company's services in the right way, you can easily integrate your accounting and invoicing systems.
Getting On Board
So, why isn't everyone doing SOA? The simple answer is that it's incredibly hard. Think of every employee in your company. Does any single person or group understand exactly what information and materials each person needs, what tasks they perform, and precisely what inputs and outputs are required to do their job? SOA requires similar knowledge: tens of thousands of services must be thoroughly understood, and there must also be a pervasive understanding of how each of these services interacts with and depends on other services.
While a wholesale implementation of SOA is likely cost-prohibitive, getting into an SOA mindset allows you to gain some of SOA's benefits without breaking the bank. When implementing new business applications, think about the data manipulation required and the associated business processes, and seek to break those processes down into component services that can be reused across applications. Eventually you will have a pool of services that can be rearranged or combined in novel ways, creating entirely new applications without the cost of re-implementing the associated functionality.
Earth Day: How Smart Printing Practices Reduce Environmental Impact
The largest portion of a printer's environmental impact comes from sheer paper consumption: up to 80 percent can be tied to the use of paper alone. In recognition of Earth Day today, Knowledge Center contributor John D. Gagel shares nine ways that you can begin an immediate reduction in the volume of pages you print, and in your impact on the environment.
Today is Earth Day. Organizations of all sizes, from large enterprises to small businesses and home offices, should be looking for ways to reduce their environmental impact as they conduct business. No organization can afford to cut out printing altogether, but there are ways to become smarter about how they print, today and every other day.
Forward-thinking organizations have realized that just buying a product dubbed green or energy-efficient is no longer enough. Today, organizations must go one step further and implement best practices for how they use their technology devices to promote a greener office environment. When it comes to printing, this means businesses must directly address the amount they print, and look for ways to print more efficiently.
The following are nine easy methods that any organization can implement to immediately reduce the volume of printed pages:
Method No. 1: Think if it really needs to be printed
Before anyone within the organization prints out an e-mail message, a document or a presentation, they should take a second to stop and think if it really needs to be printed.
Documents and e-mail messages can always be reviewed on the computer using tracked changes or shared with a colleague via e-mail for their review. Organizations that commit to only printing out final documents eliminate the waste associated with printing out multiple drafted versions. Also, reducing the margins in all text documents will fit more content on each page and reduce the total number of pages used.
Method No. 2: Use both sides of the paper
This one may seem simple, but whenever possible, print on both sides of the paper. Many printers today offer a duplex, or two-sided, printing option. Using this function is easy, and it reduces both the amount of paper consumed and the cost of paper supplies.
Method No. 3: Leave out the ads
Internet ads always find a way to sneak from the screen to the paper, wasting valuable resources. Ensure that the printed version of Web pages includes only the text; eliminate pictures and ads that can take up a lot of space and ink. Use the free smart-printing toolbars that are available, which allow you to print only the information you need.
Method No. 4: Save your ink
Ink and toner can be costly, but higher-yield cartridges save businesses money and reduce the cost per printed page. Printing in draft mode also significantly reduces the total amount of ink or toner used.
As the classic method of combating botnets by taking down command and control centers has proven pretty much ineffective in the long run, there has been lots of talk lately about new stratagems that could bring about the desired result.
A group of researchers from the Delft University of Technology and Michigan State University have recently released an analysis of the role that ISPs could play in botnet mitigation – an analysis that led to interesting conclusions.
The often believed assumption that the presence of a high speed broadband connection is linked to the widespread presence of botnet infection in a country has been proven false.
The examination of some 190 billion spam messages from 170 million unique IP addresses captured between 2005 and 2009 led the researchers to conclude that the presence of piracy is a much more accurate indicator of the botnet infection rates tied to a specific country, and that higher education levels in a country are also conducive to a lower level of infection.
Another interesting result of this analysis is that ISPs of similar size located in the same country can have drastically different infection rates among their users, leading the researchers to conclude that some ISPs have adopted more effective practices against infection than others.
“The networks of just 50 ISPs account for around half of all infected machines worldwide,” say the researchers. “This is remarkable, in light of the tens of thousands of entities that can be attributed to the class of ISPs. The bulk of the infected machines are not located in the networks of obscure or rogue ISPs, but in those of established, well-known ISPs.”
That means that persuading just these 50 ISPs to begin implementing new, more efficient approaches for preventing and eradicating the infection could make a big dent into the botnet market.
“If the 50 ISPs we identified would ramp up their efforts, the problem might migrate elsewhere,” say the researchers. “However, it is much more difficult to migrate a network of millions of infected machines than to migrate the C&C servers or other ancillary services.”
Learning Language Continuously
By Tim Moran | Posted 2010-12-21
Computers that keep adding to their knowledge over time.
Live and Learn—Forever
NELL—which is short for Never Ending Language Learner—is an “intelligent computer” created by the researchers at Carnegie Mellon University that's actually being taught to learn. According to a recent story on TechEye.net, NELL is “reading the Internet and learning from it in much the same way that humans learn language and acquire knowledge.”
So, how is NELL doing? So far, it has about a "C" average, having learned more than "440,000 separate things with an accuracy of 74 per cent."

While NELL is "getting brainier every day," it has trouble telling the difference between facts and beliefs. The article suggests that this makes NELL more of a "rumour mill than a trusted source."
To further complicate matters, NELL can’t unlearn anything or make a change in the way it thinks: “NELL’s human handlers had to tell NELL that Klingon is not an ethnic group, despite the fact that many earthlings think it is.” That’s a problem for some people all of us know.
Smartphones Give Voice to Domestic Violence Victims
The National Domestic Violence Hotline has teamed up with the Enterprise Mobility Foundation (EMF) and NextFone to allow companies to safely recycle mobile phones and help victims of domestic violence, according to a recent release.
The year-long partnership allows companies to send old mobile phones and smartphones to NextFone free of charge. NextFone will remove proprietary data from them and donate the current market value of phones to support the hotline’s services. Over the past 15 years, the hotline has answered nearly 2.5 million calls from women, men, children and families in crisis.
The release states that this effort will enable the hotline to increase its efforts to combat domestic violence across the country. To join the campaign, go to www.smartphonesforcharity.org and donate your firm’s old smartphones to support the hotline.
Cyborg Rat Infestation?
Kevin Warwick, a cybernetics researcher at the University of Reading in the United Kingdom, “has been working on creating neural networks that can control machines,” according to a recent story on Singularityhub.com. “[Warwick] and his team have taken the brain cells from rats, cultured them, and used them as the guidance control circuit for simple wheeled robots.”
The researchers have developed a way for a tiny, wheeled robot to be controlled by the neurons from a rat’s brain—which, believe it or not, is kept in a bell jar. The claim is that this is “the first robot in the world to be controlled entirely by living tissue.”
In other words, this is an animal cyborg. As Warwick has pointed out, these cyborgs are going to become more advanced—probably sooner rather than later: “Eventually, we’ll have a cultured system that is roughly the size of the simplest of mammalian brains.” And that’s the point at which the robot will be able to do more than simply stop itself from rolling into a wall. Spooky, eh? | <urn:uuid:329b1c83-2b52-45d9-b732-0d9ba35ba787> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Innovation/Learning-Language-Continuously-161211 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00545-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949194 | 712 | 2.671875 | 3 |
A Toronto hospital is implementing a clinical experiment using big data and real-time analytics to help save the lives of its tiniest patients.
When you watch a premature baby lying still in a clear plastic cradle in a Neonatal Intensive Care Unit, it is hard to imagine that something so small would have a real shot at survival. The obstacles these pre-term infants face are huge: immature lung development, undeveloped immune systems, and a very real chance of disabilities -- and that's just a small sample of the challenges infants born before 37 weeks of gestation face.
Advances in medical science have made these challenges much more beatable, and many pre-term infants have managed to beat the odds and live successful and happy lives.
Unfortunately, the very medical care that these tiny patients receive can be a vector for infections that, with their compromised immune systems, can be very life-threatening. This class of infections (which can be viral, bacterial, or even fungal) is referred to as nosocomial infections, which is basically any infection a patient picks up in a hospital environment.
Nosocomial infections can be nasty for full-grown adults, and unless doctors and nurses act fast, they can be deadly for premature infants. One of the worst of the bunch is an infection known as late onset neonatal sepsis (LONS). Complicating this is the sad fact that by the time a premature infant starts exhibiting symptoms of LONS infection, things have already gotten pretty bad for the infant. Blood tests aren't even that conclusive, since false-negative results are a real problem when you can't extract that much blood to test.
At a glance: Project Artemis
· Launched in 2008
· Data collected from two sites, Toronto and Rhode Island
· Three laptops per site
· 1,256 data points collected per second per patient
· More than 375 patients monitored over the course of the study
· 14.5 patient-years of data (years of study multiplied by number of patients)
· Less than 0.5% of network bandwidth used

Project Artemis is a real-time data-gathering and analysis framework that takes signals from babies' heart monitors, processes them for signs of late onset neonatal sepsis infection, and immediately alerts health care teams.
Here, then, comes the miracle. A new application of monitoring technology and real-time analysis is being implemented by The Hospital for Sick Children at the University of Toronto that stands a real shot at alerting health-care providers that a child is being infected -- before acute symptoms of infections like LONS show up.
The "before" may sound like some sort of huckster psychic's trick, but there's some real science going on. According to a 2007 medical study by Drs. M. Pamela Griffin, et. al., up to a full day before LONS-infected patients start showing signs of trouble, their heart rates show very subtle yet fairly consistent heart rate changes: oddly, their heart beat becomes abnormally steady for a while, which rarely happens to humans even at rest.
These heart rate changes are impossible to detect by a human being watching or listening to a heart rate monitor. But for Dr. Carolyn McGregor of the University of Ontario Institute of Technology (UOIT), with the right tools, this is exactly the kind of task neonatal monitors and real-time analytic software can handle.
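In spirit, the streaming check is simple: watch the variability of the beat-to-beat intervals and raise a flag when the rhythm turns abnormally steady. Below is a minimal sketch, assuming NumPy; the window length and variability floor are hypothetical, and the production system runs on IBM's streaming platform rather than plain Python.

```python
import numpy as np

def flag_low_variability(rr_intervals, window=256, sd_floor=5.0):
    """Flag windows of inter-beat (RR) intervals whose variability is
    abnormally low -- a crude stand-in for the 'abnormally steady'
    heart rate pattern described above.

    rr_intervals: RR intervals in milliseconds.
    window:       number of beats per analysis window (hypothetical).
    sd_floor:     std. deviation (ms) below which a window is flagged.
    """
    alerts = []
    for start in range(0, len(rr_intervals) - window + 1, window):
        chunk = np.asarray(rr_intervals[start:start + window], dtype=float)
        if chunk.std() < sd_floor:     # unusually steady rhythm
            alerts.append(start)        # record where the window begins
    return alerts
```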
Catching LONS is a tricky business in a human-only clinical environment, McGregor explained. Typically, she said, it falls to experienced nurses, who use their knowledge and instincts to read a range of subtle clinical signs and determine that a baby is "not quite right."
Treatment for LONS is a full course of antibiotics, which isn't something you want to mess around with in infants this age, and overuse could lead to antibiotic-resistant forms of LONS, which would clearly be devastating.
McGregor is the lead researcher on Project Artemis, a real-time data-gathering and analysis framework that takes signals from the babies' heart monitors at The Hospital for Sick Children and the Women & Infants Hospital in Providence, RI, processes them to seek out signs of imminent LONS infection, and then alerts health care workers in each facility as soon as possible.
McGregor, who is the Canada Research Chair in Health Informatics at UOIT, explained that the Artemis project is being conducted as a blind study, with Artemis-monitored patients being diagnosed alongside NICU patients in each hospital, in order to determine if there's a real improvement in LONS diagnosis using the traditional clinician methods versus the Artemis monitors.
The Artemis system collects the massive amount of data that's pulled in from all of the existing monitoring systems that a premature infant is usually connected to anyway. For McGregor, it was the presence of the monitoring systems that helped spark the idea for Artemis: a lot of data was being collected, and nothing was being done with it.
"Massive" and "a lot" deserve some qualifiers, so try this number on for size. McGregor said that currently the Artemis system pulls in 1,256 data points per second per patient. This is what's known in data circles as a fire hose, and this is truly a powerful fire hose.
Yet, for all the information coming through the network, McGregor emphasized, there's not a big footprint here that will bog a hospital's network down.
"These are thin footprints, and are less than one percent of network traffic in a hospital," McGregor said.
The project's use of IBM's InfoSphere streaming technology has also smoothed out a lot of the real-time analysis issues, since this is right up InfoSphere's alley. Interestingly, this high-level processing capability allows the entire system to run not on big iron, but on three laptops: one for data acquisition, another for online analysis, and a third for stream persistence. All of the data from the Toronto hospital is mirrored at UOIT.
The addition of the Rhode Island location added some challenges, but was done not only to deliver more data for the project's algorithms to analyze, but also to demonstrate the Artemis system as a proof of concept for a cloud-based service. While the actual data is gathered in the U.S., it is processed in Canada before being sent back to Rhode Island in a more visual form.
Since Artemis is still proceeding as a scientific study, it's too early to report on the success rate of the system. But McGregor was pleased to see one thing come from the implementation of the study: it seems to dispel the notion that clinicians aren't interested in such systems.
"They recognize that this is a powerful clinical decision support tool," McGregor said.
McGregor is also excited about what projects like Artemis, both as monitoring and cloud-based tools, might mean for medical diagnostics. Late-stage, at-risk pregnancies could be monitored remotely (albeit at a slower data rate), alleviating the need for, and cost of, in-hospital monitoring, she outlined.
Leukemia patients, who also are burdened with compromised immune systems, could also be remotely watched. "We could be looking for infections and catching them a lot sooner," McGregor said.
The potential benefits are high, and McGregor and her team are working hard to work out any technological, procedural, and legal kinks in the Artemis system to help maximize those benefits.
Big data, it seems, may be helping to save a lot of little lives.
This article, "Big data analytics may detect infection before clinicians" was originally published at ITworld.
Arkansas has a bad reputation when it comes to its highways. In 1999, it was voted the state with the worst overall highways in the nation in a survey of truckers for Overdrive magazine.
But identifying and maintaining road surfaces is not as easy as it may seem. Typically, a highway department will invest thousands of person-hours each year into surveying road surfaces visually and then cataloguing which areas show cracks and damage. Add to that the need to revisit stretches of highway to gather more data or check human error, and the process of assessing the health of a highway system can be cumbersome.
So the Arkansas Highway and Transportation Department collaborated with the department of civil engineering at the University of Arkansas to develop a system that allows highway department employees to work more efficiently than ever before.
Let's Go to Video
One of the results of the collaboration with the University of Arkansas was the multimedia-based highway information system (MMHIS). Development of this system began in 1996.
The MMHIS takes video images of a highway, currently gathered with the help of a vehicle called an Automatic Road Analyzer (ARAN), and digitizes the images into MPEG-2 format. Back in the office, the images are synchronized with basic road data (county, road mile, width, recent work completed) and data from the scan (rutting, roughness).
"We're trying to cut down on field trips; [that's] the biggest time and money saver," said Bobby Bradshaw, MMHIS coordinator for the Arkansas Highway and Transportation Department.
Although an initial drive over all the road surfaces in the system is required, the data and video can then be used again and again.
"We can make initial visits and answer the vast majority of questions without going back out," said Bradshaw.
For example, a crew was going to remove a sign from a bridge and was unsure whether they would need to close one or both lanes of traffic. Without the system, someone would have driven to the bridge to visually inspect the sign placement; with the system, the image was available on the desktop for immediate decision-making.
Ready for the Future
The University of Arkansas is not content to rest on its laurels with the MMHIS. Kelvin Wang, a professor of civil engineering and a principal investigator on the MMHIS project, is an advocate for using technology to reduce this major budget item for states. "Highways are the biggest expenditure of public funds," he said. "To monitor the health of highways is very important."
Wang has helped develop a data-collection van that is even more precise than the commercially available ARAN. The new van incorporates a digital camera capable of shooting 12 frames per second. The computer then links the images together depending on the speed of the vehicle: more images are retained at higher speeds, fewer at lower speeds. The system, using a series of proprietary algorithms, then processes the images and identifies the length and width of cracks. All of this is completed with greater efficiency and accuracy than a team surveying the highway manually can achieve. Wang estimates that a manual survey is, at most, 2 percent to 10 percent accurate. The video survey, on the other hand, completely covers 14 feet of highway width, enough to survey an entire 12-foot lane.
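While Wang's algorithms are proprietary, the general approach can be sketched with standard tools. The following toy version, assuming OpenCV, isolates dark crack-like regions in a pavement image and reports rough dimensions; the thresholds and pixel-to-inch scale are hypothetical and far simpler than a production system.

```python
import cv2

def measure_cracks(image_path, px_per_inch=10.0, min_area_px=50):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Cracks appear darker than the surrounding pavement; adaptive
    # thresholding isolates them despite uneven lighting.
    mask = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY_INV, blockSize=51, C=10)
    contours, _ = cv2.findContours(
        mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cracks = []
    for c in contours:
        if cv2.contourArea(c) < min_area_px:
            continue  # ignore noise specks
        x, y, w, h = cv2.boundingRect(c)
        # Report rough length and width in inches (hypothetical scale).
        cracks.append((w / px_per_inch, h / px_per_inch))
    return cracks
```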
The potential cost savings are clear. Wang estimates a purchase price of $250,000 for the digital van and computer system. A manual survey would require a highway department to hire 50 workers to survey 10,000 miles.
The digital system is not currently available commercially, but Wang expects it to be available soon. He has formed a company, WayLink Systems Corp., with the University of Arkansas to pursue bringing the technology to market. At this writing, they are in negotiations with a larger company that would be the entity for commercialization.
Where does Arkansas plan to take this technology? "We hope to add more data to make a more robust system," said Bradshaw.
The department also plans to save money - not through reductions in staff, but through increases in efficiency. They are looking at incorporating a GPS element into the system to better identify portions of highway, and they hope to deploy the data gathered to all personnel with a need to use it. "It's a logical extension of the things we have here," Bradshaw said. | <urn:uuid:12c51247-f6d7-4452-86da-f80ff2e8c4a2> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Where-the-Tech-Meets-the-Road.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00361-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957714 | 888 | 2.84375 | 3 |
A digital system is a data technology that uses discrete values. By contrast, non-digital systems represent information using a continuous function. Although digital representations are discrete, the information represented can be either discrete, such as numbers and letters, or continuous, such as sounds, images, and other measurements. The word digital comes from the same source as the words digit and digitus, as fingers are used for discrete counting. (Source: Wikipedia)
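The distinction is easy to see in code. Here is a small sketch, assuming NumPy, that reduces a (conceptually) continuous tone to discrete integer samples; the sample rate and bit depth are arbitrary choices for illustration.

```python
import numpy as np

fs, bits = 8000, 8                        # samples/sec, bits per sample
t = np.arange(0, 0.01, 1 / fs)            # 10 ms of sample times
analog = np.sin(2 * np.pi * 440 * t)      # continuous 440 Hz tone (conceptually)
levels = 2 ** bits                        # number of discrete amplitude levels
digital = np.round((analog + 1) / 2 * (levels - 1)).astype(int)
print(digital[:8])                        # the tone as discrete integer values
```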
Digital Systems | Date: 2015-08-26
A method and apparatus for inspecting utility scale wind turbine generator blades from the ground includes a digital camera and an adjustable polarizing filter on a stable platform. The camera is used to record multiple digital images with varying polarization angles using an artificial polarized or polarized solar illumination to produce images with the same or similar registration but different polarization angles, followed by image processing using subtraction or absolute difference routines to yield high resolution, high contrast images with reduced glare and excellent surface definition over the field of view.
Digital Systems | Date: 2015-02-03
A method and system for zonal switching for light steering to produce an image having high dynamic range is described. The system comprises: a light source for providing light along an optical path; a spatial light modulator for directing portions of the light to off-state and on-state light paths, thereby producing an image, the spatial light modulator having a plurality of illumination zones corresponding to the image; and a set of sequentially-arranged optical elements in the optical path for steering at least some of the light from a first subset of the plurality of illumination zones to a second subset of the plurality of illumination zones to increase the dynamic range of the image. The dwell time of the one or more sequentially arranged optical elements can be modified to steer light.
Digital Systems | Date: 2015-05-01
An audio watermarking system conveys information using an audio channel by modulating an audio signal to produce a modulated signal by embedding additional information into the audio signal. Modulating the audio signal includes segmenting the audio signal into overlapping time segments using a non-rectangular analysis window function produce a windowed audio signal, processing the windowed audio signal for a time segment to produce frequency coefficients representing the windowed time segment and having phase values and magnitude values, selecting one or more of the frequency coefficients, modifying phase values of the selected frequency coefficients using the additional information to map the phase values onto a known phase constellation, and processing the frequency coefficients including the modified phase values to produce the modulated signal.
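The embedding loop described in this abstract can be sketched in a few lines. The following toy version, assuming NumPy, encodes one bit per frame by forcing the phase of a single frequency bin onto a two-point constellation; the frame size, bin index, and constellation are hypothetical, and the sketch glosses over the overlap-add reconstruction a real codec would need.

```python
import numpy as np

def embed_bits(audio, bits, frame=1024, bin_idx=32):
    out = np.asarray(audio, dtype=float).copy()
    window = np.hanning(frame)                 # non-rectangular analysis window
    for i, bit in enumerate(bits):
        start = i * frame
        if start + frame > len(out):
            break                              # not enough samples left
        seg = out[start:start + frame] * window
        spec = np.fft.rfft(seg)                # frequency coefficients
        target = 0.0 if bit == 0 else np.pi    # two-point phase constellation
        # Keep the magnitude, replace the phase with the constellation point.
        spec[bin_idx] = np.abs(spec[bin_idx]) * np.exp(1j * target)
        out[start:start + frame] = np.fft.irfft(spec, frame)
    return out
```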
Digital Systems | Date: 2016-01-12
A gaming table provides for use of RFID technology to track chip movement on a table game and to infer an association between a wager and a player position based on a chip identifier of a chip placed on a particular position of the table. In some embodiments, previous position history of the chip is also taken into account in determining a player position associated with a wager.
Digital Systems | Date: 2015-10-19
An angular positioning apparatus that includes a wedge assembly and a rod assembly positioned generally within the wedge assembly. The wedge assembly can include a plurality of serially connected wedges, each separately rotatable and when rotated rotating the wedge or wedges, if any, above it. The rod assembly can be connected to the wedge assembly such that a distal end of the rod assembly has an operative pointing direction for a payload (an antenna, a camera, a laser or other angular sensitive device) mountable to a distal end of the apparatus) that is controllably movable by the wedge assembly and such that the rod assembly does not twist about a longitudinal axis thereof as the wedges are rotated. The rod assembly can form a hollow tube for wires and cables, or it can form a solid shaft. | <urn:uuid:5916b646-8667-4cd5-981f-08667dd9ff31> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/digital-systems-12752/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00361-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.872381 | 771 | 2.625 | 3 |
Putting Laws into Practice
By Baselinemag | Posted 2008-01-25
Questions about security rarely have absolute answers. For most enterprises, the virtualization decision comes down to where and when to apply controls that are sufficient for the environment, based on risk tolerance. Ultimately, whether virtualization is bane or boon for security depends on how the systems are configured, deployed, and managed.
To manage these new security concerns, it’s important to understand the underpinnings of today’s virtual systems.
The primary components of a virtual environment are:
· Virtual Machines (VMs) and their accompanying guest operating systems: These are the "core" components of the virtual architecture.
· Virtual Machine Monitor (VMM): The software component responsible for managing interactions between the VM and the physical system.
· Hypervisor and/or host operating systems: The software that handles kernel operations.
A virtualized environment consists of a VMM and one or more virtual machines. The VMs and VMM interact with either a hypervisor or a host operating system to access hardware, local I/O, and networking resources. In addition to these components, virtualization architectures leverage virtual networking, virtual storage, and terminal service capabilities to complete their architectures.
This minimum set of components can be combined into virtual environments in a few distinct ways:
· Type 1 virtual environments are considered “full virtualization” environments and have virtual machines running on a hypervisor that interacts with the hardware.
· Type 2 virtual environments are considered "full virtualization" as well, but work with a host operating system instead of a hypervisor (though sometimes the VMM is called a hypervisor anyway).
· Paravirtualized environments make performance gains by eliminating some of the emulation that occurs in full-virtualization environments.
· Other designations include hybrid virtual machines (HVMs) and hardware-assisted techniques.
From a security perspective, a more significant risk profile exists in a Type 2 environment, where a host operating system with user applications and interfaces runs outside of a virtual machine, at a level lower than the other virtual machines. Because of this architecture, the Type 2 environment increases risk by exposing the host operating system itself to attack. For example, a laptop running VMware with a Linux virtual machine on a Windows XP system inherits the attack surface of both operating systems, plus the virtualization code of the VMM.
Definition: Randomly permute N elements by exchanging each element e_i with a random element from i to N. It consumes Θ(N log N) bits and runs in linear time.
Generalization (I am a kind of ...)
ideal random shuffle, permutation.
See also Johnson-Trotter, pseudo-random number generator.
Note: The algorithm can be viewed as a reverse selection sort. It is described in some detail as algorithm 3.4.2P in [Knuth97, 2:145].
For even a rather small number of elements (or cards), the total number of permutations is far larger than the period of most pseudo-random number generators. This implies that most permutations will never be generated. (After documentation for random.shuffle() in Python, particularly v2.6.1.)
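The definition translates almost line for line into code. Here is a minimal sketch using Python's random module as the source of randomness; it is equivalent in spirit to random.shuffle().

```python
import random

def fisher_yates_shuffle(a):
    """Shuffle list a in place, giving each permutation equal probability
    (up to the limits of the underlying pseudo-random number generator)."""
    n = len(a)
    for i in range(n - 1):               # last element would only swap with itself
        j = random.randrange(i, n)       # random index from i to n-1, inclusive
        a[i], a[j] = a[j], a[i]          # exchange element i with element j
```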
R. A. Fisher and F. Yates, Example 12, Statistical Tables, London, 1938.
Richard Durstenfeld, Algorithm 235: Random permutation, CACM 7(7):420, July 1964.
Entry modified 9 March 2015.
Cite this as:
Paul E. Black, "Fisher-Yates shuffle", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 9 March 2015. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/fisherYatesShuffle.html | <urn:uuid:f8e5fcef-7d94-409b-83a3-7ebefec7ece6> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/fisherYatesShuffle.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00417-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.815964 | 345 | 2.953125 | 3 |