Ethernet Bridging (e) - Flash
- Course Length:
- 1 hour of eLearning
NOTE: While you can purchase this course on any device, currently you can only run the course on your desktop or laptop.
As the communications industry transitions to wireless and wireline converged networks to support voice, video, data and mobile services over IP networks, a solid understanding of Ethernet and its role in networking is essential. Ethernet is native to IP and has been adopted in various forms by the telecom industry as the Layer 1 and Layer 2 technology of choice. Ethernet bridging and associated capabilities are used extensively in the end-to-end IP network and a solid foundation in IP and Ethernet has become a basic job requirement in the carrier world. Starting with a brief history, the course provides a focused basic level introduction to the fundamentals of Ethernet Bridging as a key capability of Ethernet based nodes. It is a modular introductory course only on Ethernet Bridging basics as part of the overall eLearning IP fundamentals curriculum.
This course is intended for those seeking a basic level introduction to Ethernet Bridging.
After completing this course, the student will be able to:
• Introduce Ethernet Bridges and explain how they operate
• Introduce Ethernet Switches and explain how they differ from Ethernet Bridges
• Discuss Spanning Tree Protocol and its variations
• Introduce the concept of Multilayer Switching
• Discuss the use of Link Aggregation Group in Ethernet networks
1. Ethernet Bridge
1.3. Learning Bridge
2. Ethernet Switch
2.3. Ethernet Switching
2.4. Full Duplex operation
3. Spanning Tree Protocol (STP)
4. Multilayer Switch (MLS)
5. Link Aggregation Group | <urn:uuid:abefa090-a097-4369-a89b-fe6929148071> | CC-MAIN-2017-04 | https://www.awardsolutions.com/portal/elearning/ethernet-bridging-e-flash?destination=elearning-courses | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00468-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.864817 | 360 | 3.21875 | 3 |
Matchbox computers -- small but powerful open-source devices -- are a good way to build your own system and/or carry it with you. We look at the phenomenon and offer a slideshow of examples.
You could sum up most of the history of computing in one word: smaller. Each successive generation of computing devices has been tinier, more energy-efficient and more powerful than the previous one. Now we've reached a point where an entire PC can be crammed into a space not much larger than a matchbox or a stick of gum.
This new wave of "matchbox computers" (also known as "thumb PCs") has ushered in not just new form factors but new kinds of applications. Hobbyists have flocked to these tiny systems, attracted by their size, low cost and inherent hackability.
Raspberry Pi was one of the first open-source matchbox computers: inexpensive, tiny and energy-efficient.
Many of these devices are built either on open, freely reproducible architectures, or with well-documented components. Their appeal isn't just limited to the U.S., either: the Raspberry Pi was developed in the UK, and the Odroid is a South Korean product.
Even big PC makers are now getting into the game: Dell's Project Ophelia, an Android device on a stick, will run Android-based apps when plugged into a display's HDMI port. It debuted at CES this year; the developer's version should ship this month and the consumer version in July. It is expected to have a $100 price tag.
What led to matchbox devices?
Matchbox computing owes its existence to the confluence of several different trends:
Linux, GNU and FOSS. The Linux kernel and the GNU toolchain, products of the culture of free/open-source software (FOSS), have been used as the common substrate for any number of hardware designs. These range from set-top boxes and networking gear (with a little help from derivative projects like BusyBox) to Android-powered devices. Android itself, too, has been put to use in the same way.
Consequently, many matchbox devices are powered either by a Linux distribution of some kind, or by a stock edition of Android. Linux alone is most widely used on devices where the user interface is either minimal -- no more than a command-line interface is needed, if even that -- or where one needs to be custom-built for the device. Android, on the other hand, is useful for system-on-chip devices with built-in multimedia -- graphics, sound, HDMI-out, etc.
System-on-chip (SoC) devices. A good deal of the most recent engineering for SoCs has been for the smartphone and ultrabook market, with smartphones showing off SoC designs at their most compact, power-efficient and feature-laden. And since SoCs consist of very few components by design, that makes them easier to build a device around. Wireless networking is also included by default in most SoC designs, which makes them even more compact because a network port doesn't have to be included.
Standard interfaces. The standardization of interfaces for power, data and networking on most computers has made it easier for matchbox devices to implement those features consistently. USB, for instance, can be used both to supply power and to attach external devices, which means one less connector type needs to be provisioned on the device, making it smaller and less complex. The same goes for HDMI for video and audio, and SD card slots for external storage.
Hardware hackers. The subculture of tinkerers -- making devices do things that weren't intended by the manufacturer -- isn't by itself new. But the combination of open or standardized hardware designs plus free/open software on top of that has opened up whole new realms of possibility for what could be built and what those devices could be used for.
Many matchbox uses
Like computers themselves, matchbox devices have been finding a broad range of applications:
Replacing a full PC. Whether or not you can replace a full-blown PC with a matchbox system depends entirely on what you're using it for. For those who have already built a good deal of their workflow inside Linux or Android, using a matchbox system isn't much of a stretch. Users need to bring their own keyboards and displays (and sometimes software and operating systems), but those things aren't hard to add.
It's also possible for these systems to run as servers and not just workstations. Most matchbox devices have only the CPU and memory to handle modest user loads, but for certain applications (such as remote control) that may be all that's needed.
Media playback. Many matchbox systems have storage extensions, either in the form of an SD card slot or a port for an external USB drive. Such a system can be docked with a display and sound system via a USB or HDMI port -- or even a conventional audio jack -- and used to play back music or videos on the go. Matchbox systems can also be used to build full-blown media centers.
Prototyping. Matchbox systems can be used as the basis for hardware prototypes for devices yet to be built. The final product could incorporate the matchbox hardware itself, or be custom-made using the same core components as the matchbox system (in other words, the same SoC, just in a different configuration). The makers of the Raspberry Pi device, for instance, have an entire forum dedicated to discussions of projects that can be built with the board.
Low-power scenarios. Running a full PC, even a laptop, requires a power draw that might not be possible in some circumstances -- for example, if you need a system to run for a long time on a single battery charge. Matchbox systems draw very little power and can often be powered by no more than the wattage provided by a USB port.
Robotics and control. Many matchbox systems are used not just to build devices, but to be the control systems for other devices. The Gumstix Users wiki, for example, tracks Gumstix projects related to robotics, and various Raspberry Pi projects have been created to do things like control room lighting or water plants.
Obviously, matchbox systems aren't going to challenge any of the consumer-targeted systems currently in the market, even low-cost ones such as the $249 Samsung Chromebook. However, hobbyists, developers and tinkerers may find some of these tiny computers an intriguing challenge and/or a useful tool.
We've assembled a slideshow of sixteen matchbox devices, from the most bare-bones and hobbyist-oriented to full-blown, ready-to-run PCs. Check it out -- there's a device for most every price range and level of technical expertise out there.
This article, Matchbox computers: Small is beautiful (and powerful), was originally published at Computerworld.com.
Serdar Yegulalp has been writing about computers and information technology for over 15 years for a variety of publications.
Security Spotlight: Biometrics
Although they are the focus of a great deal of hype and hope, biometrics also represent interesting technology to help boost network and system security. Basically, a biometric device is one that uses some measurable item or element from a person attempting system access to permit or deny such access. Thus, voice prints, thumb-, finger- or hand-print scans, retinal scans, facial scans and so forth all constitute valid, available biometric technologies.
You’d think that biometric technology would be pretty much unbreakable, but it’s always smart to remember that no single physical technology is enough to guarantee strong security. (See Bruce Schneier’s outstanding discussion of this topic in “Homeland Insecurity” at http://www.theatlantic.com/issues/2002/09/mann.htm). But when used in a multi-factor access-control environment (with passwords, pass phrases, questions based on pre-screened questionnaires and so forth), biometrics can help solve the problem of remembering the physical element. (Since presumably what’s being scanned is part of the person who’s attempting access, they should always be ready for scanning.)
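The multi-factor idea can be sketched as a toy access check in Python: access is granted only when both a passphrase and a biometric match score pass. The threshold value and the 0-to-1 scoring scale are invented for illustration, not drawn from any particular product.

```python
import hashlib
import hmac

# Assumed tuning value for the biometric matcher, illustration only.
BIOMETRIC_THRESHOLD = 0.80

def verify_passphrase(supplied: str, stored_hash: str) -> bool:
    digest = hashlib.sha256(supplied.encode()).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, stored_hash)

def grant_access(supplied: str, stored_hash: str, match_score: float) -> bool:
    # Both factors must pass; neither alone is sufficient.
    return verify_passphrase(supplied, stored_hash) and match_score >= BIOMETRIC_THRESHOLD

stored = hashlib.sha256(b"correct horse").hexdigest()
print(grant_access("correct horse", stored, 0.92))  # True: both factors pass
print(grant_access("correct horse", stored, 0.40))  # False: biometric too weak
print(grant_access("wrong guess", stored, 0.95))    # False: passphrase wrong
```

The point of the sketch is that a stolen password alone, or a spoofed scan alone, fails the combined check.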
The expense involved can range anywhere from several hundred dollars to several thousand dollars, depending on what’s being scanned. This generally means that expensive scanners are used as part of physical access controls to locked-down sites or rooms, whereas less expensive scanners might be used on individual computers.
For more information on biometrics, the Connecticut Department of Social Services has put together a great collection of pointers to papers and overviews on biometrics in general and on currently available biometric technologies. (See http://www.dss.state.ct.us/digital/ditutor.htm). | <urn:uuid:bff125e7-4b40-4531-997b-d581cf3b5c5f> | CC-MAIN-2017-04 | http://certmag.com/security-spotlight-biometrics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00312-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916858 | 375 | 2.65625 | 3 |
DARPA Develops Hollow-Core Fiber Technology
July 23, 2013
Through DARPA's Compact Ultra-Stable Gyro for Absolute Reference (COUGAR) program, hollow-core fiber design and production capability is now available in the United States.
Using a hollow, air-filled core dramatically improves light's performance by forcing it to travel through channels of air -- and DARPA’s unique spider-web-like, hollow-core fiber design demonstrates single-spatial-mode, low-loss and polarization control.
Single-spatial-mode means that light can take only a single path, enabling higher bandwidth over longer distances; low-loss means that light maintains intensity over longer distances; and polarization control means that the orientation of the light waves is fixed in the fiber, which is necessary for applications such as sensing, interferometry and secure communications.
These properties, according to DARPA, are key in advanced military applications such as high-precision fiber optic gyroscopes for inertial navigation. | <urn:uuid:d3993879-3d1f-4fa5-b74e-5122ccf55e1f> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-DARPA-Develops-Hollow-Core-Fiber-Technology.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00036-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898395 | 211 | 2.78125 | 3 |
A research team at the University of Nebraska-Lincoln is preparing to begin the second stage of a three-year green energy pilot project that’s researching the viability of integrating wind and solar power into a municipal power grid to power traffic and street lights.
Funded by a $1 million grant from the U.S. Department of Transportation, the project is called Energy Plus Roadways. Those involved hope to develop a smart grid system for roadway infrastructure that will prove practical for widespread, large-scale use.
The end goal is negative energy consumption, said Anuj Sharma, a UNL assistant professor of civil engineering. In other words, more energy would be produced by the municipal grid than is consumed. Ideally a smart grid system would generate enough renewable energy to power all the traffic and street lights in the city and still have some left over to sell back to the power companies.
“That’s where the name Energy Plus Roadways comes from,” Sharma said.
The second phase of the project, scheduled to begin in May, involves developing the electronic control system that would allow a network of wind and solar power generators to intelligently distribute electricity where it’s needed, said Jerry Hudgins, the project’s leader and also a professor of electrical engineering at UNL. The team will run simulations to test how such a system could adapt to power grids of different sizes and layouts.
The first phase of the project, which began last summer, involved monitoring a single 30-foot Bergey XL 1.0 wind turbine connected to a traffic light at the intersection of 84th Street and Nebraska Highway 2 in Lincoln. The traffic light receives its power from a battery that’s charged by the wind turbine, but the traffic light’s power supply is still connected to the main power grid as a contingency. Any excess power generated can be sold back to the electric company for about four cents per kilowatt hour.
Hudgins predicts this type of system will become increasingly attractive as the price of energy increases and green technology improves. “Initially, it’s going to add some reliability to the system,” Hudgins said. “It’s also going to have some cost savings increase over time.”
UNL assistant professor of electrical engineering Wei Qiao will develop the control system that will allow the grid to make decisions on power distribution and usage.
A solar panel in a shady area or a turbine that receives little wind can use the control system to borrow the power it needs from another generator in a sunny or windy area that’s experiencing an energy surplus. The system is designed to run without the help of a human operator and uses existing power lines to transmit the power from one area of the grid to another.
Each power station will be able to transition between three operating modes. In one operating mode, the power station will rely entirely on power from the main grid. This might occur when it’s cloudy or not windy for an extended period. The second operating mode will allow a station to draw power exclusively from solar and wind power on the grid. The third mode allows a working generator to isolate itself from the rest of the grid in case of a problem nearby, minimizing the scope of the problem.
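The three operating modes could be sketched as a simple controller decision. The following hypothetical Python fragment is illustrative only; the inputs and threshold logic are assumptions, not details of the UNL design.

```python
from enum import Enum

class Mode(Enum):
    GRID_ONLY = 1   # draw entirely from the main grid (calm, cloudy spell)
    RENEWABLE = 2   # draw exclusively from solar/wind on the grid
    ISLANDED = 3    # isolate from the grid to contain a nearby fault

def select_mode(renewable_watts: float, load_watts: float, fault_nearby: bool) -> Mode:
    # Faults take priority: isolate to minimize the scope of the problem.
    if fault_nearby:
        return Mode.ISLANDED
    # Enough renewable generation to cover the load: skip the main grid.
    if renewable_watts >= load_watts:
        return Mode.RENEWABLE
    return Mode.GRID_ONLY

print(select_mode(900.0, 600.0, False))  # Mode.RENEWABLE: surplus wind/solar
print(select_mode(100.0, 600.0, False))  # Mode.GRID_ONLY: calm, cloudy day
print(select_mode(900.0, 600.0, True))   # Mode.ISLANDED: contain the fault
```

A real controller would add hysteresis so stations do not flap between modes as wind gusts come and go.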
In the third stage of the project, scheduled to begin in summer 2012, the team will implement a micro-grid consisting of up to eight power generators, eight traffic light poles and two intersections.
Those working on the pilot have calculated that an effective smart grid for traffic lights and other transportation infrastructure would ultimately save cities money on power costs, increase utility uptime, reduce pollution and reduce reliance on nonrenewable energy. If all 300,000 roadway intersections in the U.S. that have traffic signals were powered by renewable energy on a smart grid, nearly $50 million would be saved each year, according to University of Nebraska-Lincoln researchers. | <urn:uuid:a1b181db-3fd2-41fb-ae5f-669b60fd67d9> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Wind-and-Solar-Power-Traffic-Lights.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94258 | 803 | 2.5625 | 3 |
Have you ever witnessed your phone making calls, sending text messages, or browsing the internet without your permission?
It is possible that a hacker has compromised your phone and is controlling it through your personal assistant, Siri or Google Now.
This latest hack lets attackers make calls, send texts, browse infected sites, and carry out other activities harmful to your Android or iOS device. Most remarkably, the researchers found that attackers can use Google Now or Siri without uttering a single word.
Researchers at ANSSI, the French government's information security agency, have revealed that a hacker can remotely control an Android or iOS device by sending radio commands from up to 16 feet away.
How does this hack work? Let's unveil the secret.
The most basic requirement is that the iPhone or Android phone must have headphones plugged in, and the attacker must have a radio transmitter.
The attack uses radio waves to trigger voice commands on any iPhone or Android phone that has microphone-enabled headphones plugged in.
The headphone cord acts as a radio antenna; a hacker can exploit it to inject signals that the phone interprets as voice commands from the user's microphone.
This enables a hacker to operate the phone without ever speaking near it. Possible actions include:
- Sending text messages
- Dialing the hacker's number, turning the phone into an eavesdropping device
- Browsing malware websites
- Sending phishing and spam messages through social channels
- Forcing the phone to call a premium-rate number that earns the attacker money
The hack only needs:
- A headphone-linked iPhone or Android phone
- Siri enabled from the lock screen (Apple's default setting)
A more powerful version of the attack, with a range beyond 16 feet, would require larger batteries and could only fit inside a car, the researchers say.
Is it possible to understand each separate detail of a project well enough to answer every question anyone could ask, but not know anything about the project as a whole?
Not in any normal universe, but researchers in quantum physics claim to have found evidence not only that knowing the details – or at least being able to prevent anyone proving you don't know the details -- does not mean you understand the whole.
"Let’s say that you study a book that has two pages: one and two," said co-author Stephanie Wehner of the Centre for Quantum Technologies at the National University of Singapore in an interview with Physorg.com.
"You are going to sit an exam in this class, and you only had a small amount of time to study. You don’t know everything that is on page one and page two. Can I point to a page that you don’t know, thereby exposing your ignorance? Classically, this is indeed true: For example, if you only know the information on page one, I can point to page two and expose your ignorance. I can catch you.”
In the quantum world that approach won't work because even a partly informed examinee can guess the answer to any question – almost perfectly. (Here's the abstract; here's a PDF of the press release; here's the title, which is the most complex thing you will read today: “Does Ignorance of the Whole Imply Ignorance of the Parts? Large Violations of Noncontextuality in Quantum Theory.”)
“Our example demonstrates that this effect can be rather dramatic. But does it always have to be this way? And, how do we tell," she asked.
Whether the effect has an impact depends on the type of memory involved – conventional or quantum. I think she's talking about computer memory, quantum states and quantum information, not human versions of those things.
Usually what happens at the quantum level has nothing to do with the behavior of anything big enough for us to perceive – which probably means we either don't understand what we're seeing on the quantum level or don't know what we're doing on the human one.
If states and conditions in the quantum world apply to the real (and biological one) that should mean tests wouldn't work to demonstrate whether a human had studied or not.
That doesn't mean it's not valid on the subatomic scale or that discovering a particle can lie so convincingly a human can't catch it won't have a big impact on the development of quantum computers, which may end up being as much more powerful than regular computers as the atom bomb was compared to the hundreds of thousands of bombs dropped over Europe during WWII.
It would be nice to know, though, if quantum computing lets us make breakthroughs in space travel or time travel or really, really good tax preparation that doesn't give us a headache, that the quantum computer we're relying on for an accurate course to Mars, for example, knows what it's talking about and isn't just making up the answers. | <urn:uuid:646c7095-8f56-4e88-8c73-ab7e06c2948e> | CC-MAIN-2017-04 | http://www.itworld.com/article/2737910/hardware/would-a-quantum-computer-give-great-answers--or-lie-to-cover-its-ignorance-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00238-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967007 | 629 | 2.515625 | 3 |
Passwords have had their day. With so many to remember for each application that we access and each device that we use, the majority of users resort to insecure practices in order to remember them. Even when we do remember them, exploits such as keyloggers can be used to trace and subvert them. Passwords are almost universally acknowledged as not being sufficiently secure to guard against the threats that we face today.
Some time ago, back in the 1990s, it was originally envisioned that public key cryptography technology would provide the answer for a more secure authentication mechanism than passwords. However, commercial implementations of public key cryptography proved to be costly, complex to deploy and difficult to use. Although in use, that use is limited to a few applications and markets. Strong multifactor mechanisms were touted as an alternative and have come into widespread use. However, they are not based on common standards, are often used just for specific applications and are not ideally suited for use in the online world.
The Fast Identity Online (FIDO) Alliance was launched in early 2013 to address the problems of too many passwords and lack of interoperability among existing strong authentication devices. It has developed a new standard for authentication based on a device-centric model in which users register to a server via their device using the private key that it holds. Based on public key cryptography, this standard has enormous potential to finally solve the password conundrum in a cost-effective, usable and secure manner that safeguards user privacy. Public key cryptography can finally come of age.
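The device-centric model can be illustrated with a toy challenge-response round trip in Python: the server issues a challenge, the device signs it with a private key that never leaves the device, and the server verifies with the registered public key. The tiny textbook-RSA key below is purely for illustration; real FIDO authenticators use hardware-protected keys and standardized algorithms, and nothing here is production crypto.

```python
import hashlib

# Toy textbook-RSA keypair with deliberately tiny primes.
p, q = 61, 53
n = p * q            # 3233, the public modulus
e = 17               # public exponent (registered with the server)
d = 2753             # private exponent (held only by the device)

def digest(message: bytes) -> int:
    # Hash the message and reduce into the modulus range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # The device signs the server's challenge with its private key.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # The server checks the signature using only the public key (n, e).
    return pow(signature, e, n) == digest(message)

challenge = b"server-nonce-1234"     # a fresh random value in a real protocol
sig = sign(challenge)
print(verify(challenge, sig))            # True: genuine device
print(verify(challenge, (sig + 1) % n))  # False: tampered signature fails
```

Because only the public key is stored server-side, a breach of the server's database exposes nothing a hacker can replay, which is a large part of the standard's appeal over passwords.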
Almost all devices have mechanisms that can be leveraged, including embedded secure elements, which are defined by GlobalPlatform as tamper-resistant platforms that are capable of securely hosting applications and confidential cryptographic data and that control interactions in a secure manner. Smart card vendor association, Eurosmart, predicts that shipments of secure elements for near field communications devices will grow by 64% in 2014 to reach 435 million units, many of which will be embedded chips for smartphones. The new standard will also be used in trusted platform modules, USB tokens and smart cards. The standard will also be capable of supporting a range of authentication technologies, including all sorts of biometrics. Interest in the use of fingerprint biometrics, in particular, is being driven by the growth of the inclusion of fingerprint readers in smartphones.
The raison d’être behind the new FIDO standard is to provide a simpler, stronger means of online authentication supporting any device, any application and any authenticator. Since the launch of the FIDO alliance in early 2013, it has fast expanded beyond its six founder members and currently counts some 80 organisations as members, including device manufacturers, operating system and browser manufacturers, authentication vendors, payment services providers and technology giants.
According to one of the founding members of the FIDO alliance, Nok Nok Labs, this new standard not only provides secure and strong authentication for current and legacy authentication solutions, as well as existing devices, but will also help to provide security that will drive take-up of new technology innovations to ensure their rapid growth and widespread usage. It will do this by enabling end-to-end trust across the internet and inter-connected networks of devices. It will provide added impetus to the growth of cloud, mobile and ecommerce markets by providing a universal authentication mechanism that is highly secure, cost-effective and simple to use. According to CEO Phil Dunkelberger, ubiquitous use of cloud models and the promise of universal connectivity offered by the Internet of Things will never become a reality unless we fix the online authentication problem. This new standard will do much to ensure that that promise can become a reality. | <urn:uuid:440aa27e-2155-462f-b82d-c6c272710036> | CC-MAIN-2017-04 | http://www.bloorresearch.com/blog/fran-howarth/new-standard-for-simpler-stronger-online-authentication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00146-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958368 | 737 | 2.890625 | 3 |
Since the physical fiber optic cabling is expensive to implement for each and every service separately, its capacity expansion using Wave Division Multiplexing (WDM) is a necessity. WDM is a concept that describes the combination of several streams of data, storage, video, or voice on the same physical fiber-optic cable by using several wavelengths (frequencies) of light, with each frequency carrying a different type of data.
There are two types of WDM architecture: Coarse Wave Division Multiplexing (CWDM) and Dense Wave Division Multiplexing (DWDM). CWDM systems typically provide 8 wavelengths, separated by 20nm, from 1470nm to 1630nm. Some DWDM systems provide up to 144 wavelengths, typically with no more than 2nm spacing, roughly over the same range of wavelengths.
The main advantage of CWDM is the cost of the optics which is typically 1/3rd of the cost of the equivalent DWDM optic. This difference in economic scale, the limited budget that many customers face, and typical initial requirements not to exceed 8 wavelengths, means that CWDM is a more popular entry point for many customers. With PacketLight’s WDM equipment, a customer can start with 8 CWDM wavelengths but then grow by introducing DWDM wavelengths into the mix, utilizing the existing fiber and maximizing return on investment.
By utilizing CWDM and DWDM or the mixture of thereof, carriers and enterprises are able to transport from services of 2Mbps up to 10Gbps of data over 36 different channels. This white paper explains this capability of such expansion and its associated cost.
Best of Both Worlds
Typically CWDM solutions provide 8 wavelengths capability enabling the transport of 8 client interfaces over the same fiber. However, the relatively large separation between the CWDM wavelengths allows expansion of the CWDM network itself with an additional 32 wavelengths utilizing DWDM technology, thus expanding the existing infrastructure capability up to 36 wavelengths and utilizing the same equipment as part of the integrated solution.
Additionally, the typical CWDM spectrum supports data transport rates of up to 4.25Gbps, while DWDM is utilized more for large capacity data transport needs of up to 10Gbps. By mapping DWDM channels within the CWDM wavelength spectrum as demonstrated below, much higher data transport capacity on the same fiber optic cable can be achieved without any need for changing the existing fiber infrastructure between the network sites. | <urn:uuid:c641632b-8f2a-4784-9806-297348970f80> | CC-MAIN-2017-04 | http://www.fs.com/blog/dwdm-over-cwdm.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00174-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932277 | 508 | 3.125 | 3 |
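The channel arithmetic above can be sketched in a few lines of Python. The 20 nm CWDM spacing follows the figures above, while the 0.8 nm DWDM spacing is an assumed typical value (exact ITU grid definitions differ slightly).

```python
# Generate the 8-channel CWDM grid at 20 nm spacing starting from 1470 nm.
cwdm_start_nm, cwdm_spacing_nm, cwdm_count = 1470, 20, 8
cwdm = [cwdm_start_nm + i * cwdm_spacing_nm for i in range(cwdm_count)]
print("CWDM channels (nm):", cwdm)

# DWDM channels packed at ~0.8 nm spacing: many fit inside the width of
# a single 20 nm CWDM passband, which is what makes the hybrid expansion work.
dwdm_spacing_nm = 0.8
per_cwdm_band = int(cwdm_spacing_nm / dwdm_spacing_nm)
print(f"DWDM channels per 20 nm CWDM band at {dwdm_spacing_nm} nm spacing:",
      per_cwdm_band)
```

Run as written, this yields channels at 1470 through 1610 nm and shows that a single CWDM passband has room for dozens of densely spaced DWDM channels.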
Pop Quiz: Windows 8.1 IPsec Rules
Applies to the "Configure network connectivity" objective of Exam 70-687.
Q: A company's network security team needs to configure a secure communication link between multiple devices on the network.
Which Windows PowerShell cmdlet is used when creating an IPsec rule?
A. New-NetIPsecRule
B. Get-NetIPsecRule
C. Set-NetIPsecRule
D. Copy-NetIPsecRule
Answer is A. The PowerShell cmdlet used when creating an IPsec rule is New-NetIPsecRule. Get-NetIPsecRule will display an IPsec rule, Set-NetIPsecRule can be used to modify an existing rule, and Copy-NetIPsecRule can be used to copy an entire IPsec rule and associated filters to the same or a different policy store.
Quick Tip: This is an example of a common IPsec rule that creates a rule that could be used in a domain isolation scenario, where incoming traffic is only permitted from other domain member computers. The default main mode negotiation uses Kerberos v5 for computer and user authentication;
PS C:\> New-NetIPsecRule -DisplayName "Domain Isolation Rule" -InboundSecurity Require -OutboundSecurity Request -PolicyStore contoso.com\Domain_Isolation
Reference: Network Security Cmdlets in Windows PowerShell
Bonus Question: Can Windows 7, Windows 8, and Windows 8.1 devices all be connected to the same HomeGroup? (The answer, of course, will be revealed next time!)
Answer to bonus question from last week:
The command that can be used to view all wireless profiles on a Windows 8.1 device is netsh wlan show profiles.
Andy Barkl, MCT/MCITP/MCSA, A+, Network+, Security+, CCNA, has been studying technology for 30 years. For the last 15 of those years, he has spent much of his time imparting the knowledge and experience he has gained through more than 300 IT exams to help others be prepared and successful. He teaches classes in Phoenix, Ariz., where he has lived most of his life. He can be reached by e-mail at firstname.lastname@example.org.
Modern Computers (Circa 2007)
Computing machines are very common in a modern industrialized society. The number of functions performed by these devices is almost endless. Here is a partial list.
1. General numerical computation, involving both integers and real numbers.
2. Device automation and control.
3. Message switching, including routers and firewalls on the Internet.
4. Computer–generated graphics.
5. Graphics–based computer games.
6. Computer–enhanced video.
(How about those extra lines superimposed on football fields?)
Computers come in two broad classes:
General purpose: adaptable to a wide variety of programs.
Special purpose: designed for one purpose only; e.g., routers.
Special-purpose computers are usually limited to high-volume markets; it is often easier to adapt a general-purpose computer to do the job.
General Purpose Computers
This course will focus on general purpose computers, also called “Stored Program Computers” or “Von Neumann Machines”.
In a stored program computer, a program and its starting data are read into the primary memory of the computer and then executed. Early computers had no memory in which programs could be stored.
The first stored program computer designed was the EDVAC (Electronic Discrete Variable Automatic Computer), designed by John Von Neumann (hence the name), John Mauchley, and J. Presper Eckert. It was described in a paper published on June 30, 1945, with Von Neumann as the sole author.
The first stored program computer to become operational was the EDSAC (Electronic Delay Storage Automatic Computer), completed May 6, 1949. It was developed by Maurice Wilkes of the University of Cambridge.
Components of a Stored Program Computer
The four major components of a modern stored program computer are:
1. The Central Processing Unit (CPU)
2. The Primary Memory (also called “core memory” or “main memory”)
3. The Input / Output system
4. One or more system busses to allow the components to communicate.
Major Components Defined
The system memory (of which this computer has 512 MB) is used for transient storage of programs and data. This is accessed much like an array, with the memory address serving the function of an array index.
The Input/Output system (I/O system) is used by the computer to save data and programs, and to accept input data and communicate output data. Technically, the hard drive is an I/O device.

The Central Processing Unit (CPU) handles execution of the program.
It has four main components:
1. The ALU (Arithmetic Logic Unit), which performs all of the arithmetic and logical operations of the CPU, including logic tests for branching.
2. The Control Unit, which causes the CPU to follow the instructions found in the assembly language program being executed.
3. The register file, which stores data internally in the CPU. There are user registers and special purpose registers used by the Control Unit.
4. A set of three internal busses to allow the CPU units to communicate.
A System Level Bus, which allows the top–level components to communicate.
Reality Intrudes (Part 1 of Many)
The design on the previous slide is logically correct, but IT WON’T WORK.
IT IS TOO SLOW. Problem: A single system level bus cannot handle the load.
Modern gamers demand fast video; this requires a fast bus to the video chip.
The memory system is always a performance bottleneck. We need a dedicated memory bus in order to allow acceptable performance.
Here is a refinement of the above diagram.
This design is getting closer to reality. At least, it acknowledges two of the devices requiring high data rates in access to the CPU.
Reality Intrudes (Part 2 of Many)
We now turn to commercial realities, specifically legacy I/O devices.
When upgrading a computer, most users do not want to buy all new I/O devices (expensive) to replace older devices that still function well.
The I/O system must provide a number of busses of different speeds, addressing capabilities, and data widths, to accommodate this variety of I/O devices.
Here we show the main I/O bus connecting the CPU to the I/O Control Hub (ICH), which is connected to two I/O busses: one for slower (older) devices and one for faster (newer) devices.
The Memory Component
The memory stores the instructions and data for an executing program.
Memory is characterized by its smallest addressable unit:
Byte addressable: the smallest unit is an 8-bit byte.
Word addressable: the smallest unit is a word, usually 16 or 32 bits in length.
Most modern computers are byte addressable, facilitating access to character data.
Logically, computer memory should be considered as an array. The index into this array is called the address or "memory address". A logical view of such a byte-addressable memory might be written in code as:
Const MemSize = …                 // value elided in the original
byte Memory[MemSize]              // indexed 0 … (MemSize – 1)
The CPU has two registers dedicated to handling memory.
The MAR (Memory Address Register) holds the address being accessed.
The MBR (Memory Buffer Register) holds the data being written to the memory or being read from the memory. It is sometimes called the Memory Data Register.
The Simplistic Physical View of Memory
I call this the “linear view”, as memory is still modeled as one large linear array.
The N-bit address selects one of the 2^N entities, numbered 0 through (2^N – 1).
Read sequence: first copy the address into the MAR and command a READ; then copy the contents of the MBR.
Write sequence: first copy the address into the MAR and the data into the MBR; then command a WRITE.
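The two sequences can be sketched directly: a toy byte-addressable memory in Python, with the MAR and MBR modeled explicitly (the register names follow the text; the rest is illustrative):

```python
class Memory:
    """Toy byte-addressable memory driven by MAR/MBR registers."""

    def __init__(self, size):
        self.cells = bytearray(size)      # Memory[0 .. size-1]
        self.mar = 0                      # Memory Address Register
        self.mbr = 0                      # Memory Buffer Register

    def read(self, address):
        self.mar = address                # 1. address to MAR
        self.mbr = self.cells[self.mar]   # 2. command a READ
        return self.mbr                   # 3. copy the contents of the MBR

    def write(self, address, value):
        self.mar = address                # 1. address to MAR
        self.mbr = value                  # 2. data to MBR
        self.cells[self.mar] = self.mbr   # 3. command a WRITE

mem = Memory(1024)
mem.write(0x2A, 0x7F)
assert mem.read(0x2A) == 0x7F
```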
This is logically correct, but difficult to implement at an acceptable price.
What we want is a very large memory, in which each memory element is fabricated from very fast components. But fast means expensive.
What we can afford is a very large memory, in which each memory element is fabricated from moderately fast, but inexpensive, components.
Modern computers achieve good performance from a large, moderately fast, main memory by using two levels of cache memories, called L1 and L2. These work due to an observed property of programs, called the locality principle.
A typical arrangement would have a large L2 cache and a split L1 cache. The L1 cache has an Instruction Cache and a Data Cache.
Note that the Instruction Cache (I Cache) does not write back to the L2 cache.
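Why caches pay off can be seen with a minimal direct-mapped cache model (the geometry below is made up, not any particular CPU's): a sequential scan with good spatial locality hits far more often than it misses.

```python
class DirectMappedCache:
    """Hit/miss counter for a direct-mapped cache (no data, tags only)."""

    def __init__(self, n_lines=8, line_size=16):
        self.n_lines, self.line_size = n_lines, line_size
        self.tags = [None] * n_lines      # one tag per cache line
        self.hits = self.misses = 0

    def access(self, address):
        block = address // self.line_size # which memory block holds the byte
        line = block % self.n_lines       # which cache line that block maps to
        if self.tags[line] == block:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[line] = block       # fill the line on a miss

cache = DirectMappedCache()
for addr in range(64):                    # sequential scan: high locality
    cache.access(addr)
print(cache.hits, cache.misses)           # -> 60 4 (one miss per 16-byte line)
```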
Organization of Primary Memory
We turn our attention again to the primary memory. When we left it, we had a linear view with an N-to-2^N decoder. We shall study decoders in a later class. At present, it should be obvious that construction of a 32-to-4,294,967,296 decoder would be very difficult.
Memory on all modern computers is obviously built from smaller chips; each memory unit is constructed from a number of such smaller chips.
For example, a 1 GB memory might have four 256 MB memory modules. Each 256 MB module might in turn contain eight 32 MB units, and each 32 MB unit would be organized as eight 32 Mb chips. Each 32 Mb chip is organized as an 8,192-by-4,096 array of bits.
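The capacity arithmetic behind this hierarchy can be checked directly (the module and chip sizes follow the example; the exact partitioning is illustrative):

```python
BITS_PER_BYTE = 8

chip_bits = 8_192 * 4_096                     # one 8,192-by-4,096 bit array
assert chip_bits == 32 * 2**20                # = 32 Mb per chip

unit_bytes = 8 * chip_bits // BITS_PER_BYTE   # eight 32 Mb chips -> one 32 MB unit
assert unit_bytes == 32 * 2**20

module_bytes = 8 * unit_bytes                 # eight 32 MB units -> one 256 MB module
total_bytes = 4 * module_bytes                # four modules -> 1 GB
assert total_bytes == 2**30
```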
Primary memory is also called "core memory", "store", or "storage". Beginning with the MIT Whirlwind and continuing for about 30 years, the basic technology for primary memory involved "cores" of magnetic material.
All modern computer systems use virtual memory. At various times in the course, we shall give a precise definition, but here is the common setup.
In MS–Windows, the area of the system disk that handles virtual memory is called the paging file. My system has a 768 MB paging file.
The Central Processing Unit (CPU)
Traditionally, the CPU is considered as having four main components.
1. The Arithmetic Logic Unit
2. The three bus structure that feeds the ALU and accepts its results.
3. The control unit that interprets the machine language.
4. The register set, containing both general purpose (user) registers and special purpose registers. The latter include:
   MAR: the Memory Address Register.
   MBR: the Memory Buffer Register.
   PC: the Program Counter, pointing to the next instruction.
   IR: the Instruction Register, holding the current instruction.
Memory Creeps onto the CPU Chip
Modern computers, such as the P4, have placed both L1 caches and the L2 cache on the CPU chip itself. Here is a picture of the P4 chip, annotated by Intel.
In older computers, the main difference between CPU registers and memory was that the registers were on the chip and memory was not. This no longer holds.
Memory on the CPU Chip (Part 2)
With two L1 caches (the I cache and the D cache) and the L2 cache on the CPU chip, we look for another difference to distinguish user registers from memory.
The main difference is historical. It has to do with the way that the assembly language program accesses the device.
There are register–specific instructions and memory–specific instructions.
A modern computer (Pentium series excepted) will have between 8 and 32 user registers. These store temporary results for computations.
The Pentium register set (EAX, EBX, ECX, and EDX) is rather unusual and would be cheerfully ignored were the Pentium not such an important design.
Modern computer architecture usually involves a series of design tradeoffs.
Question: Should we place more general purpose registers on the CPU chip, or have a larger L1 Data Cache?
Answer: Each option provides about the same improvement in performance. Flip a coin or use some other criterion.
The ALU (Arithmetic Logic Unit)
The ALU performs all of the arithmetic and logical operations for the CPU.
These include the following:
Arithmetic: addition, subtraction, negation, etc.
Logical: AND, OR, NOT, Exclusive OR, etc.
This symbol has been used for the ALU since the mid 1950s. It shows two inputs and one output. The reason for two inputs is that many operations, such as addition and logical AND, are dyadic; that is, they take two input arguments.
Reflecting on the last 60 years of the history of computing machines, we see a development constrained by the available technology and economics.
We see a constant move towards devices with less cost and physical size, and more performance and reliability (longer time between failures).
As an example, the ENIAC seldom functioned for more than a few hours continuously before it suffered a failure.
Memory technology is a good example. We have four stages.
1. No memory (ENIAC).
2. Very unreliable memory, such as mercury delay lines and Williams tubes.
3. Very reliable memory, specifically magnetic core memory.
4. Very reliable and inexpensive memory, specifically solid state devices.
We now begin a look at the computer from a logical view.
The Fetch–Execute Cycle
This cycle is the logical basis of all stored program computers.
Instructions are stored in memory as machine language.
Instructions are fetched from memory and then executed.
The common fetch cycle can be expressed in the following control sequence.
MAR ← PC.   // copy the PC, which holds the address of the instruction, into the MAR
READ.       // read the addressed memory word into the MBR
IR ← MBR.   // place the instruction from the MBR into the IR
This cycle is described in many different ways, most of which serve to highlight additional steps required to execute the instruction. Examples of additional steps are: Decode the Instruction, Fetch the Arguments, Store the Result, etc.
A stored program computer is often called a “von Neumann Machine” after one of the originators of the EDVAC.
This Fetch–Execute cycle is often called the “von Neumann bottleneck”, as the necessity for fetching every instruction from memory slows the computer.
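The cycle is easy to simulate for a toy accumulator machine (a hypothetical three-instruction ISA, not MARIE's actual encoding):

```python
def run(memory, pc=0):
    """Fetch-execute loop for a toy accumulator machine."""
    acc = 0
    while True:
        mar = pc            # MAR <- PC
        mbr = memory[mar]   # READ
        ir = mbr            # IR <- MBR
        pc += 1
        op, arg = ir        # decode the instruction
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "STOP":
            return acc

program = [("LOAD", 2), ("ADD", 3), ("STOP", 0)]
assert run(program) == 5
```

Note that every instruction, even in this toy, requires a trip through memory before it can execute, which is exactly the bottleneck described above.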
Avoiding the Bottleneck
In the simple stored program machine, the following loop is executed.

Fetch the first instruction
Loop Until Stop
    Execute the instruction
    Fetch the next instruction
The first attempt to break out of this endless cycle was "instruction prefetch": fetch the next instruction at the same time the current one is executing.
As we can easily see, this concept can be extended.
Instruction–Level Parallelism: Instruction Prefetch
Break up the fetch–execute cycle and do the two in parallel.
This dates to the IBM Stretch (1959)
The prefetch buffer is implemented in the CPU with on-chip registers, either as a single register or as a queue. The CDC-6600 buffer had a queue of length 8 (I think). Think of the prefetch buffer as containing the IR (Instruction Register). When the execution of one instruction completes, the next one is already in the buffer and does not need to be fetched.
Any program branch (loop structure, conditional branch, etc.) will invalidate the contents of the prefetch buffer, which must be reloaded.
Instruction–Level Parallelism: Pipelining
Better considered as an “assembly line”
Note that the throughput is distinct from the time required for the execution of a single instruction. Here the throughput is five times the single instruction rate.
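The throughput claim is simple arithmetic; assuming five stages of 1 ns each (illustrative numbers):

```python
stages, stage_ns, n_instr = 5, 1.0, 1_000

unpipelined_ns = n_instr * stages * stage_ns        # every instruction takes 5 ns
pipelined_ns = (stages + (n_instr - 1)) * stage_ns  # fill the pipe once, then one per ns

speedup = unpipelined_ns / pipelined_ns
print(f"{speedup:.2f}x")   # -> 4.98x, approaching the 5x ideal as n_instr grows
```

The speedup never quite reaches the stage count because the pipeline must first fill, and (as the next slides show) branches and hazards drain it.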
What About Two Pipelines?
Code emitted by a compiler tailored for this architecture has the possibility to run twice as fast as code emitted by a generic compiler.
Some pairs of instructions are not candidates for dual pipelining.
C = A + B
D = A · C     // needs the new value of C here

This is called a RAW (Read After Write) dependency, in that the value C must be written to a register before it can be read for the next operation.
Stopping the pipeline for a needed value is called stalling.
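An issue unit (or compiler) detects such a dependency by intersecting one instruction's destination register with the next instruction's source registers; a minimal sketch:

```python
def raw_hazard(first, second):
    """first and second are (dest, sources) pairs, in program order.
    A RAW hazard exists if the second instruction reads the first's result."""
    dest, _ = first
    _, sources = second
    return dest in sources

i1 = ("C", {"A", "B"})    # C = A + B
i2 = ("D", {"A", "C"})    # D = A * C, reads the freshly written C
assert raw_hazard(i1, i2)        # must stall (or forward) before issuing i2
assert not raw_hazard(i2, i1)
```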
Having 2, 4, or 8 completely independent pipelines on a CPU is very resource-intensive and is not directly supported by careful analysis. Typically, the execution units are the slowest units by a large margin, so it is usually a better use of resources to replicate only the execution units.
What Is Executed? The Idea of Multilevel Machines.
In discussing the fetch–execute cycle, we claimed that each instruction is fetched and executed. We now ask about the type of instruction.
In order to answer this question more precisely, we introduce the idea of a multilevel machine and multiple levels of computer languages.
We begin this discussion by discussing three levels of languages.
High-Level Language    English-like statements          Z = X + Y
Assembly Language      Mnemonic codes                   Load X
Machine Language       Binary (shown here in            0x3101
                       hexadecimal form)                0x2102
The machine language used in this example is the MARIE design (CPSC 2105)
The Multilevel Machine
Following Andrew Tanenbaum(1), we define a four-level machine.
Each level of the machine corresponds to a language level.
Machine Language Language Type
M3 L3 High level language such as C++ or Java
M2 L2 Assembly language
M1 L1 Binary machine language
M0 Control Signals Microarchitecture level
Following Tanenbaum, we define a virtual machine as a hypothetical computer that directly executes language at its level. For example, M3 as a virtual machine directly executes high level language programs.
The student should be aware that there is another, very important, use of the term virtual machine, with an entirely different definition. We use that later.
(1) Structured Computer Organization (5th Edition) by Andrew S. Tanenbaum, ISBN 0-13-148521-0. Dr. Tanenbaum defines six levels.
Options for Executing a High Level Language Program
There are three options for executing an L3 program. Each has been tried.
Direct execution. This has been tried; it is much less flexible than the other two approaches, much more difficult to implement, and less efficient.
Translation. Translate the L3 program to a lower level language, such as L2 or L1. The lower level languages are much more closely based on the computer hardware, and easier to execute. For a HLL, this step is called compilation.
Interpretation. Write a program in a lower level language, either L2 or L1, that takes the L3 program as input data and causes the computer to achieve the desired effect.
The JVM (Java Virtual Machine) is a virtual machine that appears to execute the Java program directly. In actual fact, the Java code is first translated into byte code, and the JVM interprets that byte code.
Levels from the “Bottom Up”
The lowest levels of the computer were not shown on the above diagram. These are the digital logic level and the analog devices upon which the level is based.
The microarchitecture level, the first real level, shows all of the components of the CPU (ALU, Control Unit, internal busses, user registers, control registers), the set of control signals, as well as the method of generating these signals.
At this level, the registers are connected to the ALU to form a data path, over which the data flow: registers to ALU, then ALU back to a register.
At this level, the basic design question is how to build the control unit.
The ISA (Instruction Set Architecture), the next level up, describes the binary machine language instructions, their mnemonic representations, and the general purpose registers that can be accessed by a machine language program.
The Higher Level Language level, the top level, represents the view of the Instruction Set Architecture as seen through the compiler or interpreter for the higher level language.
How Does the Control Unit Work?
The binary form of the instruction is now in the IR (Instruction Register).
The control unit decodes that instruction and generates the control signals necessary for the CPU to act as directed by the machine language instruction.
The two major design categories here are hard–wired and microprogrammed.
Hardwired: The control signals are generated as the output of a set of basic logic gates, the inputs of which derive from the binary bits in the Instruction Register.
Microprogrammed: The control signals are generated by a microprogram that is stored in a Control Read-Only Memory (CROM). The microcontroller fetches a control word from the CROM and places it into the µMBR (micro-MBR), from which the control signals are emitted.
The microcontroller can almost be seen as a very simple computer within a more complex computer. This simplicity was part of the original motivation.
How to Handle Complexity in a HLL
Modern computer design practice is driven by the fact that almost all programs, including operating systems, are written in a HLL (High Level Language).
For interpreted programs, the interpreter itself is written in a HLL.
Almost everything executing on a modern computer is thus the output of a compiler. We now adjust the ISA to handle compiler output.
But where do we put the complexity associated with processing a modern HLL?
We could have a straightforward compiler that emitted complex machine language instructions for execution at the microarchitecture level. This approach requires a very sophisticated control unit, which is hard to design.
We could have a very complex compiler (still easy to write) that emitted more machine language instructions, each of which was very simple. This approach allows a very simple control unit, which is easy to design and test.
A hard–wired control unit for the complex ISA of the first approach was found to be very difficult to design and test. For that reason, a simpler micro–control unit was designed and microprogrammed.
Modern Design Realities
Some assumptions that drive current design practice include:
1. The fact that most programs are written in high-level compiled languages.
2. The fact that all modern compilers are designed to emit fairly simple machine language instructions, assuming a simple ISA.
3. The fact that a simpler instruction set implies a smaller control unit, thus freeing chip area for more registers and on-chip cache.
4. The fact that current CPU clock cycle times (0.25 – 0.50 nanoseconds) are much shorter than the access times of memory devices, either cache or primary memory.
5. The considerable experience in writing sophisticated compilers that can handle very complex constructs and emit very efficient machine code.
NOTE: The appearance of a new memory technology with enhanced performance would require a completely new design approach. This would be welcome, but quite a challenge.
Modern Design Principles
1. Implement the microarchitecture level to provide direct hardware execution of the more common instructions, with micro-routines for the more complex instructions. Fortunately, the more complex instructions are rare.
2. Use pipelining and maximize the rate at which instructions are issued.
3. Minimize the number of instruction formats and make them simpler, so that the instructions are more easily and quickly decoded by the control unit.
4. Provide plenty of registers and the largest possible on-chip cache memory. A large number of registers helps the compiler generate efficient code.
5. Minimize the number of instructions that reference memory. The preferred practice is called "Load/Store", in which the only operations to reference primary memory are register loads from memory and register stores into memory. This implies that many operations, such as addition and logical AND, operate only on the contents of CPU general-purpose registers.
NOAA will lead an international effort to pinpoint the locations of more than 40 Global Positioning System (GPS) satellites in Earth orbit, which is vital to ensuring the accuracy of the GPS data that millions worldwide rely upon every day for safe navigation and commerce.
NOAA personnel will compile and analyze satellite orbit data from 10 analysis centers worldwide to ensure the accuracy of GPS information. For the next four years NOAA's National Geodetic Survey will serve as the Analysis Center Coordinator for the International Global Navigation Satellite Systems Service, a voluntary federation of more than 200 organizations that provide continuous global satellite-tracking data.
"For GPS receivers to provide accurate information, the precise location of positioning satellites as they orbit the Earth must first be determined," said David Zilkoski, director of NOAA's National Geodetic Survey. "NOAA looks forward to leading this international partnership to produce the highest quality satellite position data possible."
The Global Navigation Satellite Systems, which include the U.S.-based Global Positioning System, the Russian GLONASS system, and the upcoming European Galileo system, are used for accurately determining the geographic position of any point on Earth.
A GPS receiver calculates its position by measuring the time it takes a signal to travel from the satellite to the receiver. Because the signal travels at a known rate and the time is precisely measured using an atomic clock, the receiver can calculate its distance from the satellite. By repeating this process from four or more GPS satellites whose orbits are precisely known, the GPS receiver can determine its position.
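The distance computation in that paragraph is one line of arithmetic; a minimal sketch (the travel time below is a made-up value, and real receivers must also solve for their own clock error across four or more satellites):

```python
C = 299_792_458.0   # speed of light in m/s

def satellite_range(t_transmit, t_receive):
    """Distance covered by the signal, from its travel time."""
    return C * (t_receive - t_transmit)

# A made-up travel time of 67 ms corresponds roughly to GPS orbital altitude.
d_km = satellite_range(0.0, 0.067) / 1000
print(round(d_km))   # -> 20086
```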
NOAA is dedicated to enhancing economic security and national safety through the prediction and research of weather and climate-related events and information service delivery for transportation, and by providing environmental stewardship of our nation's coastal and marine resources. Through the emerging Global Earth Observation System of Systems (GEOSS), NOAA is working with its federal partners, more than 70 countries and the European Commission to develop a global monitoring network that is as integrated as the planet it observes, predicts and protects. | <urn:uuid:44e47b0a-425c-402a-925e-5be97fd8e1f1> | CC-MAIN-2017-04 | http://www.govtech.com/geospatial/NOAA-to-Ensure-GPS-Navigation-Accuracy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00431-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91503 | 402 | 3.5 | 4 |
At this speed you can download a movie in about 0.2 ms.
Researchers at Technical University of Denmark's (DTU) High-Speed Optical Communications (HSOC) team have been able to transfer data at 43 Tbps with a single laser in the transmitter, setting a new world data transfer record.
Eclipsing the earlier record of 32 Tbps set by Germany's Karlsruhe Institute of Technology, the latest breakthrough was achieved by using a new single multi-core optical fibre from the Japanese telecoms major NTT.
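The headline claim of downloading a movie in roughly 0.2 ms is easy to sanity-check; assuming a movie of about 1 GB:

```python
movie_bits = 8 * 10**9             # a ~1 GB movie, expressed in bits (assumption)
link_bps = 43 * 10**12             # 43 Tbps
seconds = movie_bits / link_bps
print(f"{seconds * 1000:.2f} ms")  # -> 0.19 ms, consistent with the ~0.2 ms claim
```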
Researchers noted that data speed competition has been the driving force behind the development of technology that can compete with the exponential growth of internet traffic, which is presently deemed to be rising by about 50% every year.
Furthermore, the increasing transfer speeds are claimed to offer environmental benefits. Emissions from the Internet’s overall energy consumption currently equate to over 2% of overall man-made carbon emissions.
DTU noted: "It is therefore essential to identify solutions for the internet that make significant reductions in energy consumption while simultaneously expanding the bandwidth.
"This is precisely what the DTU team has demonstrated with its latest world record.
"DTU researchers have previously helped achieve the highest combined data transmission speed in the world – an incredible 1 petabit per second – although this involved using hundreds of lasers." | <urn:uuid:97fe4277-7237-42b9-9387-4ee3eb85a630> | CC-MAIN-2017-04 | http://www.cbronline.com/news/enterprise-it/it-network/danish-researchers-set-new-world-data-transfer-record-at-43tbps-050814-4335184 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00367-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94342 | 276 | 2.546875 | 3 |
Language is the principal tool we have for bringing minds together. But technology can amplify the power of language and bring more minds together, in more ways. – Richard Brodhead, President, Duke University
Enable collaboration among students and faculty anywhere, anytime.
- Prepare students to contribute to a global society
- Engage students who grew up in a digital world
- Unify students and faculty in international locations
A 140-seat Virtual Lecture Hall using Cisco TelePresence to extend the classroom to global locations.
The ability to share course content in new ways, including video and online communities.
- Enabled remote experts and students to participate in classes as if they were physically present
- Empowered students to collaborate with professors and peers anytime, anywhere
- Gave students experience with the advanced collaboration tools they will use in the workplace | <urn:uuid:438ccfc4-0e42-4aeb-842c-c756e043be51> | CC-MAIN-2017-04 | http://www.cisco.com/c/en/us/services/it-case-studies/duke-university-telepresence.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00275-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924968 | 168 | 2.671875 | 3 |
Definition: A graph in which the number of edges is much less than the possible number of edges.
Generalization (I am a kind of ...)
See also dense graph, complete graph, adjacency-list representation.
Note: A directed graph can have at most n(n-1) edges, where n is the number of vertices. An undirected graph can have at most n(n-1)/2 edges.
There is no strict distinction between sparse and dense graphs. Bruno Preiss' definition of sparse and dense graphs has problems, but may help. First, for any single graph, one can always choose a k. Second, a class of graphs might be considered sparse if |E| = O(|V|^k) for some 1 < k < 2, where |E| is the number of edges and |V| is the number of vertices.
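The edge counts in the note translate directly into a density measure (the thresholds in the example are illustrative, not part of the definition):

```python
def density(vertices, edges, directed=True):
    """Fraction of possible edges present: |E| / n(n-1), halved if undirected."""
    possible = vertices * (vertices - 1)
    if not directed:
        possible //= 2
    return edges / possible

# A graph with ~n edges on n vertices is sparse; ~n^2/2 edges is dense.
assert density(1000, 1500, directed=False) < 0.01
assert density(1000, 400_000, directed=False) > 0.5
```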
Preiss reference from Andreas Leiser <firstname.lastname@example.org> 22 December 2003
Entry modified 14 August 2008.
Cite this as:
Paul E. Black, "sparse graph", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 14 August 2008. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/sparsegraph.html | <urn:uuid:3a70c566-6eb9-44f2-93f4-d2283a1948cd> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/sparsegraph.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00001-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.896908 | 323 | 3.390625 | 3 |
Logging is essential for providing key security information about a web application and its associated processes and integrated technologies. Generating detailed access and transaction logs is important for several reasons:
Logs are often the only record that suspicious behavior is taking place, and they can sometimes be fed real-time directly into intrusion detection systems.
Logs can provide individual accountability in the web application system universe by tracking a user's actions.
Logs are useful in reconstructing events after a problem has occurred, security related or not. Event reconstruction can allow a security administrator to determine the full extent of an intruder's activities and expedite the recovery process.
Logs may in some cases be needed in legal proceedings to prove wrongdoing. In this case, the actual handling of the log data is crucial.
Failure to enable or design the proper event logging mechanisms in the web application may undermine an organization's ability to detect unauthorized access attempts, and the extent to which these attempts may or may not have succeeded.
On a very low level, the following are groupings of logging system call characteristics to design/enable in a web application and supporting infrastructure (database, transaction server, etc.). In general, the logging features should include appropriate debugging information such as time of event, initiating process or owner of process, and a detailed description of the event. The following are recommended types of system events to log in the application:
Reading of data
Writing of data
Modification of any data characteristics should be logged, including access control permissions or labels, location in database or file system, or data ownership.
Deletion of any data object should be logged
Network communications should be logged at all points (bind, connect, accept, etc.)
All authentication events (logging in, logging out, failed logins, etc.)
All authorization attempts should include time, success/failure, resource or function being authorized, and the user requesting authorization.
All administrative functions regardless of overlap (account management actions, viewing any user's data, enabling or disabling logging, etc.)
Miscellaneous debugging information that can be enabled or disabled on the fly. | <urn:uuid:59520ccb-0df3-4910-aa4c-3ad093fe47b2> | CC-MAIN-2017-04 | http://www.cgisecurity.com/owasp/html/ch09.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00423-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.914558 | 431 | 2.578125 | 3 |
If you want to send or store data and be sure it is safe from interception, you will use cryptography. Cryptography uses ciphers as mathematical virtual locks to scramble data so that it is not understandable if intercepted by unauthorized third parties.
There are different cryptography techniques, some of them are: encryption, hashing, and steganography.
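Hashing, for example, produces a fixed-size fingerprint of data that cannot be reversed back into the original. A quick sketch using Python's standard library:

```python
import hashlib

# The digest is a fixed-size fingerprint of the input; the input cannot
# be recovered from it.
digest = hashlib.sha256(b"attack at dawn").hexdigest()
print(digest)

# Changing even one character produces a completely different digest.
assert digest != hashlib.sha256(b"attack at dusk").hexdigest()
assert len(digest) == 64  # SHA-256 always yields 256 bits (64 hex chars)
```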
Cryptography can be differentiated by usage of different key types:
- Symmetric Key Encryption
- Asymmetric Key Encryption
Symmetric Key Encryption is sometimes known as Secret Key Cryptography. The main characteristic of this type of cryptography is that the same key is used to encrypt and decrypt the transferred data. Any change to the secret key makes decryption impossible.
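As a toy sketch of the symmetric idea, a repeating-key XOR shows the same key both locking and unlocking the data. This is illustrative only and not a secure cipher.

```python
# Toy symmetric cipher (repeating-key XOR) -- the defining property is
# that the SAME key encrypts and decrypts. Not secure; illustration only.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"k3y"
ciphertext = xor_cipher(b"meet me at noon", secret_key)

# The identical key recovers the plaintext...
assert xor_cipher(ciphertext, secret_key) == b"meet me at noon"
# ...while a changed key does not.
assert xor_cipher(ciphertext, b"bad") != b"meet me at noon"
```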
Asymmetric Key Encryption is known as Public Key Cryptography. The main characteristic of this type of cryptography is the use of two keys generated as a pair: one public and one private. The public key encrypts the data; only the corresponding private key can decrypt it. The best part of asymmetric cryptography is that it lets us share encrypted data and enables the receiver to decrypt that data without ever sending the decryption key across an unsecured network.
It’s a process in which you share your public key with the whole world, and every sender can encrypt the data they send to you with that public key. You are then the only one in the world holding the private key that can decrypt those messages, and that key was never sent to anybody.
It’s as if you sent a small lockable box – a safe – to someone, but without locking it. They can put something inside, lock it, and send it back to you. The transfer was secure: no one could open the box and see what’s inside on the way back to you. You, on the other hand, can open it easily. The best thing is that the key was never transferred between you and the other side.
The disadvantage of Public Key Cryptography shows in situations where you lose the private key, it leaks, or somebody brute-forces it (say, with a future quantum computer). At that moment you have to generate a new pair of public and private keys to be able to continue sending messages securely.
In 1976, Whitfield Diffie and Martin Hellman invented the public-key, or asymmetric-key, technique – the Diffie–Hellman key exchange protocol – which described a method for sending encrypted data across unsecured media without the need to send the decryption key too, based on the idea of using two different but mathematically related keys.
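The exchange they described can be sketched with toy numbers. The tiny prime and private values below are for illustration only; real deployments use very large parameters.

```python
# Toy Diffie-Hellman exchange -- both sides derive the same shared secret
# even though only g^a mod p and g^b mod p ever cross the network.
p, g = 23, 5          # public parameters: prime modulus and generator
a, b = 6, 15          # private values, never transmitted

A = pow(g, a, p)      # Alice sends A = g^a mod p  (here 8)
B = pow(g, b, p)      # Bob sends   B = g^b mod p  (here 19)

shared_alice = pow(B, a, p)   # Alice computes B^a mod p
shared_bob = pow(A, b, p)     # Bob computes   A^b mod p
assert shared_alice == shared_bob == 2   # identical shared secret
```

An eavesdropper who sees only p, g, A, and B faces the discrete logarithm problem to recover a or b, which is what makes the exchange safe at realistic key sizes.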
They didn’t come up with the mathematics for generating the private–public key pairs; this was done two years later by Ronald Rivest, Adi Shamir, and Len Adleman, who invented the RSA algorithm.
How Asymmetric Key Cryptography works
From a technical perspective, every cryptographic algorithm in asymmetric key cryptography is based on a mathematical problem with no known efficient solution. Some of the most common examples are integer factorization, elliptic curve relationships, and discrete logarithms. The technique rests on an easy way to generate a public and private key pair and to use the keys alternately for encryption and decryption, while it remains computationally infeasible to calculate the private key from its corresponding public key. This mathematical trick lets the system publish the public key without affecting security; security then depends on keeping the private key secure.
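A toy RSA example makes the "mathematically related keys" concrete. With primes this small the private key is trivially recoverable by factoring, which is exactly why real keys use primes hundreds of digits long; the parameters here are the well-known textbook values p = 61, q = 53.

```python
# Toy RSA key generation and round trip -- illustration only, not secure.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```

The security argument is visible in the code: publishing (e, n) is harmless only because recovering d requires factoring n into p and q, the hard problem named above.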
Security in current crypto methods
Symmetric Key Encryption issues
There are a few major issues with symmetric encryption that have become the main motivators for scientists to investigate better encryption systems. The major issue with traditional symmetric cryptography is that a single secret key both encrypts and decrypts the message. If somebody other than the sender and receiver comes into possession of that secret key, they can snoop on private communication by decrypting messages between the trusted parties.
It usually doesn’t stop there. Whoever holds the secret key can not only read the messages but also alter their content without the trusted parties knowing about the changes: decrypt the message, change the content, encrypt it again, and send it on to the receiver.
A second issue with symmetric cryptography is selecting the secret key properly: what is the best mechanism to generate a key, and how do we decide whether the selected key is secure enough that nobody could guess it or brute-force it within a limited time frame?
The biggest issue with symmetric keys, and the main reason asymmetric methods exist, is the problem of distributing the key to the receiver. The sender generates the key and uses it to encrypt the message; since the system is symmetric, the same key must be available at the receiver’s side for the message to be decrypted and read. If the sender is in Croatia and the receiver is in Sri Lanka, it is easy to see that the channel over which the key is sent must itself be secure. It’s a catch-22.
Asymmetric Key Encryption Security Consideration
In order to secure the communication channel used to exchange secret keys, asymmetric key encryption is used. As described in the previous section, asymmetric key encryption provides a technique – secure to date – for encrypting data without sending the private key (used for decryption) to the receiver.
Asymmetric crypto is considered secure, so it is not listed here as another issue justifying the need for a new cryptographic mechanism. The problem is not even the ever-increasing calculation speed of today’s supercomputers, which would still need billions of years to derive the right private key from a public key. Even taking Moore’s law into account, it is clear that the issue is not the increasing speed of current computer architectures. Calculations on our computers are based on binary number manipulation, trying roughly one key-pair candidate per CPU cycle; a 3 GHz processor executes about 3,000,000,000 instructions per second, and many instructions are needed for even a single trial (checking whether two primes produce the product we are looking for – one of the many identical steps of a brute-force search through prime products). This process is fairly slow, and merely speeding up the CPU while keeping the same brute-force technique will not get us very far in breaking AES encryption: breaking current AES with a 128-bit key would take approximately a billion billion years. Making the process faster still leaves you with billions of years of trials.
The real issue is new quantum computer architectures, which in the near future could perform many calculation steps at the same moment and thereby break asymmetric key security by checking millions upon millions of prime combinations simultaneously.
If we only start thinking about new cryptologic methods at that point, it will surely be too late to bring banking systems and similar systems that depend on secure communication back online in time.
Quantum cryptology
It is clear that all of the issues above concern the keys used in current cryptographic mechanisms and the security of their transmission. Quantum cryptography solves exactly that using basic postulates of quantum mechanics. The Quantum Key Distribution technique will be described in an upcoming article.
Can NASA give away a working satellite?
- By Kevin McCaney
- Feb 14, 2012
What can NASA do with a space telescope that’s in good working order but whose funding has run out? Instead of letting it become just another piece of space junk, the space agency wants to give it away.
In an unprecedented move for an operational spacecraft, NASA is negotiating to turn operation of the Galaxy Evolution Explorer over to the California Institute of Technology, Spaceflight Now reports.
The satellite, known as GALEX, has been providing data on the ultraviolet properties of nearby galaxies since 2003 and, among other discoveries, helped confirm the existence of dark energy, according to Popular Science.
It already has lasted far longer than its expected 29-month lifespan, and except for one detector that shorted out in 2009, it’s working fine. But NASA’s Senior Review Panel, which in 2006 had voted to continue the mission, decided in 2010 that other operations had higher priorities. GALEX was put into standby mode Feb. 7, though data archiving and analysis will continue into the summer of 2012, Spaceflight Now reported.
NASA is hoping to reach a deal by March 31 with CalTech, which has led GALEX’s research.
The transfer of operational control would be made under the Stevenson-Wydler Technology Innovation Act, which allows for government agencies to donate computer and research equipment to educational and nonprofit institutions. A lot of PCs have been given to schools under the act’s provisions, but never anything floating in space.
The question is whether CalTech wants to assume control of GALEX. Although it wouldn’t have to pay for the satellite — Stevenson-Wydler prohibits agencies from accepting money for equipment — manpower and money are involved in maintaining its operation.
GALEX, which has a 19.7-inch telescope and to date has been managed by NASA’s Jet Propulsion Laboratory, most recently has been surveying the Magellanic Clouds, the galactic plane and the stars also being studied by NASA’s Kepler space telescope for signs of planets, according to Spaceflight Now.
During its nine years in space, GALEX has produced an unprecedented archive of information on how the basic structures of the universe evolve, according to CalTech. Its data has been used to catalog millions of galaxies spanning 10 billion years.
Among its other mission highlights, GALEX has explored “teenage galaxies,” discovered a gigantic comet-like tail behind a speeding star, found rings of new stars around old galaxies and witnessed a black hole devouring a star, Universe Today reports.
CalTech officials are expected to give NASA an answer by March 31. If they choose not to continue the mission, GALEX will join other spent satellites and spacecraft in the great junkyard in the sky.
Kevin McCaney is a former editor of Defense Systems and GCN. | <urn:uuid:44ba4454-dd80-450d-a424-80b3ce9d2ce3> | CC-MAIN-2017-04 | https://gcn.com/articles/2012/02/14/nasa-galex-satellite-donate-to-caltech.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00055-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952569 | 634 | 2.6875 | 3 |
Computers Make Mumbling Doctors Worse, Good Doctors Better
By M.L. Baker | Posted 07-12-2005
The study analyzed 54 patient visits conducted by nine clinicians before, one month after and seven months after a large medical practice installed computer screens in exam rooms.
"The most frequently used decision of where to place the computer was where it was convenient to drop the wires," said Richard Frankel, a medical sociologist at the Indiana University School of Medicine and author of the study. "That turned out to be in the corner of the room so that the clinician had his back to the patient." Not surprisingly, the placement interfered with doctor-patient communication. The study will appear in the August issue of the Journal of General Internal Medicine.
Another finding was less intuitive. Computers in exam rooms help doctors communicate with patients, but only if the doctor already communicates well. For doctors with poor interaction skills, computers exacerbate their problems. Such doctors tended to interact more with the computer and less with the patients.
Further, after six months' experience using computers during patient visits, clinicians did not improve in their abilities to organize information exchange, make regular eye contact with patients, or navigate through computer screens. One likely reason is that the practice had already used the electronic medical record for several years, so the only difference was that physicians were now being asked to record information during a patient visit rather than inputting information after the encounter.
For computers to improve communication, clinicians "have to move to the computer record being seen as an educational tool, not notes to self," said Frankel. Indeed, the skilled communicators did this instinctively. For example, they would tilt computer screens to show patients previous lab results or drug information. They would also clarify any discrepancies between what a patient told them and what appeared on the computer screen.
For less skilled communicators, the computer, rather than the patient, became the focus of the visit. Clinicians seemed confused if a patient described a reason for a visit that differed from that on the computer screen.
That doesn't mean that poor communicators should shun computers, said Frankel, adding that communication training improves patients' satisfaction with doctor visits. Clinicians will change when they're made aware of the issues, he said. "It's a relatively simple thing to encourage clinicians to turn the screen and make eye contact."
Another issue may not be so simple to solve, probably because it clashes with the ubercompetent image doctors are often expected to project. "Many physicians are embarrassed by poor typing skills," said Frankel, "so they won't use the EMR with the patient in the room."
Researchers have found a weakness in the AES algorithm. They managed to come up with a clever new attack that can recover the secret key four times easier than anticipated by experts.
In the last decade, many researchers have tested the security of the AES algorithm, but no flaws were found so far.
In 2009, some weaknesses were identified when AES was used to encrypt data under four keys that are related in a way controlled by an attacker; while this attack was interesting from a mathematical point of view, the attack is not relevant in any application scenario.
The new attack applies to all versions of AES, even if it is used with a single key. The attack shows that finding the key of AES is four times easier than previously believed; in other words, AES-128 is more like AES-126.
Even with the new attack, the effort to recover a key is still huge: the number of steps to find the key for AES-128 is an 8 followed by 37 zeroes.
To put this into perspective: on a trillion machines, that each could test a billion keys per second, it would take more than two billion years to recover an AES-128 key.
Note that large corporations are believed to have millions of machines, and current machines can only test 10 million keys per second.
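The figures above can be sanity-checked with a little arithmetic; the fleet size and per-machine rate are the hypothetical values quoted in the article.

```python
# Back-of-the-envelope check of the quoted figures.
keys = 2 ** 126                       # weakened key space: ~8 x 10^37
machines = 10 ** 12                   # a trillion machines (hypothetical)
keys_per_second = 10 ** 9             # a billion keys/second each
seconds = keys / (machines * keys_per_second)
years = seconds / (365 * 24 * 3600)
print(f"{keys:.3e} keys -> {years:.2e} years")  # roughly 2.7e9 years
```

The result, about 2.7 billion years, matches both the "8 followed by 37 zeroes" step count and the "more than two billion years" estimate.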
Because of these huge complexities, the attack has no practical implications on the security of user data; however, it is the first significant flaw that has been found in the widely used AES algorithm and was confirmed by the designers.
The AES algorithm is used by hundreds of millions of users worldwide to protect internet banking, wireless communications, and the data on their hard disks. In 2000, the Rijndael algorithm, designed by the Belgian cryptographers Dr. Joan Daemen (STMicroelectronics) and Prof. Vincent Rijmen (K.U.Leuven), was selected as the winner of an open competition organized by the US NIST (National Institute for Standards and Technology).
Today AES is used in more than 1700 NIST-validated products and thousands of others; it has been standardized by NIST, ISO, and IEEE and it has been approved by the NSA for protecting secret and even top secret information.
The attack is a result of a long-term cryptanalysis project carried out by Andrey Bogdanov (K.U.Leuven, visiting Microsoft Research at the time of obtaining the results), Dmitry Khovratovich (Microsoft Research), and Christian Rechberger (ENS Paris, visiting Microsoft Research). | <urn:uuid:ca94d86d-1cde-4d62-8ff5-ea25bbe4a317> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2011/08/17/researchers-identify-first-flaws-in-the-advanced-encryption-standard/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00359-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958236 | 515 | 3.03125 | 3 |
Localization is the process of adapting software to meet the requirements of local markets and different languages. Internationalization is the process of designing an application so that it can be adapted to various languages and regions without engineering changes. Localized applications should reflect correct cultural and linguistic conventions that the target market uses. Localization and internationalization make it possible for you to create a localized version of your software.
BlackBerry PlayBook tablets are sold all over the world and BlackBerry applications are translated into many languages, including languages that are not based on a Latin alphabet. Early in the design process, consider whether your application might require localization. If your application does not require localization now, consider designing your application so that it would be easy to localize it in the future. Be aware that even if your application might not be localized, some users might want to type text in other languages in your application. | <urn:uuid:e3aae27b-549d-4ffe-af26-525b7029b0da> | CC-MAIN-2017-04 | http://developer.blackberry.com/design/playbook/localization_tablets_1748541_11.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919933 | 177 | 2.9375 | 3 |
Jason | 5th December 2012
First off, there are a number of targets you should be aware of when filtering for @mentions of Twitter usernames:
twitter.user.id - The ID of the Twitter user that sent this Tweet
twitter.user.screen_name - The @username of the Twitter user that sent this Tweet
twitter.mentions - An array of Twitter @usernames that are mentioned in this Tweet
twitter.mention_ids - An array of Twitter user IDs that are mentioned in this Tweet
twitter.in_reply_to_screen_name - The @username of the Twitter user this Tweet is replying to. Note: This @username will also appear in the twitter.mentions and twitter.mention_ids arrays
twitter.retweet.user.id - The ID of the Twitter user that sent this retweet
twitter.retweet.user.screen_name - The @username of the Twitter user that sent this retweet
twitter.retweet.mentions - An array of Twitter @usernames that are mentioned in this retweet
Both Tweets and retweets
interaction.author.id - The author's ID on the service from which they generated a post. For example, their Twitter user ID
Secondly, there are two important syntax rules you should be familiar with when writing your CSDL. These rules are useful to know both when filtering for @mentions, and filtering for other keywords:
Use of the @ symbol when filtering for Twitter @usernames
You should not use the @ symbol when filtering for usernames. Twitter usernames are passed on to us from Twitter as the bare username, without the @ symbol. Further details of how our CSDL filtering engine works with regard to @mentions, URLs and punctuation can be found on the documentation page - The CSDL Engine : How it Works.
Use of the IN and CONTAINS_ANY operators
contains_any - Matches if one of your comma separated keywords or phrases are contained as words or phrases in the target field. For example, twitter.user.location contains_any “New, Old” will match locations such as “New York”, but not “Oldfield”.
in - Matches if your comma separated keywords or phrases are an exact match of the full content of the target field. For example, twitter.user.location in “New York” would match the location “New York”, but not “New York, NY”.
How to filter for users sending Tweets
The best targets to use if you want to filter on a list of users who are sending Tweets are twitter.user.id or twitter.user.screen_name. If you are only interested in people sending retweets, you would want to use the twitter.retweet.user.id or twitter.retweet.user.screen_name targets. If you would like to receive both Tweets and retweets, you will be better off using interaction.author.id in conjunction with ‘interaction.type == “twitter”’.
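For example, a filter for both Tweets and retweets sent by a list of users could combine the two targets like this (illustrative user IDs; exact operator support should be checked against the CSDL documentation):

```
interaction.type == "twitter" and interaction.author.id in [155505157, 165781228, 425158828]
```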
How to filter for Twitter users @mentioned in Tweets
You should use the twitter.mentions or twitter.mention_ids targets. People often try to filter incorrectly for Tweets containing mentions of @usernames using the following CSDL:
twitter.text contains_any "@DataSift, @DataSiftDev, @DataSiftAPI"
The DataSift filtering engine filters keywords by first stripping out any @mentions or links from the main body of text, and filtering them separately using the twitter.mentions targets and links augmentation respectively, so you should never be able to find a @mention by filtering on twitter.text or interaction.content.
Below is an example of a correct way to filter for @mentions of Twitter usernames within Tweets:
twitter.mentions in "DataSift, DataSiftDev, DataSiftAPI"
You could also use twitter.mention_ids to filter on the Twitter user ID, rather than the @username:
twitter.mention_ids in [155505157, 165781228, 425158828] | <urn:uuid:adc52db0-7ada-464a-93f9-894f87ae1ab8> | CC-MAIN-2017-04 | http://dev.datasift.com/blog/how-best-filter-twitter-mentions | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00477-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.83686 | 904 | 2.671875 | 3 |
Sophos has revealed new research into the use of other people’s Wi-Fi networks to piggyback onto the internet without payment. The research shows that 54 percent of computer users have admitted breaking the law, by using someone else’s wireless internet access without permission.
According to Sophos, many internet-enabled homes fail to secure their wireless connection properly with passwords and encryption, allowing freeloading passers-by and neighbours to steal internet access rather than paying an internet service provider (ISP) for their own. In addition, while businesses often have security measures in place to protect the Wi-Fi networks within their offices from attack, Sophos experts note that remote users working from home could prove to be a weak link in corporate defences.
Stealing Wi-Fi internet access may feel like a victimless crime, but it deprives ISPs of revenue. Furthermore, if you’ve hopped onto your next door neighbours’ wireless broadband connection to illegally download movies and music from the net, chances are that you are also slowing down their internet access and impacting on their download limit. For this reason, most ISPs put a clause in their contracts ordering users not to share access with neighbours – but it’s very hard for them to enforce this.
Have you ever used someone else’s Wi-Fi connection without their permission?
Sophos online survey, 560 respondents, 31 October – 6 November 2007.
Sophos recommends that home owners and businesses alike set up their networks with security in mind, ensuring that strong encryption is in place to prevent hackers from eavesdropping on communications and potentially stealing usernames, passwords and other confidential information. | <urn:uuid:a4de8049-8826-425e-a494-2f3e799c74cd> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2007/11/15/wi-fi-piggybacking-widespread/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00385-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936982 | 342 | 2.625 | 3 |
As we prepare for Cyber Monday and a holiday season of increased online shopping, NCSA advises that everyone take a moment to practice safe cyber behaviors.
These simple steps apply to everyone who connects to the Internet, whether from laptops, personal computers, mobile phones, or gaming consoles. Before you connect to the Internet, take a moment to make sure you’re prepared to share information or engage in a larger community.
Keep a clean machine:
- Keep security software current: Having the latest security software, web browser, and operating system are the best defenses against viruses, malware, and other online threats.
- Automate software updates: Many software programs will automatically connect and update to defend against known risks. Turn on automatic updates if that’s an available option.
- Protect all devices that connect to the Internet: Along with computers, smartphones, gaming systems, and other web-enabled devices also need protection from viruses and malware.
- Plug & scan: “USBs” and other external devices can be infected by viruses and malware. Use your security software to scan them.
Protect your personal information:
- Secure your accounts: Ask for protection beyond passwords. Many account providers now offer additional ways for you verify who you are before you conduct business on that site.
- Create Strong Passwords: Combine capital and lowercase letters with numbers and symbols to create a more secure password. When opening new accounts, use long and strong passwords.
- Provide Only Essential Personal Information: Only provide the minimal amount of information needed to complete a transaction. When providing personal information for any purchase or other reason, ensure that you know who is asking for the information, and why they need it.
- Unique account, unique password: Separate passwords for every account helps to thwart cybercriminals.
- Write it down and keep it safe: Everyone can forget a password. Keep a list that’s stored in a safe, secure place away from your computer.
- Own your online presence: When available, set the privacy and security settings on websites to your comfort level for information sharing. It’s ok to limit who you share information with.
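The "long and strong" password advice can be automated. A small sketch using Python's secrets module; the length and character set below are arbitrary choices, not a standard.

```python
import secrets
import string

# Draw from upper- and lowercase letters, digits, and symbols, as the
# guidance recommends; secrets uses a cryptographically strong RNG.
alphabet = string.ascii_letters + string.digits + string.punctuation

def make_password(length=16):
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # different on every run
```

Generating a fresh password per account (and storing the list somewhere safe) also satisfies the "unique account, unique password" tip above.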
Connect with care:
- When in doubt, throw it out: Links in email, tweets, posts, and online advertising are often the way cybercriminals compromise your computer. If it looks suspicious, even if you know the source, it’s best to delete or, if appropriate, mark as junk email.
- Get savvy about Wi-Fi hotspots: Limit the type of business you conduct and adjust the security settings on your device to limit who can access your machine.
- Protect your $$: When banking and shopping, check to be sure the websites you visit are security enabled. Look for web addresses with “https”, which means the site takes extra measures to help secure your information. “http” is not secure.
- Be Aware of Holiday Shopping Gimmicks: Be mindful of holiday shopping efforts to lure you. Cyber crooks will adjust to the holiday season, trying to get you to click through to deals that may appear too good to be true. They may also try to trick you by sending emails that something has gone wrong with an online purchase.
Be web wise:
- Know the Seller: Research online retailers before a first time purchase from a merchant (or auction seller) new to you. Search to see how others have rated them, and check their reviews. Do these things even if you are a return customer, as reputations can change.
- Stay current. Keep pace with new ways to stay safe online. Check trusted websites for the latest information, and share with friends, family, and colleagues and encourage them to be web wise.
- Think before you act: Be wary of communications that implores you to act immediately, offers something that sounds too good to be true, or asks for personal information.
- Back it up: Protect your valuable work, music, photos, and other digital information by making an electronic copy and storing it safely.
Be a good online citizen:
- Safer for me more secure for all: What you do online has the potential to affect everyone – at home, at work and around the world. Practicing good online habits benefits the global digital community.
- Post only about others as you have them post about you.
- Help the authorities fight cyber crime: Report stolen finances or identities and other cybercrime. | <urn:uuid:b8aa29fb-3a40-461f-9c66-7abb0ccc26fd> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2011/11/21/secure-practices-for-online-shopping/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00019-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.897201 | 915 | 2.734375 | 3 |
Organizations around the world are choosing to move from traditional physical data centers to virtual infrastructure, affecting every layer in the data center stack. This change will not only yield a scalable and elastic environment, but will also be more sustainable and secure. This new converged data center, sometimes referred to as a software-defined data center (SDDC), is centrally managed with capabilities to control demand capacity and resource allocation from a single dashboard. Ensuring that the SDDC is sustainable and secure requires a new approach to IT, and nowhere is this more apparent than in the software-defined network (SDN).
Traditional data centers relied on perimeter-based network security appliances placed at strategic choke points on the physical network. The SDN’s ability to dynamically adapt, introduce new abstraction layers and avoid traditional routing necessitates a more comprehensive security implementation. Network security must be multifunctional and adaptive, ensuring that security controls can react to change events in the converged data center. This discussion focuses specifically on how SDN components offer new opportunities for improved network security controls and compliance, organizational changes as technologists’ roles shift, and considerations when implementing security controls in virtual compute and network architecture.
A Look at the New Converged Data Center
The new converged data center, or software-defined data center, is a data-storage facility in which all elements of the infrastructure—networking, storage, CPU and security—are virtualized and delivered as a service. Deployment, provisioning, configuration and operation of the entire infrastructure is abstracted from hardware and implemented through software. Network virtualization is a concept of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned to a particular server or device in real time.
The transitional process to reach a software-defined environment starts with understanding what technical capabilities will need to change. When most IT professionals think of SDN, it’s usually in the context of the SDDC. An SDN without the proper security mechanisms in place leaves the data center professional with only a piece of the overall puzzle. The capability to manage capacity demand on the fly requires that the components that make up the architecture be standardized and supportive of the methods of virtualization and automation. For example, unlike traditional networks that default to “open,” thus requiring firewalls to provide isolation and segmentation, SDN defaults to “close.” Only when connections between devices are explicitly defined can they communicate. So the functions of firewall and network traffic monitoring, such as net flow, must adapt. It makes little sense to build out a virtual network and then secure it with traditional perimeter-based devices that hinder the capabilities of virtual fabric and undermine the automation process while providing little visibility and control into inner virtual processes. Determining the correct technical controls is just as important as choosing the foundational equipment to support the virtual strategy. To maximize efficiencies and return on investment, organizations must architect a security strategy from inception as part of the software-defined environment.
New Opportunities for Network Security in the SDN
Software-defined networking promises highly efficient management capabilities coupled with the simplicity and the exponential speed of execution, consuming the attention of vendors and consumers alike. There are many considerations when building out an SDN, one being security—a critical component that requires a new approach in the SDN. At a basic level, the definition of SDN is the ability to separate the data plane from the control plane, enabling centralized software-based control. Commands from the controller are then communicated back to the data plane for execution on the switches and routers. Ultimately, this approach enables a full perspective of the network and gives the administrator the ability to make changes centrally without a device-centric configuration on each switch or router. Although some vendors have taken a more immediate, tactical approach by providing direct access to the hardware via an API, this method does not allow for central control and is proprietary in nature.
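The control-plane/data-plane split described here can be sketched in a few lines of code. Everything below is illustrative only: the class names and rule format are invented and correspond to no real SDN API.

```python
# Minimal sketch of the control-/data-plane split: one controller
# computes forwarding rules centrally and pushes them to every switch.
# All names here are illustrative, not a real SDN API.

class Switch:
    """Data plane: only matches traffic against installed rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination -> output port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Default-closed: traffic with no explicit rule is dropped.
        return self.flow_table.get(dst, "DROP")

class Controller:
    """Control plane: global view, single point of instruction."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, dst, out_port):
        for sw in self.switches:      # one command, network-wide effect
            sw.install_rule(dst, out_port)

s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])
print(s1.forward("10.0.0.5"))         # DROP (nothing defined yet)
ctrl.push_policy("10.0.0.5", out_port=3)
print(s2.forward("10.0.0.5"))         # 3
```

Note the default-deny behavior: a destination with no explicitly installed rule is dropped, matching the "defaults to closed" posture described earlier.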
Central control of the network is accomplished by the logical centralization of control-plane capabilities, enabling the network administrator to deal with a pool of network devices as a single entity. Network flows are then controlled through a global abstraction layer, as opposed to the individual devices addressed by the OpenFlow protocol. Central command simplifies network administration by providing this single point of instruction and execution. Network allocation becomes achievable, with a more accurate perspective of flow demand and bandwidth constraints than ever before. All of these capabilities will aid in the ever-evolving challenges faced by today's IT workforce; the opportunity that comes with ease of administration is the capability to secure and ensure compliance in a way that capitalizes on the fundamental concepts of SDN.
Ensuring that security controls are multifunctional and adaptive and can react to change events in the network is an essential component of the converged data center. Software-defined security (SDS) meets these needs and protects the network from within the virtual infrastructure. What distinguishes SDS from perimeter security are three characteristics: (1) the use of logical zoning that relies on SDDC APIs to (2) implement policy-based multifunctional software-defined controls for continuous monitoring and mitigation of risk, (3) deployed at the lowest possible level on the virtual switch fabric. Compliance can then be achieved through continuous monitoring of the security event stream against the appropriate control framework.
The concept of logical segmentation, or trust zones, is in line with the concepts of a software-defined data center. Trust zones are logical, flexible policy envelopes that continuously detect and assign all virtual machines (VMs) to groups. They are enabled by the tight integration of software-defined security with the SDDC APIs. This automated zoning mechanism ensures that all VMs are identified and assigned to a policy group, providing real-time perfect inventory and security coverage. Segmentation enabled by trust zones provides precise visibility and management of all virtual networks, network devices, system components and sensitive data in the cloud.
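As a rough sketch of that automated zoning mechanism, consider matching each VM's attributes against an ordered list of zone rules, with a catch-all so no VM goes unassigned. The zone names and VM attributes below are invented for illustration.

```python
# Illustrative only: assigning VMs to trust zones by attribute, as the
# logical-zoning idea above describes. Zone names and attributes are
# invented; real SDS products integrate with the SDDC APIs instead.

ZONE_RULES = [
    ("PCI",      lambda vm: vm.get("handles_cardholder_data")),
    ("DMZ",      lambda vm: vm.get("internet_facing")),
    ("INTERNAL", lambda vm: True),    # catch-all: no VM goes untracked
]

def assign_zone(vm):
    for zone, rule in ZONE_RULES:
        if rule(vm):
            return zone

vms = [
    {"name": "web01", "internet_facing": True},
    {"name": "db01",  "handles_cardholder_data": True},
    {"name": "build01"},
]
inventory = {vm["name"]: assign_zone(vm) for vm in vms}
print(inventory)
# {'web01': 'DMZ', 'db01': 'PCI', 'build01': 'INTERNAL'}
```

The catch-all rule is the point: every VM ends up in some policy group, which is what gives the "real-time perfect inventory" property described above.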
Trust zones can be aligned with SDN logical groupings such as Cisco Application Centric Infrastructure’s (ACI) use of end-point groups (EPGs). They can thus ensure that assets automatically inherit security policies set for the containers, where the containers can be defined as EPGs. Proper segmentation requires that even if an out-of-scope system component is compromised, it cannot affect the security of sensitive data in a trust zone. The automation around trust zones provides a crucial benefit as a compensating control against any ACI change that violates policy, since manual tracking is nearly impossible owing to rapid, continuous changes in virtual infrastructure. An additional benefit is independent audit and control to assure accurate inventory mapping, thus enabling automatic production of net-flow diagrams across all systems and networks. Manually mapping accurate net flow is impractical if not impossible in the converged data center.
Policies automatically assigned to virtual assets placed in trust zones enable centrally controlled software-defined security to automatically and deterministically enforce those policies to protect sensitive data wherever it may be processed, stored or transmitted in the virtual environment. Trust-zone membership is automatic and based on any attribute of the asset. Policy-based security controls are orchestrated in SDS, continuously monitoring network components in the entire virtual environment to ensure adherence to policies. The benefit of continuous monitoring is the ability to immediately spot changes that may compromise the security and compliance posture of an organization. Policies can include automatic mapping to regulatory standards and must include vulnerability management to include network-based checks on VM and hypervisor configuration. Alerts for security-policy violations can be followed by manual or automatic policy-based enforcement actions to mitigate risk and maintain compliance.
Software-defined security is deployed and managed at the lowest level, on the virtual switch fabric, ensuring the highest level of visibility and control over events in the software-defined network. Management from a single processing hub and interface gives organizations significant operational efficiencies, beginning with a simplified infrastructure to support security controls and compliance for the virtual environment. Software-based security has a minimal processing footprint and is easily hosted by existing IT platforms. With multifunction security, organizations get systematic and maximum coverage without having to deploy and manage multiple tools. Automation of inventory tracking and monitoring as well as accurate reporting are available on demand. Converged data center technologists as well as security and compliance professionals can focus on a single interface, driving efficiency in the organization.
When approached in a manner consistent with agile practice, with predetermined rules and policies applied and monitored automatically, security as software in the data center is adaptive and elastic. Investing in a software-defined environment only to impose legacy security methods will not only prove ineffective, but can also be detrimental to the security posture and compliance model. Consider that the compelling factor driving the transition to the SDDC model is the ability to instantaneously adapt to organizational needs and requirements. With that in mind, organizations should without question do the same with their security and compliance strategy.
Organizational Changes and Shifting Roles
With the advent of virtualization, changes in data center architecture have also led to shifting roles in the organization. Software-defined networking and software-defined security present an opportunity for existing IT personnel to embrace change and expand their portfolios. The software-defined data center is radically reshaping traditional IT responsibilities and roles for network administrators, security administrators and operations. For IT to function efficiently, these changes must be understood and managed. Rather than reducing responsibilities or otherwise changing them for the worse, the software-defined data center is actually an opportunity to take on a larger scope, as the days of IT silos are over.
The integration of traditional operations and hypervisor administration with network and security management in the software-defined data center necessitates a workflow shift. The focus has turned away from workflow process management towards forward-looking development, supporting system enhancements and improvements. Historically, IT organizations have had multilevel approval processes for change control in the network topology and have dedicated resources to tuning devices or validating whether incidents are false positives. Applications have been based on the limitations of the network. Software-defined networks and security have reversed the focus. With the ability to institute predefined capabilities based on rules executed automatically, the network is now designed according to the needs of the applications. IT can spend less time on operations and more time building highly efficient applications. IT personnel can also contribute more to the organization by expanding their roles and becoming leaders in converged-infrastructure administration.
Five Key Considerations With SDN Adoption
As organizations plan to move to virtualized systems and software-defined networks, it is helpful to review the realistic challenges that they will face. To be able to take advantage of the benefits of a software-defined environment, architects should consider the following:
- Vulnerabilities: A converged network will inherit common operating-system vulnerabilities. Greater attention to patch management and configuration changes must be implemented. Continuous monitoring is critical and can be automated with the right tools in place.
- Access control: An SDN will have single points of compromise that lead to broader access. Strong access-control policies for authentication and authorization must be imposed on the system. It is best to use a role-based authorization mechanism to assign access levels, permissions and privileges.
- Failover: Design the SDN for failure, including adequate backups for speed-to-recovery, fault tolerance and failover capabilities.
- Control plane: The control plane requires elevated privileges. Manage the SDN control plane out of band, separating the path to it from the path for normal traffic. Remove all default configurations from SDN, as this is common information for those with negative intentions.
- Activity log: Implement a logging mechanism and net-flow analysis to track activity and report on compliance status. Continuous monitoring will ensure speed to resolution for any misconfiguration or unauthorized activity.
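The access-control consideration above boils down to a default-deny, role-based check. A minimal sketch, with invented roles and permissions:

```python
# Hypothetical role-based authorization for an SDN controller API,
# sketching the access-control consideration above. The roles and
# permission names are invented for illustration.

ROLE_PERMISSIONS = {
    "viewer":   {"read_topology"},
    "operator": {"read_topology", "modify_flows"},
    "admin":    {"read_topology", "modify_flows", "change_config"},
}

def authorize(role, action):
    # Unknown roles and unlisted actions are denied by default.
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("operator", "modify_flows")
assert not authorize("viewer", "change_config")
assert not authorize("unknown", "read_topology")   # default deny
```

In a real deployment the role lookup would come from a central identity system, but the shape of the check — explicit grants, deny by default — is the same.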
As is true with any technological movement, operations must adapt as data centers evolve. The key to adaptation in the software-defined network—and more specifically, the software-defined data center—is putting a plan in place that not only addresses infrastructure requirements, but also supports security and compliance policies. The virtual fabric introduces new vulnerabilities that can be managed with the right set of tools in place. Having the opportunity to implement security differently—and better—will pay dividends as security risks and compliance regulations increase. Perhaps most importantly, legacy processes can be dramatically improved on by boosting operational efficiency and promoting greater innovation as team members shift focus from process management to application development.
Security implementation and organizational opportunities faced by IT during the transition to converged data centers are daunting only because change is required. Solutions are available and continue to improve, supporting a more secure and solid virtual architecture than ever. It is essential to embrace the changes, and ultimately, both data centers and technologists will have a more competitive edge in their industries.
About the Author
Randal Asay joined Catbird as chief technology officer in September 2013, bringing over 15 years of experience in network security, architecture, design and implementation in a variety of retail and governmental environments. Randal has extensive experience implementing security practices for large enterprise solutions as well as e-commerce platforms.
Before Catbird, Randal served as a director of engineering at Walmart Stores Inc. In his time with the company he developed industry-leading code-analysis practices to support security and compliance initiatives and contributed to the development of an outsourcing governance body. Randal served in numerous roles in the information security department, contributing to enhancements of perimeter and network security as well as overall policy enforcement. In addition to his leadership in the information security domain, he led the e-commerce infrastructure teams through extensive growth, delivering capacity-management and technology-refresh methods spanning network design, storage capacity and database tuning. Before Walmart, Randal served government agencies in the information assurance division of the United States Air Force, where he focused on perimeter security, data security and incident response.
Randal received his bachelor of science degree from Weber State University, followed by a master's degree in information technology management and an MBA from Webster University.
1940s motion simulator rehabbed for tech testing at sea
The Navy is repurposing a piece of training equipment from World War II and converting it into a facility for testing and simulating how radar and other tactical communication systems would operate at sea.
The 96,000-pound "motion table" platform, rechristened the Ship Motion System (SMS), was originally used in the 1940s to simulate ship motion for training machine gun operators for action at sea.
Now the Naval Research Lab and the Office of Naval Research are mounting an effort to restore the system to test how radar, tactical electronic warfare, communications, optical sciences and remote sensing would operate with rolling and pitching on the deck of a Navy ship maneuvering at sea.
To use the SMS with today’s high precision systems, NRL will upgrade its control and monitoring systems. The foundation and two main decks will be reused. The hardware will be replaced with state-of-the-art equipment including motion control and monitoring, according to a Navy notice.
NRL engineers Richard Perlut and Chuck Hilterbrick are leading the effort to refurbish the system at the NRL Chesapeake Bay Detachment, on the shore of the Chesapeake in Calvert County, Md.
The NRL uses the site for research in radar, electronic warfare, optical devices, materials, communications and fire research. It says the facility is ideal for the SMS project, as well as for experiments involving simulating targets of aircraft and ships.
Posted by GCN Staff on Jan 17, 2014 at 8:55 AM | <urn:uuid:5374ec12-e5f5-43e6-a088-5d87c9049ac1> | CC-MAIN-2017-04 | https://gcn.com/blogs/pulse/2014/01/navy-ship-motion-system.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00378-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943529 | 320 | 3.03125 | 3 |
3.1.2 How fast is the RSA algorithm?
An "RSA operation," whether encrypting, decrypting, signing, or verifying, is essentially a modular exponentiation. This computation is performed by a series of modular multiplications.
In practical applications, it is common to choose a small public exponent for the public key. In fact, entire groups of users can use the same public exponent, each with a different modulus. (There are some restrictions on the prime factors of the modulus when the public exponent is fixed.) This makes encryption faster than decryption and verification faster than signing. With the typical modular exponentiation algorithms used to implement the RSA algorithm, public key operations take O(k²) steps, private key operations take O(k³) steps, and key generation takes O(k⁴) steps, where k is the number of bits in the modulus. "Fast multiplication" techniques, such as methods based on the Fast Fourier Transform (FFT), require asymptotically fewer steps. In practice, however, they are not as common due to their greater software complexity and the fact that they may actually be slower for typical key sizes.
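These figures come from the fact that square-and-multiply exponentiation does one squaring per bit of the exponent plus one multiplication per set bit, so a small public exponent means far fewer modular multiplications. A sketch with the classic textbook toy key (p = 61, q = 53, n = 3233, e = 17, d = 2753); real moduli are of course thousands of bits:

```python
# Square-and-multiply modular exponentiation — the "series of modular
# multiplications" behind every RSA operation. Toy textbook key only.

def mod_exp(base, exponent, modulus):
    result, mults = 1, 0
    base %= modulus
    while exponent:
        if exponent & 1:
            result = (result * base) % modulus
            mults += 1
        base = (base * base) % modulus    # one squaring per exponent bit
        mults += 1
        exponent >>= 1
    return result, mults                  # mults ~ modular multiplications

n, e, d = 3233, 17, 2753                  # p=61, q=53
cipher, cost_public  = mod_exp(65, e, n)      # "encrypt" with small e
plain,  cost_private = mod_exp(cipher, d, n)  # "decrypt" with large d

assert plain == 65
print(cost_public, cost_private)          # 7 17
```

The small public exponent takes 7 multiplications versus 17 for the private exponent here, which is the same asymmetry that makes encryption and verification faster than decryption and signing in practice.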
The speed and efficiency of the many commercially available software and hardware implementations of the RSA algorithm are increasing rapidly; see http://www.rsasecurity.com/ for the latest figures.
By comparison, DES (see Section 3.2) and other block ciphers are much faster than the RSA algorithm. DES is generally at least 100 times as fast in software and between 1,000 and 10,000 times as fast in hardware, depending on the implementation. Implementations of the RSA algorithm will probably narrow the gap a bit in coming years, due to high demand, but block ciphers will get faster as well.
For a detailed report on high-speed RSA algorithm implementations, see [Koç94]. | <urn:uuid:7fd88efb-cf24-498a-be59-d9a463eeecda> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/how-fast-is-the-rsa-algorithm.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00158-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928267 | 388 | 3.859375 | 4 |
Intel Unveils Teraflop Processor
The CPU was intended to be a research project into how to develop an effective and efficient multicore processor, since Intel sees cores, not clock speed, as the means for performance advancement in the future.
"To increase performance, we need to scale out to parallelism. But then you have new issues to deal with, such as all the threads, an operating system that can parallelize the threads, caches that can handle simultaneous processes," said Sean Koehl, technical strategist of the Terascale program at Intel.
The experiment proved insightful for Intel. "We have learned we're able to create a high speed mesh that can handle terabits of data per second," Koehl told internetnews.com. "Something you need to scale these processors is high bandwidth for core-to-core communication."
The chip is not based on x86 or any existing design. The 80 cores are all floating-point engines, each with its own high-speed controller for communicating with the other cores. The chip uses Intel's new 45nm process and the new metal transistor materials recently announced, which will allow for even smaller processor designs in the future.
This means energy efficiencies, such as the cores turning each other on and off to save power. "We think this is how we will scale performance in the future. The old manner of turning up clock speed is not proving energy efficient," said Koehl.
One of the ongoing efforts is learning how applications operate in parallel. The terascale chip literally breaks an application or task into 80 pieces and each core does its own small part in the computation process before the entire process is reassembled.
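That split/compute/reassemble pattern is the familiar fork-join idiom. A sketch using plain Python threads in place of the chip's 80 hardware cores:

```python
# Sketch of the split/compute/reassemble pattern described above:
# one task divided into 80 pieces, each worker handling its share,
# results recombined at the end. (Threads here stand in for cores.)
from concurrent.futures import ThreadPoolExecutor

CORES = 80
N = 800_000
CHUNK = N // CORES

def partial_sum(piece):
    lo = piece * CHUNK
    return sum(range(lo, lo + CHUNK))   # each "core" does its small part

with ThreadPoolExecutor(max_workers=CORES) as pool:
    pieces = list(pool.map(partial_sum, range(CORES)))

total = sum(pieces)                     # reassemble the 80 partial results
assert total == N * (N - 1) // 2        # same answer as the serial version
```

Summation parallelizes trivially; as the article notes, many real applications decompose far less cleanly, which is why some workloads approach the chip's peak throughput while others run much slower.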
However, learning how to effectively design applications to run in parallel is a long process. Because they all behave differently, some applications can achieve the teraflop of throughput this chip is said to offer, while others run considerably slower. | <urn:uuid:4106ce2d-530b-41ef-96b6-508815646daa> | CC-MAIN-2017-04 | http://www.cioupdate.com/print/news/article.php/3659701/Intel-Unveils-Teraflop-Processor.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00368-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949687 | 393 | 3.34375 | 3 |
In the Fall of 1998, Erick and Linda von Schweber, founders of the Infomaniacs think tank, published an article in eWeek predicting the evolution of "a new wave in computing, one that we believe promises within the next five years to deliver almost limitless cheap computing power and to change the balance of power among technology vendors."
They called their idea "computing fabrics" and described it as "a new architecture" that would "erase the distinctions between network and computer" by linking "thousands of processors and storage devices into a single system."
Today, only three years into their five-year time frame, the concept is called "grid computing," and, increasingly, it's hailed as the Next Big Thing. Fabric? Grid? The only difference is the density of the weave.
Recently, as I reread the von Schwebers' prescient article, I realized that the one major phenomenon they failed to anticipate proved as important as all the things they got right.
Although Linux was already gaining popularity on campuses and in research centers in 1998, the phenomenon they underestimated was the open-source movement. As it turns out, it is the open-source Globus Project that is making grid computing possible in a way, and at a speed, that none of the proprietary initiatives they cited could have hoped to achieve.
In 20/20 hindsight, it now seems to have been inevitable. Open-source development is the human version of distributed processing, and it boasts many of the same efficiencies that make grid computing attractive.
The problem is that we tend not to take free stuff seriously. At about the same time Bill Gates was writing a version of BASIC destined to seed the world's largest software company, Ward Christensen, an IBM systems engineer from Dolton, Ill., coded a little program he called modem, which enabled computers to send binary code to one another over phone lines. The difference was that Christensen released his code to the public domain, enabling other programmers to enhance and perfect a system of checksums and protocols that became the foundation of all computer communications.
Similarly, many of us tended not to take Napster seriously, and we smiled a little condescendingly at the thousands of volunteers donating computing cycles to SETI@Home's search for extraterrestrial intelligence. Both contributed to peer-to-peer networks and the distributed architecture of grid computing.
There is a lesson here. The next time some wizened vendor on the streets of New York counsels, "For free, take; for buy, waste time," take a moment to consider whether he might be right.
Are the best things in IT free? Let me know at firstname.lastname@example.org. | <urn:uuid:0e7eeeac-adc1-4856-b009-ece4bf197988> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Cloud-Computing/Open-Source-Is-the-Loom-for-Fabrics | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00276-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955986 | 602 | 2.578125 | 3 |
Laid Y., National Institute of Public Health; Boutekdjiret L., National Institute of Public Health; Oudjehane R., National Institute of Public Health; Laraba-Djebari F., University of Science and Technology Houari Boumediene; and 9 more authors.
Journal of Venomous Animals and Toxins Including Tropical Diseases | Year: 2012
Scorpion stings are a public health problem in the Maghreb region. In Algeria, epidemiological data were collected over the past twenty years by the Algerian health authorities. This study is an analysis of morbidity and mortality data collected from 2001 to 2010. Annual incidence and mortality due to scorpion envenoming were 152 ± 3.6 stings and 0.236 ± 0.041 deaths per 100,000 people (95% CI), respectively. The risk of being stung by a scorpion was dramatically higher in southern areas and central highlands due to environmental conditions. Incidence of envenoming was especially higher in the adult population, and among young males. In contrast, mortality was significantly higher among children under 15 years, particularly ages 1-4. Upper limbs were more often affected than lower limbs. Most stings occurred at night, indoors and during the summer. Data collected since 2001 showed a reduction of mortality by nearly 50%, suggesting that the medical care defined by the national anti-scorpion project is bearing fruit. © CEVAP 2012.
An optimization problem is one where you have to make the best decision (choose the best investments, minimize your company's costs, find the class schedule with the fewest morning classes, or so on). In optimization models then, the words "minimize" and "maximize" come up a lot when articulating an objective.
In data science, many of the practices, whether artificial intelligence, data mining, or forecasting, are really just some data prep plus a model-fitting step that is itself an optimization model. We'll start with a little practice with optimization now. Just a taste.
In Excel, optimization problems are solved using an Add-In that ships with Excel called Solver.
On Windows, Solver may be added in by going to File (in Excel 2007 it's the top left Windows button) > Options > Add-ins, and under the Manage drop-down choosing Excel Add-ins and pressing the Go button. Check the Solver Add-In box and press OK.
On Mac, Solver is added by going to Tools then Add-ins and selecting Solver.xlam from the menu.
A Solver button will appear in the Analysis section of the Data tab in every version.
All right! Now that Solver is installed, here's an optimization problem: You are told you need 2,400 calories a day. What's the fewest number of items you can buy from the snack stand to achieve that? Obviously, you could buy 10 ice cream sandwiches at 240 calories apiece, but is there a way to do it for fewer items than that?
Solver can tell you!
To start, download a copy of the Calories spreadsheet from the book's companion website at www.wiley.com/go/datasmart -- use the download link for chapter 1. Make a copy of the Calories sheet in the Concessions.xlsx Excel workbook, name the sheet Calories-Solver, and clear out everything but the calories table on the copy. If you don't know how to make a copy of a sheet in Excel, you simply right-click the tab you'd like to copy and select the Move or Copy menu.
To get Solver to work, you need to provide it with a range of cells it can set with decisions. In this case, Solver needs to decide how many of each item to buy. So in Column C next to the calorie counts, label the column How many? (or whatever you feel like), and you can allow Solver to store its decisions in this column. | <urn:uuid:1ff5dbe0-1d9d-4cab-8239-c104ad5992f1> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2487503/business-intelligence/how-to-solve-optimization-problems-with-excel-and-solver.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00488-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924832 | 531 | 2.75 | 3 |
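For comparison outside Excel, the same snack-stand question is a tiny integer program: minimize the item count subject to a calorie constraint. In the sketch below, every menu item except the 240-calorie ice cream sandwich is invented:

```python
# The snack-stand problem as a small integer program: minimize the
# number of items subject to calories >= 2400. Only the 240-calorie
# ice cream sandwich comes from the text; other items are made up.

TARGET = 2400
menu = {
    "ice cream sandwich": 240,
    "hot dog": 310,
    "nachos": 560,
    "soda": 150,
}

# best[c] = fewest items whose calories sum to exactly c.
# An optimal answer never overshoots by a full item, so searching
# up to TARGET + max(menu) is enough.
INF = float("inf")
top = TARGET + max(menu.values())
best = [INF] * (top + 1)
best[0] = 0
for c in range(1, top + 1):
    for cal in menu.values():
        if cal <= c and best[c - cal] + 1 < best[c]:
            best[c] = best[c - cal] + 1

answer = min(best[c] for c in range(TARGET, top + 1))
print(answer)   # 5 — far fewer than 10 ice cream sandwiches
```

With these hypothetical calorie counts the answer is 5 items (e.g., five orders of nachos); Solver reaches the same kind of conclusion for the book's real menu.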
If you read an empty file, the READ will immediately raise the AT END condition.
If you use a counter such as RECORD-COUNTER to track the number of records read, you will find that its value stays at zero.
The code will be something like the following (assuming the input file is defined as IN-FILE):

MOVE ZEROES TO RECORD-COUNTER.
READ IN-FILE
    AT END
        DISPLAY 'End of File.'
    NOT AT END
        ADD 1 TO RECORD-COUNTER
END-READ.
When you try to read an empty input file, the file status will be non-zero, i.e., 35.
So check the file status: if it is '35', you have an empty input file.
Then abort or do whatever exception handling you need.
Be careful with "empty files" on the mainframe. Whenever a dataset (= file) is catalogued (by means of an IEFBR14 step) but never opened for output, you should not try to open this dataset for input. You might get an abend. It is safer to ensure that you have a "real" empty dataset.
This can be accomplished "on the drawing board" by means of JCL or coding standards.
File status 10 indicates end of file, i.e., you have opened the input file properly and read through the data.
But if the file is empty, you cannot open the file at all: immediately after opening it you will get file status 35, and there is no need to (and no way to) read the file.
ex:
OPEN INPUT INFILE.
IF INFILE-STATUS = 35
    perform your logic to abort, or whatever option you need
END-IF.
Supercomputing luminary Larry Smarr is uniquely qualified to write about the future of networked computing. In an essay in today’s New York Times, Smarr, the founding director of the California Institute for Telecommunications and Information Technology (Calit2), explores what he believes is a likely extension of today’s increasingly-networked world. Smarr’s theory begins with a simple premise: “Over the next 10 years, the physical world will become ever more overlaid with devices for sending and receiving information.”
At the heart of this distributed communication system is an expanding network of low-power processors, nearly invisible in their embedded-state, yet ubiquitous in number. They’re in the phones we use, in our automobiles’ navigation systems. They’re inside a growing number of appliances and even in our homes and office buildings. Other networked information collectors include real-time traffic monitoring systems and a variety of both government-sponsored and privately-run surveillance systems.
Smarr notes that all these tiny computers are constantly receiving input, which is then sent to remote datacenters, the “vast clouds” owned by the big Web-era players, such as Google, Amazon, Microsoft and Apple.
Smarr also points to the development of spatially-aware apps, which can take in all the data along with the associated geographical and time stamps to create real-time portraits of various systems. For example cell phones can “talk to each other,” pinging the cloud as needed to provide commuters with up-to-the-second traffic information.
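A toy version of that kind of spatially aware aggregation: bucket geo-tagged speed reports into grid cells and average each cell into a live congestion map. All readings below are invented:

```python
# Toy sketch of the "real-time portrait" idea above: phones report
# (lat, lon, speed); the cloud buckets reports into grid cells and
# averages each cell into a congestion map. Readings are invented.
from collections import defaultdict

def grid_cell(lat, lon, size=0.01):        # roughly 1 km cells
    return (round(lat // size), round(lon // size))

reports = [
    (32.7157, -117.1611, 12.0),   # downtown, slow
    (32.7153, -117.1615, 8.0),
    (32.8801, -117.2340, 61.0),   # freeway, fast
]

speeds = defaultdict(list)
for lat, lon, mph in reports:
    speeds[grid_cell(lat, lon)].append(mph)

congestion = {cell: sum(v) / len(v) for cell, v in speeds.items()}
print(congestion)   # two cells: one averaging 10.0 mph, one 61.0 mph
```

A production system would of course stream millions of such reports, but the pipeline — geo-tagged input, cloud-side aggregation, real-time output — is the one Smarr describes.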
“Smart electric grids are measuring our homes’ use of power; active people are tracking their heart rates; and hundreds of millions of us are uploading geo-tagged data to Flickr, Yelp, Facebook and Google Plus. As we look 10 years ahead, the fastest supercomputer (the “exascale” machine) will be composed of one billion processors, and the clouds will most likely grow to this scale as well, creating a distributed planetary computer of enormous power.”
This unprecedented level of compute power, combined with the an equally vast storage system, will enable the analysis necessary for extracting meaningful intelligence from the torrent of incoming data. Research from the Institute for the Future shows that this global cloud computer will likely progress from simply collecting local data to being able to control it. “In this evolution,” writes Smarr, “the world gradually becomes programmable.”
There’s an apocryphal ring to this tale, but it’s a pretty straight-forward extrapolation of the current state of technology. Of course, any technological advance can be used for good or evil, a fact clearly illustrated by history. The good guys try to stay a step ahead of the bad guys; sometimes they fail, but for the most part it seems to work out. Smarr’s academic home, Calit2, is exploring specific “programmable world” scenarios, such as using sensors to monitor electricity consumption, while no doubt remaining acutely aware of the potential for misuse. | <urn:uuid:ea379f50-8528-458f-b3a5-79479d75d9db> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/12/06/one_cloud_to_rule_them_all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00542-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928605 | 645 | 2.65625 | 3 |
Manipulation effect tested on undecided Indian voters.
Search engines can influence voters without them noticing, new research has found.
Psychologist Robert Epstein, whose most recent findings were released on Monday, coined the term Search Engine Manipulation Effect.
Epstein, who is a critic of Google, extended his research on the effect during India’s national elections.
The latest experiment, conducted with a group of more than 1,800 undecided Indian voters, was able to shift votes by an average of 12.5% by altering candidate rankings in Google search results. Only one in every 100 participants noticed the intentional manipulation.
Participants, who were found by advertising for undecided voters for India’s election, were made to sign on to a Web portal, and were then presented with a Kaboodle search engine and asked to find out information on candidates. The portal was, however, rigged, and each of the participants was assigned to a group favouring one of the candidates.
Epstein, who is a senior research psychologist at the American Institute for Behavioral Research and Technology, said: "It confirms that in a real election, you can really shift voter preferences really dramatically."
However, sceptics of Epstein's findings said that voters are influenced by many more sources than search engine results, so the effect cannot be the ultimate factor in deciding a vote. Furthermore, search engine operators would face major public and political backlash if it was found that results were manipulated.
Google officials have said in a statement responding to Epstein’s research: "Providing relevant answers has been the cornerstone of Google’s approach to search from the very beginning. It would undermine people’s trust in our results and company if we were to change course."
But Epstein said: "Even if you’re not doing it deliberately, you are driving votes. They are running a system that is determining the outcome of elections." | <urn:uuid:ab410054-aca9-45a0-9ff5-aaec1498f7fb> | CC-MAIN-2017-04 | http://www.cbronline.com/news/social-media/how-google-search-results-are-influencing-elections-4265529 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00414-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.975314 | 387 | 2.734375 | 3 |
One of the world’s most powerful supercomputers devoted to climate change – the 1.5 petaflop Yellowstone system – is fulfilling its mission to help find solutions to serious climate change issues. Recently a team of researchers from National Center for Atmospheric Research (NCAR) and several partner institutions used Yellowstone, an IBM iDataPlex cluster, to simulate the threat of ozone pollution as temperatures continue to rise due to climate change.
According to the sophisticated simulation, the United States will likely experience a 70 percent increase in unhealthy summertime ozone levels by 2050 – unless serious mitigating actions are implemented.
An article published by NCAR explains that temperature fluctuations spurred by a changing climate will result in higher atmospheric levels of methane and more volatile organic compounds (emitted by plants), setting off further chemical reactions that create excess ozone.
The IBM Yellowstone supercomputer at the heart of this research is among the world’s most powerful computers dedicated to climate and atmospheric science. Funded by the NSF and the University of Wyoming and installed at the NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming, Yellowstone ran simulations of hourly pollution levels for 39 hypothetical summers. The design allowed the research team to incorporate year-to-year variations in meteorological conditions to arrive at a more detailed and accurate representation of future pollution levels.
Although ozone pollution is invisible, it is triggered by visible pollutions (like smog) that react with sunlight. This type of pollution has long been known to reach unhealthy levels in major cities like Los Angeles, but according to this latest research, regions across the continental United States will begin to experience at least a few days of unhealthy air during the summers. The worst-hit areas will be parts of the East, Midwest, and West Coast that already have above-average levels of pollution and ozone.
“It doesn’t matter where you are in the United States – climate change has the potential to make your air worse,” said NCAR scientist Gabriele Pfister, the lead author of the study. “A warming planet doesn’t just mean rising temperatures, it also means risking more summertime pollution and the health impacts that come with it.”
It’s not all doom and gloom, however. As with many climate change findings, it’s not too late to enact changes. A targeted reduction campaign addressing the worst pollutants will prevent the ozone overload even in the face of rising temperatures.
The study is said to be the first of its kind to employ highly advanced geoscience supercomputing capabilities. Needing to incorporate both global climate and regional pollution conditions, the researchers turned to two well-known models – the Community Earth System Model, and an air chemistry version of the multiagency Weather Research and Forecasting model – both of which were developed through collaborations with the atmospheric science community.
Even with Yellowstone’s leadership-class capabilities, the simulations took months to complete.
“This research would not have been possible even just a couple of years ago,” said Pfister. “Without the new computing power made possible by Yellowstone, you cannot depict the necessary detail of future changes in air chemistry over small areas, including the urban centers where most Americans live.”
Findings will be published this week in the Journal of Geophysical Research-Atmospheres, a journal of the American Geophysical Union. In addition to NCAR, the study co-authors have affiliations with Pacific Northwest National Laboratory, the University of Colorado, Boulder, and North-West University in South Africa. | <urn:uuid:8206383f-397f-44fa-b24b-1c7f43e74cbe> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/05/08/computing-ozone-pollution-threat/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00535-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929724 | 736 | 3.8125 | 4 |
The world's most powerful particle smasher will restart in November at just half the energy the machine was designed to reach. But even at this level, the Large Hadron Collider has the potential to uncover exotic new physics, such as signs of hidden extra dimensions, physicists say.
The LHC is a new particle accelerator at the CERN laboratory near Geneva, Switzerland, designed to answer fundamental questions, such as what gives elementary particles their mass, by colliding particles at higher energies than ever achieved in a laboratory before.
But the first attempt to turn on the LHC failed in September 2008 when a joint connecting a pair of superconducting wires overheated, causing an explosive release of helium used as a coolant. Scientists have been making repairs and checking the strength of other electrical connections since then to pave the way for a second start attempt.
Now, CERN has announced that the LHC's first data collecting run, to begin in November, will collide protons at only half the energy the accelerator was designed to achieve. The run will initially smash protons together at 7 trillion electron volts (7 TeV), compared to the design goal of 14 TeV, according to a CERN statement on 6 August. (Protons in each of the two opposing beams will have 3.5 TeV of energy, producing collisions at 7 TeV.)
But even 7 TeV is much higher than physicists have ever probed in the laboratory before. The Tevatron accelerator at Fermilab in Batavia, Illinois, is the current record holder, with collisions at 2 TeV.
No one knows exactly what energy threshold must be crossed to catch a glimpse of new and exotic physics that is not contained in the standard model of particle physics, which fits everything seen so far at lower energies.
But new phenomena are widely expected somewhere between one and a few TeV, because that is where the mathematics underlying the standard model starts breaking down, says Greg Landsberg of Brown University in Providence, Rhode Island. Landsberg is involved in the CMS experiment, one of the LHC's two main detectors.
"Nature is full of surprises and something exciting and possibly unexpected could happen at 7 TeV," Landsberg told New Scientist. "Extra dimensions could easily open up at that energy."
This first run is supposed to last until late 2010, and CERN plans to boost the energy to 10 TeV before it is over. Landsberg is optimistic that the LHC will be able to quickly ramp up the energy, allowing most of the run to be carried out at 10 TeV.
However, getting collisions at energies of even a few TeV is harder than it might seem, because protons are made up of smaller particles called quarks and gluons. When a pair of protons collides, it is actually a pair of these constituents that hit one another. This usually involves only a small fraction of the total kinetic energy of the protons – about one-tenth on average.
Only in rare, lucky instances do the collisions involve most of the kinetic energy of the protons. That means that many collisions at an advertised energy of 10 TeV are required to get one that actually unleashes energy close to that amount.
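A toy Monte Carlo illustrates why few proton collisions deliver close to the advertised energy. Note this sketch assumes a uniform momentum-fraction distribution purely for illustration; real parton distribution functions are steeply peaked at small fractions, which is why the realistic average fraction is closer to the one-tenth quoted above.

```python
import random

def effective_energy(collision_energy_tev, rng):
    """Each colliding constituent carries a random fraction of its
    proton's momentum; the effective energy of the hard collision
    scales as sqrt(x1 * x2) times the proton-proton energy."""
    x1, x2 = rng.random(), rng.random()  # toy assumption: uniform fractions
    return (x1 * x2) ** 0.5 * collision_energy_tev

rng = random.Random(2009)
samples = [effective_energy(10.0, rng) for _ in range(100_000)]
avg = sum(samples) / len(samples)
near_full = sum(1 for e in samples if e > 7.0) / len(samples)
print(f"mean effective energy: {avg:.2f} TeV")
print(f"share of collisions above 7 TeV: {near_full:.3f}")
```

Even under this generous uniform assumption, only a small minority of 10 TeV collisions unleash more than 7 TeV where it counts.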
Slow and steady
After the 2009-2010 run, the LHC will be shut down, with upgrades made to allow it to go to higher energies. Measurements have revealed that some of the electrical connections are not robust enough to handle operation beyond 10 TeV.
The ultimate goal is still to reach 14 TeV. Landsberg believes the LHC will reach that energy eventually, but is not sure how long it will take. Whether it is "one year, two years or three years is anyone's guess", he says.
But the LHC is a very complex machine, so it makes sense to be careful while scientists gain more experience in running it, he says: "When learning to drive, you don't take your car at 100 miles per hour around a hairpin curve – you take it slowly around the parking lot."
In the meantime, Fermilab has a window of opportunity to find the first evidence for the last unseen component of the standard model, the Higgs boson, which is thought to endow other particles with mass, Landsberg says. But Fermilab could only beat the LHC to the Higgs if the particle turns out to be relatively lightweight, allowing it to be produced reasonably often at the energies Fermilab can probe, he says.
Images from Rex Features.
This article originally appeared on New Scientist. | <urn:uuid:8aa290ec-1baa-4818-b3d2-5ff85a013918> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280096979/Large-Hadron-Collider-to-Resume-at-half-potential-power-just-to-be-on-the-safe-side | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951697 | 964 | 2.84375 | 3 |
Some time ago I came across this while reading a book – SQL Server 2008 Step by Step (Mike Hotek, Microsoft Press) – and to date I think it’s the best description of normalization that I’ve ever seen… it just took me this long to get around to obtaining permission to post the excerpt!
I don’t have much further to say on it, other than many thanks to the O’Reilly permissions people, so without further ado…
Entire books have been written and multi-week courses taught about database design. In all of this material, you will find discussions of first, second, and third normal forms along with building logical and physical data models. You could spend significant amounts of time learning about metadata and data modeling tools. Lost in all of this material is the simple fact that tables have to be created to support an application, and the people creating the tables have more important things to worry about than which normal form a database is in or whether they remembered to build a logical model and render a physical model from it.
A database in the real world is not going to meet any theoretical designs, no matter how you try to force a square peg into a round hole.
Database design is actually a very simple process, once you stop over-thinking what you are doing. The process of designing a database can be summed up in one simple sentence: “Put stuff where it belongs.”
Boiling down these tens of thousands of pages of database design material into a single sentence will certainly have some people turning purple, so let’s investigate this simple assertion a little more closely.
If you were to design a database that will store customers, customer orders, products and the products that a customer ordered, the process of outlining a set of tables is very straightforward. Our customers can have a first name, last name and an address. We now have a table named Customer with three columns of data. However, if you want to utilize the address to ship an order to, you will need to break the address into its component parts of a location, city, state or province, and a postal code. If you only allowed one address for a customer, the address information would go into the customer table. However, if you wanted to be able to store multiple addresses for a customer, you now need a second table that might be called CustomerAddress. If a customer is only allowed to place a single order, then the order information would go into the customer table. However, if a customer is allowed to place more than one order, you would want to split the orders into a separate table that might be called Order. If an order can be composed of more than one item, you would want to add a table that might be called OrderDetails to hold the multiple items for a given order. We could follow this logic through all of the pieces of data that you would want to store and in the end, you will have designed a database by applying one simple principle: “Put stuff where it belongs.”
Microsoft® SQL Server® 2008 Step by Step by Mike Hotek published by Microsoft Press, A Division of Microsoft Corporation Copyright © 2009 Mike Hotek. All rights reserved. Used with permission. | <urn:uuid:a7aa79d3-0eff-49ef-8261-99ea80196d99> | CC-MAIN-2017-04 | http://www.dymeng.com/techblog/put-stuff-where-it-belongs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00075-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949827 | 662 | 2.859375 | 3 |
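To make the excerpt's walkthrough concrete, here is a minimal sketch of that table layout using Python's sqlite3 module (purely for illustration; the book targets SQL Server, where the type and identity syntax differ, and since "Order" is a reserved word in SQL the table is named Orders here):

```python
import sqlite3

# "Put stuff where it belongs": one table per one-to-many relationship
# described in the excerpt.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (
    CustomerID  INTEGER PRIMARY KEY,
    FirstName   TEXT NOT NULL,
    LastName    TEXT NOT NULL
);
-- A customer may have multiple addresses -> separate table
CREATE TABLE CustomerAddress (
    AddressID     INTEGER PRIMARY KEY,
    CustomerID    INTEGER NOT NULL REFERENCES Customer(CustomerID),
    Location      TEXT,
    City          TEXT,
    StateProvince TEXT,
    PostalCode    TEXT
);
-- A customer may place multiple orders -> separate table
CREATE TABLE Orders (
    OrderID    INTEGER PRIMARY KEY,
    CustomerID INTEGER NOT NULL REFERENCES Customer(CustomerID),
    OrderDate  TEXT
);
-- An order may contain multiple items -> separate table
CREATE TABLE OrderDetails (
    OrderID     INTEGER NOT NULL REFERENCES Orders(OrderID),
    ProductName TEXT NOT NULL,
    Quantity    INTEGER NOT NULL,
    PRIMARY KEY (OrderID, ProductName)
);
""")
conn.execute("INSERT INTO Customer VALUES (1, 'Ada', 'Lovelace')")
conn.execute("INSERT INTO Orders VALUES (10, 1, '2009-01-15')")
conn.execute("INSERT INTO OrderDetails VALUES (10, 'Widget', 3)")
row = conn.execute("""
    SELECT c.LastName, o.OrderID, d.Quantity
    FROM Customer c
    JOIN Orders o ON o.CustomerID = c.CustomerID
    JOIN OrderDetails d ON d.OrderID = o.OrderID
""").fetchone()
print(row)
conn.close()
```

Each table exists only because something in the problem statement is one-to-many; no normal forms were consulted in the making of this schema.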
San Jose, Calif., is in the midst of using an environmentally friendly road paving method that, if successful, will save the city some money and help reduce its carbon footprint.
Called “cold in-place recycling” (CIR), the process involves a machine chewing up existing city streets — in this case, Monterey Road in San Jose — grinding up and recycling the rock, injecting it with binding material, and spitting it out as new pavement. A two-inch rubberized coating is placed on top to protect the road.
The initial results have been positive. By using recycled material, the city avoided using 10,000 tons of new asphalt, and saved 1,500 truck trips to dispose of 10,000 tons of waste. In addition, had the road been redone using traditional paving with new asphalt, it would have cost $3.1 million. But the price tag for using CIR was $2.3 million, saving San Jose $800,000.
Michael Witkovski, with the San Jose Department of Transportation (DOT), said the technology has been around in various forms for 20 years and the California Department of Transportation has used it to redo some stretches of Interstate 80 in the state’s Sierra Nevada mountain area.
He explained that while the city always looks for new technologies to improve pavement, keep costs down and find greener ways of doing things, it has only been recently that San Jose felt the time was right to try CIR.
“We determined that it now has reached a state of maturity that makes sense to use it,” Witkovski said. “We didn’t want to be too far out on the bleeding edge, but we wanted to be using best practices.”
The San Jose DOT reached out to officials with the state of Virginia, the Canadian province of Ontario and some manufacturers over the past year about their use of CIR. The city then developed a specification to use it and ultimately selected Monterey Road as its first testing ground for the technology.
The city doesn’t really consider the use of CIR on Monterey Road a test, however. Witkovski said that the San Jose DOT is confident the use of CIR will work, as its life cycle should be equivalent to new asphalt.
The key is being vigilant on maintenance.
While the city applies two inches of melted tire rubber coating on top of the recycled roadway materials, that seal needs to be redone every decade to ensure that the road underneath remains preserved.
“We’ll come in 10 years and … seal the top of the rubberized asphalt to keep it from deteriorating,” Witkovski said. “Asphalt ages from the top down, so as long as we keep the wear course good, there is no reason the base should fail.”
The final day of paving Monterey Road is Thursday, Nov. 17, with striping, concrete and electrical work to be completed by the end of the year.
The city plans to use CIR to rehabilitate other streets in San Jose in 2012. Witkovski said 11 roads are “in the hopper” for work next year, although he stressed that San Jose could probably only afford to rehab three or four of the streets.
In addition, CIR may not be appropriate for use in all cases. Witkovski explained that in testing, while recycled asphalt is measuring up to be just as strong as new asphalt, the San Jose DOT rates CIR at 80 percent of the strength that new asphalt provides, just to be on the safe side. So depending on the road, using CIR may not be appropriate.
“If a road was never designed for garbage trucks and now garbage trucks are tearing it up, I can’t just go in and rehab the existing asphalt and expect it to last,” Witkovski said. “I may have to go thicker and rehab the road to handle the load.” | <urn:uuid:09a82303-6f2d-4ae9-923a-b3f65bdea52d> | CC-MAIN-2017-04 | http://www.govtech.com/technology/San-Jose-Calif-Recycles-Pavement-to-Save-Money.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00563-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954378 | 822 | 2.984375 | 3 |
Global Orphan Drugs Market Report: 2016 Edition
- Report Description Table of Contents
A rare disease is one that occurs uncommonly or rarely in the general population. However, there exists no clear definition for categorizing “rare diseases”; the category is usually defined on the basis of prevalence, along with other factors including the severity of the disease and the availability of treatment options. Rare diseases are often serious, chronic and progressive. Orphan drugs are medicinal products intended for the diagnosis, prevention or treatment of rare diseases. These drugs are referred to as “orphan” because, being intended for a small number of patients suffering from rare conditions, they are not cost-effective for the pharmaceutical industry to develop under normal market conditions.
Increasing sales of prescription drugs, substantial benefits for new entrants in the rare disease drug market, increased spending on medicines, rising healthcare expenditure and improving economic conditions of nations are some of the significant factors driving growth of the orphan drugs market. However, the growth of the market is hindered by certain challenges, including tougher regulatory approvals, limitations on charging higher prices and the absence of approved drugs for several rare diseases, including Polycythemia Vera (PV).
The global orphan drugs market is expected to see numerous developments including approval of several Ultra-Rare drugs, increasing scope of Gene therapy, development of drugs for rare blood disease, higher success prospects for Hematology compared to Solid Tumors, attractive pricing option for orphan drug competitors and striking opportunities in developing orphan drugs.
The report, “Global Orphan Drugs Market” analyzes the current prevailing condition of the market along with its future scope of development. The global market along with specific market of the U.S., Canada, Europe and Japan, is being discussed in the report. The major trends, growth drivers as well as issues being faced by the industry are being presented in this report. The major players in the industry are being profiled, along with their key financials and strategies for growth. | <urn:uuid:14fb6a0e-4fe6-40d0-8ad3-d6e376aa6a06> | CC-MAIN-2017-04 | http://www.marketreportsonline.com/444811.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00103-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953969 | 401 | 2.546875 | 3 |
For decades, musicians have been using sound synthesizers to generate audio to replace or complement acoustic instruments. However, some types of complex sound synthesis have not been possible on traditional CPUs. Now, sound researchers are turning to GPUs to give them the processing power needed to take on tougher audio challenges.
Bill Hsu and Marc Sosnick-Pérez explore some of the newer GPU techniques being used for synthesizing sounds in an article titled “Finite difference-based sound synthesis using graphics processors,” which was recently published in the Association for Computing Machinery’s online publication, acmqueue.
Due to the lack of computing power, sound synthesizers were forced to use fairly rudimentary techniques to create sounds in real time, the authors write. These include computing simple waveforms, using sampling and playback, and applying spectral modeling techniques to model waveforms. A common thread among these techniques is that “they work primarily with a model of the abstract sound produced by an instrument or object, not a model of the instrument or object itself,” Hsu and Sosnick-Pérez write.
As computing power increased, researchers discovered they could create audio waveforms in an entirely new way: by simulating the physical natures and properties of objects and instruments themselves. After a detailed numeric model of the object or instrument is created, it can then be “played” as it would be in the real world.
“By simulating the physical object and parameterizing the physical properties of how it produces sound,” the authors write, “the same model can capture the realistic sonic variations that result from changes in the object’s geometry, construction materials, and modes of excitation.”
Several techniques exist to create the numerical models of objects and instruments, including one, called the finite difference approximation method, which is said to generate very good sound. However, this approach appears too computationally intense to run on CPUs, hence the interest in using GPUs to exploit multithreaded architectures and a high degree of data parallelism.
In their paper, Hsu and Sosnick-Pérez compare how well CPU- and GPU-based systems perform sound synthesis using the finite difference approximation method. The pair used their own software package, called the Finite Difference Synthesizer, in the tests. FDS simulates a vibrating plate (think drum) and runs in a CUDA environment on Mac OS and Linux.
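As a rough illustration of the kind of update such a simulation performs (this is not the authors' FDS, which is CUDA-based, but a toy 2-D wave-equation membrane; a real plate model adds stiffness and damping terms), each time step computes a discrete Laplacian over the grid and advances the displacement field, reading audio samples from a "pickup" point:

```python
import numpy as np

def fd_step(u, u_prev, c=0.3):
    """One finite-difference time step of the 2-D wave equation on a
    clamped grid. c is the Courant number, kept below 1/sqrt(2) so the
    explicit scheme stays stable."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    u_next = 2 * u - u_prev + (c ** 2) * lap
    # Clamped (fixed) edges, like a membrane pinned at its rim
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    return u_next

n = 64
u_prev = np.zeros((n, n))
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0          # "strike" the membrane at its centre

samples = []
for _ in range(200):             # one audio sample per time step
    u, u_prev = fd_step(u, u_prev), u
    samples.append(u[n // 4, n // 4])   # read output at a pickup point

print(f"peak amplitude at pickup: {max(abs(s) for s in samples):.4f}")
```

The grid update is identical and independent at every interior point, which is exactly the data parallelism a GPU exploits: in a CUDA version, each thread updates one grid point per time step.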
While the results varied, the GPU-based systems consistently outperformed the CPU-only systems. In some cases, a GPU-based system was able to deliver acceptable CD-level sound quality on a two-dimensional grid (think cymbal) that was nearly 50 percent bigger than what the CPU-based system could handle.
There are several caveats to using GPUs with this method, according to the researchers. The first is something called kernel launch overhead, and manifests as a potentially significant delay. The second is that a limit on the number of threads may restrict how the simulation is mapped to the GPU. The third has to do with a potential inability to synchronize the execution of threads. Some of these problems are more apparent on older GPU architectures, and are less a concern on newer architectures, such as NVIDIA Kepler.
Despite the challenges, the future of using finite difference approximation methods and GPUs to model physical objects and instruments for real time audio synthesis appears to be bright. | <urn:uuid:a2c1b17d-7a61-4ee8-a84c-ce7bf76fe93d> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/07/16/sound_synthesizers_get_performance_boost_from_gpus/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00497-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939022 | 700 | 3.859375 | 4 |
The name and date are symbolic. On 5 November 1605 a plot to blow up the House of Lords, with one Guy Fawkes in charge of the explosives, was foiled. The name, Vendetta, comes from the film “V for Vendetta” in which a folk hero (or terrorist, depending upon your point of view) wearing a Guy Fawkes mask, succeeds in doing just that in a modern, fictional Britain.
Now the various Anonymous groups around the world are supporting each other “for what will be the strongest display of anonymous in the UK so far.” While focused on the UK, it is intended to be a worldwide protest, both on the ground and in cyberspace. “On the 5 November 2012 the Anonymous Collective will engage in the largest orchestrated attack on the Governments of the World. The centerpiece will be the re-enactment of the Film V for Vendetta scene at the Houses of Parliament with ground and online protests across the globe,” a spokesman from ATeamAnon in the UK told Infosecurity.
The film actually shows the destruction of Parliament and Big Ben. There is no suggestion that Anonymous wants any actual violence, but rather, says ATeamAnon, to re-enact “citizens uniting as one to fight against the tyranny of a corrupt fictional Government – which resonates with the goals of Anonymous.”
In the statement to Infosecurity, Anonymous explained that the symbolism actually goes deep. “While the story is fictional, corrupt Governments like the UK are not. Centralised and unaccountable governments are no longer democratic or representative of the people they claim to serve. Every revolution starts with a spark – a spark that has been hidden from the citizens of the world by a media complicit in the system which causes war, suffering, poverty and erosion of human rights for billions of people.”
What this demonstration shows is the increasing politicization of Anonymous. Its success or failure on the streets of London will be a measure of public support. However, Anonymous made it clear that while it is “physically going to be present at the Houses of Parliament... quite a few online protests will be conducted in relation to related government websites.” It concluded, “On the 5 November 2012 expect All of Anonymous.” | <urn:uuid:fb0ed4fc-4002-4921-9237-77382aeef6ab> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/anonymous-opvendetta-set-for-5th-november/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00039-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955504 | 478 | 2.546875 | 3 |
White Paper: Consumer Password Worst Practices
In December 2009, by examining a hacker's blog, a major vulnerability was discovered in Rockyou.com. The vulnerability led to the breach of 32 million passwords, and the hacker posted the full list of the 32 million passwords to the Internet (with no other identifiable information). The data provides a unique glimpse into the way that users select passwords and an opportunity to evaluate the true strength of passwords as a security mechanism. Further, never before has there been such a high volume of real-world passwords to examine. The Imperva Application Defense Center (ADC) analyzed the strength of the passwords.
Download the report and consumers as well as administrators will learn:
- Common trends in users' passwords and what to avoid
- How to write better password protection policies
- How to prevent data loss from hackers | <urn:uuid:b4b01a77-a71f-4c71-a0fd-874a81529b2f> | CC-MAIN-2017-04 | https://www.imperva.com/lg/lgw.asp?pid=379 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00525-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929042 | 173 | 2.640625 | 3 |
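The kind of frequency analysis behind such a report is straightforward to sketch. The sample below is hypothetical (the actual RockYou data set is not reproduced here), and the policy check is a toy stand-in for the recommendations such reports motivate:

```python
from collections import Counter

# Hypothetical sample standing in for a leaked password list
leaked = ["123456", "12345", "password", "iloveyou", "123456",
          "princess", "123456", "password", "abc123", "12345"]

counts = Counter(leaked)
for pwd, n in counts.most_common(3):
    print(f"{pwd!r} chosen by {n} of {len(leaked)} accounts")

def acceptable(pwd, min_len=8):
    """Toy policy: minimum length plus at least one letter and one digit."""
    return (len(pwd) >= min_len
            and any(ch.isalpha() for ch in pwd)
            and any(ch.isdigit() for ch in pwd))

passing = sum(acceptable(p) for p in counts)
print(f"{passing} of {len(counts)} distinct passwords pass the toy policy")
```

Even on this tiny made-up sample, a handful of trivially guessable strings dominate, which is the pattern the real analysis documents at scale.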
Why would a blind person go into a library? Maybe to borrow a book in Braille, or more likely to borrow a talking book, CD or DVD. In Lambeth the new answer is to learn to use a computer.
Computers have the potential to improve the quality of life and job prospects of anyone who is blind or has a vision impairment (a Vision Impaired Person or VIP for short). A very large percentage of the world's knowledge has only been available in printed form on paper. This meant that it was inaccessible to anyone with limited or no vision. Over the years various solutions have been used to close this gap a bit.
Braille was one very clever and successful solution, but it is limited by the cost of setting and printing any document, by the bulky size of a Braille document and, probably the biggest issue, by the fact that it is difficult to learn, especially for anyone who lost their vision later in life.
Audio books are a wonderful medium for fiction, where a book is read, and listened to, from beginning to end. They are not such a useful solution for reference or text books, where navigation becomes a major issue. The cost of production also limits the titles available.
DAISY talking books provide the benefits of audio books and add navigation capabilities.
All of these solutions have major limitations, as the number of titles is limited and great swathes of printed material - newspapers, magazines, journals - are generally not available and certainly not in a timely fashion.
More recently the rise of the computer, the Internet and various forms of electronic publishing have enabled a whole new set of sources of textual information; emails, blogs, wikis, online news channels etc. All of this is displayed on a screen and is again not accessible.
However, the fact that this information is electronic and therefore can be manipulated means that it is possible to turn the electronic words into spoken words that are accessible to people with vision impairments. All of human knowledge is being rapidly turned into electronic format and thus the knowledge available to a VIP is growing exponentially.
Unfortunately there are two barriers that need to be removed before all this information is available. Firstly the user needs access to suitable hardware and software that will read the information on the screen and enable them to navigate easily. Secondly they need to learn how to use the hardware and software. A VIP who has not learnt to use the system will not be able to assess the benefit to them and therefore will not be able to justify the initial outlay. The cost of a suitably configured machine is a considerable barrier to adoption.
The libraries in Lambeth have recently been the venue for an experiment to fix both these problems. The initiative is being driven forward by a local resident, Christina Burnett of Wide Eye Pictures, who is passionate about the benefits of computing to VIPs.
Like every modern library Lambeth has several computers in each library. The only extra hardware required was headphones; these are obviously essential if there are going to be several screen reader users in the library at one time. It is probable that headphones would have become necessary anyway for the general public as more and more audio information is available on the Internet. The other addition was to install screen reader software on all the library machines; it was decided to install it on all machines so that a VIP could use the system whichever library they wanted to visit. Some screen reader solutions are expensive and it would have been prohibitive to equip all machines; this was resolved by installing a free screen reader called Thunder which is available from www.screenreader.net. So for a minimal expenditure the libraries removed the first of the barriers.
To assist the VIP to learn to use the system, a series of seven weekly training sessions was run, called DTvip (Digital Tuesday for Vision Impaired People). The initial set of sessions trained some VIPs and some volunteers so the scheme can be repeated and extended in the future. Screenreader.net, which developed Thunder, has obtained funding from Awards for All to work with DTvip on this pilot training scheme at the Tate South Lambeth Library.
The first set of sessions was a great success and proved that the model works. Naturally, lessons have been learnt, in particular to have a structure that can support different users, ranging from a VIP who has never used a keyboard, through to a VIP who is an expert PC user but, through failing sight, needs to learn how to use a screen reader.
The second set of sessions is under way. The question now is how to quickly extend this model throughout Lambeth and the rest of the UK.
Benitez-Lopez A., Casas F., Mougeot F., Garcia J.T., and 5 more authors (Instituto de Investigación en Recursos Cinegéticos, IREC, CSIC-UCLM-JCCM; Estación Experimental de Zonas Áridas, CSIC). Biological Conservation, 2015.
Survival and the underlying causes of mortality are key demographic parameters for understanding animal population dynamics and identifying conservation needs. Here we use a large data set of tagged wild pin-tailed sandgrouse (Pterocles alchata) to examine the influence of individual traits (age, sex, size, movements and reproduction), and of temporal and spatial variations on the survival of this steppe-bird of conservation concern in Europe. Annual survival for adults and juveniles was estimated at 0.60 and 0.61, respectively. Survival rate tended to be lower towards the northern margin of the European distribution. In the core distribution area (central Spain) mortality was more frequent during the non-breeding season due to higher predation rates. Survival rate was slightly higher in males than females, which may explain male-skewed population sex-ratios. Sedentary birds had lower survival than birds using different areas for breeding and wintering, indicating that high mobility, as a possible behavioural response to varying conditions, could eventually increase fitness. Predation was the main cause of mortality, but illegal hunting was also recorded, indicating a need for stricter law enforcement and regulation plans. Sandgrouse can be characterized by a relatively slow life history (medium-high adult survival and low reproductive rate), with lower survival rates than other sympatric steppe birds of larger size. Management and conservation efforts should focus on maintaining a high adult survival within protected areas. Sandgrouse mobility, which positively influenced survival, should also be taken into account in conservation plans, especially when considering the size and connectivity of protected areas. © 2015 Elsevier Ltd.
Everybody and their mom uses cgi-bin’s in some way or another on their web pages, or on their web server, aware or not of that fact. Today’s not so hot topic is cgi-bin vulnerabilities. In the following couple of infinite text lines below, I’ll explain the cgi-bin concept, and some little mischievous naughty things you can accomplish misusing it. Notice that I’m not encouraging any sort of malevolent activities, nor will I take any responsibility for your actions. This article is written for educational purposes only. Let’s pretend that we don’t know anything about CGI’s, so…
The interface in-your-face
CGI stands for Common Gateway Interface, which is a standard for a gateway, or interface, between clients and web servers. It allows interaction between them, transparent and smooth. Web pages per se are static, plain HTML, sometimes rather messy, but readable text files. Now, CGI’s are scripts, or small programs, which allow you to make your web pages dynamic, and add various nifty things to them. A CGI program/script can be written in any language that allows it to be executed on the system, such as: C/C++, Fortran, PERL, TCL, Any Unix shell, Visual Basic, AppleScript… It just depends what you have available on your system. Usually, CGI’s are located in the /cgi-bin folder of your web server, and if you have CGI’s which are not only shell scripts, you also might have a /cgi-src folder. Of course, these may vary, so please don’t think it is carved in the stone just because I said so…
CGI’s are embedded into HTML pages via a simple link tag, i.e. a CGI script incorporated into your page might look something like this:

<a href="/cgi-bin/picknose.sh">pick a nose</a>

where picknose.sh is just a simple bash script, located in the /cgi-bin folder. What it does, well, that’s a different story, and completely irrelevant to our little debate. 🙂
What would I use CGI’s for, one might wonder, and to that question the answer is fairly simple, but to make it even more simplified, I will elaborate on an example. Imagine you have some sort of a database on your web, and you need to make it searchable to the user surfing the web. The best way to do this is via CGI scripts. You need a way to interact and transmit information between your host and the user’s web browser, and that’s what the common gateway interface or CGI does. It serves as a gateway between the user and your web. It (the CGI script) will be executed by the web daemon to transmit the query to the database and send results back to the user, via the same daemon. Kind of a third-party involvement. This is the simplest example of how to use CGI’s. Implementation is easy, and the possibilities are limited only by your imagination. Make sure your CGI’s are as simple as possible and that they do not take a long time to execute. You can read more about the CGI concept and other CGI stuff here.
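The database-search scenario above is easy to sketch. The CGI itself can be in any language; the sketch below uses Python purely for brevity, and the query parameter name `q` and the HTML are invented for illustration. The web daemon passes the part of the URL after the `?` to the script in the QUERY_STRING environment variable:

```python
from html import escape
from urllib.parse import parse_qs

def handle_query(query_string):
    """Build a minimal CGI response for a search-style query.

    A real CGI script reads QUERY_STRING from its environment, prints
    an HTTP header block and a document to stdout, and lets the web
    daemon relay both back to the user's browser.
    """
    params = parse_qs(query_string)            # "q=widgets" -> {"q": ["widgets"]}
    term = params.get("q", ["(nothing)"])[0]
    body = "<html><body>You searched for: %s</body></html>" % escape(term)
    # A blank line separates the CGI headers from the document body.
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    import os, sys
    sys.stdout.write(handle_query(os.environ.get("QUERY_STRING", "")))
```

Dropped into /cgi-bin and marked executable, a script shaped like this would answer URLs of the (hypothetical) form /cgi-bin/search.py?q=widgets.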
Your system is the world’s oyster
As written above, CGI’s are programs or scripts that serve as a gateway between your web and the end user. And, of course, CGI’s are executables, which means they run on your system. Now, the idea of having anyone accessing your web and wreaking mayhem with executables on your system looks a bit frightening, does it not?
Most security issues that arise from usage of CGI’s are not directly caused by the CGI’s themselves but by the way certain standards are set by the HTTP protocol; CGI’s only allow access to these security holes. Specifications of the CGI interface enable reading files on the system, shell access, and access to the file structure on the host.
Naturally, malicious CGI’s exist, and can be set up, but I will not discuss them; instead I will focus on the damage that can be done via your CGI’s on your own host, not on the user surfing your web.
In order for an attacker to find a vulnerable CGI, all he has to do is connect to port 80 and repeatedly send GET requests to CGI’s he knows or suspects are on the server. Simply checking your logs for repeated GET requests from a single remote host resulting in a 404, the ‘file not found’ error, can give you an idea that something wicked is going on. As time passes, that same attacker may come up with an insecure CGI on your system. If that is the case, he’ll most probably try to exploit the vulnerability.
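That log-watching advice is easy to automate. Below is a small Python sketch that flags hosts racking up 404s against /cgi-bin paths in a Common Log Format access log; the threshold and the sample addresses are arbitrary choices, not a standard:

```python
import re
from collections import Counter

# One Common Log Format entry looks like:
# 10.0.0.5 - - [08/Apr/2002:10:00:00 -0400] "GET /cgi-bin/phf HTTP/1.0" 404 208
LOG_RE = re.compile(r'^(\S+) .*?"(?:GET|HEAD) (\S+)[^"]*" (\d{3})')

def suspected_scanners(log_lines, threshold=5):
    """Return hosts with `threshold` or more 404s against /cgi-bin/ paths."""
    misses = Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group(3) == "404" and m.group(2).startswith("/cgi-bin/"):
            misses[m.group(1)] += 1
    return {host for host, count in misses.items() if count >= threshold}
```

Run nightly against the access log, a few lines like this will surface a CGI scanner long before the attacker finds anything useful.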
Basically, most security issues that arise from usage of CGI’s stem from the fact that user input is not parsed or filtered properly, and various parameters, or commands, can be issued via the web URL. An attacker may try to access any of your CGI’s in order to exploit any known security issues or vulnerabilities. For instance, an example of a malformed URL (here against the classic phf script) would be:

http://target/cgi-bin/phf?Qalias=x%0a/bin/cat%20/etc/passwd

As you can see, after the valid part of the URL, the attacker has added a new-line code (%0a) and has issued a simple viewing of /etc/passwd via the cat command. The %20 represents the ASCII value for a space character. This will not work provided that the CGI is properly set up. Also, badly implemented system calls in various scripting languages such as perl can prove to be fertile ground for various vulnerabilities.
Another example of viewing files outside the restricted folder is to exploit a bug in viewsrc.cgi, which is documented here. It’s a script for viewing source code, but it contains a vulnerability which allows viewing any file on the server by passing a path-traversal string such as ../../../etc/passwd as the file argument.
But viewing of files is not the end of the story. You can also do a denial-of-service attack against a host running a vulnerable CGI; a good example is the IBM WebSphere/NetCommerce3 DoS vulnerability, where you can take down a server running WebSphere/NetCommerce3 by sending it a single specially crafted request.
Those were the most obvious examples of CGI vulnerabilities. A lot of other possibilities exist for an attacker, whether it be a simple directory traversal, command execution, or obtaining the permissions or privileges to manipulate the web server.
If you’re planning on creating some CGI’s of your own, bear in mind these few things: in perl and bash scripts, don’t use the ‘eval’ statement for creating strings which will be executed, be careful with popen() and system() calls, and turn off server-side includes. Also, don’t leave any means for a client to manipulate the input of your scripts, and don’t rely on the client to escape any special characters, for they will be used by an attacker. It would be smart to check the ‘suexec’ documentation for the Apache web server and use it on your server.
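One simple defensive pattern is to whitelist what input may look like rather than trying to blacklist every dangerous character. A hedged Python sketch, with finger as an illustrative command:

```python
import re

# Whitelist: letters, digits, dot, underscore, dash. No shell metacharacters.
SAFE_NAME = re.compile(r"^[A-Za-z0-9_.-]+$")

def safe_finger_argv(username):
    """Build an argv for `finger <user>` only if the input is clean.

    Rejecting everything outside a whitelist is safer than escaping
    each dangerous character after the fact. Returning a list (for
    subprocess.run with shell=False) means no shell ever parses the
    input at all.
    """
    if not SAFE_NAME.match(username) or username.startswith("-"):
        raise ValueError("rejected suspicious input: %r" % username)
    return ["finger", username]
```

The %0a trick from the malformed URL above bounces straight off this: a newline, semicolon or slash is simply not in the allowed character class.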
If you’re interested in tools publicly available for checking CGI vulnerabilities, read on…
And cats have…whiskers!
A great and effective CGI scanner is Rain Forest Puppy’s Whisker. You can obtain Whisker here. Download it and use it. It’s a perl script, so you have nothing left to do but run it, so:
perl whisker.pl -i -v -h hostname -l filename
and the filename you provided should resemble something like this. Mind you, these whiskers can smell a lot of things, and if you invoke it without any switches and addresses, i.e. perl whisker.pl, you will get a full list of options. As you can see from the output, it’s a pretty clear situation. Of course, output may vary from host to host, accordingly. So, try it, and see for yourself.
By using a CGI scanner you can safely check your own system for insecure CGI’s. And surely you want to do that; you don’t want to leave anybody any options for manipulating your system. You can use any other CGI scanner; it should work just fine. Most of them have plugins of some kind to keep them up to date with vulnerabilities.
And thus cameth the conclusion
CGI vulnerabilities exist, and new ones are being found on a daily basis. And, of course, they will be exploited for this or that purpose. Unfortunately, if you have to use them, you can only hope that a patch exists or will soon be put out. Alternatively, if you’re into programming, you can try to fix them yourself.
So, if you use CGI’s, use them wisely. Check them out constantly as you try new ones. A good thing would be to regularly check sites like cgisecurity and Help Net Security for new CGI vulnerabilities, as they appear on a daily basis. It is of vital importance to keep an eye on new vulnerabilities concerning any software you run on your web server. It will help you prevent any malicious activities against your site. Apply patches that become available sooner or later for the same issues. Applying patches reduces the risk level to a minimum. Consider running utilities such as the above-mentioned suexec and CGIWrap on your web server.
Hopefully, this article brought you up to speed with the term cgi-bin vulnerabilities. Of course, much more can be said here, but for starters and for getting acquainted, this should do just fine. If you’d like to find out more, a lot of good information and links can be found at the sites mentioned throughout the article.
Last but not least, I’d like to thank Zenomorph from cgisecurity for his help and suggestions. Believe it or not, this concludes today’s easy reading material!
Delta and sustainability – Frequently asked questions
What aspects have to be considered to measure high efficiency precisely?
The most precise way is to measure the output power and the power losses by using a calorimeter test method. To perform this calorimeter test method, it requires a very accurate, special test infrastructure.
The less accurate test method is based on traditional output-power and input-power measurement. With this test method, tolerances in the measurement have a significant impact on the accuracy of the efficiency result. The higher the efficiency, the more sensitive the result is to the accuracy of the test method. Therefore the test setup and measurement points have to be precisely planned. A highly accurate power analyser has to be selected for the AC-side measurement, and multimeters and shunts for the DC-side measurement. The measurements on the AC side and DC side have to be synchronized so that variation effects do not distort the efficiency result. Additional side effects to be considered during the test process are the thermal conditions set by the ambient temperature and warm-up effects during operation of the rectifier.
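To see how sensitive the input/output method is, consider a worst-case error band: if both wattmeters are off by their full tolerance in opposite directions, the computed efficiency brackets the true value. The Python sketch below uses hypothetical numbers (1000 W in, 960 W out, 0.5 % instruments), not Delta specifications:

```python
def efficiency_band(p_in_w, p_out_w, tol):
    """Worst-case efficiency band when input and output power are each
    measured with a relative instrument tolerance `tol` (0.005 = 0.5 %)."""
    low = (p_out_w * (1 - tol)) / (p_in_w * (1 + tol))
    high = (p_out_w * (1 + tol)) / (p_in_w * (1 - tol))
    return low, high

# Hypothetical rectifier: 1000 W in, 960 W out, i.e. a true efficiency
# of 96 % and true losses of 40 W, measured with 0.5 % instruments.
low, high = efficiency_band(1000.0, 960.0, 0.005)
loss_band_w = ((1 - high) * 1000.0, (1 - low) * 1000.0)
```

With these numbers the measured efficiency can fall anywhere between roughly 95.0 % and 97.0 %, so the 40 W of true losses can read anywhere from about 30 W to 50 W, around a 25 % error in the loss figure from 0.5 % instruments. That is why the calorimetric method is preferred at high efficiency.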
What does Delta mean by “Total Energy Efficiency”?
Total Energy Efficiency means that we focus not only on the direct but also the indirect environmental effects of our products. We pay attention to, for example, system-level energy efficiency, material consumption, transportation, the need for maintenance visits, fuel consumption and cooling solutions.
How is the energy of a RenE system produced during the night or in bad weather?
The batteries of a stand-alone solar power system are charged and discharged daily. During the day, the power system recharges the battery. From late afternoon to mid-morning, the system uses the energy from the batteries. As for bad weather, a stand-alone solar power system bridges a sunless period with additional battery capacity and solar recharge power. A hybrid system with a generator uses the generator. Optimized generator use ensures the lowest possible diesel consumption.
What kind of hybrid options does Delta offer?
The energy source of a RenE solution can be any combination of solar power, AC mains, wind turbines, generators and fuel cells. The customer chooses the primary and supporting energy sources. Location, climate conditions and operational costs play an important role in this decision.
When is a hybrid solar power and generator system most feasible?
A solar power system supported by a generator is a suitable option especially on islands and in remote areas where refuelling trips are nevertheless possible. Usually the reason for using a generator instead of AC mains is that an AC infrastructure is not available or is unreliable.
In what voltage range does Delta’s stand-alone solar power system operate?
The voltage of our standard solar power system is 48V. If an additional 24V backup power supply is required, a modular DC/DC converter can convert 48V to 24V. If further additional backup power supply is needed for AC-powered equipment, a modular or stand-alone inverter will convert 48V to backup the required AC.
In what way does a rectifier’s high power density help reduce a carbon footprint?
High power density in a rectifier means that a great amount of power has been packed into a small space. Therefore, smaller and lighter devices are able to meet more demanding energy needs – and require less materials, packaging, space and transportation. Delta’s DPR 2400 and DPR 2700 series rectifiers have industry-leading power density.
How do the power supply controllers support sustainable development?
A controller is the brains of a power system. A high-quality controller such as Delta’s PSC3 offers direct and indirect benefits in terms of sustainability.
Delta’s PSC3 features a scheduled start-up process for the rectifiers. This means that the diesel generator can be reduced to half its normal size, which in turn means a better fuel-to-electricity conversion efficiency. Optimizing the generator size and operating point can save up to 30% of fuel during operation. This reduces the number of refuelling trips to the site – also an important aspect to total energy efficiency. In addition, the periodical automatic battery tests and remote control possibilities further reduce unnecessary check-up trips to the site.
What does the hybrid cooling of OutD systems mean?
Hybrid cooling means that air ventilation, air conditioning and heat exchanging are combined in a manner ideal for the specific conditions. For example, in an air ventilation + air conditioning configuration, air ventilation is sufficient as long as the temperature remains moderate. Only when the temperature rises does active cooling kick in. For example, if you operate in an area where the temperature rises above +25°C only 4% of the time (e.g. Frankfurt), you can save as much as 83% on energy costs.
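The energy arithmetic behind a claim like that is straightforward. The Python sketch below uses invented cabinet figures (100 W of fans, a 1500 W air conditioner), so the exact saving it computes is illustrative rather than a Delta specification:

```python
def cooling_energy_kwh(hours, hot_fraction, p_vent_w, p_ac_w):
    """Yearly cooling energy when the air conditioner only runs during
    the hot fraction of the year and the fans run all the time."""
    hot_hours = hours * hot_fraction
    active = hot_hours * (p_vent_w + p_ac_w)     # fans plus compressor
    passive = (hours - hot_hours) * p_vent_w     # fans only
    return (active + passive) / 1000.0

# Invented cabinet: 100 W fans, 1500 W air conditioner, and a climate
# where it is hot (>25 C) only 4 % of the year, as in the Frankfurt
# example above (8760 hours in a year).
hybrid = cooling_energy_kwh(8760, 0.04, 100, 1500)
ac_always = cooling_energy_kwh(8760, 1.00, 100, 1500)
saving = 1 - hybrid / ac_always
```

With these made-up figures the hybrid scheme uses about 1,400 kWh a year against 14,000 kWh for always-on air conditioning, a saving of the same order of magnitude as the 83 % quoted above.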
In which way has the environment been taken into consideration in developing the Site Monitoring and Control System?
The SMCS promotes sustainability primarily through its impact on diesel generator (genset) operation. For example, the option of using either a full-scale or half-scale genset saves diesel especially during start-up. The SMCS also saves diesel consumption by reducing the need for genset start-ups, as the power system can operate in single phase supply mode using AC utility. Remote monitoring, control and maintenance functions make daily operation easier and also reduce the number of trips to the site, which is especially important in rural areas.
What is a green building?
A green building has been designed so as to take into account energy and water use, ecological considerations, waste and emissions, as well as a safe and pleasant working environment. Green building certification requires the building to meet the official criteria, which vary by country. Delta Rudrapur and Tainan factories are green buildings. For more information, visit the Indian Green Building Council (http://www.igbc.in:9080/site/igbc/index.jsp) or the Taiwan Green Building Council (http://www.taiwangbc.org.tw/en/).
What are Delta EnergE products?
EnergE stands for energy efficiency. We use the EnergE label to help our customers identify the rectifier products that are especially energy efficient. All rectifier products with the label have the world’s leading energy efficiency in their specific power range, and always an efficiency of 95% or higher.
What does a rectifier’s energy efficiency mean?
The energy efficiency of rectifiers refers to how well (that is, without losses) the energy is converted from the AC mains to the DC output. Another dimension of energy efficiency is how well the power system is controlled – to convert energy efficiently, rectifiers should always operate in the optimal way. Thus the best total energy efficiency is achieved by the controller and rectifier working together.
What do dropped packets on a traceroute mean?
What is a traceroute?
A traceroute is a program that traces the path a packet will take from point A to point B. It does this by sending a series of probe packets with steadily increasing TTL (time-to-live) values; each router at which the TTL expires replies with an error message, letting the program know that that router is in the path. It sends these probes one after another until it gets a response from the final destination. This allows the program to string the error messages together into the logical path that the packet takes.
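The bookkeeping behind those error messages can be sketched in a few lines. The toy Python simulation below uses invented router names; None stands for a hop whose reply never came back:

```python
def traceroute_sim(path, max_ttl=30):
    """Simulate traceroute against a known router path.

    `path` lists the nodes between us and the destination in order,
    destination last; None marks a router that drops our low-priority
    probe. Probe 1 goes out with TTL 1 and expires at the first
    router, probe 2 with TTL 2, and so on, until the destination
    itself answers and the trace is complete.
    """
    hops = []
    for ttl in range(1, min(max_ttl, len(path)) + 1):
        node = path[ttl - 1]
        hops.append((ttl, "*" if node is None else node))
        if node == path[-1]:
            break                       # destination reached; trace complete
    return hops
```

Feeding it a four-node path yields one (hop number, name) pair per TTL, with "*" exactly where a real traceroute would print a timeout.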
What information can I get from it?
A traceroute can be a very useful tool for pointing out a single failure in a network, congestion, and sometimes even routing loops. It lets you see the route a packet will take to a specific IP and allows network engineers to find out if a specific router may be having an issue.
Why timeouts on a router aren’t always a bad thing.
Control Plane vs Data Plane
When routers receive packets addressed to them, they treat them much differently than packets not addressed to them. This is what is called the separation between the control plane (the brain of the router) and the forwarding plane (the arms of the router). Routers are very different machines from most servers, because they were designed to do one thing: route packets. To do this they have very, very strong arms (forwarding planes) but very weak brains (control planes) in terms of processing power.
When a router receives a packet that is not addressed to it, it will use its arms (forwarding plane) to push the packet out the correct interface; this is a very quick process (millionths of a second).
When a router receives a packet that is addressed to itself, it interacts directly with the router's brain (control plane). This is a much slower process and, in the hands of a nasty user, can be used to attack a router and possibly bring it down. To counteract this, engineers designed the brain (control plane) to be completely separated from the arms (forwarding plane) and limited the number of packets that can be processed by the brain (control plane), so that it would be much more difficult for an attacker to overload it.
But what does this have to do with traceroutes?
Well, you see, when one of those traceroute probes expires at a router, the error message has to be generated by the brain (control plane) of the router. These messages have a VERY low priority; this means that unless the router has literally nothing better to do, it will simply drop the message (timeout). This means that if there are lots of people sending traceroutes through that router (as there often is with high-traffic routers), then it will appear that the router is having issues when in reality there could be nothing wrong with it.
When ARE timeouts on a traceroute a bad thing?
There is only one real instance when a timeout on a traceroute is a bad thing: when the timeouts continue forward in the route; that is, when you see an individual timeout followed by many more after it.
There are two main instances when this can happen. The first and most common is that there is a firewall in the route configured to block these packets. The other instance is that a router is dropping packets going THROUGH it (i.e. forwarding-plane packets), and this can be a VERY bad sign. This is usually caused by one of three reasons: the router is overloaded, the router is having a software or physical failure, or the router is configured to do this (null routes/blackholes). This should be brought to our team's attention so that we can do our best to avoid this route, lessening the impact on your services, or investigate if there is a null route/firewall in the way.
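That rule of thumb is mechanical enough to sketch in Python. Hop names below are placeholders, and "*" marks a timed-out hop:

```python
def classify_trace(hops):
    """Apply the rule of thumb above to a finished trace.

    `hops` is the per-hop result, "*" marking a timeout. A timeout in
    the middle with real replies after it is harmless: a router merely
    deprioritised our probe. Timeouts that start and never stop mean a
    firewall in the way or a device dropping forwarding-plane traffic.
    """
    replies = [i for i, h in enumerate(hops) if h != "*"]
    if not replies:
        return "unreachable"
    if replies[-1] == len(hops) - 1:
        return "healthy"            # trace completed; mid-trace * is noise
    return "dies after hop %d" % (replies[-1] + 1)
```

A trace that ends in replies is classified healthy even with a timeout in the middle, which matches the good and bad examples below.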
A Good Trace Route
- Even though there is a time out in the middle the packet still makes it all the way to the end (note the “Trace complete”)
- The last 3 hops are the most important, the packet comes into our edge(ten-7-4.edge1.level3.mco01.hostdime.com) then past our core(xe1-3-core1.orl.hostdime.com) then to the final hop.
- If a trace route begins to time out after the core then 99% of the time it's a firewall issue on the server itself.
A Bad Trace Route
- Even though there is a time out in the middle the packet still makes it another hop(18.104.22.168) meaning that the 4th hop router is not the issue.
- We know from the last trace route that (ae-1-8-bar1.orlando1.level3.net) is the next hop meaning thats where the issue starts.
- Also note that the trace does not finish, meaning that this device is unreachable.
A Good Trace Route that appears bad due to a firewall
- Just like with both the good and the bad traceroute there is a drop in the middle, but note that the packet still makes it all the way past both our core and edge meaning it makes it into our network.
- 99% of the time if it makes it past our edge and core then it is a firewall/physical issue.
A Quick Intro to Sniffers:
Wireshark/Ethereal, ARPSpoof, Ettercap, ARP poisoning and other niceties.
When I tell some of my coworkers that I'm sniffing the network, they have a tendency to look at me funny. A Sniffer (also known as a Network Analyzer) is a piece of software that can look at network traffic, decode it, and give meaningful data that a network administrator can use to diagnose problems on a network. Sniffers are also useful tools for deviant computer users since they can be used to pull plain text passwords off a network. A few popular general purpose Sniffers are NAI Sniffer (commercial), Wireshark (previously known as Ethereal, an Open Source GUI Sniffer for Linux, Windows and other platforms), TCPDump (Open Source command line Sniffer for *nix, i.e. any Unix-like operating system such as Linux or FreeBSD) and its Windows version called WinDump.
First an explanation of some network basics is in order. Most Ethernet networks used to be of a common bus topology, using either coax cable or twisted pair wire and a hub. All of the nodes (computers and other devices) on the network could communicate over the same wires and take turns sending data using a scheme known as carrier sense multiple access with collision detection (CSMA/CD). Think of CSMA/CD as being like a conversation at a loud party: you may have to wait for quite a spell for your chance to get your words in during a lull in everybody else's conversation. All of the nodes on the network have their own unique MAC (media access control) address that they use to send packets of information to each other. Normally a node would only look at the packets that are destined for its MAC address. However, if the network card is put into what is known as "promiscuous mode" it will look at all of the packets on the wires it is hooked to.
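At the lowest level a sniffer is just pulling those MAC addresses apart. Below is a minimal Python sketch of decoding the 14-byte header that every Ethernet frame starts with; the addresses in the test data are made up:

```python
import struct

def fmt_mac(raw):
    """Render 6 raw bytes in the usual aa:bb:cc:dd:ee:ff notation."""
    return ":".join("%02x" % b for b in raw)

def parse_ethernet_header(frame):
    """Split out destination MAC, source MAC and EtherType.

    The first 14 bytes of an Ethernet frame are: destination MAC
    (6 bytes), source MAC (6 bytes), EtherType (2 bytes, big-endian).
    A NIC normally compares the destination MAC with its own address
    and silently discards frames that do not match; in promiscuous
    mode it hands every frame up to the sniffer instead.
    """
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return fmt_mac(dst), fmt_mac(src), ethertype
```

A destination of ff:ff:ff:ff:ff:ff is the broadcast address every node listens to, and an EtherType of 0x0806 marks the frame as ARP, which becomes relevant further down.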
To cut down on the number of collisions and the possibility of sniffing data that does not belong to a node, most networks use switches. On a network, a hub is a passive device that sends all traffic it receives to all of its ports. A switch on the other hand looks at the MAC address of the nodes hooked to it and what ports they are on, then tries to send packets only to the nodes they are intended for. A switch cuts back on the number of collisions on the network, increasing throughput. In theory, on a switched network a node can only see broadcast messages (meant for all computers on the LAN) and packets addressed to its MAC, along with the occasional stray packet whose destination is not known. Even with switches in place a LAN can be sniffed using the mirrored port on some switches (put there so administrators can use a Sniffer to diagnose network problems), by confusing the switch into mirroring traffic to all ports or by a technique known as ARP poisoning (more on this later).
The above is about Ethernet networks, WiFi (802.11a/802.11b/802.11g/802.11n) is a bit different however. Wireless LANs act a lot like Ethernet LANs using hubs. Every computer on the LAN can see the traffic destined to others but normally they just choose to ignore it. (In reality it's a little more complicated than that, but I want this to be an article and not a book on the intricacies of 802.11 networks) However, if a network card is put into what is known as promiscuous mode, it will not ignore traffic going to other computers and will instead look at it, allowing the user of the computer running the sniffer to see the data traveling to other computers attached to the same access point. Promiscuous mode works on pretty much any wired network card in Windows and Linux (or other Unix like Operating System), but not all wireless cards support it properly (like Intel's Centrino 802.11g chipset known as IPW2200). If the sniffer's card does support promiscuous mode it will have to be attached to the wireless network's WAP (Wireless Access Point) to be able to see anything. If the attacker is using Linux (or another Unix like Operating System) the attacker may be able to use what is known as monitor mode if their card supports it. In monitor mode, the wireless network card listens to the raw packets in the radio waves without ever having to attach to a WAP. The nice thing about monitor mode from the attacker's perspective is that they leave no logs of their activities since they don't have to attach to the WAP and don't have to send any packets on the network.
Sniffing WiFi networks is further complicated by what security protocols they use. If your card supports promiscuous mode and you can attach to a wireless network using WEP (in other words, you know the WEP key) you can sniff pretty much anything you want. If the network is using WPA it's not as easy, since just knowing the pass phrase won't let you decode all traffic in a network conversation your box is not involved in. However, it may be possible to ARP poison or use some other MitM (Man in the Middle) attack to get the data routed through you.
Sniffers have many legitimate uses that system
administrators should be aware of. They can be used to find what computers on
the network are causing problems such as using too much bandwidth, having the
wrong network settings or running malware. I've personally found them useful in
the past for finding hack attempts as they were happening by sniffing my own
servers for inappropriate traffic. Every system admin would do well to learn
about using sniffers to find network problems and I'd recommend starting with
Wireshark since it's free, multiplatform and well supported (see the links
section of this article of more information).
Sniffers can also be used by those trying to bypass security. Many popular application protocols pass logon credentials (username and password) in plain text or using weak encryption that's easy for a Sniffer to decode. Common examples of such insecure protocols are FTP, Telnet, POP3, SMTP, and HTTP Basic Authentication. In their place use encrypted protocols like SFTP, SSH (Secure Shell), and HTTPS (SSL) when possible. Protocols like FTP may be hard to switch away from because the clients for more secure protocols like SFTP are not as readily available. FTP clients come with every recent version of Windows (ftp.exe from the command line and Explorer from a GUI), but free clients that support SFTP like FileZilla and PSFTP can be downloaded. A few sniffers that have good password extraction abilities include Cain, Dsniff and Ettercap. All three are free or Open Source. Cain is for Windows only and Dsniff and Ettercap are mostly used in *nix environments but also have Windows versions available.
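To see why those protocols are considered insecure, here is a small Python sketch of the kind of parsing a password sniffer does, applied to the text of a captured FTP command channel. The session below is invented, and real tools such as Dsniff or Cain work on live packets rather than a transcript:

```python
def extract_ftp_logins(capture_lines):
    """Pull USER/PASS pairs out of a captured FTP command channel.

    FTP sends both commands in plain text, so anyone who can sniff
    the wire sees them; this is essentially the parsing that password
    sniffers automate against live traffic.
    """
    logins, user = [], None
    for line in capture_lines:
        verb, _, arg = line.strip().partition(" ")
        if verb.upper() == "USER":
            user = arg
        elif verb.upper() == "PASS" and user is not None:
            logins.append((user, arg))
            user = None
    return logins
```

With SFTP or SSL the same capture would show only ciphertext, which is the whole argument for moving off the legacy protocols.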
ARP Spoofing/ARP Poisoning
ARP stands for Address Resolution Protocol and it allows the network to translate IP addresses into MAC addresses. Basically, ARP works like this: When one host using IP on a LAN is trying to contact another it needs the MAC address of the host it is trying to contact. It first looks in its ARP cache (to see your ARP cache in Windows type in "arp -a" at the command line) to see if it already knows the MAC address, but if not, it broadcasts out an ARP request asking "Yo, who has this IP address I'm looking for?" If the host that has that IP address hears the ARP query it will respond with its own MAC address and a conversation can begin using IP. In common bus networks like Ethernet using a hub or 802.11b all traffic can be seen by all hosts whose NICs (network interface card) are in promiscuous mode, but things are a bit different on switched networks. A switch looks at the data sent to it and tries to only forward packets to its intended recipient based on the MAC address. Switched networks are more secure and help speed up the network by only sending packets where they need to go. There are ways around switches though. Using a program like Arpspoof (part of the Dsniff package), Ettercap or Cain we can lie to other machines on the local area network and tell them we have the IP they are looking for, thus funneling their traffic through us.
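The lookup sequence described above (check the ARP cache first, broadcast a request only on a miss) can be sketched as a toy Python simulation; the host table and addresses below are invented for illustration:

```python
# Toy ARP resolution: cache lookup first, broadcast request on a miss.
# The LAN table below is a made-up example, not a real capture.
lan = {  # hosts that would answer an ARP broadcast: IP -> MAC
    "192.168.1.1": "aa:bb:cc:00:00:01",
    "192.168.1.2": "aa:bb:cc:00:00:02",
}

arp_cache = {}  # what "arp -a" would show on our box

def resolve(ip):
    if ip in arp_cache:           # 1. check the local ARP cache
        return arp_cache[ip]
    # 2. broadcast: "Yo, who has this IP address I'm looking for?"
    mac = lan.get(ip)             # the owner of that IP (if any) replies
    if mac is not None:
        arp_cache[ip] = mac       # remember the answer for next time
    return mac

print(resolve("192.168.1.1"))  # resolved by broadcast, then cached
print(resolve("192.168.1.1"))  # answered straight from the cache
```

The important detail for what follows is that nothing in this exchange is authenticated: the cache simply believes whatever reply shows up.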
Even with a switched network it's not hard for an attacker to use Dsniff or Ettercap from the BackTrack boot CD to do some ARP spoofing and redirect traffic through them for the purposes of sniffing. These tools can even parse out usernames and passwords automatically, making the attacker's job easy. If the attacker ARP Spoofs between the gateway and the FTP server he can sniff the traffic and extract user names and passwords as users are trying to get their data from offsite, and the same thing goes for SMTP and POP3. Even with SFTP, SSL, and SSH, passwords can still be sniffed with Ettercap because it has the ability to proxy those types of connections. The user might get a warning that the public key of the server they are trying to get to has changed or may not be valid, but how many of us just click past those kinds of messages without actually reading them?
The image in figure 1 helps to illustrate how ARP Spoofing/ARP Poisoning works. Basically, the attacker is telling Alan's box that he has the IP that corresponds to Brian's box and vice versa. By doing this the attacker receives all network traffic going between Alan and Brian. Once the attacker has ARP Spoofed his way between two nodes he can sniff the connection with whatever tool he likes (TCPDump, Wireshark, Ngrep, etc.) By ARP Spoofing between a computer and the LAN's gateway an attacker can see all the traffic the computer is sending out and receiving from the Internet. In this article I'm only giving the basics of how these tools are used.
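The poisoning in figure 1 can also be sketched as a toy simulation: because ARP replies are not authenticated, the attacker simply overwrites the victims' cache entries with his own MAC address, and from then on frames addressed "to" the other party are delivered to him (all addresses invented):

```python
# Toy ARP poisoning: unsolicited replies overwrite the victims' caches,
# so frames addressed "to" the other host arrive at the attacker instead.
# All addresses here are invented for illustration.
ATTACKER_MAC = "de:ad:be:ef:00:00"

alan  = {"ip": "192.168.1.2", "arp_cache": {"192.168.1.3": "aa:bb:cc:00:00:03"}}
brian = {"ip": "192.168.1.3", "arp_cache": {"192.168.1.2": "aa:bb:cc:00:00:02"}}

def poison(victim, spoofed_ip):
    # ARP has no authentication: the cache accepts the reply as-is.
    victim["arp_cache"][spoofed_ip] = ATTACKER_MAC

# "I have the IP that corresponds to Brian's box" (told to Alan), and vice versa.
poison(alan,  brian["ip"])
poison(brian, alan["ip"])

# Alan now frames his IP packets for Brian with the attacker's MAC:
next_hop = alan["arp_cache"][brian["ip"]]
print(next_hop == ATTACKER_MAC)  # True -- traffic funnels through the attacker
```

As long as the attacker forwards the frames on to their real destinations, neither victim notices anything.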
A quick demonstration of ARP Spoofing using Dsniff Tools and Ettercap
Let's start by using Dug Song's Arpspoof program that comes with his Dsniff package. I use the *nix version, but if you look around you may be able to find a Win32 version. The easiest way to run Dsniff is to boot from a BackTrack boot CD. The first thing you should do is make sure packet forwarding is turned on, otherwise your machine will drop all traffic between the hosts you are trying to sniff, causing a denial of service. Some of the tools I use do this automatically (like Ettercap), but to be sure, you may want to do it yourself. Use one of the following commands, depending on your operating system:
echo 1 > /proc/sys/net/ipv4/ip_forward    (Linux)
sysctl -w net.inet.ip.forwarding=1        (BSD and Mac OS X)
Now that your computer will forward the traffic you can start ARP Spoofing. Let's assume you want to sniff all traffic between a host and the gateway so you can see the traffic it's sending to the Internet. To get traffic in both directions you would use the following two commands:
arpspoof -t 192.168.1.1 192.168.1.2 >/dev/null &
arpspoof -t 192.168.1.2 192.168.1.1 >/dev/null &
The ">/dev/null &" part suppresses output and backgrounds each process so both commands can run from one terminal, but you may want to omit it for debugging purposes. Now you can use any sniffer you wish to watch the connection. To start with, I'd recommend using Dsniff, the password sniffer that comes along with Arpspoof, to sniff for plain text passwords. To start sniffing with Dsniff just drop out to a command shell and type:

dsniff
As Dsniff finds passwords and usernames it will print them to the screen. To look at all sorts of other traffic I would recommend TCPDump or Wireshark. When you are ready to stop ARP Spoofing, issue the following command:

killall arpspoof
This should kill the two instances of Arpspoof started above.
Another great tool is Ettercap, the Swiss
army knife of ARP Poisoning and password sniffing. I usually use it in non-interactive mode, but by default it
has an ncurses interface that some may find easier to use. If you would like to
use Ettercap for ARP poisoning instead, the following commands should serve as
good examples. If we wanted to target all hosts on the network and sniff traffic
between every node, we would use the following command:
ettercap -T -q -M ARP // //
Be careful with the above command; having all of the traffic on a large network going through one slow computer can really bog down network connections. If we had a specific victim in mind, let's say a host with the IP 192.168.1.1, we would use this command:
ettercap -T -q -M ARP /192.168.1.1/ //
If 192.168.1.1 is the gateway, we should be able to see all outgoing traffic. Here is what the command line option flags do:
-T tells Ettercap to use the text interface; I like this option the best, as the GUI modes are rather confusing.
-q tells Ettercap to be quieter (in other words, less verbose).
-M tells Ettercap the MITM (Man in the Middle) method we want to use, in this case ARP poisoning.
For some other things you can do with Ettercap check out my
Ettercap Filters: The Movie.
There are many other packages I would like to mention as well. The first is Cain, which Windows users will be much more comfortable with. It has some great functionality and a nice interface. I have a video tutorial on how to use it here:
If you like pretty GUIs, Cain is the way to go. It does not have as many options as Ettercap, but it's still pretty cool and has some other Windows specific extras built in.
There are also specialized sniffers for certain kinds of content. Driftnet parses out the images people are seeing as they web surf. I've not done a video on Driftnet, but I have done one on NetworkActive, which can also parse out images from web traffic:
There are also sniffers like P0f that let you passively fingerprint the OS behind network traffic:
And that's just the tip of the iceberg when it comes to specialized sniffers.
Mitigating Sniffing Attacks
There are quite a few ways to mitigate sniffing attacks.
1. Avoid using insecure protocols like Basic HTTP authentication and Telnet. As a matter of fact you should sniff your own network to see what passwords the tools listed above can pick up.
2. If you have to use an insecure protocol, try tunneling it through something to encrypt the sensitive data. I have a video on SSH Dynamic Port Forwarding that shows one way to accomplish this.
3. Look into using static ARP tables between critical workstations and servers. They are more trouble to maintain but limit ARP spoofing.
4. Run software like ARPWatch to detect changes in MAC addresses on your network that may point to sniffers.
5. Try running tools like Sniffdet and Sentinel to detect network cards in promiscuous mode that may be running sniffing software.
6. Require outside laptops that come into your facility over Wi-Fi to use a VPN to connect to the network.
7. Lock down workstations so users can't install sniffing software or boot from a CD like Knoppix.
8. Keep public terminals on a separate LAN from the staff workstations and servers.
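Tip 4's ARPWatch-style monitoring boils down to keeping a history of IP-to-MAC pairings and flagging any change: a new MAC claiming an IP that previously belonged to another card is a classic sign of ARP spoofing. A minimal sketch in Python (all addresses invented):

```python
# Minimal ARPWatch-style monitor: remember IP->MAC pairings and flag changes.
# The observation stream below is invented for illustration.
seen = {}     # IP -> MAC we last saw it paired with
alerts = []

def observe(ip, mac):
    # A "flip-flop" (an IP suddenly answering from a new MAC) is the
    # signature that ARPWatch raises an alert on.
    if ip in seen and seen[ip] != mac:
        alerts.append(f"flip-flop: {ip} moved {seen[ip]} -> {mac}")
    seen[ip] = mac

observe("192.168.1.1", "aa:bb:cc:00:00:01")   # gateway, first sighting
observe("192.168.1.1", "aa:bb:cc:00:00:01")   # unchanged -- stays quiet
observe("192.168.1.1", "de:ad:be:ef:00:00")   # gateway's MAC changed: suspicious
print(alerts)
```

A real monitor would read pairings from live ARP traffic and also handle legitimate changes such as a replaced network card, but the core logic is this simple comparison.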
I hope you have found this article useful.
Wireshark User's Guide
02/01/2005: Article first published.
07/30/2007: Update Ethereal to Wireshark, added section on WiFi networks, switched to using BackTrack CD and a lot of other little tweaks.
04/30/2008: Fixed a stupid error I made where I mistyped 801.11 instead of 802.11. | <urn:uuid:e0bb2135-38a2-475c-a098-874fe29bcb9b> | CC-MAIN-2017-04 | http://www.irongeek.com/i.php?page=security/AQuickIntrotoSniffers | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00564-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937686 | 3,441 | 2.9375 | 3 |
Which of these is the incorrect explanation of Traceview?
In order to create a Tracefile, you must write methods specifying the starting and ending
positions of the part to be profiled.
Since Tracefiles are saved on SD cards, the developer must specify a file name.
Traceview is one of the tools of the SDK, and can be used from the command line or DDMS.
Since Traceview cannot use tracefiles as is, a development machine with adb must be used. | <urn:uuid:2ce7d421-6bfe-4faf-9739-bf7c1ac46491> | CC-MAIN-2017-04 | http://www.aiotestking.com/android/which-of-these-is-the-incorrect-explanation-of-traceview/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00380-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.844639 | 104 | 2.671875 | 3 |
Information Security in 2020
The rise in mobility and participation in social networks, the increasing willingness to share more and more data, new technology that captures more data about data, and the growing business around Big Data all have at least one assured outcome — the need for information security.
However, the news from the digital universe is as follows:
- The proportion of data in the digital universe that requires protection is growing faster than the digital universe itself, from less than a third in 2010 to more than 40% in 2020.
- Only about half the information that needs protection has protection. That may improve slightly by 2020, as some of the better-secured information categories will grow faster than the digital universe itself, but it still means that the amount of unprotected data will grow by a factor of 26.
- Emerging markets have even less protection than mature markets.
In our annual studies, we have defined, for the sake of analysis, five levels of security that can be associated with data having some level of sensitivity:
- Privacy only — an email address on a YouTube upload
- Compliance driven — emails that might be discoverable in litigation or subject to retention rules
- Custodial — account information, a breach of which could lead to or aid in identity theft
- Confidential — information the originator wants to protect, such as trade secrets, customer lists, confidential memos, etc.
- Lockdown — information requiring the highest security, such as financial transactions, personnel files, medical records, military intelligence, etc.
The tables and charts illustrate the scope of the security challenge but not the solution. While information security technology keeps getting better, so do the skills and tools of those trying to circumvent these protections. Just follow the news on groups such as Anonymous and the discussions of cyberwarfare.
However, for enterprises and, for that matter, consumers, the issues may be more sociological or organizational than technological — data that is not backed up, two-factor authentication that is ignored, and corporate policies that are overlooked. Technological solutions will improve, but they will be ineffective if consumer and corporate behavior doesn’t change.
Big Data is of particular concern when it comes to information security. The lack of standards among ecommerce sites, the openness of customers, the sophistication of phishers, and the tenacity of hackers place considerable private information at risk. For example, the transaction and customer profile data that one retailer keeps private about your purchase, another company may expose while keeping different data hidden. Yet intersecting these data sets with other, seemingly disparate data sets may open up wide security holes and make public what should be private information.
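A toy example of how intersecting data sets can undo privacy: two tables that look harmless on their own leak identities once joined on shared attributes. All records below are invented for illustration:

```python
# Toy re-identification: an "anonymized" purchase log joined against a
# public listing on shared quasi-identifiers (zip code + birth date)
# ties each "private" purchase back to a name. All records invented.
purchases = [  # retailer's log with the customer names stripped out
    {"zip": "20500", "birth": "1970-01-15", "item": "medication X"},
    {"zip": "90210", "birth": "1985-06-02", "item": "book"},
]
public_roll = [  # a separate, public data set with the same attributes
    {"name": "J. Smith", "zip": "20500", "birth": "1970-01-15"},
    {"name": "A. Jones", "zip": "90210", "birth": "1985-06-02"},
]

exposed = [
    (p["name"], q["item"])
    for q in purchases
    for p in public_roll
    if (p["zip"], p["birth"]) == (q["zip"], q["birth"])
]
print(exposed)  # each purchase is now tied to a name
```

Neither data set alone reveals who bought what; the join does, which is why standardizing what gets published matters.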
There is a huge need for standardization among retail and financial Web sites as well as any other type of Web site that may save, collect, and gather private information so that individuals’ private information is kept that way. | <urn:uuid:164e1c43-3050-4ce0-b449-b4319ac7b4f3> | CC-MAIN-2017-04 | https://www.emc.com/leadership/digital-universe/2012iview/information-security-2020.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00498-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934616 | 589 | 2.515625 | 3 |
Geographic information systems professionals have been seeing an increase in career opportunities lately -- from working for specifically map-based enterprises such as Esri’s CityEngine and Google Earth to helping the public sector with its geospatial needs.
But now, a new avenue has opened up in the world of GIS: video games. Many games have begun to feature complex interactive maps, as seen in Grand Theft Auto V, which has blown away gamers with its attention to detail, and Call of Duty: Ghosts, where multiplayer maps play an important role. Other games such as SimCity allow you to build your own world.
"Whether it is introducing an improved user experience for games or creating more dynamic public spaces for our citizens to enjoy, there are so many ways in which GIS data can be leveraged to improve the world,” Stephen McElroy, GIS program chair at American Sentinel University, said in a statement. “3-D environments that simulate the real world help us to understand and plan sustainable environments."
If you're a GIS professional looking for a job, gaming is one route to consider. | <urn:uuid:e678dbd4-33c6-419d-adf3-4705ec0916ad> | CC-MAIN-2017-04 | http://www.nextgov.com/cio-briefing/wired-workplace/2014/08/mapping-out-video-games/91793/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00316-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949258 | 229 | 2.515625 | 3 |
The Federal Communications Commission said it wants to make up to 195 megahertz of additional spectrum in the 5 GHz band available to unlicensed wireless devices with the idea that such a move would enable Wi-Fi equipment that can offer faster speeds of one gigabit per second or more, increase overall capacity, and reduce congestion.
"Unlicensed National Information Infrastructure devices today operate in 555 megahertz of spectrum in the 5 GHz band, and are used for short range, high speed wireless connections including Wi-Fi enabled local area networks and fixed outdoor broadband transceivers used by wireless Internet service providers to connect smart phones, tablets and laptops to the broadband network," the FCC stated.
The FCC proposal needs to go through a public comment period and is by no means a slam dunk, as the military, the US Department of Homeland Security and others already use parts of that spectrum and have expressed concern about sharing it with commercial applications.
"Wi-Fi congestion is a very real and growing problem. Like licensed spectrum, demand for unlicensed spectrum threatens to outpace supply. The core challenge is the dramatically increased use of wireless devices, which require spectrum," FCC Chairman Julius Genachowski said at the agency's monthly meeting in Washington. "This additional spectrum will increase speeds and alleviate Wi-Fi congestion at major hubs, such as airports, convention centers and large conference gatherings. In addition, this would also increase speed and capacity for Wi-Fi in the home where multiple users and devices are often on the network at the same time. Because the 5GHz band is already used for other purposes by both federal and non-federal users, the effort will require significant consultation with stakeholders to enable non-interfering shared use of the spectrum. But consultation can't be an excuse for inaction or delay."
Interestingly, deflecting such battles is the idea behind a new program that researchers at the Defense Advanced Research Projects Agency (DARPA) will detail this month. DARPA's Shared Spectrum Access for Radar and Communications (SSPARC) program has a goal of boosting radar and communications capabilities for military and commercial users by creating technical ways to enable spectrum sharing.
SSPARC looks to support two beyond state of the art types of spectrum sharing: Military radars sharing spectrum with military communications networks, and military radars sharing spectrum with commercial communications networks, DARPA stated.
"Balancing national security requirements of radars and military networks with the growing bandwidth demands of commercial wireless data networks calls for innovative approaches to managing spectrum access," DARPA stated.
DARPA went on to say that the challenge of spectrum access is especially acute in the frequencies between 2-4 GHz, which are highly desirable for military systems and commercial networks. SSPARC will focus on technologies to share spectrum at these frequencies. Technologies developed in the program could be applicable at other frequencies as well.
In related news, the FCC approved a new regulation letting companies or consumers use approved and licensed signal boosters to amplify signals between wireless devices. Signal boosters, thousands of which are already in use, not only help consumers improve coverage where signal strength is weak, but they also aid public safety first responders by extending wireless access in hard-to serve areas such as tunnels, subways, and garages, the FCC stated.
"Most of the procedural and technical rules we adopt for consumer signal boosters are based on a Consolidated Proposal, agreed to by several signal booster manufacturers, the four nationwide wireless service providers, and over 90 small, rural, wireless service providers. They are designed to facilitate the development of safe, economical signal boosters, reduce consumer confusion, and encourage innovation in the booster market," said FCC's Mignon Clyburn. "We also adopt different, but sensible rules for Industrial Signal Boosters. These devices are typically designed to serve multiple users simultaneously, and cover larger areas such as stadiums, airports, office buildings, and hospitals. They are high powered and may use a greater number of antennas, amplifiers, and other components. Given the characteristics of industrial boosters, this order reasonably requires greater coordination by the installer with the wireless service provider."
Check out these other hot stories: | <urn:uuid:7d8e1dc2-8a99-48f4-9339-753cd9f01055> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2224071/wireless/fcc-moves-to-boost-wireless-speeds--avoid-congestion.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00434-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943266 | 835 | 2.875 | 3 |
It’s still a buzzword. And it’s still, in the eyes of many companies, the ticket to big bucks. Confusion as to the nature of the cloud remains, enabling “cloudwashing” to remain as a marketing strategy and luring unsuspecting customers into arrangements that may be less profitable than the hype would indicate. So, where does the cloud really stand right now?
Defining the Cloud
For some, the cloud is synonymous with the Internet. Although there is a close correspondence in many cases, some differences do exist. Defining the cloud exhaustively is a difficult task, and a certain amount of “I know it when I see it” is required. The National Institute of Standards and Technology (NIST) offers a good starting point for a definition of the cloud. Although it shouldn’t be taken as authoritative in an absolute sense, it serves as a foundation for discussion.
The NIST definition addresses three areas: essential characteristics, service models and deployment models. These areas more or less break down into what the cloud looks like, what it does and how it does it. The essential characteristics are largely self-explanatory: on-demand self-service (automation, basically), broad network access (services are available through standard channels, like the Internet over a mobile device), resource pooling (“delocalization”), rapid elasticity (what you need, when you need it), and measured service (service allocation and billing capability).
The NIST service models include software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS). A cloud service can range from just providing users with the fruits of an application (SaaS) to simply providing the hardware remotely to run the software stack (IaaS). The cloud is essentially outsourced hardware with varying levels of software service added on: customers might simply interact with a cloud service using a browser, or they might run all of their processes from the application to the operating system on a provider’s hardware (servers and such).
The deployment models are fairly uncontroversial: private cloud (the entire service is dedicated to a single customer), community cloud (service limited to a particular set of customers), public cloud (think the Internet) and hybrid cloud (some combination of the above).
Again, this definition need not be considered authoritative or exhaustive, but it provides some grounding for a discussion of the cloud that prevents everything from becoming a cloud product.
Novelty is often a selling point, so naturally, companies will tout characteristics of their products that fit the mold of the latest trend. Thus, the phenomenon of “cloudwashing” was born. Because the cloud is this great new thing (or perhaps not so new—nothing in the NIST definition is fundamentally different from previous computing models), if the term can be worked into the product name somehow, it will garner more attention. A number of companies have become infamous for cloudwashing, such as Oracle.
A fuzzy example is Adobe’s Creative Cloud, which, according to Mashable, “is not exactly a cloud-based app. [The Adobe CEO] accurately describes it as a hybrid solution. You will still install local software. The subscription model…means you gain access to the latest software updates as soon as they become available in the cloud, as well as all the cloud-based services and integration.” This product has some elements of the cloud, but then, so do many products that don’t bear the name. Perhaps the key element is that the heart of the system—the software (Photoshop and such)—runs locally on the user’s computer, and the default for file storage is local as well. The new Adobe model seems to center more on software subscription than on the cloud.
A chief characteristic of the cloud, since it typically involves outsourcing of hardware at a minimum and software in many cases, is centralization of resources. Following the advent of the PC, the cost of computing power declined, leading to decentralization. But the cloud model reverses this trend: computing power (including storage and so on) is increasingly centralized in data centers that may serve a large number of customers. This trend raises a number of concerns—particularly, the effects of downtime (many customers are affected instead of just one, typically) and the potential for cascading failure. Other concerns include control of resources by fewer (and larger) companies, as well as the implications for privacy, particularly in light of the NSA scandal and the complicity of major technology firms.
Centralization does serve the mobile market quite well, however. Mobile users typically lack the same computing power of fixed users thanks to heat restrictions and limited space and power in devices. Also, storage becomes a concern. Cloud providers, thanks to their centralization and relative ubiquity via the Internet, can offer remote hardware for computing and storage, as well as the capability to sync data across a number of connected devices.
A major concern about a new product or business model is its newness—can it withstand the stresses of use (and abuse)? In the cloud context, security has been a major concern, but in many ways, the cloud has reached maturity in this area. To be sure, it’s imperfect, but no system is totally secure. The centralization of the cloud, although it creates a potentially bigger target for hackers and other malicious parties, also enables greater investment in security, so there is at least some balance. At this point, however, neither the cloud nor local computing is fundamentally more secure than the other (excepting the case where the local model is physically separated from the network).
The NIST definition of the cloud is a good starting point, although it need not be an authoritative or exhaustive one. This view of the cloud identifies fundamental characteristics, service models and deployment models. A good definition—or at least a tentative one—is necessary to filter through the cloudwashing that paints non-cloud products as cloud services to exploit the attention that the market is receiving. One of the chief consequences of growing reliance on the cloud, centralization, has both upsides and downsides. Although it can reduce costs through economies of scale and enables greater mobility, it also creates the potential for greater damage resulting from a single failure. Because of the NSA scandal, the cloud has gained some negative attention, and many privacy matters remain unresolved. But the cloud is a mature (or nearly so) model that, although it will be refined over time, offers value over localized computing in some situations. The key, as always, is for customers to evaluate their needs and determine if the cloud really offers value for their specific applications. | <urn:uuid:526a42f3-3f0a-47c8-8e11-b223f039b33b> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/cloud-today/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00114-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947182 | 1,379 | 2.5625 | 3 |
The research found the females are actually searching for a safe haven from birds and other predators rather than hunting for the perfect match. Researcher Professor Patricia Backwell from The Australian National University (ANU) said the findings overturn previous theories and helped scientists better understand the breeding habits of fiddler crabs, which are crucial to the ecological health of mangroves, salt marshes and muddy beaches around the world.

"This behaviour of visiting and supposedly rejecting successive males has always been taken as a defining feature of female choosiness, but this study shows that things are not always what they seem," said Professor Backwell, from the ANU Research School of Biology.

Male fiddler crabs are known for having one claw that is considerably larger than the other. Fiddler crabs are found in mangroves and salt marshes and on sandy or muddy beaches of West Africa, the Western Atlantic, Eastern Pacific and Indo-Pacific.

Professor Backwell said female fiddler crabs visited successive displaying males in their burrows to identify safe places to hide in the event of predator attacks, and not because they were searching for a perfect mating partner. "If a bird attacks, female fiddler crabs can move quickly and directly back to the last burrow it visited," said Professor Backwell, who is based in Darwin at the moment to conduct field work. "Having this map of burrow positions is essential if they are to survive a bird attack, and this is true for females who are looking for a mate and those who are looking for a burrow."

Co-lead researcher Dr Marianne Peso said the team conducted experiments to observe and compare the behaviour of mate- and burrow-searching females. The team noticed female fiddler crabs not seeking a mate visited successive males before settling in a new burrow in the same manner as mate-searching females.
"We watched displaced females move across the mudflat, testing mate preferences with male-mimicking robotic crabs, examining male reactions to the females and testing the females' response to a simulated bird predator," said Dr Peso, who was based at ANU at the time of the study and is now at Macquarie University Department of Biological Sciences. "In all experiments, mate-searching and burrow-searching females behaved identically.

"They all visited courting males, they found the same robotic males attractive, males treated them in the same way as potential mates and all the females retreated to the last burrow they visited when swooped by the plastic bird."

More information: M. Peso et al., Not what it looks like: mate-searching behaviour, mate preferences and clutch production in wandering and territory-holding female fiddler crabs, Royal Society Open Science (2016). DOI: 10.1098/rsos.160339
Lead researcher Dr Yuerui (Larry) Lu from The Australian National University (ANU) said the discovery hinged on the remarkable potential of the molybdenum disulphide crystal.

"This type of material is the perfect candidate for future flexible displays," said Dr Lu, leader of the Nano-Electro-Mechanical System (NEMS) Laboratory in the ANU Research School of Engineering. "We will also be able to use arrays of micro lenses to mimic the compound eyes of insects."

The 6.3-nanometre lens outshines previous ultra-thin flat lenses, made from 50-nanometre thick gold nano-bar arrays, known as a metamaterial.

"Molybdenum disulphide is an amazing crystal," said Dr Lu. "It survives at high temperatures, is a lubricant, a good semiconductor and can emit photons too. The capability of manipulating the flow of light in atomic scale opens an exciting avenue towards unprecedented miniaturisation of optical components and the integration of advanced optical functionalities."

Molybdenum disulphide is in a class of materials known as chalcogenide glasses that have flexible electronic characteristics that have made them popular for high-technology components.

Dr Lu's team created their lens from a crystal 6.3 nanometres thick - 9 atomic layers - which they had peeled off a larger piece of molybdenum disulphide with sticky tape. They then created a 10-micron radius lens, using a focussed ion beam to shave off the layers atom by atom, until they had the dome shape of the lens.

The team discovered that single layers of molybdenum disulphide, 0.7 nanometres thick, had remarkable optical properties, appearing to a light beam to be 50 times thicker, at 38 nanometres. This property, known as optical path length, determines the phase of the light and governs interference and diffraction of light as it propagates.

"At the beginning we couldn't imagine why molybdenum disulphide had such surprising properties," said Dr Lu.
Collaborator Assistant Professor Zongfu Yu at the University of Wisconsin, Madison, developed a simulation and showed that light was bouncing back and forth many times inside the high refractive index crystal layers before passing through. Molybdenum disulphide crystal's refractive index, the property that quantifies the strength of a material's effect on light, has a high value of 5.5. For comparison, diamond, whose high refractive index causes its sparkle, is only 2.4, and water's refractive index is 1.3. This study is published in the Nature serial journal Light: Science and Applications.
News Article | September 7, 2016
The Australian National University will establish an international research program to improve ways to store renewable energy under a new $8 million partnership with the ACT Government. ANU Vice-Chancellor Professor Brian Schmidt thanked the ACT Government for contributing up to $5 million to support the program, which would help to establish Australian research leadership in the integration of battery material technology with electricity network storage.
News Article | January 18, 2015
It was an interesting week for ideas about the future of the Internet. On Wednesday, satellite industry notable Greg Wyler announced that his company OneWeb, which wants to build a micro-satellite network to bring Internet to all corners of the globe, secured investments from Richard Branson's Virgin Group and Qualcomm. Then in a separate announcement on Friday, Elon Musk said that he would also be devoting his new Seattle office to creating "advanced micro-satellites" to deliver Internet.

OneWeb, formerly WorldVu Satellites Ltd, aims to target rural markets, emerging markets, and in-flight Internet services on airlines, the Wall Street Journal reported. Both Branson and Qualcomm Executive Chairman Paul Jacobs will sit on the company's board, but Wyler did not say how much Virgin and Qualcomm invested in his company.

Wyler said that his company's goal is to create a network of 648 small satellites that would weigh in at around 285 pounds each. The satellites would be put in orbit 750 miles above the Earth and ideally cost about $350,000 each to build using an assembly line approach. Wyler also said that Virgin, which has its own space segment, would be launching the satellites into orbit. “As an airline and mobile operator, Virgin might also be a candidate to resell OneWeb’s service,” the Journal noted.

Wyler has said that he projects it to take $1.5 billion to $2 billion to launch the service, and he plans to launch in 2018. OneWeb's advantage is that it already secured the rights to a block of radio spectrum that it will use for Internet service through the International Telecommunication Union.

Wyler's first big satellite Internet startup was O3b Networks Ltd., which partnered with Google to produce a similar product. That company took six years to launch its service and eventually suffered from satellite performance issues.
Wyler left that company in September 2014 to create WorldVu/OneWeb, however, and he took with him the band of spectrum that his new company hopes to use. Bloomberg Businessweek reports that Wyler has a team of more than 30 engineers “developing the satellites, antennas, and software for OneWeb.”

On the other hand, there's Musk, a seasoned space-business launcher who's starting fresh in the world of satellite Internet services. The Tesla and SpaceX founder announced his plans in November to launch 700 satellites weighing less than 250 pounds each. His satellites would also orbit 750 miles above the Earth. Musk spoke to Bloomberg on Friday evening, explaining that 750 miles above the Earth is much closer than the tens of thousands of miles at which traditional telecommunications satellites operate. “The speed of light is 40 percent faster in the vacuum of space than it is for fiber,” Musk said. “The long-term potential is to be the primary means of long-distance Internet traffic and to serve people in sparsely populated areas.” In Musk's vision, while sending data from Los Angeles to San Francisco may not be faster by satellite, sending data from Johannesburg to San Francisco might.

Musk said on Friday night that the project would be based out of the Seattle office, and he will start with a team of 60 that could expand into a team of 1,000 in three to four years. “The employees will also work on SpaceX’s Falcon rockets, Dragon capsules, and additional vehicles to carry various supplies (and soon, people) into space,” Bloomberg reports.

Musk's venture will be considerably more expensive, possibly costing as much as $10 billion. It could take more than five years to get operational. “But we see it as a long-term revenue source for SpaceX to be able to fund a city on Mars,” Musk said on Friday night. “It will be important for Mars to have a global communications network as well.
I think this needs to be done, and I don’t see anyone else doing it.” Of course, satellite Internet as it now stands can't touch the quality of terrestrial-bound Internet in most areas of the world. And the Wall Street Journal notes that, “Historically, complex satellite projects with large constellations have run over budget and taken longer than expected to build and deploy.” In November when Musk first put forward the idea of creating a satellite network for Internet services, anonymous sources indicated that Wyler and Musk were considering working together to bring down the cost and the risks of such a difficult venture. On Friday, however, Musk made it clear that the two companies are competitors more than partners, asserting that SpaceX’s manufacturing techniques would give the company an edge over OneWeb. “Greg and I have a fundamental disagreement about the architecture,” Musk said on Friday. “We want a satellite that is an order of magnitude more sophisticated than what Greg wants. I think there should be two competing systems.” Branson, for his part, told Bloomberg that Musk doesn't have a chance compared to Wyler. “Greg has the rights, and there isn’t space for another network—like there physically is not enough space. If Elon wants to get into this area, the logical thing for him would be to tie up with us, and if I were a betting man, I would say the chances of us working together rather than separately would be much higher.”
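Musk's "40 percent faster" latency argument can be sanity-checked with straight-line arithmetic. The sketch below assumes a simple bent-pipe relay at the 750-mile (about 1,207 km) altitude both ventures cite, light at c in vacuum, and a typical silica-fiber refractive index of roughly 1.47 (which puts the vacuum advantage in the same ballpark as Musk's figure); real routes and switching add overhead on both paths:

```python
# Rough latency arithmetic behind the satellite-vs-fiber argument,
# assuming straight-line paths and a simple "bent pipe" relay.
# The fiber index (1.47) and city distances are illustrative estimates.

C_VACUUM_KM_S = 299_792                 # speed of light in vacuum
C_FIBER_KM_S = C_VACUUM_KM_S / 1.47     # light in silica fiber (~32% slower)
ORBIT_KM = 1_207                        # 750 miles, the cited orbital altitude

def one_way_ms(distance_km: float, speed_km_s: float) -> float:
    return distance_km / speed_km_s * 1000

# Short hop (LA -> SF, ~600 km): fiber wins, because the satellite detour
# adds two ~1,200 km up/down legs.
print(f"LA-SF fiber:      {one_way_ms(600, C_FIBER_KM_S):.1f} ms")
print(f"LA-SF satellite:  {one_way_ms(600 + 2 * ORBIT_KM, C_VACUUM_KM_S):.1f} ms")

# Long haul (Johannesburg -> SF, ~16,900 km great circle): the vacuum
# speed advantage starts to outweigh the up/down legs.
print(f"JNB-SF fiber:     {one_way_ms(16_900, C_FIBER_KM_S):.1f} ms")
print(f"JNB-SF satellite: {one_way_ms(16_900 + 2 * ORBIT_KM, C_VACUUM_KM_S):.1f} ms")
```

This is exactly the shape of Musk's claim: short-haul traffic stays faster on fiber, while very long routes can be faster through vacuum despite the orbital detour.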
Human remains, estimated to be about 2,500 years old, were unearthed at the Plain of Jars site in Laos. An ancient burial site, including an oddly shaped quartz stone covering the face of one of the newly uncovered human skeletons, has been discovered at the mysterious Plain of Jars, an archaeological site in remote central Laos littered with thousands of stone vessels. The new findings could help researchers solve the long-standing puzzle of why the stone jars were scattered across this part of Laos.

When it was found, the skull beneath the quartz adornment appeared to be looking through a large hole in the stone, said Dougald O’Reilly, an archaeologist at the Australian National University (ANU), who led a team of scientists on a joint Laos-Australian expedition to the Plain of Jars in February. "When we excavated it, the skull was actually looking out through that perforation. It was quite interesting, but whether it was done purposefully is difficult to know," O’Reilly told Live Science.

The burial site is estimated to be 2,500 years old, and was found when researchers from ANU, Monash University in Australia and the Laos Ministry of Information, Culture, and Tourism spent four weeks mapping and excavating the ground around a group of the massive carved stone jars that dot the landscape. More than 90 jar sites — some with up to 400 stone jars measuring as tall as 10 feet (3 meters) high — are spread across foothills, forests and upland valleys of this remote region.

The members of the Laos-Australian expedition worked at the most accessible site, known as Jar Site 1, located a few miles outside the city of Phonsavan, in Xiangkhoang province in central Laos. The researchers plan to explore a second, more remote jar site next year.
The Laos government hopes to develop Jar Site 1 as an archaeological center and UNESCO World Heritage site, to protect the unique Plain of Jars landscape and to stimulate scholarship and cultural tourism in the area. O’Reilly said the latest expedition was the first major effort by archaeologists since the 1930s to visit the site, in an effort to understand the purpose of the jars and who created them. Since that time, however, some archaeologists have undertaken important work at the Plain of Jars, mainly on their own.

The latest team of around 11 researchers worked together to compile the first comprehensive scientific study of one of the jar sites, including a GIS (geographic information system) map recording the precise location of each of the jars, stone disks and quartz stone markers scattered over the site.

The largest jars weigh more than 10 tons (9,000 kilograms), and a big part of their mystery is how they got there. "There are a few well-known quarry sites where the jars were sourced and then brought across the landscape, about 8 to 10 kilometers [5 to 6 miles] to the jar sites," O'Reilly said. "So there's a huge amount of effort involved in moving them — one would have to speculate that elephants must have been involved, given the incredible weight of the jars."

And carving the massive jars would have been no easy task for primitive peoples with iron tools, he added. "Some of the jars are over 2 meters [6.5 feet] or perhaps even 3 meters [10 feet] in height, and in girth you couldn't get your arms around most of them," O'Reilly said. "And there are variations in the design of the jars: some have larger or smaller openings, some are rectangular, some circular or oval — in some cases you wonder how did they even carve these things?"

The variety of sizes and shapes of the jars has prompted many researchers to theorize about their purpose over the years.
"It’s probably likely that they do represent a memorial of some kind, and the variations in the sizes of the jars may indicate that there were differences in status and perhaps a hierarchy in the society that created the jars," O'Reilly said. "You could spend a lot of time theorizing."

The burial site with the oddly shaped quartz stone was one of three distinct types of burial sites found at Jar Site 1, the researchers said. "This is the first time that this type of interment has been uncovered at the Plain of Jars, but if there is one, there will probably be others," O'Reilly said. "And this burial is also quite interesting because it contained the remains of not one but two individuals: the cranial bones of what's estimated to be an 8-year-old child were found in that burial as well [as an adult skeleton]."

The expedition also uncovered 11 ceramic jars, which are expected to contain "secondary" burials of human bones from which the flesh was removed. A pit filled with bones from several secondary burials and covered with a large limestone block was also found, and the marker stones and stone disks on the ground around the stone jars seemed to correspond to the location of secondary burials, O'Reilly said.

Scientific study of samples and remains from the Plain of Jars site will continue in the laboratory. O’Reilly said the expedition recovered some human teeth that could provide DNA for testing and clues to the origins of the ancient peoples buried there. But DNA tends to degrade heavily in the climate conditions of Southeast Asia, so a proper analysis might not be possible, he added. The contents of the ceramic jars excavated from the site will also be carefully examined to confirm if, as the researchers suspect, they hold human remains.

But the Plain of Jars is not giving up all its secrets just yet.
Although some archaeologists have proposed that the stone jars were used to decompose bodies before the bones were cleaned for secondary burials, it may be impossible to know for sure. "This is something you find in various religious practices in different parts of the world, but it's something that needs to be investigated a little further at the Plain of Jars," O’Reilly said.

One of the biggest problems at the site is that the jars have been exposed to the harsh Southeast Asian climate for more than 2,000 years, making it very difficult for scientists to study and run tests on the artifacts. "Possibly we could look at trying to extract lipids from the stone jars to see if there is any evidence for decomposition of human remains, but the jars have been exposed for so long that it's a bit of a long shot," he said. "So, I fear we probably will never know the true purpose of the large stone jars."

Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
What You'll Learn
- Define and describe Interact's purpose and demonstrate how it functions.
- Be conversant with what an Interactive Channel is and what it does, and understand its key components.
- Identify the key components of the Interact architecture.
- Identify what Interact data sources are and describe how they are used.
- Demonstrate how the Interact API works using a simple example.
- List what the developer needs to know about designing interactions with IBM Interact.
- Understand the request-and-response approaches of the IBM Interact API.
- Understand and use the NameValuePair construct.
- Define the uses for the following Interact API calls: startSession, getOffers, postEvent, and executeBatch.
- Set the logging level for Interact.
- Understand what JMX Monitoring is and begin to use it to monitor performance and make adjustments.
- Begin to be familiar with causes and remedies for performance issues.
- Understand how Interact determines which offers are presented to customers.
- Use table-driven features to affect offer serving.
- Understand what the EXTERNALCALLOUT macro is, and how to use it.
- Be familiar with how the cross-session response handling feature works.
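The request-and-response pattern behind several of these objectives — startSession, getOffers, postEvent, and NameValuePair parameters — can be illustrated with a small mock. Note this is a purely hypothetical Python sketch of the call sequence, not the real IBM Interact client (which is Java-based); the class and method names here only mirror the course outline:

```python
# Illustrative mock of the Interact request/response pattern named in the
# outline (startSession, getOffers, postEvent). NOT the real IBM client
# library; it only sketches the shape of a session's call sequence.

class NameValuePair:
    """Interact passes session and event parameters as name-value pairs."""
    def __init__(self, name, value):
        self.name, self.value = name, value

class MockInteractAPI:
    def __init__(self):
        self.sessions = {}

    def start_session(self, session_id, channel, audience):
        # Registers a runtime session against an interactive channel.
        self.sessions[session_id] = {"channel": channel,
                                     "audience": audience,
                                     "events": []}
        return {"status": "ok"}

    def get_offers(self, session_id, interaction_point, count):
        # A real deployment scores offers server-side; here we fabricate them.
        return [f"OFFER_{i}" for i in range(1, count + 1)]

    def post_event(self, session_id, event, params):
        # Records a visitor event (e.g. an offer acceptance) for learning.
        self.sessions[session_id]["events"].append((event, params))
        return {"status": "ok"}

api = MockInteractAPI()
api.start_session("sess-1", channel="WebChannel",
                  audience=[NameValuePair("CustomerID", 42)])
offers = api.get_offers("sess-1", "HomePageBanner", 3)
api.post_event("sess-1", "offerAccepted",
               [NameValuePair("OfferID", offers[0])])
print(offers)
```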
Who Needs To Attend
This advanced course is for experienced users of IBM Interact.
Iqbal M., Ahmad S., Nazeer W., Muhammad T., and 6 more authors (Cotton Research Station, CRS)
African Journal of Biotechnology | Year: 2012
This study was conducted to compare the seed cotton yield and cotton leaf curl virus (CLCV) infestations of newly evolved cotton genotypes (MNH-886, MNH-814 and CIM-496) under different plant spacings (15.0, 22.5, 30.0, 37.5 and 45.0 cm) at Cotton Research Station (CRS), Multan (71.43°E, 30.2°N and 122 m above sea level), Pakistan, during two consecutive years, 2008 and 2009. The results indicate that cotton sown with 15 cm spaced plants resulted in maximum seed cotton yield only due to the highest plant density (88,888 plants ha⁻¹), as cotton sown with 30 and 45 cm spaced plants (44,444 and 29,630 plants ha⁻¹, respectively) had a greater number of bolls per plant in both years. Plant spacing had non-significant effects on boll weight and CLCV infestations. Genotypes MNH-886 and MNH-814 resulted in the highest boll weight and number of bolls per plant, and higher seed cotton yield compared with CIM-496, because of an approximately 20% lower CLCV attack. Plant population was statistically similar for all cotton genotypes. In summary, cotton genotypes MNH-886 and MNH-814 resulted in higher seed cotton yield when sown with 15 cm plant spacing. © 2012 Academic Journals.
Privilege management can be a way of spotting an attacker’s footprints
Cyber-criminals are motivated, technically innovative and incredibly observant; it is a depressing and repeating pattern of the last decade that they have been able to zero in on important weaknesses when the defenders were either unaware these existed or just had their fingers crossed.
There are really only three ways to break into an organization digitally. You target a poorly-secured resource (e.g. a server left unguarded using a default login), attack an insecure or unpatched application, or try to undermine the end user with social engineering. All three serve as jumping off points to spider behind the firewall or intrusion prevention layer in search of further resources, data or users to target. Conceptually, cybercrime really is that simple.
Recent history tells us that the criminals have worked out the implications of this. Resources can eventually be secured by technological means, but for applications it is much harder. For the user it might be almost impossible, which is why the user is where today's attacks usually begin.
The PC as Trojan
Thanks to the way Windows PCs have been designed to operate as powerful, independent tools, undermining them is like unlocking a kingdom. PCs connected to networks can be used as staging posts to map resources, and are afforded privileges that make it possible to hide an attack that would be spotted were it launched from outside the firewall. Best of all, PCs run software that turns out to be riddled with flaws (stand up Adobe Flash, PDF Reader and Java) and are piloted by users who can be manipulated with a grim statistical certainty to click on malicious web links or open rogue attachments.
The vulnerability of the PC/user turned into a weakness so slippery that even today the security industry is still struggling to come to terms with it. If the openness of the PC can be combined with the suggestibility of the end user, a well-designed attack can become almost undefendable. One amplifies the other and it is the interaction of the two that is key.
This uncomfortable fact has slowly dawned on many organizations and caused an uptick in interest in systems that don't simply defend resources but manage, record and audit the trail of machine-human interactions directed towards them. The interactions can't and shouldn't be stopped, but as long as they can be seen and their patterns analyzed, there is hope that attacks can be made visible.
Probably the most important interaction for the Windows PC is the granting of administrator privileges to a user or application; something that numerous high-profile malware attacks have used to gain control of targets. Admin privileges are also easily abused by users, either deliberately or inadvertently, to bypass policies and install software and, of course, to naively allow malware to subvert defenses such as antivirus.
Active management arrives
Windows XP assumed admin privileges were good because they made the user's life easy, and so developers obliged; by the time Vista and Windows 7 debuted a technology called User Account Control (UAC), the folly of this design had been realized, but backwards compatibility meant that simply turning off admin rights wasn't always an option.
The emphasis has now shifted to one of active management where privileges can be elevated from standard to administrator only when absolutely necessary, and preferably on the basis of an application’s need rather than a user’s. Such elevation should always be as temporary as possible and always logged.
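The "elevate only when necessary, keep it temporary, always log it" policy can be sketched as a context manager. This is a conceptual model only: real Windows elevation is mediated by UAC and access tokens, and the function and log names here are invented:

```python
# Conceptual sketch of temporary, audited privilege elevation as a
# context manager. Real Windows elevation goes through UAC and access
# tokens; this models only the policy shape: grant, do work, revoke,
# and log both edges of the elevation window.
import logging
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
ELEVATION_LOG = []   # the audit trail every elevation must leave behind

@contextmanager
def elevated(app_name, reason):
    ELEVATION_LOG.append(("grant", app_name, reason))
    logging.info("elevating %s: %s", app_name, reason)
    try:
        yield
    finally:
        # Elevation always ends, even if the privileged work raised.
        ELEVATION_LOG.append(("revoke", app_name, reason))
        logging.info("dropping elevation for %s", app_name)

with elevated("installer.exe", "approved driver update"):
    pass  # privileged work happens only inside this block
```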
You could say that this design – least privilege – works to reduce the attack surface of the PC-to-user interaction as far as possible and captures information to model what counts as legitimate elevation so that it doesn’t turn into a barrier.
But as ever, the criminals have carried on innovating, trying to bypass this layer of control using a number of techniques often directed at software vulnerabilities in browsers and plug-ins, or by creating malware that works entirely in memory (e.g. has no need for admin privileges to write to disk); today’s attacks are often architected to reach out to users via the web or email keeping themselves as far as possible away from the operating system’s privilege layer.
In addition, there are new types of applications, such as native Windows 8 ‘Store’ apps designed to carry out potentially quite intrusive functions without asking for admin rights, or portable versions of apps run from USB sticks (for instance, portable browsers or Skype) that grab their own space in memory and could give an attacker a foothold.
Checkmate? Not necessarily. A good privilege management system doesn’t just restrict privileges, it manages applications, noting which are being used and limiting the user’s ability to install non-approved software. This is essential. Just because software does not ask for admin rights does not mean it is not trying to elevate its control in hidden ways. Software should always be locked down, if necessary using a layer of blacklisting to block known risky apps and whitelisting to allow known good ones.
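The layered lockdown described above — blacklist known-bad software, whitelist known-good software, deny and flag everything else, and audit every decision — reduces to a simple decision function. A minimal sketch with invented application names:

```python
# Minimal sketch of layered application control: a blacklist of known
# risky apps, a whitelist of approved ones, and a default-deny fallback,
# with every decision logged so normally invisible interactions become
# visible. Executable names are invented for illustration.

BLACKLIST = {"keygen.exe", "torrent-client.exe"}
WHITELIST = {"winword.exe", "excel.exe", "approved-browser.exe"}
AUDIT = []

def check_app(executable: str) -> str:
    if executable in BLACKLIST:
        decision = "block"          # known risky
    elif executable in WHITELIST:
        decision = "allow"          # known good
    else:
        decision = "deny-default"   # unknown: locked down, flagged for review
    AUDIT.append((executable, decision))
    return decision

for exe in ("excel.exe", "keygen.exe", "portable-skype.exe"):
    print(exe, "->", check_app(exe))
```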
The biggest takeaway of all is to grasp that even the most complex attacks – Advanced Persistent Threats (APTs) for instance – are invariably designed as a series of simple stages. First, target the end user as a primary weakness, second hit the application layer with an exploit and third capture a resource. Finally, automate this process and repeat on an industrial scale so that success becomes a matter of statistical inevitability.
The strength of privilege and application management is that it is a conceptually simple way of fighting back, putting a gatekeeper between the user, the applications and the resources, introducing a layer that makes the usually invisible interactions suddenly visible. It does not guarantee defense but it dramatically improves the odds. | <urn:uuid:03ed4462-6e45-4f0d-a69b-cdf729541b5b> | CC-MAIN-2017-04 | https://blog.avecto.com/2013/05/securitys-secret-weapon-is-always-visibility/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00069-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95311 | 1,168 | 2.671875 | 3 |
In the wake of the Paris attacks, many politicians have been creating visions of jihadists storming our streets, using untraceable encrypted communication to enable their deadly assaults (see Paris Attacks Reignite Encryption Debate).
So in a classic political move, their response has been to suggest that strong crypto be banned unless software and hardware vendors add in backdoor access for government agencies.
This "cryptophobia" stance conveniently overlooks - or else demonstrates an inability to grasp - the fact that we rely on strong cryptography, with no backdoors, to protect everything from our online banking transactions to our children's privacy.
Here are five essential crypto factors to consider:
1. Bans Don't Work
It's impossible to prevent people from using strong encryption that's free from backdoors. The same is true of so-called crypto-currencies such as Bitcoin, which can be difficult - albeit not impossible - for law enforcement agencies to trace. "If you outlaw it, the only people that will use it will be outlaws," University of Surrey information security professor Alan Woodward tells BBC World Service. And law enforcement agencies would be none the wiser. "It's not like you can see these things going over the network," he says. "You simply couldn't [effectively] ban it."
2. Not Clear That Paris Attackers Used Crypto
The push for "backdoored" strong crypto looked like it had gone dormant until, in the wake of the Nov. 13 Paris attacks, numerous law enforcement and intelligence officials suggested that the attackers might have been using encrypted communications, without providing any firm evidence to support that assertion. The latest push came this week from the International Association of Chiefs of Police and the U.S. National District Attorneys Association, which demanded that Congress mandate crypto backdoors, citing the vague threat of investigations "going dark" (see After Paris Attacks, Beware Rush to Weaken Crypto).
And any future attack could be planned without using encryption, for example via steganography - hiding messages in other files - or even pre-agreed one-time codes (see Attacks in Paris: The Cyber Investigation).
3. Backdoors Create New Problems
While it's easy to decry how strong crypto can be used to facilitate criminal behavior, adding backdoors may create more problems than it solves, according to a technical report, "Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications," that researchers at the Massachusetts Institute of Technology released earlier this year.
The researchers warned that backdoors break forward secrecy - meaning that earlier communications could be compromised in the future - as well as increase system complexity, thus dramatically increasing the risk of introducing exploitable vulnerabilities that would undermine the entire system. And any system that forces organizations or Internet service providers to share access credentials with the government, which then centralizes that information, would itself be at risk from "bad actors" - both internally and externally.
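Forward secrecy — the property the researchers warn backdoors would break — can be illustrated with a toy hash ratchet: each message key is derived from a state that is then advanced and discarded, so stealing today's state reveals nothing about earlier messages, whereas an escrowed master key would. This is stdlib-only demonstration code, not production cryptography:

```python
# Toy hash ratchet illustrating forward secrecy - NOT production crypto.
# Each message key is derived by hashing the current state, after which
# the state advances and the old value is discarded. An attacker who
# steals the state at step N cannot run the hash backwards to recover
# keys for earlier steps; an escrowed master key would undo that guarantee.
import hashlib

def advance(state: bytes) -> tuple[bytes, bytes]:
    """Return (message_key, next_state) derived from the current state."""
    message_key = hashlib.sha256(b"key" + state).digest()
    next_state = hashlib.sha256(b"ratchet" + state).digest()
    return message_key, next_state

state = hashlib.sha256(b"initial shared secret").digest()
keys = []
for _ in range(3):
    k, state = advance(state)
    keys.append(k)   # in a real protocol, each key is deleted after use

# All three message keys are distinct, and none can be computed from the
# final state alone - that would require inverting SHA-256.
print(len(set(keys)) == 3)
```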
"Recent attacks on the U.S. government Office of Personnel Management show how much harm can arise when many organizations rely on a single institution that itself has security vulnerabilities," the researchers write (see OPM: 'Victim-as-a-Service' Provider). In addition, it's worth highlighting that amassing so much information in one place would make life much easier for foreign intelligence agencies, who could then steal the backdoor keys and gain easy access to encrypted communications and data.
4. 'Good Guys' Can Attack
Insiders are also a risk. In the investigation into the notorious Silk Road darknet marketplace, for example, two of the men arrested, who later pleaded guilty to related crimes, were federal agents participating in the investigation (see Former Secret Service Agent Pleads Guilty to $800K Bitcoin Theft).
It's also worth noting that the United States already has a system where the "good guys" hold keys to locks. As pundit Cory Doctorow has noted, the Transportation Safety Administration requires all locked baggage to use Travelsentry-compatible locks, for which TSA agents hold master keys.
A CNN investigation into lost luggage found numerous cases of insider theft, including baggage handlers rifling through bags in TSA-secured areas. TSA has fired 513 officers for theft, CNN reports.
5. Strong Crypto Benefits Society
Many politicians and government officials that are quick to push for adding backdoors to crypto spend no time detailing the likely repercussions.
"Do we want strong encryption to protect our businesses, to protect our online privacy and prevent mass surveillance by rogue states?" asks Brian Honan, an information security consultant and cybersecurity adviser to the association of European police agencies known as Europol. Or do we instead want to provide backdoor access to encrypted communications, thus imperiling our collective security? Because when it comes to effective crypto, there's no way to limit exceptions to just the "good guys." The only reliable solution is to ensure that we all have access to strong crypto.
"The premise driving the people writing encryption software is ... the hope that we can enforce existing rights using algorithms that guarantee your ability to free speech, to a reasonable expectation of privacy in your daily life," says Nadim Kobeïssi, a Beirut native who's now a cryptography researcher at the French Institute for Research in Computer Science and Automation - known as INRIA - in a blog post. "When you make a credit card payment or log into Facebook, you're using the same fundamental encryption that, in another continent, an activist could be using to organize a protest against a failed regime," he notes, echoing a perspective also being advanced by technology giants such as Google and Microsoft.
Undercutting encryption, Kobeïssi argues, is not only bad for society, but would do nothing to curb terrorism. "If we take every car off the street, every iPhone out of people's pockets and every single plane out of the sky, it wouldn't do anything to stop terrorism," he says. "Terrorism isn't about means, but about ends. It's not about the technology but about the anger, the ignorance that holds a firm grip over the actor's mind." | <urn:uuid:0265543f-5329-4544-a328-ad83b4e72552> | CC-MAIN-2017-04 | http://www.bankinfosecurity.com/blogs/5-facts-shred-weak-crypto-push-p-1987/op-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00463-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939005 | 1,277 | 2.6875 | 3 |
Internet's Roots Stretch Nearly 600 Years to Gutenberg Printing Press
By the time Gutenberg was printing his now famous and greatly treasured Bibles, he had a staff and three printing presses going to produce a variety of documents. It was this ability to print as many of the same documents as were needed quickly and accurately that led to the transformative development that was mass communications.

Perhaps more telling, it was the ability of the printing press to turn out vast quantities of the same document from a number of different places that led in 1517 to the first document to "go viral." That document was Martin Luther's 95 Theses, distributed in printed form to launch the Reformation that rocked the Catholic Church and all of Christian Europe.

While it would be wrong to say that the Internet is simply printing taken to the next level, there is a direct line from the printing revolution to the Internet revolution. The moveable metallic type has given way to photons and pixels, but the principles remain. The Internet gives us the ability to create, revise and distribute documents faster than ever. Like Gutenberg's printing press, we can create new symbols as needed; we can combine symbols into words, words into thoughts and, with those thoughts, we can change minds.

It is the ability to take those new thoughts, spread them through the world to impact the thinking of others and to influence events that made Gutenberg's printing transformative. It is what now makes the Internet even more transformative.

Like the Internet, Gutenberg's printing process launched thousands of imitators as printing spread through Europe and the world. As information became widely available, society changed. The difference brought by the Internet was that this information could be delivered on demand anywhere, instantly. The Internet, effectively, stands on Gutenberg's shoulders. It was the quick and efficient distribution of information that printing provided that led to the explosion in knowledge that was the Internet.
Before Gutenberg, mass communications was impossible. With the ability to distribute information that Gutenberg developed, the explosion became inevitable.
But, of course, there's much more to the Internet than simple printed words. There's the ability to spread knowledge, to preserve ideas and to generate new thoughts. The printing revolution did this as well, because it was printing that enabled the explosion in knowledge that led to the Renaissance, the scientific revolution and, eventually, to the knowledge-based economy that we enjoy today.
Cleaning up E-Business
In early July, the World Wide Web Consortium unveiled a public working draft of version 1.2 of the Simple Object Access Protocol (SOAP). At the time, observers across the industry hailed the latest draft as another major step toward a transparent infrastructure for e-business.
But what is SOAP? And why should you care about it?
SOAP is based on XML, which is commonly used to define data elements on a Web page and in other documents. Sound a lot like HTML? Not necessarily: Although XML uses a tag structure that resembles HTML’s structure, the similarities end there.
For example, XML tags define what kind of content a data element actually contains, while HTML tags define the manner in which data elements are displayed. Developers can also use XML to identify almost any conceivable data item, because XML lets them define their own document or page tags; HTML, on the other hand, uses a fixed set of predefined tags. That, in turn, allows Web pages to function like database records.
So how does SOAP fit into the picture? Simply put, it’s an enabling mechanism for XML exchange between applications. SOAP allows XML data to invoke objects running on remote servers across both corporate intranets and across the Internet. In practice, SOAP borrows heavily from the client/server model; it operates as a remote procedure call mechanism that leverages HTTP as a base transport, encoding requests and responses as XML documents. The first draft of SOAP provided support only for HTTP, but the revised version 1.2 specification supports SMTP and FTP as well.
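To make the idea concrete, here is a minimal sketch of what an RPC-style SOAP request looks like on the wire, built and parsed with Python's standard library. The envelope namespace is the published SOAP 1.1 namespace, but the method name (`GetQuote`) and its parameter are hypothetical, stand-ins for whatever a real service would define.

```python
# Build and parse a minimal SOAP-style request envelope.
# The GetQuote method and Symbol parameter are illustrative only.
import xml.etree.ElementTree as ET

ENV = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope namespace

def build_request(symbol):
    """Wrap an RPC-style call in a SOAP Envelope/Body."""
    envelope = ET.Element(f"{{{ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{ENV}}}Body")
    call = ET.SubElement(body, "GetQuote")        # hypothetical remote method
    ET.SubElement(call, "Symbol").text = symbol   # hypothetical parameter
    return ET.tostring(envelope, encoding="unicode")

def parse_request(xml_text):
    """Pull the method name and its arguments back out of the envelope."""
    root = ET.fromstring(xml_text)
    body = root.find(f"{{{ENV}}}Body")
    call = list(body)[0]
    return call.tag, {child.tag: child.text for child in call}

request = build_request("IBM")
method, args = parse_request(request)
print(method, args)   # GetQuote {'Symbol': 'IBM'}
```

Because the envelope is plain XML carried over HTTP, the server on the other end can decode the method name and arguments without caring what platform the client runs on, which is the whole point of the protocol.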
SOAP’s support for a range of Internet protocols means it can easily traverse corporate firewalls, an obstacle for most proposed application-interoperability solutions. Because SOAP object invocations travel as ordinary HTTP traffic, for example, they can pass through existing firewalls without modification.
SOAP was originally created by Microsoft Corp., of all vendors, which submitted it as an open standard to the W3C in late 1999. SOAP enjoys the support of a host of industry heavyweights, including IBM Corp., which helped prepare the revised version 1.1 of the specification that was placed before the W3C last year. Other industry leaders that have announced support for SOAP include Hewlett-Packard Co., Ariba Inc., Compaq Computer Corp. and SAP AG.
That’s great, you say, but what does all of this mean? More importantly, how does it relate to mainframe environments?
SOAP is seen as an important cog in an XML platform-integration stack that includes not only XML itself, but also a new business-to-business services standard called Universal Description, Discovery and Integration (UDDI) and another emerging standard called the Web Services Description Language (WSDL).
SOAP support is expected to be incorporated in Web application server products from IBM and HP, among others. Also, Microsoft’s .NET application and services framework is expected to heavily leverage SOAP.
These technologies promise to link business partners, customers and possibly even competitors together over the Internet, says Mike Schiff, vice president of e-business and business intelligence with analyst firm Current Analysis. In this model, XML and SOAP work in tandem to enable companies to access the business-to-business services of other companies without regard for platform considerations. Meanwhile, Schiff says standards such as UDDI and WSDL provide core e-business description, location and registration services.
Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology. | <urn:uuid:f374efec-199e-418d-af7f-43b14822107b> | CC-MAIN-2017-04 | https://esj.com/articles/2001/09/01/cleaning-up-ebusiness.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00243-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915891 | 740 | 2.65625 | 3 |
Importance of temperature control for healthcare providers
Wednesday, Oct 3rd 2012
Having quality environmental monitors in storage settings must be a top priority for hospitals and healthcare providers; without early detection, the results could be catastrophic. Everything from medicine and tissue samples to donated blood and food needs to be kept at specific temperature and humidity levels, and a failure by a healthcare provider to properly monitor storage conditions might cause serious harm to a patient.
One of the main areas in which temperature control is of the utmost importance to hospitals and other healthcare providers is in the storage and handling of tissue samples. In order to properly diagnose a number of diseases, doctors need to be looking at quality tissue samples, according to the Archives of Pathology. To ensure that tissue samples are not compromised in any way, a healthcare provider needs to put a premium on quality storage solutions, which should include remote monitoring to see that the storage conditions remain optimal.
This is important in cancer research, as the World Journal of Gastrointestinal Oncology reported that only high quality tissue samples that have remained stored under optimal conditions can be used for research and for testing new treatments.
"[T]he laboratory results are only as good as the specimens received for testing," the Program for Appropriate Technology in Health (PATH) said. "Quality laboratory results begin with proper collection and handling of the specimen submitted for analysis."
According to PATH, blood samples are especially susceptible to their surroundings: blood that is improperly stored at any point can change form. For example, a shift in environment could cause the blood to clot, which would make it unusable in a medical setting.
Cold storage for food
Restaurant freezers and hospitals may seem unrelated, but in fact healthcare providers need to be as careful - if not even more careful - about their cold storage facilities for food items as restaurants are. Food borne illnesses pose a threat to everyone, but that risk is highlighted in a healthcare setting where patients are especially susceptible to illness. For example, the World Health Organization said that infant formula that is stored or prepared at less than ideal temperatures could make infants, who do not have fully formed immune systems, quite sick.
This also extends to hospitals that treat children, as they are another at-risk population for contracting food-borne illnesses, New York State's health department reported. To help prevent the risk of disease, the department recommended keeping dry storage facilities between 50 and 70 degrees Fahrenheit, with cold storage maintained at a maximum of 40 degrees and freezers set to zero degrees. | <urn:uuid:a18e6f71-affc-40dd-be6b-b528e24efc1c> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/healthcare/importance-of-temperature-control-for-healthcare-providers-800877757 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00271-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960242 | 516 | 3.171875 | 3 |
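The thresholds above lend themselves to automated checking. The sketch below encodes the New York State guideline figures cited in this article; the structure of the check itself is purely illustrative, not taken from any monitoring product.

```python
# Check storage readings against the NY State guideline figures cited above:
# dry storage 50-70 F, cold storage at most 40 F, freezers at 0 F or below.
LIMITS = {
    "dry": (50.0, 70.0),     # acceptable range, degrees Fahrenheit
    "cold": (None, 40.0),    # maximum only
    "freezer": (None, 0.0),  # maximum only
}

def in_range(storage_type, temp_f):
    """Return True if a reading satisfies the guideline for its storage type."""
    low, high = LIMITS[storage_type]
    if low is not None and temp_f < low:
        return False
    return temp_f <= high

print(in_range("dry", 65.0))     # True
print(in_range("cold", 45.0))    # False: 45 F exceeds the 40 F maximum
print(in_range("freezer", -2.0)) # True
```

A real remote-monitoring system would of course sample continuously and raise alerts, but the comparison at its core is this simple.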
New dangerous versions of the virus have been detected "in the wild"
Cambridge, UK, November 13, 2000 - Kaspersky Lab Int., an international data-security software-development company, warns users of the discovery of Hybris, a new Internet-worm. Kaspersky Lab has been receiving reports of the discovery of this virus "in the wild" worldwide, being particularly active in Latin America although infections by this virus have also been found in Europe.
The first version of this Internet worm was discovered by Kaspersky Lab and several other anti-virus software developers at the end of September and was classified as a low risk malicious program. However, within the last few days, the company has been inundated by reports from users whose computers have been infected by this virus. At this moment, Kaspersky Lab has discovered five versions of Hybris, and it is expected that new variations will be found in the near future.
The Internet worm Hybris spreads by attaching itself to infected e-mails and works only under MS Windows. When the recipient executes the attached file, Hybris infects the host PC. The procedure for infection is typical for this type of malicious program and is performed in a similar way to the Happy or MTX viruses.
To proliferate, the worm infects the WSOCK32.DLL library and also intercepts the Windows function that establishes the network connection; it then scans sent and received data for any e-mail addresses, and sends copies of itself to these e-mail addresses. Subject, text and name of the attached file are chosen randomly, for example:
From: Hahaha firstname.lastname@example.org
Subject: Snowhite and the seven Dwarfs - The REAL Story!
In addition, this worm has some specific features. Hybris contains several (up to 32) components (plugins) in its code and executes them depending on its needs. The worm's functionality is mostly defined by the plugins. They are stored in the body of the worm and are encrypted by a very strong crypto algorithm.
However, the main peculiarity is that Hybris keeps its plugins up to date: it sends its own components to the anti-virus newsgroup "alt.comp.virus" and downloads any upgraded or missing plugins from there. The virus components can also be updated by the worm from the author's Web page, via the Internet. So far, the plugins found in the known versions of this virus and those at the Web site are fairly harmless and do not cause any direct damage. But the fact that they can be updated means they may be given completely different functions, for example, installing a Trojan horse backdoor. Although there have previously been cases of a malicious program updating itself from the Internet, this is the first time it has occurred on this scale "in the wild."
"What we have here is perhaps the most complex and refined malicious code in the history of virus writing," comments Eugene Kaspersky, Head of Company Anti-Virus Research Center. "Firstly, it is defined by an extremely complex style of programming. Secondly, all the plugins are encrypted with very strong RSA 128-bit crypto-algorithm key. Thirdly, the components themselves give the virus writer the possibility to modify his creation "in real time," and in fact allow him to control infected computers worldwide."
Protection against the Internet worm Hybris and its variants has now been added to the anti-virus databases of Kaspersky™ Anti-Virus (AVP).
Technical details about the worm's principles of operation are available in the Kaspersky Virus Encyclopedia.
To learn more about the latest dangerous viruses and how to protect yourself against them, you are welcome to visit the Kaspersky Lab presentations at the Comdex Fall 2000 Show that is taking place in the Las Vegas Convention Center from 13 until 17 November (booth N L4820). | <urn:uuid:3a807627-9277-44f0-800f-357bb79f4b11> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2000/Hybris_The_Story_Continues | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00115-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954804 | 805 | 2.65625 | 3 |
RAID arrays such as RAID-10 are typically used in servers and NAS devices. Gillware is ready to assist you with your RAID 10 data recovery needs.
What Is RAID-10?
RAID-0 and RAID-1 make up, on their own, the simplest RAID levels. RAID-0 breaks up all of the data written to its disks into stripes. These stripes are usually 32 to 64 kilobytes in size. Any file larger than the stripe size gets broken up into pieces. RAID-0 creates a single volume as large as the combined capacity of all of the hard drives in the array. However, there is no fault tolerance whatsoever. If one hard drive goes down, the entire RAID array goes down with it.
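The round-robin layout described above can be sketched in a few lines. This is a simplified model for illustration, real controllers work at the block-device level, and the 64 KB stripe size and four-drive count are just example values.

```python
# Simplified model of RAID-0 striping: data is cut into fixed-size stripes
# and dealt out across the drives in round-robin order.
STRIPE_SIZE = 64 * 1024  # 64 KB, a typical stripe size

def stripe(data: bytes, num_drives: int, stripe_size: int = STRIPE_SIZE):
    """Return the per-drive byte strings for a RAID-0 layout."""
    drives = [bytearray() for _ in range(num_drives)]
    for i in range(0, len(data), stripe_size):
        drives[(i // stripe_size) % num_drives] += data[i:i + stripe_size]
    return [bytes(d) for d in drives]

# A 200 KB file on four drives: stripes 0-3 land on drives 0-3 in turn.
layout = stripe(b"x" * 200 * 1024, num_drives=4)
print([len(d) for d in layout])  # [65536, 65536, 65536, 8192]
```

Note what the model makes obvious: every drive holds a piece of the file, so losing any single drive destroys the file, which is exactly why RAID-0 alone offers no fault tolerance.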
RAID-1 usually only uses two hard drives. The RAID controller takes any changes made to one hard drive and copies them over to the other, making the contents of both hard drives exact duplicates of each other. If one hard drive fails, the other immediately takes its place. RAID-1 provides fault tolerance, but offers no increased capacity.
RAID-10, also known as RAID 1+0, combines RAID-0 and RAID-1 to offer the features of both configurations. Data is not only striped between multiple hard drives, but mirrored to an equal number of hard drives. This results in an array with both increased capacity and fault tolerance, like RAID-5 or RAID-6.
RAID-10 seems like a huge improvement over RAID-5 and RAID-6. No matter the size of the array, you can, in theory, lose up to half your hard drives without having to deal with a RAID-10 crash. It seems quite unlikely you would ever need RAID 10 data recovery services. But RAID-10 isn’t quite that reliable in practice.
RAID-10 vs RAID-6
Take, for example, a simple, hypothetical four-drive RAID array combining RAID-0’s striping with RAID-1’s mirroring. Despite having four hard drives, you only have two drives’ worth of usable capacity. But at least you have the peace of mind of knowing you can lose two hard drives and keep going, right?
Wrong. You could lose half of your hard drives and not even notice—as long as you only lose the right two drives. If you lose two drives that contain both a set of “original” data and its mirrored copy, it’s game over. In this hypothetical four-drive RAID 1+0 setup, there are six possible combinations of two drive failures in this array, two of which result in the RAID array failing. You have a 33% chance of two drive failures causing a RAID-10 crash.
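The failure arithmetic above is easy to verify by brute force. The sketch below assumes a hypothetical four-drive layout in which drives 0 and 1 mirror each other and drives 2 and 3 mirror each other, then counts which two-drive failure combinations take the whole array down.

```python
# Count fatal two-drive failures in a four-drive RAID 1+0 array
# (assumed layout: drives 0+1 are one mirror pair, drives 2+3 the other).
from itertools import combinations

mirror_pairs = [(0, 1), (2, 3)]

all_failures = list(combinations(range(4), 2))  # every way two drives can die
fatal = [pair for pair in all_failures if pair in mirror_pairs]

print(len(fatal), "of", len(all_failures), "combinations are fatal")
# 2 of 6 combinations are fatal, i.e. a 1-in-3 chance
```

Losing both halves of one mirror pair wipes out that stripe set's only copies, so 2 of the 6 possible double failures are fatal, the 33% figure quoted above.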
A comparison of a simple four-drive RAID 1+0 array and a four-drive RAID-6 array. If the right two drives fail in the RAID 1+0 array, you could lose all of your data. If any two drives in the RAID-6 array fail, your data will still be perfectly safe due to the double layer of XOR and Reed-Solomon parity data.
Contrast this with RAID-6. RAID-6 works very similarly to RAID-5, but has an extra layer of parity calculations. The extra parity takes up another disk’s worth of space, spread across the disks in the array.
So, what if you take the four drives in the hypothetical RAID array and configure them in a RAID-6 array instead? Your RAID controller has to do some extra parity computations every time you write data, resulting in a bit of a performance hit. However, keep in mind that computers are really good at doing math, and doing it really quickly. It was their original raison d’être, after all.
As you add drives to a RAID-10 array, the chance of two drives causing your array to fail decreases. However, with RAID-6, the probability of two drive failures crashing your array is still zero, no matter how many drives you have.
The RAID 10 Data Recovery Process
So, how do our data recovery technicians here at Gillware Data Recovery recover data from a failed RAID-10 array?
Free Evaluation of Your Failed RAID-10 Array
As is our policy, the first thing we offer you here at Gillware is a totally financially risk-free RAID-10 data recovery evaluation. If you live in the continental United States, we are even happy to offer you a prepaid UPS shipping label to get your crashed RAID-10 array to us at no cost to you.
It’s important to send all of the drives in the array, including any hot spares. The RAID enclosure, controller card, and other hardware components, however, are completely unnecessary. Our RAID data recovery experts use custom software to emulate the RAID controller.
Once your crashed RAID-10 array has been evaluated, we present you with a price quote for data recovery and our predictions for the case results. This is not a bill; we only need your promise that you are comfortable paying for our data recovery efforts in the event that we recover the data you need. We only continue on with the RAID 10 data recovery work if you approve our price quote.
Independent Analysis of the Hard Drives in Your Failed RAID 10 Array
Our data recovery experts assess the health of every hard drive in your crashed RAID-10 setup and make repairs to the drives’ electrical components, internal components, or firmware as needed, and use our proprietary data recovery software HOMBRE to create full forensic write-blocked images of all the drives in the array. These images are contained on our internal customer data drives, which are never allowed outside our facility and are zero-filled according to Department of Defense standards after your data recovery case has been completed.
Determining the Physical Geometry of Your Failed RAID 10 Array
RAID 1+0 and RAID 0+1 arrays are complex RAID arrays. Essentially, they are two RAID configurations stacked on top of each other. Using HOMBRE’s relational database and the metadata contained on each hard drive in the failed RAID array, our RAID 10 data recovery technicians can puzzle out the way the hard drives in your RAID 10 have been arranged. After determining the striping size, striping pattern, data offset, and the order of the drives in the array, our engineers begin to have a clear picture of the lost data in the array.
Reuniting You with Your Data
After rebuilding the failed RAID 10 array with our disk images, our engineers analyze the recovered data, with the help of HOMBRE. We make as certain as we can that there is as little file corruption as possible and that your most critical data is functional.
We only bill you for our data recovery efforts after we’ve successfully recovered your data. Upon payment, we extract your recovered data to a new, healthy hard drive with hardware-level encryption for your security and ship your data back to you.
Why Choose Gillware for RAID 10 Data Recovery?
Our RAID-10 data recovery staff are experts in their field. Our technicians have logged thousands of man-hours of data recovery experience and have dealt with just about every RAID setup under the sun. We've done things our competitors told their customers were impossible, and we've done them at lower prices than those same competitors charge.
We here at Gillware stand by our engineers’ skills and our financially risk-free RAID 10 data recovery services. If we can’t recover your critical data at a price that works for you or your organization, you don’t owe us a dime.
Ready to Have Gillware Assist you with Your RAID 10 Data Recovery Needs?
Best-in-class engineering and software development staff
Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions
Strategic partnerships with leading technology companies
Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
RAID Array / NAS / SAN data recovery
Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery
Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited
Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure.
Facility and staff
Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder.
We meet the criteria to be approved for use by government agencies
GSA Contract No.: GS-35F-0547W
Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations.
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition.
By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates.
By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified
Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible.
Gillware is a proud member of IDEMA and the Apple Consultants Network. | <urn:uuid:87694c7f-f30e-4db2-89dc-ffa89893f25b> | CC-MAIN-2017-04 | https://www.gillware.com/raid-10-data-recovery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00538-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925783 | 2,160 | 2.671875 | 3 |
The use of fiber optic systems is expanding at an amazing rate. In just the past ten years, fiber optic communications systems have replaced almost all coaxial and twisted-pair cables, particularly in network backbones. The same is true of almost any long-distance communication link.
This can be explained simply. Optical fiber cable is easier to install, lighter than traditional copper cable, and much smaller than its copper counterpart. The most crucial factor is that it has much more bandwidth. Because fiber optic cables are lighter and smaller, they are easier to route through existing ducts and cable raceways. Fiber optic cables have other big benefits as well, including immunity to electromagnetic interference, longer repeater distances, lower power requirements, and better flexibility.
All of the above advantages make fiber optic cables very attractive and, most important of all, very economical. The unstoppable trend in fiber optic applications is the move from long-haul (long-distance) links to our desks, our homes, and our offices. The relevant terms include FTTC (fiber to the curb), FTTD (fiber to the desk), FTTH (fiber to the home) and FTTB (fiber to the building). Fiber optic cables enable our dream of integrating all of our phone, Internet and TV services. Fiber's wide bandwidth makes this possible: it offers more than enough capacity to meet all of our voice, data and video requirements.
The transformation from copper to fiber has been greatly accelerated by the invention of the optical fiber amplifier. Optical fiber amplifiers enable optical signal transmission over very long distances without the expensive process of converting to electronic signals, amplifying them electronically and converting back to optical signals, as traditional regenerators do.
Today most network traffic switching is still done by electronic switches such as those from Cisco, but tremendous interest in and effort toward all-optical switching devices is building in the industry. The most important characteristic of all-optical switching lies in its almost unlimited transmission capacity. However, controlling light with light is still at the prototype stage, so optical switch circuits are still controlled by electronic circuits for now: the switching matrix may be built from optical circuits, but the control is still done electronically.
Optical fiber is nearly the perfect medium for signal transmission, both today and for the foreseeable future. An excellent characteristic of optical fiber is its immunity to electromagnetic interference: optical circuits can be crossed in a common space without crosstalk among them. But there are problems impeding the pace of all-optical system development, the most obvious and basic being the compatibility requirements of legacy fiber optic systems.
Another huge advantage of optical fiber lies in the ability to multiply its capacity via WDM (wavelength-division multiplexing). WDM modulates each of several data streams onto a different part of the light spectrum; it is the optical equivalent of FDM (frequency-division multiplexing). The use of WDM can increase the capacity of a single-channel fiber optic communication system by hundreds of times.
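The capacity multiplication is simple arithmetic. The figures below are illustrative assumptions, not from any particular system: a per-wavelength rate of 10 Gbps and 80 wavelengths, which a dense-WDM system of the era could plausibly carry.

```python
# Back-of-the-envelope WDM capacity scaling (illustrative figures).
per_channel_gbps = 10   # one wavelength carrying 10 Gbps
channels = 80           # number of wavelengths multiplexed onto one fiber

aggregate_gbps = per_channel_gbps * channels
multiplier = aggregate_gbps // per_channel_gbps

print(aggregate_gbps, "Gbps on a single fiber,", multiplier, "x one channel")
# 800 Gbps on a single fiber, 80 x one channel
```

Push the channel count or per-channel rate higher and the "hundreds of times" figure quoted above follows directly.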
In additional to optical communication systems, fiber optic technology is also widely used in medicine, illumination, sensing, endoscopy, industry control and more.
About the writer:
Fiberstore is experienced in fiber optic communication technologies and products. Learn more about fiber optic networks at www.FiberStore.com.
In 2004, the University of California, San Diego created Quartzite, a communications infrastructure for researchers across the La Jolla, Calif., campus to use. Now, UCSD is raising the bar with its more advanced optical computer network, Prism.
Funded in part by a $500,000 grant from the National Science Foundation, researchers in the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2) have created the network to help researchers across the university access the data they need to complete their projects.
Prism is being used as a bypass network so that massive amounts of information won’t crash the campus’s main network, which has a 10-billion-bit-per-second capacity. Prism can carry 20 times the traffic of Quartzite, along with 100 times the bandwidth of the main campus network.
UCSD is relying on Prism to free up congestion on the main network by shifting traffic from researchers in the most data-heavy fields onto the new network, where they can work with massive sets of data and leave the main campus infrastructure free for the other 30,000 people that use the network.
“You can think of Prism as the HOV lane,” said Philip Papadopoulos, principal investigator on the Prism@UCSD project, “whereas our very capable campus network represents the slower lanes on the freeway.”
Employing a next-generation Arista Networks 7405 switch-router, which has triple the energy efficiency and four times the capacity of Quartzite’s switch, Prism will expand the existing Calit2-SDSC optical-fiber connection from 50 to 120Gbps, extending total bandwidth toward one trillion bits per second. This will allow labs all over UCSD to share large amounts of data with each other. The network is so large that it’s being used as a sort of hybrid, with both production and experimental parts. The production side will focus on real-world use, while the experimental portion will let researchers test out networking ideas for future breakthroughs.
If Prism is a success at UCSD and relieves traffic on the main network, the project will expand and explore ways to give other research labs access to the network, even if they aren’t on the UCSD campus. | <urn:uuid:1658056a-1577-4ba5-9463-76a83d33d2e3> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/03/21/ucsd_s_big_data_freeway/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00170-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918389 | 475 | 2.890625 | 3 |
(Part II) Virtual Routing and Forwarding
This is the second part in the series of posts dedicated to network virtualization and path isolation.
Ever needed one extra router? It’s possible to split a router into multiple logical routers by using VRF. How? Here’s how!
Virtual Routing and Forwarding, or VRF, allows a router to run more than one routing table simultaneously. The routing tables are completely independent of one another. For example, you could use overlapping IP addresses inside different VRFs on the same router, and they would function independently without conflict (you can see this kind of overlap in the example below). It is also possible to use the same VRF instance on multiple routers and connect each instance separately, using either a dedicated router port per VRF or just a sub-interface.
You will find VRFs in use on the ISP side. Provider Edge (PE) routers usually run one VRF per customer VPN, so that a single router can act as a PE router for multiple Customer Edge (CE) routers, even when several customers exchange the same subnets across their VPNs. By running a VRF per customer, those subnets never mix with one another.
VRFs are used to create multiple virtual routers from one physical router.
Each VRF creates its own routing table and CEF table, basically a separate RIB and FIB.
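Here is a hedged IOS-style sketch of what that looks like in practice. The VRF names, route distinguishers, VLAN IDs and addresses are all hypothetical; the point is that both sub-interfaces carry the same 10.0.0.0/24 subnet without conflict, because each lives in its own VRF with its own routing table.

```
! Hypothetical VRF-lite configuration (names and addresses are illustrative).
ip vrf CUSTOMER_A
 rd 65000:1
!
ip vrf CUSTOMER_B
 rd 65000:2
!
interface GigabitEthernet0/0.10
 encapsulation dot1Q 10
 ip vrf forwarding CUSTOMER_A
 ip address 10.0.0.1 255.255.255.0
!
interface GigabitEthernet0/0.20
 encapsulation dot1Q 20
 ip vrf forwarding CUSTOMER_B
 ip address 10.0.0.1 255.255.255.0
!
! Each VRF has its own routing table:
!   show ip route vrf CUSTOMER_A
!   show ip route vrf CUSTOMER_B
```

Assigning an interface to a VRF with `ip vrf forwarding` moves its routes out of the global routing table and into that VRF's private RIB, which is exactly the isolation described above.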
| Continue Reading.. | | <urn:uuid:e445491f-4880-4d58-81c5-8983e7ac3bcb> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/tag/routing | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00410-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.888582 | 290 | 2.78125 | 3 |
Ten years ago, the mechanics of congressional and legislative redistricting in most states was largely a manual process. The software of the period was cumbersome, difficult, expensive and carried a steep learning curve.
Today's redistricting tools are something entirely different. Designed primarily by makers of geographic information systems, they allow users to quickly analyze an enormous range of demographic information, voting records and other aggregate data. Incumbents can watch as a boundary line is moved this way or that and immediately see the changes in population, the ethnic and racial mix of a particular block, whether a precinct or neighborhood is being split, where minority/majority blocks can be created and which party is likely to gain or lose seats in Congress or in the state assembly. In fact, today's redistricting tools can spit out plans, maps and boundary options faster than anyone is capable of absorbing.
The new technology may make redistricting a faster, more open process. Tools that can quickly and accurately analyze and map the Census Bureaus TIGER 2000 files and P.L. 94-171 demographic fields may help produce plans that not only stand up in court, but reduce the number of legal challenges that dogged 41 of the 50 states after redistricting plans were enacted in 1992.
What effect the new technology will have on the actual political process of redrawing congressional and legislative boundaries depends as much on the legislature's approach as on the incumbents involved in the process. Minnesota's Hennepin County Commissioner Randy Johnson described redistricting as an almost life-and-death issue for many politicians. How congressional and legislative lines are redrawn can determine who will get elected or not elected over an entire decade. "In the scramble for political survival everybody is looking to rely on somebody else to make sure they don't get shafted in the process," Johnson said. Under these circumstances, partisan infighting and incumbency protection can quickly overshadow demographic concerns and community interests.
Minnesota is one of the most progressive states when it comes to utilizing software in redistricting efforts. In Minnesota, Republicans control the House, the Democratic-Farm-Labor (DFL) Party has a majority in the Senate, and the governor is a member of the Independence Party. Although the Minnesota Legislature has yet to agree on principles for redrawing congressional and legislative boundaries, the new technology reportedly has several advantages and very little downside. In addition to speed and convenience, advantages include openness, increased accuracy of demographic analysis, and the ability to support redistricting standards that are more likely to stand up in court.
So far, the technology appears to have had little effect on the partisan nature of the redistricting process. Republican and DFL caucuses in both the Senate and the House each draw up a redistricting plan, four in all, along with the principles and guidelines used in drafting them. Each caucus has a team of hired GIS/redistricting technicians. Each has the same redistricting software, printers, plotters, monitors and workstations that all the other caucuses use. Completed redistricting plans and principles worked out by the different caucuses are sent as bills to the nonpartisan Legislative GIS Office, where they are processed into a standardized format with maps, reports and statistics for each district. The bills are then made available to conference committees and floor sessions, and at the same time put on the Web for public access. Anyone can download them, look at the interactive maps and use the data to put together their own plans, as counties and cities have done.
Lee Meilleur, director of the Legislative GIS Office, said Minnesota regularly uses ESRI software, but this year the legislature, GIS office and counties are all using Maptitude for Redistricting from Caliper Corp. "We can't produce the quality of maps with Maptitude that we can with ArcInfo, but redistricting is more or less a data-crunching and plan-making process," Meilleur said. "Maptitude is more flexible and provides more of what we need."
Maptitude was particularly helpful during the period when the four caucuses were meeting almost constantly and submitting bills. Troy Lawrence, assistant director of the GIS office, said that over a four-day period toward the end of May, the GIS office worked almost around the clock processing plans, turning out maps, preparing for hearings, etc.
As of June, all redistricting bills had been through the committee process, were past the floor and were in the conference phase, where conferees were trying to reconcile differences between the House and Senate principles. If consensus is reached, plans will still have to meet the technical requirements of law, be approved and signed by the governor and, finally, stand up to court challenges. Peter Wattson, chief counsel for the Minnesota Senate, said conferees may work through summer and fall until they have a plan. At that time the governor can call a special session of the legislature to enact it. If they don't produce a plan by March 19, 2002, the courts will step in.
Affecting the Political Process
Apart from the political process of redrawing congressional and legislative boundaries, Wattson said the software does make a difference. "The technology made it easier to get districts of equal population. It's made it possible to reduce population deviations," he said. "Also, the technology may make it easier to keep track of cities and counties that have been split, even reduce the number of splits. The standard reports we were able to produce with the plans make splitting, compacting, the partisan character of districts and the populations so easy to see -- they just jump right out at you."
"The ability to download a plan, analyze it and run it against another plan or index that another organization has come up with makes it more difficult to disguise political gamesmanship [such as splitting or compacting]," said Michael Brodkorp, redistricting specialist for the Minnesota Senate Republican Caucus. "The technology lets you see immediately where and how changes have been made."
"The new software has made it easier for some people to explore many alternatives quickly," Johnson said. "After the next decennial census, it will be sophisticated enough and simple enough that large numbers of people who want to use it and get involved in the process will be able to."
In Minnesota, the new software has shown a potential for making redistricting a much more open process, making legislators more accountable to the public, and helping ensure that demographic concerns are not overshadowed by partisan interests or incumbency protection. Will the technology reduce delays caused by partisan disputes and long, drawn-out debates? Maybe not this time, but then redistricting tools will be even cleverer next time around.
Bill McGarigle is a writer specializing in communications and information technology. He is based in Santa Cruz, Calif.
This chapter describes how you can use objects in your programs. The same rules apply whether the program you write is an Object COBOL class, or a procedural COBOL program in which you want to use Object COBOL objects.
Objects can be used by an application which is not itself object-oriented. This chapter describes how you can add objects to a program and send messages to them. All the information in this chapter applies equally to Object COBOL class programs and to procedural programs which use objects.
This chapter uses examples of code to illustrate points. For the full syntax definitions, see your Language Reference.
Programs which use objects all have certain features in common:
The Class-Control paragraph lists all the classes the program is going to use. If the program itself is an Object COBOL class, the Class-Control paragraph also lists its superclass, and usually the program itself (a class doesn't have to list itself in the Class-Control paragraph if the class-name and filename are the same).
An Object Reference data item holds an object handle. The run-time system assigns a unique object handle to every single object active in your application. An object handle enables you to send messages to the object.
You can also pass object references as parameters when you send a message or make a COBOL CALL, in effect enabling you to pass objects between different parts of an application.
When you send a message to an object, you invoke a method inside the object. A method is a piece of code which performs one of the functions of the object. Some methods receive or return parameters; when you invoke the method you include the parameters as part of the message in the INVOKE statement.
Generally, a program which uses objects will send a message to one or more class objects to create the instance objects it requires. For each instance object created, you must declare an object reference to hold the object handle, although you may overwrite the object handle in an object reference with a new one if you have finished with it. When a program has finished with an object, it should destroy it (by sending it the "finalize" message).
Before you can use an Object COBOL class, you must register it. To do this, you must declare it in the Class-Control paragraph of every program which uses it. This binds the name of the class to the executable file for the class program.
It also makes the name of the class available to your program as an Object Reference data item with the same name as the class. This enables you to send messages to the class object.
The code below shows class registration for classes CharacterArray and myClass:
 class-control.
     CharacterArray is class "chararry"
     myClass is class "myclass"
     .
The filename is in quotes after the IS CLASS clause. The spelling and case of a filename in Class-Control must match in all the Class-Control paragraphs which refers to it. You are strongly advised to adopt a convention of specifying all filenames in Class-Control in lower-case. Matching is case-sensitive to keep source-code compatibility between case-insensitive and case-sensitive operating systems.
A data item of type object reference can hold a handle to a class or instance object. In this release of Object COBOL, object references are untyped, so the same data item can hold an object reference for any class or instance object. That is, there is no distinction between the type of object reference declared to hold an instance of one class or another.
You need to declare data items of type OBJECT REFERENCE to hold handles to any objects you will be using.
01 anItem object reference.
You also require object references for any class you are going to use in your program. These are declared automatically when you list your classes in the Class-Control Section.
The only operations you can use on object references are:
To send a message to the object represented by the handle in the object reference. For example:
invoke Object1 "message"
To copy an object reference to a different data item. For example:
set Object1 to Object2
To compare two object references, to see whether they refer to the same object. For example:
if Object1 = Object2 ...
No other operations are valid, because the object reference only contains a handle to an object. To change or interrogate an object's internal state, you have to send messages to it.
If you clear all references to a particular object (with SET), you will be unable to send that object any more messages. In particular, you will not be able to destroy that object and de-allocate its storage (see the section Destroying Objects).
You send a message to an object to make it execute a method. The method executed has an identifier which matches the message.
If an object does not understand a message, it is passed up the method inheritance chain until it is recognized and executed (see the section Method Inheritance in the chapter Class Programs for more information). The method inheritance chain is not a consideration if you are the user of the object; it is part of the object's implementation details.
You send a message by using the INVOKE verb. For example:
invoke myClass "new" returning myObject
This sends message "new" to myClass; in this example myClass is an object reference referring to the class declared in the example in the previous section. The target of an invoke is always an object reference. Using and returning parameters are optional, and follow the same rules as a COBOL CALL statement.
You do not have to use a literal for the message name. You can also put a message name into a pic x(n) data item. For example:
 move "new " to aMessage
 ...
 invoke myClass aMessage returning myObject
The data item used to store the message must be large enough to allow at least one space at the end of the message name. The run-time system needs the space to be able to find the end of the message name.
The results are unpredictable if you send a message and use the same variable for sending a parameter and returning a result. For example, do not code:
invoke myObject "add" using aValue returning aValue
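A safer pattern is to return the result into a separate data item and copy it back afterwards if you need to. The sketch below is illustrative only; the "add" method and the data names are assumptions for the example, not part of the supplied Class Library:

 working-storage section.
 01 aValue  pic 9(9) comp-5.
 01 aResult pic 9(9) comp-5.
 ...
 procedure division.
     invoke myObject "add" using aValue returning aResult
     move aResult to aValue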
When you create a new object, the run-time system allocates an object handle, and storage for the variables declared in the Object-Storage Section. Classes either provide their own methods for creating new object instances, or inherit them from a superclass.
Method "new" is implemented in class Base (part of the supplied Class Library) to create an instance of a single object. Send the message to a class and provide a variable to receive the object reference.
 working-storage section.
 01 anObject object reference.
 ...
 procedure division.
 ...
     invoke aClass "new" returning anObject
In this example, aClass returns a handle to a new instance of itself in object reference anObject. Some Class Library classes implement a version of "new" which expects one or more parameters for initialization. Some classes, for example Window, implement a "new" method which sends the newly created object the "initialize" message.
Consult the Class Library Reference to check the interface to the "new" method for a class you want to use. Object creation methods are always class methods. Not all types of objects are created with the "new" message; for example the collection objects are created by "ofReferences" and "ofValues" methods.
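For example, a collection might be created like this. This is a sketch only: the OrderedCollection class, the filename shown, and the exact "ofReferences" interface should be checked against the Class Library Reference before use:

 class-control.
     OrderedCollection is class "ordrdcll"
     .
 working-storage section.
 01 aCollection object reference.
 01 aCapacity   pic 9(9) comp-5 value 10.
 ...
 procedure division.
     invoke OrderedCollection "ofReferences" using aCapacity
                              returning aCollection

As with "new", the creation message is sent to the class object, and the handle to the new instance comes back in the returning parameter.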
When an application has finished with an object it should destroy it, freeing the memory it uses. This is because Object COBOL does not yet implement automatic garbage collection.
The following sections explain object destruction in more detail:
To destroy an object, send it the "finalize" message. For example:
invoke anObject "finalize" returning anObject
The "finalize" method is implemented in the Base class (part of the supplied Class Library) and inherited by all classes. When you finalize an object, the method returns a null object handle.
A null object handle is always taken to refer to the NilObject. Should you accidentally send a message to the NilObject it will raise a "doesNotUnderstand" exception.
You may have copies of the object handle to a finalized object in data items other than the one you used in the "finalize" message. Sending a message to an object handle after the object to which it refers has been finalized gives unpredictable results.
There is an Object COBOL run-time switch to prevent reuse of finalized object references - this ensures that you will always get a run-time error when you attempt to send a message to a finalized object (see the chapter Debugging Object COBOL Applications).
Some Class Library objects contain references to other objects (for example, collection objects). When you send the containing object "finalize", the objects it contains do not get finalized. If the only references you had to these objects were the ones held in the containing object, you can now no longer reach or destroy those objects.
Class library container objects, such as the collection objects, respond to the "deepFinalize" message. This destroys all the objects within the container as well as the container itself. You may want to consider implementing "deepFinalize" in any objects of your own which contain other objects, or reimplementing "finalize" to destroy contained objects.
Which is appropriate depends on the nature of the containing object. If the containing object is the sole owner of the objects, it should implement its own version of "finalize". If the containing object is storing objects as an access mechanism for other clients, it should only destroy them if it receives a "deepFinalize".
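For example, assuming aCollection holds the only references to its elements, sending

 invoke aCollection "deepFinalize" returning aCollection

destroys both the collection and every element it contains, whereas

 invoke aCollection "finalize" returning aCollection

would destroy only the collection itself, leaving its elements allocated but unreachable.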
This topic provides some guidelines on destroying objects:
However, if you have passed an object to another part of your application, you may not know whether the object is still in use or not. You should establish rules which state whether or not an object passed to another object can be destroyed.
The supplied Class Library uses these rules:
An object which is no longer in use by an application but which has not been destroyed is said to constitute a memory leak. See the section Finding Memory Leaks in the chapter Debugging Object COBOL Applications for more information.
Once you "finalize" an object, you must avoid sending messages to any reference which still points to that object otherwise results are unpredictable.
For example, do not code:
 set objectA to objectB
 invoke objectB "finalize" returning objectB
 invoke objectA "aMessage"
The object handle held in objectA now points to a non-existent object. The code fragment below is a more subtle example of the same thing:
 invoke objectA "setValue" using objectB
 invoke objectB "finalize" returning objectB
Unless objectA created its own copy of objectB and kept the handle to that, objectA now has an invalid object handle. All objects in the supplied Class Library create copies of any object you pass in as a parameter, with the exception of elements stored in collections.
Copyright © 1999 MERANT International Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law.
Changes in usage patterns and hand movements could help detect criminals.
IBM has patented a technology to help web site operators, cloud service providers and mobile application developers detect fraud by analysing the browsing behaviour of the users.
An analytics-based system studies user browsing behaviour to help prevent log-in credentials and other sensitive information from landing in the hands of fraudsters.
The technology keeps tabs on the users' interaction with a site: the way they click certain areas more often than others; their use of up and down arrow keys on the keyboard to navigate; whether they rely solely on the mouse.
It also analyses the way users tap or swipe the screen of a tablet or smartphone in a distinct manner, building up a characteristic profile of the user.
IBM's new technology, called the "User-browser interaction-based fraud detection system", tracks sudden changes in user behaviour and triggers a secondary authentication measure, such as a security question, to find out whether the user is genuine.
IBM master Inventor and co-inventor on the patent Keith Walker said the invention improves the effectiveness of authentication and security systems with insights derived from real-time data analytics.
"For example, if an individual suddenly changes how they interact with an online bank or store, such as due to a broken hand or using a tablet instead of a desktop computer, I want these web sites to detect the change, and then ask for extra identity confirmation before accepting a transaction."
"Our experience developing and testing a prototype, which flawlessly confirmed identities, shows that such a change would more likely be due to fraud, and we all want these sites to provide more protection while simultaneously processing our transactions quickly."
According to the company, with the growth of e-commerce, fraudsters are targeting digital channels such as mobile devices, social networks and cloud platforms, exploiting vulnerabilities to steal login and password information from e-commerce sites.
Despite having stronger passwords and authentication systems, online fraud continues to exist in digital world, the company added. | <urn:uuid:61121a6c-7a9b-4e57-8e71-38c820d8e3e4> | CC-MAIN-2017-04 | http://www.cbronline.com/news/verticals/ibms-new-tech-to-eliminate-fraud-by-analysing-browsing-behaviour-4282564 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00372-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.90265 | 408 | 2.765625 | 3 |
NASA today said its planet hunting Kepler Space Telescope has spotted what the agency called the first Earth-size planet orbiting a star in the "habitable zone" -- the range of distance from a star where liquid water might pool on the surface of an orbiting planet.
NASA said that the discovery of what will be called Kepler-186f confirms that planets the size of Earth exist in the habitable zone of stars other than our sun. While planets have previously been found in the habitable zone, they are all at least 40% larger in size than Earth and understanding their makeup is challenging. Kepler-186f is more reminiscent of Earth. Although the size of Kepler-186f is known, its mass and composition are not. Previous research, however, suggests that a planet the size of Kepler-186f is likely to be rocky, NASA said.
Kepler-186f is some 500 light-years from Earth in the constellation Cygnus. The system is also home to four companion planets, which orbit a star half the size and mass of our sun. The star is classified as an M dwarf, or red dwarf, a class of stars that makes up 70% of the stars in the Milky Way galaxy, NASA stated.
Some other facts on the discovery:
- Kepler-186f orbits its star once every 130-days and receives one-third the energy from its star that Earth gets from the sun, placing it nearer the outer edge of the habitable zone.
- On the surface of Kepler-186f, the brightness of its star at high noon is only as bright as our sun appears to us about an hour before sunset.
- The four companion planets, Kepler-186b, Kepler-186c, Kepler-186d, and Kepler-186e, whiz around their sun every four, seven, 13, and 22 days, respectively, making them too hot for life as we know it. These four inner planets all measure less than 1.5 times the size of Earth.
- M dwarfs are the most numerous stars. The first signs of other life in the galaxy may well come from planets orbiting an M dwarf, NASA said.
"When we search for life outside our solar system we focus on finding planets with characteristics that mimic that of Earth," said Elisa Quintana, research scientist at the SETI Institute at NASA's Ames Research Center in Moffett Field, Calif., and lead author of the paper published on the current discovery in the journal Science. "Finding a habitable zone planet comparable to Earth in size is a major step forward."
The discovery is also a testament to how much data Kepler has produced and how much of an impact its findings will continue to influence the astronomy community even though it currently isn't capable of doing anything.
You may recall that since last May the telescope has been largely disabled. There is a plan afoot known as Kepler Second Light or K2 that would make use of the Sun and Kepler's orbit around it to stabilize the craft and let it start taking images of space again.
Kepler's mission was to determine what percentage of stars like the sun harbor small planets the approximate size and temperature of Earth. For four years, the space telescope simultaneously and continuously monitored the brightness of more than 150,000 stars, recording a measurement every 30 minutes. More than a year of the collected data remains to be fully reviewed and analyzed, NASA has said.
And here's another: What happens if one of the humans has a medical emergency and needs surgery? As Carnegie Mellon professor James Antaki told New Scientist, "Based on statistical probability, there is a high likelihood of trauma or a medical emergency on a deep space mission."
This is not just a matter of whether you'll have the expertise on board to carry out such a task. Surgery in zero gravity presents its own set of potentially deadly complications.
Think about how hard it is to pee in zero gravity: You need a funnel and a tube that siphons your urine to a sewage tank. Without those tools: Urine everywhere. Consider the difficulties of brushing your teeth. It took astronaut Leroy Chiao three paragraphs to explain that process, and it involved bungee cords, drink bags, and velcro.
In zero gravity, blood and bodily fluids will not just stay put, in the body where they belong. Instead, they could contaminate the entire cabin, threatening everybody on board.
This week, NASA is testing a device known as the Aqueous Immersion Surgical System (AISS) that could possibly make space surgery possible. Designed by researchers at Carnegie Mellon and the University of Louisville, AISS is a domed box that can fit over a wound. When filled with a sterile saline solution, a water-tight seal is created that prevents fluids from escaping. It can also be used to collect blood for possible reuse. "You won't have a blood bank in space, so if there is bleeding you want to save as much blood as you can," one of the researchers, James Burgess, told New Scientist.
If it works well enough, that will be one more thing ticked off the pre-Mars to-do list. | <urn:uuid:f4136fe8-4168-440e-9c61-65cca927bf34> | CC-MAIN-2017-04 | http://www.nextgov.com/health/2012/10/blood-zero-gravity-nasa-tries-prepare-surgery-space/58596/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00088-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950099 | 356 | 3.4375 | 3 |
The 9/11 terrorist attacks triggered a nationwide scramble to develop interoperable communications among emergency responders. But Hurricane Katrina left officials scratching their heads over the utter disappearance of even basic communications.
Radio system failures meant local first responders couldn't communicate easily among themselves, not to mention the array of outside agencies that arrived on the scene to help.
"We didn't have an interoperability problem, we had an operability problem," said Lt. Col. Joey Booth of the Louisiana State Police. "We couldn't communicate within our own department much less with other departments. We had a lot of responders coming in to help, but our system didn't have the capacity to operate with all these new users."
Experts say these shortcomings point to the need for more attention on ensuring the availability of radio systems during major disasters, or at least to creating plans that guide first responders when radio communications fail. Furthermore, experts warn that public policymakers must address the issues that hinder radio operability when it's needed most -- a dearth of available radio spectrum and the failure to anticipate worst-case scenarios.
Ultimately the challenges of radio operability and interoperability are intertwined. To succeed, policymakers and emergency responders must tackle both issues: building radio systems that withstand worst-case disasters and linking to systems used by other agencies.
During the Katrina response, emergency personnel found that nearly all forms of communication, such as cell phones, landlines and satellite phones, were down, and the Louisiana State Police radio system was inoperable because the frequency on which it operated was clogged with users.
Insufficient frequency plagues public safety communications, but Congress and the FCC are working to set aside more frequency for public safety agencies.
With no means of communicating, it's difficult to dispense commands and coordinate response. During Katrina's aftermath, communications were reduced to the use of runners, as in World War II.
"One of the basic foundations for incident command is communications," said Willis Carter, who, as chief of communications, manages the Shreveport, La., Fire Department's Emergency Communications Center. Carter is also the first vice president of the Association of Public Safety Communications Officials International (APCO). "You have to be able to communicate throughout the command system for it to be effective. We were unable to do that."
Given the recent emphasis on communications, David Boyd, director of the Department of Homeland Security's (DHS) Office for Interoperability and Compatibility, was surprised there was such a complete breakdown.
"Part of that is because we just assumed everybody understood that operability was the essential first requirement," Boyd said. "Among public safety agencies, they understand that. But it's clear at the policy level that there is some confusion about the distinction between interoperability and operability."
Efforts to develop interoperability began before the Sept. 11 terrorist attacks, but the concept became a popular topic immediately thereafter, when fire, police and port authority personnel couldn't communicate with one another. As a result, some perished in the collapse of the World Trade Center towers.
The problem -- the inability of disparate radio systems to communicate with one another -- really began in the early 1990s when vendors started building proprietary systems. The different frequencies on which those systems are aired exacerbate the problem.
The Louisiana State Police radio system operates on an 800 MHz band frequency, and could not handle the multitude of users swarming the area to help.
A frequency band is a range of frequencies in the spectrum used for transmission or reception of radio waves. The bands run from very low frequencies of 3 kHz to 30 kHz, through ultra-high frequencies of 300 MHz to 3,000 MHz, all the way up to extremely high frequencies of 30 GHz to 300 GHz.
Because radio transmitters sharing the same frequency band will interfere with one another, the federal government regulates band usage, allocating specific bands to users such as broadcasting, public safety and amateur radio.
"The basic problem is we needed the infrastructure to add the capacity to the system to allow us to not only have interoperability, but basic operability," Booth said. "We've been working very hard on this, and like anything else, the funding is a big piece of the answer -- but [so are] standards. We have our license in the 4.9 [GHz band], but like everybody else, we're waiting on standards and equipment so we can move into it."
Booth said even without the storm, which knocked out the system, the Louisiana State Police radio infrastructure didn't have the capacity to handle the multitude of users during Katrina.
"There needs to be more push, if you will. There needs to be some way to go ahead and plan to have systems that can be easily connected," he said. "It's all the more important that we move toward standardization where possible in communications and get, across the country, police, fire -- all the disciplines -- on some sort of common network."
But sources said that final standardization will take some time, and urged state and local agencies to work toward developing operable systems, back-up systems and plans of operation to follow when communications systems are completely wiped out.
"Our material always starts with operability first," Boyd said. "The inverted triangle always starts with communications internally."
There may be a point, as was the case with Katrina, where communication on the most basic level must be the goal, and interoperability is not a reality, according to Tom Tolman, program manager of Communications Technology for the National Law Enforcement and Corrections Technology Center. "I would say operability is the word before interoperability for a situation like that. Foundationally that infrastructure being wiped out [and] that level of catastrophe override an issue such as 'A isn't talking to B.'"
James Carafano, senior fellow at the Heritage Foundation, said there will always be communication challenges during disasters, and that public safety officials must prepare for that eventuality.
"This notion that somehow interoperability is the silver bullet that could solve all our problems is a bit ridiculous," he said. "We should be having two discussions: What do you do in an emergency when the infrastructure is destroyed, there's no network, there's no Internet, and there's no telephone or anything else? [And] how do you re-establish a modicum of communications to integrate everybody?"
Carafano said another issue to address is, "Where do we really need to be interoperable, and what should our priorities be?"
Boyd said the hope is that operability will move toward regional interoperability -- meaning all regional systems with different protocols and technologies will communicate on a national scale. And it already has in many areas.
"Remember that we don't need every officer to be able to talk to every other officer in the United States," Boyd said. "He needs to be able to talk to those people who are involved in the incident that he or she is directly involved in."
He said there is no plan for a single national network or a single technology-based network.
"You want to be cautious that you not stifle innovation," Boyd said. "So if you produce standards that are too rigid and specifications for linking that are too rigid, we wind up locking people into today's technology. In part, that's kind of what we have now. People have locked themselves into technology that's sometimes 30 years old. We don't want that to happen again."
The first priority, Boyd said, is formulating regional agreements -- statewide plans that work from the bottom up so that all localities are involved.
"We emphasize that that needs to be driven by localities because they own, operate and maintain the bulk of these systems," he said. "And if they will agree to work together, then that goes probably 90 percent of the distance to achieve near-term interoperability to address emergencies."
Is there a danger that those regional systems won't become linked on a national scale?
Boyd said yes.
"There is a clear possibility of that, which is why we keep making the point that there's no silver bullet," he said. "We have to develop a combination of standards."
Boyd pointed out that federal grants, such as Urban Areas Security Initiative grants that have gone for communications, provide guidance on how to use the money. The DHS's national Statement of Requirements is also a good source for guidelines.
Another guideline is Project 25 (P25) -- the now 15-year-old, yet still incomplete, initiative to build standards for the eight interfaces between various components of a land mobile radio system. This includes handheld to handheld, handheld to mobile unit, and mobile unit to tower communications.
P25 is a committee tasked to select common standards for public safety radio communications. It includes representatives from APCO, the National Association of State Telecommunications Directors, the National Communications System, and other selected federal entities, and is directed by the Telecommunications Industry Association, an American National Standards Institute accredited organization.
So far only one interface -- the common air interface, which defines wireless access between mobile and portable radios -- has been advanced to the level where it could satisfy interoperability goals, according to the Sept. 29, 2005 testimony by Dereck Orr, program manager of Public Safety Communications Systems at the National Institute of Standards and Technology (NIST), before the Senate Committee on Commerce, Science and Transportation.
"That's a whole other story," Tolman said. "Why is it taking so long to try to get it so vendor A can communicate with vendor B, and get past these proprietary systems that can't talk to each other?"
Tolman, a member of the steering committee for P25, said the federal government, through NIST, is trying to push the project onto the fast track.
"We can't wait another 15 years," he said. "It always seems to take a disaster to move things along."
During Katrina, Louisiana used an analog communication system installed in 1996 that was severely constrained -- partly because of a limited number of tower sites, and partly because the system was built on the 800 MHz frequency band.
"Even if our system was operating the way it was before the storm, we did not have the capacity to have five times the user group come in and operate on the system," Booth said. "We were operating on the old 800 trunk system -- that was a problem before the storm, and it was a worse problem after the storm."
Booth said there's a need for additional spectrum, reiterating an ongoing argument that includes moving the 700 MHz band away from broadcast media to public safety.
Tolman said the FCC and Nextel struck a deal whereby Nextel surrendered frequencies it used in the 800 MHz band to give police and fire more room to operate in that spectrum. In exchange, Nextel was granted 5 MHz blocks of spectrum in the 1900 MHz band.
A next critical step in the spectrum world is to force a deadline for TV broadcasters to free up spectrum in the 700 MHz band, which is ideal for first-responder communications, because signals sent via the frequency can penetrate walls and travel long distances.
A 1995 congressional panel recommended that broadcasters relinquish this spectrum and transition to digital transmission, which requires less spectrum, when 85 percent of households have the equipment to receive digital signals.
But critics say the percentage of households ready for digital broadcasts is far below 85 percent, and that there's no incentive for viewers or broadcasters to push the issue along -- absent a law.
When the 700 MHz band becomes available to public safety, it will be beneficial in more ways than one. In addition to providing more spectrum, it will allow manufacturers to design dual-band radios that operate under both 800 MHz and 700 MHz bands, according to Tolman. "Now the struggle is to get a certain date on when broadcasters will start disconnecting and shutting down their analog," he said.
No Silver Bullet
In the meantime, state and local first-responder agencies might want to look to the military for operability and interoperability solutions when the infrastructure is wiped out.
"I think the answer is something like some kind of wireless ad hoc network that you would put up," Carafano said.
That type of network -- a portable network that allows agencies to quickly deploy wireless communications in areas where fixed infrastructure doesn't exist or has been destroyed -- has its genesis in the military, according to Tolman.
"That concept is really moving out. We're seeing more and more stories of entrepreneurial and established [systems], including Motorola with their Motobridge for example, and also the lesser known [systems] out there.
"Isn't it interesting that the military doesn't have to worry about infrastructure or sites because they've got that figured out? They take their systems with them," Tolman added.
But again, there is no silver bullet.
"I would not say there is one supreme system, tool or capability, or one answer," Tolman said. "But rather a combination, and I would say finding the right combination given the circumstances."
Boyd agreed, saying there are many technologies that play a role, but none by itself is the answer. "Mesh networks, for example, do a pretty good job of allowing you to put together a headquarters or an incident control network in a relatively small area. They're designed for small area sorts of things."
The military also excels at logistics, and state and local first-responder agencies could better prepare for disasters by acknowledging the worst-case scenario during training exercises.
"How does the military get it done?" Tolman asked. "There's something in there that the state and locals can learn from. Again, we've been through this before, where we've talked about the importance of planning. I would say 15 percent of the issue is technology, and the other 85 percent is planning and preparation, operationally and logistically."
Boyd said the military always prepares for the worst-case scenario. "You [plan] that everything, at some point, is going to fail, and you build from that," he said.
Boyd agreed that most state and local government first responder agencies don't plan for the worst-case scenario. "I would be inclined to say it would be fairly rare. Most people don't think in terms of worst case."
The plan should, but often doesn't, include planning for the complete loss of a communications system during a disaster, Boyd said. "We typically don't really exercise communications. The reason [state and local agencies] don't is they're trying to test some element or some feature of the general planning, but of course once you lose communications, all that falls apart."
Disaster planning should be multi-tiered, providing exercises, including communications, for each step along the way during an emergency -- all the way down to having to use the runner system for communications when all else fails.
"Part of the logic behind that is [that] the mere process of going through the worst-case thinking and planning makes it easier for you, when things fail, to begin to make some kind of ad hoc fixes, because you'll have some idea about what fixes will be required," Boyd said. "You have an idea because you've thought through already what kinds of things might be required.
"You have to know what's in the plan, exercise the plan and revise the plan," he continued. "You have to train operators so they can pass the message accurately, consistently. And then you have to be prepared for what happens if it all fails. The plain truth is that nature is going to be bigger and more powerful than we are." | <urn:uuid:de2c2591-2076-47bc-8c9e-e5c6194ed912> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/Operation-Operability.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00234-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966866 | 3,206 | 2.71875 | 3 |
In a wireless mesh deployment, multiple APs (with or without Ethernet connections) communicate over wireless interfaces to form a single network. This wireless communication between APs is called Mesh Networking. Meraki's mesh networking functionality is automatic, self-healing and available on all Access Points.
In a mesh network, access points can be in one of two states: Gateway, or Repeater.
Gateway access points are connected directly to the wired network, granting them an uplink to the Internet. If a gateway loses its Internet connection, it will look for a nearby gateway and automatically fail over to acting as a repeater, allowing it to continue serving clients.
Meraki determines whether a device should be a repeater or a gateway on boot, when the unit sends out a DHCP request. If it receives a DHCP reply from a device on the wired network, it assumes that it has a valid LAN connection and will become a gateway AP. If a gateway AP is unable to reach the LAN gateway/upstream router, the AP will fail over to repeater mode.
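As an illustrative sketch of that boot-time decision (the `choose_role` helper and its `yes`/`no` flag are stand-ins for the real wired-side DHCP probe, not Meraki code):

```shell
# Hypothetical sketch only: models the gateway/repeater decision
# described above. The argument stands in for "did the AP receive
# a DHCP reply on its wired port?".
choose_role() {
    if [ "$1" = "yes" ]; then
        echo gateway    # DHCP reply seen on the wired port
    else
        echo repeater   # no wired uplink; fall back to mesh links
    fi
}

choose_role yes   # prints: gateway
choose_role no    # prints: repeater
```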
Repeater access points are not directly connected to the wired network, instead relying on wireless mesh links to reach the Internet. As long as the repeater has power and a strong (unobstructed, line-of-sight) wireless connection to another repeater or gateway, it will form a mesh link.
Please note, it is not possible to configure a static IP address for a repeater AP; doing so will automatically designate the device as a gateway instead of a repeater.
Both gateways and repeaters can serve wireless clients. It is possible to have multiple gateways in a mesh network, and repeaters will automatically choose the gateway to which they have the strongest connection.
Meraki devices in a mesh network configuration communicate using a proprietary routing protocol designed by Meraki. This protocol is designed specifically for wireless mesh networking, and accounts for several unique characteristics of wireless networks (including variable link quality caused by noise or multi-path interference, as well as the performance impact of routing traffic through multiple hops). This protocol is also designed to provide ease of deployment while maintaining low channel overhead.
Each AP in the Meraki mesh network constantly updates its routing tables with the optimal path to network gateways. If the ideal path changes due to node failure or route metric, traffic will flow via the best known path. Data traffic sent between devices in a Cisco Meraki network is encrypted using the Advanced Encryption Standard (AES) algorithm.
Because certain mesh gateways may be located on different IP subnets, each TCP flow is mapped to a particular mesh gateway to avoid breaking established connections. In the event of a mesh gateway failure or the emergence of a new mesh gateway with a better routing metric, all new traffic flows will be routed to the new mesh gateway. The current route to a given mesh gateway may change over time, to adapt to network conditions.
Mesh monitoring tools are located at the bottom of every AP detail page, which can be accessed by navigating to Wireless > Monitor > Access Points, then clicking on an Access Point.
The image below shows an example AP acting as a repeater. The time selector at the top right hand corner will adjust the timeframe of all of the UI components in the mesh monitoring section of the UI.
The time selector may select data from a number of preset ranges. The data is historically visible and can be accessed by panning the information back by increments of 1 week or 1 day. In this case, the UI is showing data for the last two days:
The Routes Table shows the routes used by different flows over time. As new routes are selected, they are added to the routes table. The overall amount of traffic per-route over the time period selected is shown in the Usage column. The Metric is also displayed in this table, representing a combination of loss and packet delivery times.
The Mesh Neighbors table shows the APs that have been discovered automatically. The table is typically broken into two sections, "Strong mesh neighbors" and "Mesh neighbors". The Strong Mesh neighbors table lists the preferred APs that may be used for potential routes in the future. The link quality is a metric that takes into account signal strength and packet delivery success rates in each direction. A link quality of 70% or higher is recommended for a strong link.
Wireless repeaters periodically run a download throughput test to their gateway access point to measure wireless link health. The results are displayed on the Wireless mesh throughput graph on the right-hand side of the page.
Cognitive scientists from the University of Rochester have discovered that playing action video games trains people to make the right decisions faster.
In an upcoming study in the journal Current Biology, researchers Daphne Bavelier, Alexandre Pouget, and C. Shawn Green report that video games could provide a potent training regimen for speeding up reactions in many types of real-life situations.
The researchers found that video game players develop a heightened sensitivity to what is going on around them, and this benefit doesn't just make them better at playing video games, but improves a wide variety of general skills that can help with everyday activities like multitasking, driving, reading small print, keeping track of friends in a crowd, and navigating around town.
The researchers tested dozens of 18- to 25-year-olds who were not ordinarily video game players. They split the subjects into two groups. One group played 50 hours of the fast-paced action video games "Call of Duty 2" and "Unreal Tournament," and the other group played 50 hours of the slow-moving strategy game "The Sims 2."
The action game players were up to 25 percent faster at coming to a conclusion and answered just as many questions correctly as their strategy game playing peers.
"It's not the case that the action game players are trigger-happy and less accurate: They are just as accurate and also faster," Bavelier said. "Action game players make more correct decisions per unit time. If you are a surgeon or you are in the middle of a battlefield, that can make all the difference."
Image credit: J. Adam Fenster, University of Rochester
governments and three federal agencies are tied into the West Nile Virus system.
In April 2002, there were 400,000 hits on the system's secure Web site, demonstrating the value of the mapped data in the operation of government.
The Web site's GIS maps show where clusters of disease-carrying mosquitoes or birds were found. Furthermore, the site explains what eradication methods are used and offers advice on how to prevent mosquito-breeding environments from developing.
With the cross-agency GIS foundation in place and the West Nile system proving its value, Conrad was already considering other applications for the program when the unexpected happened.
"We were starting small but had big dreams for what it could do," he said. "On 9-11, I watched the towers come down and said 'We have something that can help.' It was one of those chills-type moments."
Like the virus-tracking system, the PAIRS program requires that agencies share information, adopt standards and remove historical barriers.
"Communication and cooperation are the key," Conrad said. "It's getting people to share the data and trust the system. If you keep the door open, sooner or later people will come through."
He said the demonstrated successes of the West Nile system are being used to encourage participation in PAIRS.
For example, the West Nile system simplified the task of entering data about virus samples by assigning a bar code to each sample. Consequently, laboratory scientists could immediately identify specimens by using a bar code reader and matching the specimens to collection sites.
Conrad said bar-coding reduced the workload from about eight hours to two hours, an efficiency gain that has cross-agency appeal.
With PAIRS, information about suspected bio-terrorist incidents can be entered from remote sites and statewide notifications sent to agencies. Authorized officials can view maps and data generated by the system, look for patterns and assess risks. All information is viewed in real time, and material from other relevant databases, such as JNET, also will be integrated on the system's internal Web site.
Further examination of an incident will be handled through a secure "conference center" site where users with proper clearance can log on and view proprietary data, which can be removed at the close of the call.
A public version of the PAIRS Web site will offer automatic updates of information, send out e-mail alerts and provide a consistent message from all the participating agencies. It also will serve as a central point for media information.
"Across the agencies, at different levels, the opportunity that I see is that most GIS applications are added on," Conrad said. "We've taken it and put it at the beginning so that it is an integral part of the process. The map drives the rest of the report. We've really reversed the process."
PAIRS has been so effective that the Office of Homeland Security invited Conrad to demonstrate the system. Although he was anxious to show PAIRS to homeland security officials, Conrad says that GIS technology is being used in innovative ways throughout the nation.
"I [believe] that everyone has something to contribute," he said. "I don't want to be limited to my own visions. I want to build on what others have done. The goal is not one person succeeding, but the nation succeeding." | <urn:uuid:ad2ebd23-b2c8-4a85-9dce-128ca12ca87c> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/Bitten-by-the-GIS-Bug.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00319-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954441 | 682 | 2.9375 | 3 |
One of the more promising solid state memory technologies on the horizon is Phase Change Memory (PCM). PCM has the potential to write data faster than current DRAM chips by charging atoms within a crystal. This has led some people to believe that the technology might enable the much-sought-after instantaneous computer boot-up. Ars Technica discussed the future prospects for PCM last week.
At the atomic level, PCM stores data in a compound of germanium, antimony and tellurium. When a voltage is applied to the atoms, they change into an ordered crystal. The data can then be deleted by melting the crystalline substance. To read the information, a computer determines the electrical resistance of the material.
An important attribute of phase change memory is that the technology is non-volatile. This means it does not require power to retain information like standard RAM offerings. Along with the possibility of replacing system memory, these chips might end up competing with NAND flash as well.
Some memory manufacturers are dabbling with PCM on a small scale. Micron offers phase change modules with densities up to 128 MB and Samsung inserted PCM into an unnamed cell phone, but ended up removing it later on.
Despite the benefits, PCM suffers from inherent issues that have slowed its path to adoption. The biggest is write speed. Current DRAM technology can perform write operations within a 1-10 nanosecond window, which is faster than the time it takes for the germanium, antimony and tellurium compound in PCM to crystallize. Other crystalline compounds with faster reaction times have been researched, but they are not as stable as the current PCM design, slowly erasing themselves at low temperatures over time.
Recent research from the University of Cambridge has given hope to the new technology though. Stephen Elliott, Professor of Chemical Physics at the university, along with his colleagues, have discovered a method to improve PCM write speed.
By preparing the material with a 0.3-volt electrical current, crystallization occurred after receiving a 500-picosecond burst of 1 volt. Essentially, the low current made the material act like water at near-freezing temperatures. A few crystalline seeds formed, enabling the material to change at an accelerated rate when receiving additional voltage. The improvement was ten times faster than similar compounds that were tested and remained stable for 10,000 write-rewrite cycles.
The need for extra electrical current during the write cycle could become an Achilles heel for phase change memory, however, since that’s going to increase the overall power draw. It’s a relatively new development though, and further optimizations may be under development. If the price point and power consumption are competitive, PCM may indeed replace one or more current memory technologies. | <urn:uuid:5fd49624-db92-493d-b919-313e51dc0cb8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/06/26/changing_the_phase_of_memory/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00319-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941509 | 574 | 3.578125 | 4 |
Continuing our recent coverage of cloud storage, this post seeks to clear up the different types available. Despite some new technologies starting to gain ground to take advantage of the cloud’s unique topology (see our exploration of Gartner’s 2013 Cloud Storage Trends), the most common terms relating to storage in a data center environment are SAN, NAS and DAS.
In the world of the data center, it is generally wise to keep separate the networks used for access to servers vs. storage. This prevents access problems and helps with redundancy; when one part of the network has issues the other parts remain unaffected. The major differences between different storage systems for data centers is whether the storage disk is attached directly or through a network.
Storage Area Network (SAN) – Both DAS and NAS can be considered part of a Storage Area Network. These other terms describe the type of connection, while a SAN is any combination of storage drives and connections used to access them. However, SAN and NAS have key differences in their file structures. SANs basically consolidate many different dedicated storage units through high-speed networking. The file system on each section of a SAN is dedicated to a single server and dependent on that server’s system, simply relying on SCSI (small computer system interface) to transfer information.
Direct Attached Storage (DAS) – DAS connects directly to servers via a Host Bus Adapter, allowing direct access. Because this is fairly simple and dedicated, it can provide better performance and has less opportunity for interference or downtime. DAS is a good solution for databases and clustering, among other implementations.
Network Attached Storage (NAS) – NAS devices are storage arrays that servers connect to via IP network, usually via CIFS (common internet file system, usually used for Windows environments) and NFS (network file system). NAS can be set up as a dedicated solution, ideal for intensive workloads like archiving, e-commerce transactions, Big Data, and rich media.
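As an illustration of how the access models differ from a Linux client's point of view, the example /etc/fstab entries below contrast block-level and network-file-level storage; the device names, server name and mount points are placeholders, not values from the post:

```
# DAS / SAN block device: appears as a local disk and carries its
# own local filesystem.
/dev/sdb1            /data       ext4    defaults                    0 2

# NAS via NFS: the server exports a filesystem over IP.
fileserver:/export   /mnt/nfs    nfs     defaults                    0 0

# NAS via CIFS (common in Windows environments).
//fileserver/share   /mnt/cifs   cifs    credentials=/etc/cifs.creds 0 0
```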
Posted By: Joe Kozlowicz | <urn:uuid:7d946e45-5e5b-45d3-a714-516a4d8e65eb> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/cloud-storage-defined-nas-san-or-das | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00135-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918136 | 415 | 2.625 | 3 |
When using a web browser, what should a user do to prevent private data from being stored locally?
In order to rename the directory ~/bilder/letzter-urlaub to ~/bilder/sommer-2011, which command line could be used?
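One command that performs this kind of rename is `mv`. The sketch below recreates the directory under /tmp so it is safe to run; in the question itself the paths would start with ~/bilder instead:

```shell
# Throwaway copy of the structure from the question.
rm -rf /tmp/bilder
mkdir -p /tmp/bilder/letzter-urlaub

# mv renames a directory when the target does not already exist.
mv /tmp/bilder/letzter-urlaub /tmp/bilder/sommer-2011

ls /tmp/bilder   # prints: sommer-2011
```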
Which of the following commands will output a list of all of the file names, under your home directory and all subdirectories, which have file names ending with .pdf?
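One tool that answers this kind of question is `find`. The demo below builds a throwaway tree under /tmp (the file names are illustrative); against a home directory the search would start at ~ instead:

```shell
# Build a small demo tree.
rm -rf /tmp/find-demo
mkdir -p /tmp/find-demo/docs
touch /tmp/find-demo/report.pdf /tmp/find-demo/docs/notes.pdf /tmp/find-demo/readme.txt

# -type f limits matches to regular files; quote the pattern so the
# shell does not expand it before find sees it.
find /tmp/find-demo -type f -name '*.pdf'
# prints the two .pdf paths, but not readme.txt
```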
Which of the following applications are popular Open Source relational database systems? (Select TWO correct answers)
Which of the following is a technology used to connect a hard drive directly to a computer’s motherboard?
Which command shows, amongst other information, the IP address of the current DNS server for a Linux system?
Which of the following programs is not a graphical web browser?
Which of the following services are used for network file systems? (Select TWO correct choices)
Which of the following software packages is a mail server?
Given a file called birthdays containing lines like:

YYYY-MM-DD Name
1983-06-02 Tim
1995-12-17 Sue

Which command would you use to output the lines belonging to all people listed whose birthday is in May or June?
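One way to express the May/June match uses `grep` with an extended regular expression. The sketch below rebuilds the sample file under /tmp and adds an extra May entry ("Ana") purely for illustration:

```shell
# Sample file from the question, plus one extra May line for the demo.
cat > /tmp/birthdays <<'EOF'
1983-06-02 Tim
1995-12-17 Sue
1990-05-21 Ana
EOF

# Dates are YYYY-MM-DD, so May and June appear as month 05 or 06.
grep -E '^[0-9]{4}-0[56]-' /tmp/birthdays
# prints the Tim and Ana lines, but not Sue's
```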
The setiQuest project has released its first open source code in an effort to have anyone interested help search for intelligent life out there.
The source code for Open SonATA is available on GitHub now and covers three programs:
The waterfall display shows the power for each bin in a subchannel. The power values are reported every .75 seconds. The brighter the dot, the stronger the power.
The baseline display shows the average power in each subchannel over channel bandwidth. The baseline is averaged over time and is reported every 15 or so seconds.

SonATA InfoDisplay:
The sonataInfoDisplay program uses a curses display window to present the status of the various system components such as the channelizers, dx's, archivers, etc.
Basically, the three programs help visualize the radio waves captured by the ATA - the Allen Telescope Array, an array of 42 — eventually, that will be 350 — small dishes in northern California.
Given the financial constraints on NASA and other government programs, it makes a lot of sense to open-source the Search for Extra-Terrestrial Intelligence. The more ears on the array, the more likely it is that if there is some signal, it will be caught. The SETI director explained there's a very small number of people involved in the process, in an article in The Space Review:
The challenge with increasingly complex SETI search techniques is that the resources and expertise of the SETI community is fairly limited. “The number of people in the world actively involved in SETI could fit into a phone booth,” said Jill Tarter, director of the Center for SETI Research at the SETI Institute.
Some might wonder how this differs greatly from 1999's SETI@home project, which some might call the older cousin to setiQuest. Simply, in the former project, your computer's resources are being used to help process data.
In setiQuest your brain's resources are being used. That seems to be a better deal for everyone involved - SETI, searchers and aliens alike. | <urn:uuid:20df70a3-4233-455c-9147-82bb1e29605f> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2227007/microsoft-subnet/open-sourcing-seti.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00163-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924461 | 422 | 2.59375 | 3 |
Rsync allows you to synchronize directories locally or between two servers. Rsync determines which files are different or new between two locations and then synchronizes the directories. Rsync can be used to copy individual files or entire directories.
In my examples I use the following Rsync command line options.
The -a option enables archive mode which will preserve file permissions, ownership, and modification times.
The -e option specifies the remote shell to use. This is most commonly set to ssh.
The -v option enables verbose mode which will display info while running.
The first thing to do is make sure that Rsync is installed on both servers that you will be performing an rsync on.
After you’ve installed rsync on both servers, navigate on each server to the location you plan to push and pull the files from. In my example I navigated to the /tmp directory and ran an ls -l command to see what files were in each directory. You can see on the server to the left that I have an ubuntu-15.10-server-i386.iso file that I’d like to copy over to the server on the right.
You can then either “push” or “pull” the files/directories from the server of your choice. This would be an example of pushing a file over to a server as the server on the right now has the file that the server on the left pushed over to it.
rsync -av -e ssh <file or directory> root@<destination serverIP>:<destination folder on receiving server>
In this example I “pull” the same file over from the server on the left. (I deleted the file first so I could pull it again.) Also note the port specification in my example: if your server runs ssh on a port other than 22, specify it with the same syntax shown in the command.
rsync -av -e "ssh -p <port>" root@<source serverIP>:<full path of file on server pulling from> <destination folder>
Now I’m going to show you how to rsync directories over. In this example I created a Test directory in my /tmp directory on the source server and created 2 files called file1 and file2 within it. After running the command, rsync will connect to the destination server and create the needed directories and sync the files accordingly.
rsync -av -e ssh <source server folder> root@<destination serverIP>:<destination folder on receiving server>
And there you have it: how to use Rsync to sync files and directories between two servers over ssh.
Earlier this year, a study from IDC and the National University of Singapore (NUS) predicted that enterprises will spend around $500 billion in 2014 on making fixes and recovering from data breaches and malware. In the past few months alone, we’ve seen Target reveal the cost of its recent breach could reach as much as $148 million. The figures are stark, but for the uninitiated the world of malware and its history is something of a mystery. So, where did they originate? How have they changed? And what does the future of malware look like?
To answer these questions we need to start from the beginning, with the Apple II computer. In 1982, a 15-year-old high school student named Rich Skrenta created the world’s first virus, called Elk Cloner. Attached to a game and spread via floppy disk, Elk Cloner attached itself to the Apple II operating system and, when triggered, displayed a seven-line poem informing the user they’d been infected.
Elk Cloner was a game changer. It was the first virus of its kind to spread “in the wild,” outside of a laboratory environment. It paved the way for a host of other malware, such as the Friday the 13th virus, which infected users on that date, or the Casino virus, which gave the infected user five credits to play with; win, lose or draw, they’d still have to re-boot their system and re-install their software.
As malware became easier to write, and as sites were created that gave people ‘off the peg’ viruses they could easily piece together, the evolution of malware started its first phase: it became accessible to all.
Advances in technology also played their part. The development of the CD-ROM and the dominance of Microsoft shifted malware from micro to macro, meaning attachments, documents, discs and programs all now posed a security risk. The CIH (or Chernobyl) virus in 1998, the Melissa virus in 1999 and Love Letter in 2000, which sent millions upon millions of messages worldwide with the subject “ILOVEYOU”, were all high-profile viruses that caused significant problems the world over.
What’s notable about these early types of malware is that they were largely created by young, tech-savvy individuals flexing their muscles and showing off rather than seeking any financial gain. With 300,000 pieces of malware now emerging every 24 hours, the bad guys have realized they can also cash in. Malware has moved from being the preserve of sophisticated techies to the weapon of choice for the savvy cybercriminal.
Ransomware, scareware and banking malware were all born of this new era, as attacks became targeted rather than random and the terminology of malware started to get ugly.
Today, organizations are exposed to a multitude of threats, from malware finding its way on to payment systems, as seen most recently with Home Depot, to state sponsored surveillance and zero-day threats. What we can say with certainty is that malware, and the people behind it, will constantly find new, sophisticated ways to expose and exploit holes in the system.
For businesses looking to mitigate the threat of malware, it’s no longer feasible to rely solely on antivirus technology. Earlier this year, Symantec’s senior VP of information security described antivirus as ‘dead’, estimating that it stops only 45% of cyber-attacks.
Next generation attacks need a next generation response: combining proactive and reactive security strategies to layer multiple mitigation controls. This defense-in-depth approach ensures that if an attacker defeats one security barrier, such as the perimeter firewall, there are preventative measures on the inside to contain the breach.
The best strategies are those that prioritize those controls with the biggest impact. Technologies such as privilege management and application whitelisting, along with regular patching and adopting standard configurations are named by SANS and the Council on Cyber Security among others, as the most effective ‘quick wins’ based on real-life attacks.
You can find out more about the history of cyber threats through our infographic.
Ransomware is the most profitable malware in history but, by taking a proactive approach, you can stop it in its tracks and prevent your business being hit.
This report covers what ransomware is, how it works, the damage it causes, predictions for the future and, crucially, how to prevent it with a simple yet effective security strategy. | <urn:uuid:af14dfde-f120-4fd7-af80-6c0c447bda4e> | CC-MAIN-2017-04 | https://blog.avecto.com/2014/11/malware-an-evolutionary-story/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00309-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960711 | 910 | 3.0625 | 3 |
Domain 4.0 Application, Data and Host Security
1. Fuzzing (4.1)
Fuzzing is a testing method that uses a brute force technique to send data to software or hardware inputs and then observe the response or reaction. The purpose is to discover programming or design flaws that need fixing or that can be exploited. Both security professionals and hackers use fuzzing tools. These tools can operate autonomously, crafting random or sequential input data sets in order to stress test a target.
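To make the idea concrete, here is a minimal brute-force fuzzing loop in Python. The target parser and its flaw are invented for illustration; real fuzzers such as AFL or libFuzzer add input mutation and coverage feedback on top of this same generate-and-observe loop.

```python
import random

random.seed(0)  # fixed seed so the demonstration is repeatable

def random_input(max_len=64):
    """Craft a random byte string, as a fuzzer would for a target's input."""
    length = random.randint(0, max_len)
    return bytes(random.randrange(256) for _ in range(length))

def fuzz(parse_fn, iterations=2000):
    """Send random data to the target and record every input that crashes it."""
    crashes = []
    for _ in range(iterations):
        data = random_input()
        try:
            parse_fn(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

def toy_parser(data):
    """Invented target with a hidden flaw: it chokes on NUL bytes."""
    if b"\x00" in data:
        raise ValueError("unexpected NUL byte")
    return len(data)

found = fuzz(toy_parser)
print(f"fuzzer found {len(found)} crashing inputs")
```

Each crashing input is recorded alongside the exception it triggered, which is exactly the evidence a tester (or attacker) would use to investigate the underlying flaw.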
2. Cross-site Request Forgery (XSRF) prevention (4.1)
Cross-site Request Forgery (XSRF) is an attack that takes advantage of a Web server’s trust in an authenticated client. Usually attacks of this type wait until a valid client authenticates to a server before launching and making command requests to the server as if it were the client. The flaw is the server assuming that an authenticated client will only make valid and reasonable requests. This is a bad assumption. Prevention of XSRF must take place at both the client and server. Clients should avoid risky behavior to prevent malware infection and run current anti-malware scanners. Servers should limit the abilities or functions clients can access and re-request authentication when a sensitive action is requested.
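On the server side, one widely used safeguard is a per-session anti-forgery token: a forged cross-site request cannot read the token, so it fails verification. The Python sketch below is a minimal illustration of the pattern; the secret key and session IDs are hypothetical.

```python
import hmac
import hashlib
import secrets

SECRET_KEY = b"hypothetical-server-side-secret"

def issue_csrf_token(session_id: str) -> str:
    """Bind a random nonce to the session with an HMAC and hand it to the client."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(SECRET_KEY, f"{session_id}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return f"{nonce}:{sig}"

def verify_csrf_token(session_id: str, token: str) -> bool:
    """Reject any request whose token was not issued for this session."""
    try:
        nonce, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, f"{session_id}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_csrf_token("session-abc")
print(verify_csrf_token("session-abc", token))   # True: legitimate request
print(verify_csrf_token("session-xyz", token))   # False: token bound to another session
```

In practice a web framework embeds the token in each form and checks it on every state-changing request; the constant-time comparison (compare_digest) avoids leaking the signature through timing.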
3. Cable locks (4.2)
Cable locks are an important part of portable device physical security. A cable lock secures a notebook or other device to a less mobile (preferably immovable) object using a looped cable and a lock that connects into a K-slot on the device. Cable locks are not insurmountable; a good set of wire cutters or an adept lock picker may be able to bypass the protection. However, the presence of a cable lock forces additional effort in a theft attempt, thus reducing the attack's success rate.
4. Mobile Devices (4.2)
There are several mobile device specifics on the objectives list, including screen lock, strong password, device encryption, remote wipe/sanitation, voice encryption, and GPS tracking. Most of these are standard security issues on desktops and notebooks. Smartphones are somewhat more vulnerable to remote wiping and GPS tracking. Often, these services must be installed and configured prior to a loss or theft event.
5. Data Loss Prevention (DLP) (4.3)
Data Loss Prevention (DLP) is the overall plan and policy focus for preventing data from being disclosed to unauthorized entities, especially outsiders and hackers. Most of the efforts related to data access, encryption, tracking, and confidentiality protection are part of the DLP solution. Demonstrating sufficient DLP is also an important part of regulatory compliance.
Domain 5.0 Access Control and Identity Management
1. Common Access Card (5.2)
The Common Access Card (CAC) has been commonly used by the government and military of the USA since the early 2000s; however, CACs are found in many private companies as well. The CAC is a smart card commonly used to control physical and logical access into a secured environment. It often consists of a photo ID, smart card technologies, and proximity mechanisms (such as RFID).
2. Personal identification verification card (5.2)
A personal identification verification (PIV) card is a more generic form of a CAC. It is any form of ID card that can be used to confirm or check someone’s identity. A PIV could refer to a driver’s license, an access badge, a photo ID, or a visitor’s badge, etc.
Domain 6.0 Cryptography
1. Miscellaneous Cryptography Items (6.0)
This domain contains several new topics not included on the previous exam. The topics are not new to the IT security realm, as they are standard elements of most cryptography discussions. They include: block vs. stream ciphers, transport encryption, WEP vs. WPA/WPA2 and pre-shared keys, RIPEMD, HMAC, RC4, Blowfish, whole disk encryption, Twofish, SSL, TLS, IPsec, SSH, HTTPS, and PKI.
The descriptions and definitions of some of the new Security+ topics listed here are designed to pique your interest. This is not an exhaustive coverage of these issues, but they point to a larger discussion of security topics that require greater context. | <urn:uuid:5006a965-e838-4ebf-9976-8da4d43a34c7> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2012/04/04/new-topics-on-security-2011-sy0-301-from-domains-4-0-6-0/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00547-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927723 | 908 | 2.9375 | 3 |
It’s well understood that the battery in a UPS is the most vulnerable part of the system. In fact, battery failure is a leading cause of load loss. Knowing how to maintain and manage UPS batteries will extend their life and save data center managers time and trouble in the future.
Improvements in battery technology have been evolutionary rather than revolutionary. Capabilities such as advanced charging regimens, software management for accurate remaining life information and firmware adding intelligence to batteries have reduced, but not eliminated, the risks inherent in any battery. As a result, it’s prudent if not essential to take a close look at what may be increasing the risk of unexpected load loss from a failing UPS battery. After all, even large installations with many batteries are vulnerable to the failure of a single battery.
UPS Battery Overview
There are primarily two kinds of batteries in UPSs: valve-regulated lead acid (VRLA), also known as sealed or maintenance free, and wet cell (also called flooded cell or vented cell). VRLA batteries usually have lower up-front costs but a shorter lifetime than wet cell—usually around five years. Wet-cell batteries require more-advanced maintenance but have a longer lifetime—up to 20 years.
VRLA batteries are sealed, usually in polypropylene plastic. They were developed because they have the advantage of containing no sloshing liquid that might leak or drip out when inverted or handled roughly. The term valve regulated refers to the method of gas release. If the gas pressure becomes too great inside the battery, the valve will vent when it reaches a certain pressure.
During the charging of a lead-acid battery, hydrogen is normally liberated. In a vented battery, the hydrogen escapes into the atmosphere. In a VRLA battery, the hydrogen recombines with oxygen, so water loss is minimized. Under normal float conditions, approximately 98% of the hydrogen and oxygen recombines. Resealable valves vent nonrecombined gases only when pressure exceeds a safety threshold.
A VRLA battery is distinguishable from a flooded-cell battery by the rate at which oxygen is evolved from the positive plate and diffused to the negative plate, ultimately forming water. This rate is several orders of magnitude faster than in a flooded-cell battery. Because water can’t be added, its recombination is critical to the life and health of a VRLA battery. Any factor that increases the evaporation rate or water loss—such as ambient temperature and heat from the charging current—reduces the battery life.
Wet Cell/Flooded Cell
Wet-cell/flooded-cell batteries have thick lead-based plates that are flooded with an acid electrolyte. This design is highly reliable: failures normally don't occur until halfway through their 20-year prorated life, at which time the failure mode is most often a short circuit. This situation is not an extreme emergency, because any one shorted cell affects overall reserve time by only a very small percentage. Although they're very reliable and have a long life, however, wet-cell batteries do have downsides: they require more safety measures and a space-consuming separate battery room.
Regardless of the differences in UPS battery types, both require monitoring and maintenance to ensure maximum life and system availability.
Battery Arrangement and Power
In most UPSs, you don’t use just one cell at a time. They’re normally grouped together serially to create higher voltages or in parallel to deliver higher currents. In a serial arrangement, the voltages add up. In a parallel arrangement, the currents add up.
But battery behavior is not perfectly linear, as the graphic depicts. For example, all batteries have a maximum current they can produce; a 500 milliamp-hour battery can't produce 30,000 milliamps for one second, because there's no way for its chemical reactions to happen that quickly. It is also important to realize that at higher currents, batteries can produce a lot of heat, wasting some of their power.
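The additive rules are straightforward to express in code. The cell and string values below are hypothetical, but the arithmetic mirrors what a UPS battery-string designer does: multiply per-block voltage by the series count and per-string current by the parallel count.

```python
def series_voltage(block_voltage_v: float, blocks_in_series: int) -> float:
    """In a series string, voltages add."""
    return block_voltage_v * blocks_in_series

def parallel_current(string_current_a: float, strings_in_parallel: int) -> float:
    """Across parallel strings, currents add."""
    return string_current_a * strings_in_parallel

# Hypothetical example: forty 12 V VRLA blocks in series give a 480 V DC bus;
# two such strings in parallel double the available current.
print(series_voltage(12.0, 40))    # 480.0
print(parallel_current(50.0, 2))   # 100.0
```

The same two multiplications, applied in reverse, tell you how many blocks and strings are needed to hit a target bus voltage and load current.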
Like all batteries, UPS batteries are electrochemical devices. A UPS employs a lead-acid storage battery in which the electrodes are grids of lead containing lead oxides that change in composition during charging and discharging, and the electrolyte is dilute sulfuric acid. In other words, they contain components that react with each other to create DC electrical current. These components are the following:
- Electrolyte—The medium (comprising purified water and sulfuric acid) that provides the ion-transport mechanism between the positive and negative electrodes of a cell, immobilized in VRLA batteries, and in liquid form in flooded-cell batteries
- Grid—A perforated or corrugated lead or lead-alloy plate used as a conductor and support for the active material
- Anode—The terminal where the current flows in
- Cathode—The terminal where the current flows out
- Valve (used in VRLA batteries)—A means to vent the buildup of gas that goes beyond predetermined levels
- Separator—A device for the physical separation and electrical isolation of electrodes with opposing polarities
- Jar—The container holding the battery components
Shelf Life and Storage
To improve service-life expectations and reliability, it’s important to ensure that the batteries are properly stored before being installed and placed into service. Storage facilities should be climate controlled with proper ventilation capabilities so batteries can remain cool and dry. Failure to comply with proper storage leads to shortened run times and reduced capacity.
A rule of thumb in terms of time is no more than six months of storage in a properly designed storage facility. When the battery system is commissioned, an acceptance test should be performed to identify any flaws in the manufacturing process, improper storage or perhaps even hidden damage. Environmentally controlled storage facilities are recommended.
The rated capacity of a lead-acid battery is based on an ambient temperature of 77°F. It’s important to realize that any variation from this operating temperature can alter the battery’s performance and shorten its expected life.
For every 15°F average annual temperature above 77° F, the life of the battery drops by 50 percent. Ambient temperatures below 77°F may reduce the battery backup time, similar to a car battery on a cold morning.
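Because the 50%-per-15°F rule compounds, it is easy to turn into a quick service-life estimator. The sketch below assumes a hypothetical 5-year-rated VRLA battery; it is a rule-of-thumb calculation, not a substitute for manufacturer derating data.

```python
def expected_battery_life_years(rated_life_years: float, avg_temp_f: float) -> float:
    """Halve the rated service life for every 15 degrees F above the 77 F reference.

    Temperatures at or below 77 F are treated as full rated life here; in
    practice, cold reduces available runtime rather than service life.
    """
    excess = max(0.0, avg_temp_f - 77.0)
    return rated_life_years * 0.5 ** (excess / 15.0)

print(expected_battery_life_years(5.0, 77.0))   # 5.0  -> full rated life
print(expected_battery_life_years(5.0, 92.0))   # 2.5  -> one 15 F step above reference
```

Running a 5-year battery in a 92°F room, in other words, roughly halves its useful life, which is why battery-room cooling is treated as a reliability measure rather than a comfort feature.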
UPS batteries are electrochemical devices whose ability to store and deliver power slowly decreases over time. Even if you follow all the guidelines for proper storage, usage and maintenance, batteries still require replacement after a certain period.
Positive-grid corrosion has been the most common end-of-life factor for UPS batteries. This problem is a result of the normal aging process owing to UPS battery chemistry and involves the gradual breakdown of the inner wires of the positive grid inside the battery.
During brownouts and blackouts caused by utility failure, the UPS operates on battery power. Once the utility power is restored, the battery is recharged for future use. This entire loop is called a discharge cycle. Each discharge and subsequent recharge reduces the relative capacity of the battery by a small amount. The length of the discharge cycle will determine the rate of reduction in battery capacity.
A good analogy is a loaf of bread. It can be sliced into many thin slices, or a few thicker slices. You still have the same amount of bread either way. Similarly, a UPS battery’s capacity can be used up over a large number of short cycles or fewer cycles of longer duration.
Lead-acid chemistry, like others used in rechargeable batteries, can only undergo a maximum number of discharge/recharge cycles before the chemistry is depleted. Once the chemistry is depleted, the cells fail and the battery must be replaced.
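The loaf-of-bread analogy implies a simple first-order model: total energy throughput is roughly fixed, so shallower discharges yield proportionally more cycles. Manufacturers' real cycle-life curves are nonlinear, so treat the numbers below as invented, back-of-the-envelope figures.

```python
def cycles_available(rated_full_cycles: float, depth_of_discharge: float) -> float:
    """First-order estimate: a fixed total throughput divided into slices.

    depth_of_discharge is a fraction (1.0 = full discharge each cycle).
    """
    if not 0.0 < depth_of_discharge <= 1.0:
        raise ValueError("depth of discharge must be in (0, 1]")
    return rated_full_cycles / depth_of_discharge

print(cycles_available(200, 1.0))    # 200.0 full discharges
print(cycles_available(200, 0.25))   # 800.0 quarter discharges
```

The practical takeaway is the same as the analogy's: many short outages and a few long ones can consume a battery's cycle life equally fast.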
The most important thing is to maximize your UPS uptime. With proper checks and maintenance, the end of battery life can be estimated and replacements can be scheduled without any interruption or loss of backup power. Without regular maintenance or checkups, your UPS battery may experience heat-generating resistance at the terminals, improper (unbalanced) loading, reduced power protection and premature failure.
Even though sealed batteries are sometimes referred to as maintenance free, they still require scheduled maintenance and service. Maintenance free simply refers to the fact that they don’t require you to add water.
Monitoring and Management
In a perfect world, you’d be able to monitor your batteries and IT infrastructure equipment constantly, making sure they’re protecting critical power and running efficiently. That’s not reality, though, as your team has more to tend to than just the IT environment. That’s where a comprehensive monitoring and management platform can make a big difference, helping to qualify the factors listed above and act as a second set of eyes on your equipment.
Using a next-level monitoring and management service can be helpful for collecting and analyzing data from batteries and other power-infrastructure devices, providing the insight needed to make recommendations and act on your behalf. It means continuous monitoring of your batteries, time savings and peace of mind that your batteries are covered.
Batteries are a critical part of the UPS, and determining battery life can be tricky. It’s a specification that’s often promoted on the basis design life, defined as how long the battery can be expected to perform under ideal conditions. For more information on UPS battery maintenance, visit www.eaton.com/upsbatteries and download the UPS battery handbook.
About the Author
Ed Spears is a product-marketing manager in Eaton’s Critical Power Solutions Division in Raleigh, North Carolina. A 36-year veteran of the power-systems industry, Ed has experience in UPS-systems testing, sales, applications engineering and training—as well as working in power-quality engineering and marketing for telecommunications, data centers, cable television and broadband public networks. He can be reached at EdSpears@Eaton.com, or find more information at www.powerquality.eaton.com. | <urn:uuid:2e88bad9-d93b-45af-90ec-01be560189cb> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/large-ups-battery-considerations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00117-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93328 | 2,057 | 3.375 | 3 |
By Matthew Bickford
Automatic transmissions are becoming more popular on vehicles today. Eighty-eight percent of light vehicles manufactured in 1999 were equipped with an automatic transmission. Although this share is only a few percentage points higher than it was a few years earlier, it applies to a larger base of new light-vehicle sales. In 1999, 18.49 million new vehicles were sold in Canada and the United States; applying the higher factory installation rate of automatic transmissions to those sales represents 16.27 million vehicles equipped with automatic transmissions.
There are several reasons automatic transmissions are becoming more popular. The main reason is that automatic transmissions today perform better than they did in the past. Many technological breakthroughs have made automatic transmissions perform as well as manual transmissions, and for this reason customers now prefer them.
Technical Advancements to Automatic Transmissions
Automatic transmissions today are very complex, as they are controlled by advanced electronics. These advanced electronics prevent the transmission from hunting between gears and make shifts smoother. With the use of electronics there are fewer mechanical parts, so automatic transmissions now weigh less; improved castings and lighter metals such as aluminum also contribute to the weight reduction. Improved fuel economy is another benefit of the advanced electronics and lower weight.
I was recently reading about an exciting new application introduced by our business partner Iridium, which enables airlines to fly directly over the Earth's polar regions instead of around them. This flight path was not possible before an innovator like Iridium made it so, and it got me thinking about other life-saving or life-enhancing applications or solutions (miraculous solutions, if you will) that could be enabled by M2M connections using always-on, hybrid cellular/satellite connectivity. Perhaps the term "miraculous" is a leap, but there are certainly a number of intriguing applications of this technology that either create massive efficiencies or keep people out of harm's way.
For example, one could easily envision an M2M-enabled, rugged device that could be dropped into the "hot zone" of a forest fire to evaluate the exact conditions and assess the heat, size and spread of the blaze — before a helicopter or other air support could get there. Armed with this knowledge, fire fighters could, for example, pinpoint the most effective water drop area, optimize location of fire barriers and help first responder aircraft avoid the hottest areas of the fire or the heaviest smoke fields. Or, think of the inevitable marine search that seems to happen annually in nearly every high-traffic coastal region. What if all recreational boats, or even each person on these boats, were equipped with an M2M-based safety device, designed to transmit location information in the event of emergency? How many more lives might be saved if rescuers were able to hone in on a distress signal?
In each of these cases, M2M delivers a multilateral benefit. It provides an added measure of peace-of-mind and safety to the human players, decreases the cost of potential search and rescue missions and more than likely improves the outcome of a rescue. In fact, it could easily be said that M2M changes the very nature of Search and Rescue, altering it from a hunt-and-peck operation to one of surgical precision.
Thinking back to the earthquake in Japan and the ensuing nuclear crisis, consider the role that un-manned remote sensors could have played in determining the state of the breached reactor and defining its danger zones. With M2M, human technicians could have maintained a safe distance from the source of radiation while measuring the exact ground conditions simultaneously in real-time. There would not have been such a flurry of speculation and panic in those days following the quake and perhaps the extent of the disaster could have been contained with earlier knowledge of what was truly occurring on the ground.
On a lighter note, for those of us old enough to remember the surreal, low-speed O.J. Simpson chase prior to the even more surreal arrest and trial, think of how many taxpayer dollars could have been saved if it was not a police helicopter and the corresponding parade of police cars following behind O.J.'s SUV, but rather a M2M-enabled camera mounted on a remote-controlled drone aircraft communicating wirelessly to police on the ground to optimize the location of a road block. With M2M rising, the days of these easily deployable air vehicles, whose sole purpose is to be an omniscient eye in the sky for municipal and state law enforcement, are not far off. Like it or not – when used for the greater good, I consider these miracle applications.
By Stein Soelberg, Director of Marketing
Stein leads a team whose responsibility is to own the branding, advertising, customer engagement, loyalty, partnership and public relations initiatives designed to propel KORE into the 21st century. With over 15 years of technology marketing experience in the business to business software, Internet services and telecommunications industries, Stein brings a proven track record of launching successful MVNOs and building those brands into leaders. | <urn:uuid:0fa7c4fb-c8d6-4163-8787-b29dfe9947c4> | CC-MAIN-2017-04 | http://www.koretelematics.com/blog/m2m-unlocks-the-art-of-the-possible | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00419-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953121 | 772 | 2.859375 | 3 |
In four years, NASA will pick an asteroid to capture and just a year later it expects to launch a robotic spacecraft to direct the asteroid into an orbit around the moon.
That's the latest plan NASA announced today to capture and redirect an asteroid for study.
"Observing these elusive remnants that may date from the formation of our solar system as they come close to Earth is expanding our understanding of our world and the space it resides in," said John Grunsfeld, associate administrator for NASA's Science Mission Directorate. "Closer study of these objects challenges our capabilities for future exploration and will help us test ways to protect our planet from impact."
NASA scientists have been working on a plan to use robotics to get close enough to study an asteroid some time in the 2020s. By capturing an asteroid and examining its makeup, scientists hope to better prepare to send humans to Mars.
The space agency said its scientists are focusing on two different concepts. One entails capturing a "very small asteroid" in open space, the other envisions collecting a boulder-sized sample off a much larger asteroid.
NASA is expected to decide which course to take sometime this year.
Both concepts call for the robotic spacecraft to redirect an asteroid less than 32 feet in size into the moon's orbit.
Once the asteroid is in a steady orbit, a team of astronauts would blast off in an Orion spacecraft atop a heavy lift rocket. Once in space, the crew would set off on an expected nine-day journey to the asteroid. After docking the spacecraft with the robotic capture vehicle, astronauts would conduct spacewalks to explore the asteroid and take samples.
"With these system concept studies, we are taking the next steps to develop capabilities needed to send humans deeper into space than ever before, and ultimately to Mars, while testing new techniques to protect Earth from asteroids," said William Gerstenmaier, associate administrator for NASA's Human Exploration and Operations Mission Directorate.
NASA has been looking to launch a plan to capture a near-Earth asteroid that could weigh as much as 500 tons. Engineers expect it could happen as early as 2021.
Astronomers used NASA's Spitzer Space Telescope to spot an asteroid, dubbed 2011 MD. So far, it appears to be a good candidate for capture, but the space agency continues to look for others.
So far, nine asteroids have been identified as potential candidates for the mission.
"This mission represents an unprecedented technological feat that will lead to new scientific discoveries and technological capabilities and help protect our home planet," NASA Administrator Charles Bolden had said in an earlier statement. "We will use existing capabilities, such as the Orion crew capsule and Space Launch System rocket, and develop new technologies like solar electric propulsion and laser communications -- all critical components of deep space exploration."
NASA's proposed $17.5 billion fiscal 2015 budget, released in March, sets aside money to send humans to Mars by the 2030s and to study near-Earth asteroids.
The space agency, though, is looking to launch another robotic asteroid mission -- and this one is set to launch much sooner.
NASA announced in 2011 that it's working to send a robotic spacecraft to an asteroid in 2016 in an effort to help scientists discover how life began.
The $800 million mission, which will call on a robot to collect pieces of an asteroid -- named 1999 RQ36 -- will be the first U.S. mission to carry asteroid samples back to Earth. The spacecraft is scheduled to reach the asteroid, which is about the size of five football fields, by 2020 and then return to Earth with samples in 2023.
Asteroids are leftovers formed from the cloud of gas and dust that collapsed to form our sun and the planets about 4.5 billion years ago. Scientists calculate that they contain original planet- and star-forming material, which they hope can tell us about the conditions of our solar system's birth.
Since asteroids are thought to have changed little over time, they likely represent a snapshot of our solar system's infancy.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com.
This story, "NASA to Launch Asteroid-Grabbing Spacecraft in 2019" was originally published by Computerworld. | <urn:uuid:50d86f3b-1c48-4c9a-a31f-b61deca345af> | CC-MAIN-2017-04 | http://www.cio.com/article/2375313/hardware/nasa-to-launch-asteroid-grabbing-spacecraft-in-2019.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00327-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939613 | 919 | 4.1875 | 4 |
Over the past few years, the public sector spent considerable time and money making myriad transactions available to the public via the Internet.
People appreciate the convenience, and they appreciate "their" government responding to their wants. The public sector is slowly changing people's perception by creating an image of government agencies that care about their customers and want to nurture the relationship.
Some governments now use the Web to provide information to help citizens become smarter consumers. Public agencies have long collected price and performance data from a wide range of industries, but rarely made it available in a user-friendly form for average citizens. Agencies at all governmental levels are beginning to offer online services that help customers make better decisions on everything from gasoline purchases and investing to choosing hospitals and schools.
That's new ground. In the past, information tended to flow one way: from citizens and businesses to government. Society gained because the data was used to ensure compliance with environmental, safety and fairness laws, as well as other regulations. But many citizens felt little direct benefit from this activity.
Nowadays, some governments are doing more to help people as they go about their lives, doing the seemingly million and one things they must accomplish on any given day.
Florida's Legislature passed the Affordable Health Care for Floridians Act in 2004. The bill directed state government to implement a consumer-focused, transparent health-care delivery system in the state. The bill also stipulated that the state create a mechanism to publicly report health-care performance measures and distribute consumer health-care information.
Florida's Agency for Health Care Administration (AHCA) is revamping its Web presence to report and distribute health-care data to consumers. One expanded site, floridahealthstat.com, delivers health-care data collected by the AHCA's State Center for Health Statistics to consumers.
The site is designed to make it easy for health-care consumers, purchasers and professionals to access information on quality, pricing and performance. One such tool is Florida Compare Care
, which was launched in November 2005.
Through the Compare CareWeb site, Florida now reveals data on infections, deaths, complications and prices for each of its 207 hospitals. Residents can use the site to compare short-term-care hospitals and outpatient medical centers in various categories, such as length of stay, mortality, complications and infections.
The Web site lists hospitals' rates of medical problems in seven categories, and provides patient death rates in 10 areas, including heart attacks, strokes and pneumonia.
When the AHCA started devising Florida Compare Care, the agency turned to the Comprehensive Health Information System (CHIS) Advisory Council for help. The council and various CHIS technical workgroups, which include hospital representatives and various other stakeholders, were involved in the Web site's development from the very beginning, said Toby Philpot, the AHCA's deputy press secretary.
"The workgroups have studied the technical issues of reporting performance data, as well as discussed the most appropriate options for reporting and displaying the information on the Web site," Philpot said.
Creating a Web site such as Florida Compare Care is dicey because of the complex nature of the information being presented, he said.
"Because of their expertise, some hospitals treat more high-risk patients," Philpot explained. "Some patients arrive at hospitals sicker than others, and often, sicker patients are transferred to specialty hospitals. That makes comparing hospitals for patients with the same condition but different health status difficult."
To get the most accurate data on the Web site, he said, each hospital's data is risk adjusted to reflect the score the hospital would have if it provided services to the average mix of sick, complicated patients.
The risk adjustment is performed by 3M Corp.'s All Patient Refined-Diagnosis Related Groups.
"This adjustment should allow comparisons between facilities that reflect the differences in care delivered, rather than the differences in the patients," Philpot said.
State Rep. Frank Farkas, a chiropractor, said he sponsored the Affordable Health Care for Floridians Act, in part, to help health-care consumers.
"High on the list was transparency -- and transparency in a couple of forms," Farkas said. "One was being able to shop price comparisons for pharmaceutical drugs. The second part was hospital outcomes for mortality and infection rates. This is all information that our department had that was being given to them by the pharmacies and hospitals, but nothing was ever done with it."
The most difficult part of creating the Florida Compare Care Web site was creating consumer-friendly information out of federal reporting data, Farkas said, explaining that such standards didn't hit the level of detail needed for Florida's new Web site.
He cited infection rate data as a prime example because it was difficult for the AHCA to determine whether patients came in with infections prior to admission or developed infections while in the hospital.
"The way it's measured right now, it just shows an infection rate for the hospital, but it doesn't break it out."
The federal government, which was redesigning a form hospitals use to report data to states, added a new data entry point to extrapolate infection rate information, he said.
Interestingly enough, collecting the information didn't create much of a hardship for the AHCA, which had been gathering medical data all along.
"The information was required, yet the AHCA was never required to do anything with it," Farkas said. "It was basically information that I'd say was useless because you're requiring hospitals to provide it, but it was just reams of paper sitting in a room."
The Florida Hospital Association (FHA) supports the creation of Florida Compare Care, as well as other transparency issues in health care, because of the state's approach, said Rich Rasmussen, the FHA's vice president of strategic communications.
"What Florida tried to do was include everybody in this transparency effort, so that we'd have transparency on the pharmaceutical side, the hospital side, the health plan side and the physician side," Rasmussen said.
Though the Florida Compare Care site helps individual consumers, Rasmussen said the benefits extend to wider audiences, such as employers, health planners, health plans, insurance companies, hospitals and the FHA itself.
"We purchase that data and we customize it for our members," he said. "If you're trying to do strategic planning and want to look at, for example, the health disparities in the community, we can do that. If you wanted to find out how many heart procedures were performed in a certain ZIP code, we can give you that. All of that information is very helpful when you're doing that planning."
Employers can make excellent use of this information too. The transparency effort enjoyed strong support from the business community, Rasmussen said, because companies view the data as a valuable tool for large purchasers of health-care services.
The next phases of the project will incorporate similar information on health plans and physicians to the FloridaHealthStat site, he added.
"Most employers don't shop hospitals; most consumers don't shop hospitals," Rasmussen said. "But they shop health plans. So having good information out there will help consumers and employers know what the out-of-pocket [costs] will be for their employees, what the co-payments and deductibles are going to be, what the exclusions are going to be, what performance measures are used by health plans.
"As we roll more of this information out, consumers -- and I lump into that group purchasers, such as businesses -- will have a better idea of the total continuum of services they're buying."
Florida's new approach to providing health-care data came about because of the trend toward consumer-directed health care, explained Farkas.
"We're seeing a huge increase in the amount of health savings accounts being sold nationwide," Farkas said. "People are using their own money. We want to make sure they're getting information to make good decisions. It's interesting health care is the only industry that you've not been able to price shop or even quality shop.
"You've never been able to really see how safe a hospital is compared to other hospitals; which doctors had the best outcomes; how much they charge for elective procedures -- those are things you're going to start seeing on this Web page."
This is part of a larger movement in which the public sector is enabling constituents to choose service providers, said Bill Eggers, global director for Deloitte Research, Public Sector, and a senior fellow at the Manhattan Institute for Policy Research.
"Yet to have choice, you need good information about price, quality and performance, and that was typically not available for many public services, whether it was education, social services or health care," Eggers said. "Choice became very, very difficult to implement well in practice."
The Internet, of course, makes it much simpler for the public sector to take the complex information it stores, package that data and present it in a way that helps consumers.
At the federal level, Eggers said, the U.S. Securities and Exchange Commission (SEC) embarked on an effort to give better informational tools to investors and analysts.
"The SEC is putting together a lot of technology initiatives to change the way public companies, mutual funds and so forth disclose financial data," Eggers said.
In early March 2006, the SEC announced it would hold a series of roundtables throughout 2006 at its headquarters in Washington, D.C., to discuss the best way to hasten implementation of these new Internet tools.
The roundtables also will review first-year results of a pilot that has been using interactive data from company filings with the SEC to let Internet users search and use individual data from financial reports, such as net income, executive compensation or mutual fund expenses. Currently financial information is generally presented in the form of entire pages of data that can't easily be separated by people reviewing the data from PCs.
The rationale for the SEC's initiative is that if investors have better information to make choices, the market will be improved because companies and mutual funds will have to improve performance to attract investor attention.
"It's yet another example of something where you had a marketplace and a lot of information," Eggers said. "But now the SEC, which was holding a lot of different information, has the ability to improve the marketplace by putting the information online and nudging some of the mutual funds and companies to do so also."
Local governments also collect a wealth of information that can help constituents make the most of their buying decisions.
Nearly five years ago, Westchester County, N.Y., posted a gasoline price database on the Department of Consumer Protection's Web site, said John Gaccione, the department's deputy director.
"The department always conducted a gasoline price survey -- it was a random sample -- and then we would release an average price," Gaccione said, noting that the county executive and the director of the Department of Consumer Protection wanted to modernize the existing process by putting the survey information online.
"It [the site] also allows us to give trends in prices or show stations in a particular area or ZIP code so that a consumer is empowered with information and can make better choices, or choices that better fit their needs," Gaccione said.
The department surveys 400 gas stations bi-monthly and informs consumers on which stations have the best prices. The online database also provides information about specials, such as "cents off" days, and stations that offer diesel fuel.
Gaccione said people told the department they appreciate the service, recalling that some feedback indicated consumers were surprised by the database's availability.
"It gave them access to information they didn't even know existed," he said, explaining the constituents' surprise. "Second, it allowed them instant access to that information. There are people, traveling salesmen, that if they know they can go to a certain place and gas up their car and save 10 cents a gallon, that's something they're going to rely on in the course of their everyday life or business."
It's something people want from the public sector, he said.
"You can get a sense that there's a growing expectation of, 'If government is collecting this data and it can help the average person, get it out there.'" | <urn:uuid:cc97a23e-cec6-4471-8871-c7cd7262e946> | CC-MAIN-2017-04 | http://www.govtech.com/featured/99395379.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00540-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966224 | 2,533 | 2.84375 | 3 |
As the FCC and public safety officials work to define and build out the next-generation 911 infrastructure that will be capable of providing first responders with detailed situational awareness of the scene of an emergency, universities, localities and states have begun building and using some of the pieces envisioned in the completed whole. Providing beneficial information to first responders when a 911 call is received could decrease response times and increase safety.
Numerous projects are under way to increase the flow of information between governments and citizens. For example, state and local governments, including Delaware and two Georgia cities, are encouraging citizens to verify their address and register basic medical information with them to facilitate emergency assistance. And universities have begun deploying platforms that allow users to text 911.
Now a California school district is piloting a smartphone application that brings all those capabilities together and allows students to reach out for emergency help with a touch of a button. So far 12 students in the Alhambra Unified School District are testing the app, called SafeKidZone, which creates a personal safety network for children that quickly connects to 911 when they push the “panic button.”
Go to Emergency Management to read more about the emergency app. | <urn:uuid:13d15e62-bbbb-4a55-a661-f005a42c6d09> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/Smartphone-Panic-Button-Connects-Children-911.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00172-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933788 | 238 | 3.359375 | 3 |
NASA this week said it was considering a new Centennial Challenge: Build and airship capable of long duration flight for scientific missions.
The agency issued a Request For information to see if there was enough industry interest in the challenge and to further develop rules for the competition. You may recall that NASA’s Centennial Challenges Program sets up challenging contests for the public, academia, and industry with an eye towards developing innovative technologies.
+More on Network World: The most momentous tech events of the past 30 years+
In this case the so-called 20-20-20 Airship Challenge would award seed money to the first 10 Teams to present and pass an Airship scalability review (~$20K per team). The Challenge would award prizes for successful demonstration of a stratospheric airship that would be required to accomplish the following:
• Reach a minimum altitude of 20 km.
• Maintain the altitude for 20 hours (200 hours for Tier 2 competition)
• Remain within a 20 km diameter station area (and navigate between two designated points for Tier 2)
• Successfully return the 20 kg payload (200 kg for Tier 2 competition) and payload data.
• Show Airship scalability for longer duration flights with larger payloads through a scalability review.
“There are few opportunities for space missions in astronomy and Earth science. Airships (powered, maneuverable, lighter-than-air vehicles that can navigate a designated course) could offer significant gains in observational persistence over local and regional areas, sky and ground coverage, data downlink capability, payload flexibility, and over existing suborbital options at competitive prices. We seek to spur a demonstration of the capability for sustained airship flights as astronomy and Earth science platforms in a way that is complementary with broad industry interests,” NASA stated.
+More on Network World: NASA pondering two public contests to build small space exploration satellites+
The proposed prize structure for this competition is: Award 1-- A proposed $1.0M will be split between teams successfully completing Tier 1 within 3 years of the challenge initiation. A possible scenario for splitting the Tier 1 prize money is 4 prizes of $500k, $250k, $125k and $125k, starting from the first to demonstrate to the fourth.
Award 2: A proposed $1.5M will be awarded to the first successful demonstration of Tier 2 within four years of challenge initiation.
Check out these other hot stories: | <urn:uuid:7bf704bf-5268-4598-aa20-e4d109e18b9f> | CC-MAIN-2017-04 | http://www.networkworld.com/article/3048272/careers/nasa-competition-could-net-you-1-5m-for-next-great-airship.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00172-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920003 | 506 | 2.546875 | 3 |
The Internet of Things could be the next big thing in tech: A world of connected devices, from thermostats, refrigerators to enterprise tools like fleets of cars and data center switches.
But it could also turning into a real headache for security folks.
“The more data there is, the more opportunity there is for something to go wrong,” says Christopher Budd, a global threat communications manager at Trend Micro. Budd says a world of unsecured connected devices “scares the living crap out of me.”
+MORE AT NETWORK WORLD: This is what the new hybrid cloud looks like +
“The kind of data we’re talking about is a lot scarier and a lot more meaningful than the information that has typically been stored on a PC five or 10 years ago,” he says. “As the devices connected to the Internet get closer to the physical person, the information they collect becomes more personal.” And so therefore the loss or illicit use of that data is that much more personal.
Think about it: imagine a world of connected location devices that track where a person is. An unsecure connection of that device could make location data readily available for anyone to see. imagine robbers checking online to see when their victim isn’t home. Or worse, an attacker being able to pinpoint the location of their victim to track them down.
On the business side, it could be just as bad. Devices can report a plethora of data out into the web. That’s a breeding ground for hackers to launch “beachhead” attacks to penetrate corporate firewalls and carry out even more sophisticated attacks.
Imagine an organization has dozens, if not hundreds or thousands of sensors out in the field all reporting back to a central repository. How easy could it be for a hacker to disguise a malware, virus or other threat into that stream of data and creep behind a corporate firewall? Even if there are strong network protections, what if a worker brings an infected FitBit into the office and plugs it into their work desktop? “It’s pretty scary to extrapolate all the things that we can expect,” says Budd. | <urn:uuid:80892b16-f44c-4c7b-9aae-1583f439c7ee> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2451607/internet-of-things/internet-of-things-why-the-internet-of-things-scares-the-living-crap-out-of-this-security-guru.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00080-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937151 | 451 | 2.6875 | 3 |
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
The report was commissioned by the US Department of Homeland Security to examine the lessons learned in fighting the Conficker worm, designed to create a botnet.
The report records the events surrounding the creation and operation of the Conficker Working Group (CWG) so that it could be used as a model in future.
The CWG grew out of an informal coalition of security researchers working to resist the world's largest known computer worm infection.
Click here to down the full report on Conflicker from the Conflicker Working Group. (Requires registration)
Despite a few errors, the report found the CWG was successful in preventing Conficker's author from gaining control of the botnet through an "unprecedented act of co-ordination and collaboration" by the cybersecurity community, including Microsoft, ICANN, domain registry operators, anti-virus vendors, and academic researchers.
Rodney Joffe senior technologist at Neustar and director of the CWG said the group demonstrated how the global community, public and private, can and should in the future come together to combat common threats.
"However it is also a clear example of how this 'best of breed' co-operation is generally powerless to stop determined attacks. Conficker remains undefeated, and no arrests have yet been made," he said.
The CWG estimates that more than seven million government, business and home computers in over 200 countries are still infected by Conficker and potentially under its control.
The Conficker Working Group, the report said, teaches us that private sector collaboration, public-private information sharing, support to law enforcement, resources and legislative reform are among the many urgent requirements if the cyber security community is to stay ahead of impending threats.
This and other lessons learned and recommendations are detailed in the report, with specific reference to group structure, operations, data usage and relation with stakeholders.
Sign-up to Computer Weekly to download more reports on security:
Ovum: Security Trends in 2011
The 11 Security Commandments
How to tackle internet filtering and reporting
Architecture for de-perimitisation of IT security | <urn:uuid:36eeefa0-316a-4812-80d7-940ef37fcd4d> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280094953/Cybersecurity-community-learned-valuable-lessons-from-Conficker | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00108-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942595 | 460 | 2.5625 | 3 |
If you are considering using the different twisted-pair copper cables to transmit data in the networks or other applications, it is more or less that you will repeatedly come across terms like Category 5, Category 5e, Category 6, or even Category 6e, the network Ethernet cable standard defined by the Electronic Industries Association (EIA) and Telecommunications Industry Association
(TIA). Cat5 and Cat5e are two of the most popular network cables for most wired local area networks (LANs) today. Just in case you are not too familiar with this two copper wiring technology, I here would like to provide you with a few knowledge and tips in the way each media handled, network support, crosstalk and bandwidth of Cat5 network cables and Cat5e network cables, hope it will help you make better decision when choose the right one for you critical applications.
Cat5 is the fifth generation of twisted pair Ethernet technology and the most commonly used network cables than any other category twisted pair cables. Cat5 cable contains four pairs of copper wires, just the same as Cat5e cables. Cat5 cable can support 10BASE-T and 100BASE-T network standards. Cat5 cable is available into two sub-types: unshielded twisted pair (UTP) and shielded twisted pair (S/FTP) measure of extra protection against interference, which is widely used in Europe.
Category 5 cable can be either solid type or stranded type: Solid cat5 cable is more rigid and supports longer length runs, the solid Cat5 cable is more used for fixed wiring configurations such as office buildings. While stranded Cat5 cable due to its flexible and pliable features, is most likely to be used as patch cables for shorter distance applications.
Cat5e stands for Category 5, enhanced cable which developed on the base of Cat5. Except that it fulfills higher standards of data transmission, it almost goes the same line with the basic Cat5. Cat5e supports networking at Gigabit Ethernet (1000BASE-T), network running speed up to 1000 Mbps, Cat5e cable is completely backwards compatible with Cat5, and can be used in any
application in which you would normally use Cat5 cable. Category 5e is indeed improved the specifications of Category 5 by reducing some crosstalk from one cable to anther cables.
As with all other types of twisted pair EIA/TIA cabling, Cat5e cable runs are limited to a maximum recommended run length of 100m (328 feet). In normal practice it is limited to 90 m to allow for up to 5 m of Cat5e patch cable at each end.
As all the comparisons above, Cat5e runs a faster pushing data across network with the 350Mhz versus 100Mhz of Cat5, coupled with other more stringent specifications, Cat5e is ideal for networks which plan to operate at Gigabit Ethernet speeds. If you are creating a new network or upgrading an existing one, it is recommended that you go with Cat5e network cable, or newer cable technologies like CAT6 and CAT7, because although Cat5 is falling further and further behind ever-advancing cabling performance standards, while the small increase in price of Cat5e over Cat5 is more than made up for by “future proofing” your network’s cabling infrastructure. | <urn:uuid:9f27573d-7c47-4fe5-93c5-4658faa91363> | CC-MAIN-2017-04 | http://www.fs.com/blog/the-difference-between-cat5-network-cables-and-cat5e-network-cables.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00320-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.889328 | 687 | 2.5625 | 3 |
An optical multiplexer, basically, a device, an input can be routed to the many different output, usually is 16. It utilizes fibr optic technology, is usually controlled by use of software and a totator block, and has an optical path that is actually coupled through several COL-UV/VIS collimating lenses. So, this is basically what a fiber optic multiplexer is…but what exactly do these devices do, and what are they used for? Well, that is a good question, and here is some information that might help you. Although it is hard to tell you all about optical multiplexer in an article, here are some things that may help you better understand the complex operation of equipment.
Basically, optical multiplexer using one side of a fiber optic cable, so a lot of things can send information in the same line. It is like a giant multiple input connector that allow several signal input, and then send a fiber optic cable. This information travels along this wire until, it comes into contact with a demultiplexer, which is like another attachment at the end of the cable that again splits up the signals and sends them on their way.
One of the most obvious uses for a optical multiplexer is the fact that it saves a lot money. Therefore, by placing one end of the multiplexer, a signal separator in another company can save a lot of money in the fiber optic cable.
To some extent, the resulting network information, and the way to travel, can be compared to a large highway. This large highway connections may be two very big city, in the morning, there may be a lot of traffic on a highway to go to other city. However, although all use the same highway, traffic is it does not actually come from the same place, also is not all in the same place.
On the highway traffic slowly from a different side of the road in a city, it will exit in the same way when it arrives at destination. In this way, the illustration is a lot like a fiber multiplexer/demultiplexer system, with the cars being the information, the cities being the multiplexers, and the freeway being the fiber optic cable.
This is a very basic method to describe the optical multiplexer how to work, and i hope it can help you to understand. It is not a too hard to master the concept of, even if the fiber is still a technology, can make people don’t use it all the time. The best way to learn about fiber optic cable is to either research it, or to actually work with it in a network or a system. A lot of different industries now make use of the speed with which fiber optic can deliver information. Light travels a lot faster than electric signals, after all. Optical Multiplexer is popularly with telecommunication operator and suitable in business for communication operator, government and kinds of entities. It is one of the most transmission equipment in point-point fiber optic network. Typical optical multiplexers are Video & Data & Audio Multiplexers, PDH Multiplexer. | <urn:uuid:91919b31-eda5-4d43-90e1-794ede058cc6> | CC-MAIN-2017-04 | http://www.fs.com/blog/what-are-optical-multiplexers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00320-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951044 | 640 | 3.28125 | 3 |
A kind of fingerprinting, using the unique noises that emanate from hybrid cyber-physical systems could be used to thwart large-infrastructure attacks that some experts think are a danger.
Fake, malicious control commands injected into electrical grids and other large-scale hybrid physical and cyber installations could devastate systems. But existing control equipment sometimes can’t run encryption; is often remote, therefore hard to patch frequently; and can lack redundancy, so needs to be kept running. It can’t be shut down to be updated like regular networks.
Scientists think that one answer is to harness a major advantage of physical-cyber hybrid equipment—which is that the industrial control performs a physical action, such as turning a valve, or motor on. The action not only creates a unique sound, but also takes a specific amount of time to be performed. The theory is that by knowing what the characteristics should be, anomalies can get spotted—such as a spoofing.
“The stakes are extremely high,” Raheem Beyah, an associate professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, says on the school’s website. “But the systems are very different from home or office computer networks,” he explains.
In the proposed fingerprinting, the scientists use “physics and mathematics to analyze and build a model” based on the equipment, Beyah says.
“Schematics and specifications allow us to determine how the devices are actually operating,” he says.
The team creates computer models to understand the unique device fingerprint. So far, they say they’ve addressed half of the devices used on the electrical grid and reckon they’ve demonstrated that their concept works at two electrical substations.
The sound and time it takes for a control to perform an action “passively fingerprints different devices that are part of critical infrastructure networks.” Beyah says.
It’s not the first time that sound has been used to identify things in an industrial context. Sound monitoring is used to predict mechanical failure too. Connecting vibration and ultrasonic Internet of Things sensors to machines lets algorithms predict problems based on the sound the machine makes.
I wrote about that equipment last year. If you know what the machine should sound like, and it doesn’t sound right, you know there’s a problem. I used the analogy of a washing machine spin cycle that’s been overloaded. It sounded a lot different to one with the right number of towels in it.
That idea is similar to the Georgia Institute of Technology fingerprinting. The spoof doesn’t sound right, or take the correct amount of time. It’s thus bogus.
Beyah reckons his team’s idea also applies to Internet of Things. Those IoT devices have “specific signatures related to switching them on and off,” the Georgia Tech website explains.
“There will be a physical action occurring, which is similar to what we have studied with valves and actuators” in the electrical grid scenario, says Beyah.
So conceivably small IoT devices could ultimately see future cyber protections that don’t involve chip-hogging software. All one might need for IoT security, ultimately, is an adjacent microphone sensor and clock chip, along with a set of algorithms.
This article is published as part of the IDG Contributor Network. Want to Join? | <urn:uuid:23b79d8a-aae6-4d20-ad75-e1b1d6d336f0> | CC-MAIN-2017-04 | http://www.networkworld.com/article/3040184/security/how-sound-fingerprinting-could-spot-grid-attackers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00438-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940219 | 713 | 3.015625 | 3 |
All computer operating systems verify whether the file systems that they mount at boot time are consistent, meaning that there are no errors in their internal data structures or in the associated storage that they map to. UNIX, Linux®, and other UNIX-like operating systems take a clever approach to determining whether the consistency of a file system needs to be verified (typically by the fsck command). When these systems mount a file system, they set a value in the file system header that marks the file system as DIRTY, meaning that it is in use and may be transiently inconsistent as updates are being written to it. When file systems are unmounted as part of a system shutdown, they are marked as CLEAN. When the system reboots, only file systems that are still marked as DIRTY need to be checked.
File systems are automatically unmounted as part of the system shutdown process, which usually occurs after all non-system processes have been terminated. Regardless, unmounting a file system can still fail with the traditional message:
$ sudo umount /mnt/NAS
umount: /mnt/NAS: device is busy
In this case, busy means that a process is writing to the file system or running from it. The fact that you cannot unmount a file system in either of these cases is one of the basic rules of computer systems. If this were not the case, you could unmount a file system while some process is writing to a file that it contains, which could leave the file in an inconsistent state even though the file system itself is marked as CLEAN.
The standard Linux version of the umount command includes a lazy unmount option (-l) that can help unmount file systems that are in use. This option requires Linux kernel version 2.4.11 or greater, which isn't much of a problem today. Executing umount -l /name/of/filesystem detaches the specified file system from the system's directory hierarchy so that no new processes can use the file system, and then unmounts the file system when all of the processes that were accessing it terminate. This can be handy but is not exactly what you want to use when you need to unmount a file system now.
If you need to unmount a file system now, and that file system reports that it is busy, you still have some options. If you are the only user of a system, terminating the processes that are preventing you from unmounting a file system is as easy as looking through all your windows for suspended or background processes that are writing to the partition in question or using it as their current working directory, and terminating them. However, on multi-user systems with many local and remote users, this approach isn't practical. Luckily, the open source community offers some commands that make it easy to identify and terminate such processes.
Locating open files with lsof

The lsof (list open files) command displays a list of all open files and the processes associated with them on a specific file system, directory, or device. The lsof command is available for most UNIX and UNIX-like systems, including IBM® AIX®, Berkeley Software Distribution (BSD®), Hewlett Packard UNIX (HP-UX®), Linux, and Solaris®. See Resources for information about obtaining lsof for your system.
By default, the
lsof command lists all files, shared libraries,
and directories that are currently open and provides as much
information as possible about each of them. The output from this
command is huge, even on a lightly loaded system, so you typically either
supply the name of a directory as a command-line argument or use a
pipe to filter its output. For example, suppose that you want to
unmount a file system that is mounted on the /opt2 directory. To
see all of the processes associated with the /opt2 directory, you
execute a command such as the one shown in Listing 1.
Listing 1. Processes associated with a mounted file system
$ lsof /opt2
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
bash    23334  wvh  cwd    DIR   8,17     4096    2 /opt2
more    23402  wvh  cwd    DIR   8,17     4096    2 /opt2
more    23402  wvh   3r    REG   8,17    10095  264 /opt2/resume.txt
You need to terminate all of these processes before you can unmount the /opt2 partition. Because none of the processes in this listing can be writing any files, you could use the kill command to terminate the process IDs (PIDs) that are listed in the second column and then unmount the partition with no problems. Note that PID 23402 is associated with the last two lines—the first line indicates that the more command is running with a current working directory (cwd) of /opt2, and the second indicates that the more command has the /opt2/resume.txt file open.
However, suppose that the output of the
lsof command looks like Listing 2.
Listing 2. A different set of processes associated with a mounted file system
$ lsof /opt2
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
bash    23334  wvh  cwd    DIR   8,17     4096    2 /opt2
more    23402  wvh  cwd    DIR   8,17     4096    2 /opt2
more    23402  wvh   3r    REG   8,17    10095  264 /opt2/resume.txt
bash    21343  djf  cwd    DIR   8,17     4096    2 /opt2
emacs   21405  djf  cwd    DIR   8,17     4096    2 /opt2
The first three commands associated with the /opt2 directory are the same, but the last two are being run by another user. Of course, the emacs command is designed for editing files, so you might want to have the user listed in the USER column save and exit before you terminate that process.
The previous section showed how to identify open files and directories on a local device, but you can just as easily get the same information about a mounted remote file system.
To provide a consistent set of examples for this article, all of the command and output examples refer to mounted partitions from the system shown in Listing 3.
Listing 3. File systems used in this article
$ df
Filesystem                  1K-blocks      Used Available Use% Mounted on
/dev/sda1                   230528596 201462232  17356188  93% /
/dev/sdb1                   240362656  12533532 215619324   6% /opt2
//nas.vonhagen.org/writing  100790048  75945920 197241926  80% /mnt/NAS
192.168.6.166:/mnt/disk1    714854640 386972432 291569696  58% /mnt/yellowmachine
As shown in Listing 3, /mnt/NAS is
the mountpoint for a Samba share called writing that is shared
from the device nas.vonhagen.org. Specifying the name of the
mountpoint as an argument to the
lsof command creates output similar
to Listing 2 but specific to that device and
directory, as shown in Listing 4.
Listing 4. Processes associated with a remote file system
$ lsof /mnt/NAS
COMMAND   PID USER   FD   TYPE DEVICE SIZE    NODE NAME
bash    23236  wvh  cwd    DIR   0,27 4096 6406145 /mnt/NAS/writing (nas.vonhagen.org:/writing)
The lsof command also provides options that enable you to restrict
its output to reporting open files and directories on specific
types of devices. For example, as shown in Listing 3, the
/mnt/yellowmachine directory is a mountpoint for a Network File System (NFS) mount of
the /mnt/disk1 directory on the 192.168.6.166 device. You can
easily supply the name of the mountpoint for this device as an
argument to the
lsof command, as shown in Listing 5.
Listing 5. Processes associated with a remote NFS file system
$ lsof /mnt/yellowmachine
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
bash    23334  wvh  cwd    DIR   0,23     4096    2 /mnt/yellowmachine (192.168.6.166:/mnt/disk1)
You can also use the
-N option to list only files and directories that are in use on
NFS-mounted devices, as shown in Listing 6.
Listing 6. Processes associated with all mounted NFS partitions
$ lsof -N
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
bash    23334  wvh  cwd    DIR   0,23     4096    2 /mnt/yellowmachine (192.168.6.166:/mnt/disk1)
The lsof command has many more options that can help you identify open files and directories on different types of file systems, as well as the processes that have network sockets open, who is using specific binaries, and more. The downside of the lsof command is that you always have to either contact users and ask them to terminate certain processes or manually terminate them yourself. The fuser command is a more cryptic but also more powerful command that can do much of your process termination work for you when run as the root user.
Finding user processes with fuser

The fuser (find user processes) command is another open source application that can help identify processes that are preventing you from unmounting file systems. The fuser command finds user processes that are associated with whatever files, directories, or file system mountpoints you supply as command-line arguments. This article focuses on using fuser with file system mountpoints. For more generic information about the fuser command, see its online reference information. The fuser command requires that your system supports the /proc file system. Therefore, it's available for all Linux distributions and FreeBSD systems. See Resources for information about obtaining the source code for the fuser command.
As with the lsof command, supplying the name of a file system mountpoint as a command-line argument is the simplest way to use the fuser command to identify processes that are preventing you from unmounting a file system:
$ fuser /mnt/yellowmachine
/mnt/yellowmachine: 23334c 23697c
The output of the fuser command simply identifies the PIDs of processes that are using the specified mountpoint. Each PID is followed by a single letter that identifies the way in which the process associated with that PID is using the specified mountpoint. The most common of these is the letter c, shown in the previous example, which indicates that the specified process is using a directory on that file system as its current working directory.
Unfortunately, the default output of the fuser command isn't end-user friendly, even by Linux standards. The fuser command provides a -v option that adds output similar to that of the standard ps command to its own output, as shown in Listing 7.
Listing 7. User processes on a mounted NFS file system
$ fuser -v /mnt/yellowmachine
                     USER   PID ACCESS COMMAND
/mnt/yellowmachine:  wvh  23334 ..c..  bash
                     wvh  23697 ..c..  emacs
This is handier because it at least identifies what the processes are. After you obtain the PID information from the fuser command, you can always use a combination of the standard ps and egrep commands to get as much detail as possible about the processes before terminating them, as shown in Listing 8.
Listing 8. Search for specific processes on a system
# ps alxww | egrep '23334|23697'
4  1000 23334 23332  20   0 18148  2076 wait   Ss  pts/13   0:00 -bash
0  1000 23697 23334  20   0 75964 12352 poll_s S+  pts/13   0:00 emacs -nw file2.txt
0     0 23703 23665  20   0  6060   632 -      R+  pts/16   0:00 egrep 23334|23697
You can then use the standard
kill command to terminate the
specified processes manually or, as explained in the next section,
use some of the advanced capabilities of the
fuser command to
terminate them automatically.
Terminating processes with fuser

The fuser command's -k option automatically terminates processes that it detects are using a mountpoint that you specify as an argument. You must, of course, execute the fuser command as root to be able to terminate processes that may be owned by other users, as shown in Listing 9.
Listing 9. Terminating processes associated with a mounted NFS file system
# fuser -k /mnt/yellowmachine
/mnt/yellowmachine: 23334c 23697c
Could not kill process 23697: No such process
In this case, the second process (emacs) was a child of the first (the bash shell), and therefore terminated when the first was killed by the fuser command.
If you want to specify the name of an underlying physical device
rather than simply the mountpoint for the file system that it
contains, you must also specify the
-m option, as shown in Listing 10.
Listing 10. Process listings for mountpoint and devices
# fuser -v /opt2
             USER   PID ACCESS COMMAND
/opt2:       wvh  23712 ..c..  bash
             wvh  23753 ..c..  emacs
# fuser -v /dev/sdb1
# fuser -vm /dev/sdb1
             USER   PID ACCESS COMMAND
/dev/sdb1:   wvh  23712 ..c..  bash
             wvh  23753 ..c..  emacs
The first command returns the output that you would expect because it references the mountpoint for a file system. The second command shows that you cannot directly query the underlying device by using the standard fuser options. The third illustrates that the -m option enables you to specify a device directly. You could add the -k option to either the first or the third command in this example to terminate the processes in the file system that is located on the /dev/sdb1 device.
At some point, every Linux or UNIX systems administrator needs to unmount a partition in response to some emergency, or simply to remove a device such as a mounted CD-ROM or DVD. When the system won't let you unmount a device because that device is busy, examining every process on the system is both irritating and slow. The lsof and fuser commands make it easy to identify the processes that are preventing you from unmounting a file system. The fuser command even terminates them for you if you're in a hurry.
- The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills.
- New to AIX and UNIX? Visit the New to AIX and UNIX page to learn more.
- Browse the technology bookstore for books on this and other technical topics.
Get products and technologies
- The lsof command is installed by default on most Linux systems, and source packages for lsof are available in the repositories used by those distributions. The latest version of lsof is available via anonymous File Transfer Protocol (FTP) from lsof.itap.purdue.edu in the pub/tools/unix/lsof directory. You can also get lsof from the FTP URL ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof.
- The fuser command is part of the PSmisc package, which also includes commands such as killall and pstree.
- The Linux Foundation provides a simple script called Forced Unmount (fumount) that tries to automatically combine a number of passes of fuser with the appropriate unmount commands to forcibly unmount a specified partition.
- Check out developerWorks blogs and get involved in the developerWorks community.
- Follow developerWorks on Twitter.
- Get involved in the My developerWorks community.
- Participate in the AIX and UNIX forums.
Why Are Security Technologies Failing Us?
The year 2003 was the year of the worm—in January “Slammer” brought down Bank of America ATMs and grounded Continental Airlines flights. In August the SoBig.F worm replicated 1 million times in the first 24 hours alone. While SoBig.F was a social engineering hack that fooled computer users into opening infected e-mail attachments, Slammer was a tiny piece of code that could replicate itself without human action by transmitting itself through a six-month-old hole in Microsoft software. Both attacks caused widespread damage and worldwide panic among corporate and home users alike. Will 2004 be any different?
In order to understand why Internet attacks are still penetrating our best defenses even though we have multiple security solutions in place, we must examine four critical elements of security: people, policy, procedure and product.
Human error often explains why Internet attacks are able to spread so quickly. The SoBig.F worm affected so many computers because network administrators and home users who had security software installed on their computers simply neglected to download updates from their security vendors. Computer users need to take responsibility for educating themselves about computer security and how to prevent Internet attacks. Best practices around dealing with spam, opening suspicious e-mail from anonymous senders or friends, and recognizing commonly abused file extensions (e.g., .scr, .pif, .exe) are all important for users to know. Individual users are typically the last line of defense should a virus get past all other security measures in place.
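As a toy illustration of that last point, even a trivial filename filter can catch the most commonly abused attachment extensions. This is only a sketch — real mail scanners inspect content and behavior, not just names, and the function and filenames here are illustrative:

```shell
# Flag attachment names with extensions commonly abused by mass-mailing worms.
is_suspicious() {
    case "$1" in
        *.scr|*.pif|*.exe) return 0 ;;   # the extensions called out above
        *)                 return 1 ;;
    esac
}

is_suspicious "resume.pif" && echo "quarantine: resume.pif"
# prints: quarantine: resume.pif
is_suspicious "notes.txt"  || echo "allow: notes.txt"
# prints: allow: notes.txt
```

Attackers routinely evade filters like this (for example, with double extensions such as resume.txt.exe), which is exactly why policy and user education have to back up the technology.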
Security policies are not only unique to each company, but are also unique to each department within a company. One size does not fit all, and security policies must be granular and tailored to address the computer habits of each individual group. These policies should also be revisited on a regular basis to ensure that they are able to address the latest computer threats and common user habits. Clearly defined steps computer users should take when they are hit with a virus or some other kind of Internet attack should also be made very clear to employees at all levels within a company. Companies must take some level of responsibility in educating their employees on how to protect themselves from becoming victims.
The Slammer worm was able to shoot through a hole in Microsoft software that had been discovered six months earlier but was still left open by many companies due to poor patch management. Procedures for applying updates and patches in the shortest time possible are important in order to address the latest known vulnerabilities. But keep in mind that updates and patches are still reactive by nature. The time between an initial virus outbreak and the time that your vendor is able to provide a patch for download leaves you completely open to attacks. A window of vulnerability exists and is left wide open while you are waiting for the latest update. Malicious hackers are using more and more sophisticated methods of attack, and our technology needs to evolve to keep up.
According to the 2003 CSI/FBI Computer Crime and Security Survey, 82 percent of companies reported virus attacks, even though 99 percent reported having anti-virus software. This is because anti-virus software works by comparing incoming names of code with a database of listed and known viruses. Anti-virus software excels at preventing existing viruses from entering your network, but fails to stop any new Internet threats until they are identified and patched by the vendor.
There are newer technologies that can actually identify code as malicious based on what it does rather than what it is called. This “behavior-monitoring” technology stops code that might try to access your address books or write to your registry, for instance, or any other damaging action you identify. Behavior-monitoring technology can protect against a greater number of threats than anti-virus software alone and keeps your network safe during vulnerable periods between when a virus is identified and when a patch is created by the vendor and from when the patch is created until it is installed at your site.
The benefits we receive from the Internet are great, but the threats coming from it are just as great, especially since more and more mission-critical information is now electronic. An Internet attack can be very damaging, but attacks are also completely preventable. With the right combination of people, policy, procedure and product, you can greatly reduce your chances of becoming a victim during the next virus outbreak of 2004.
Shlomo Touboul is founder and CEO of San Jose-based Finjan Software Ltd. You can reach him at firstname.lastname@example.org. | <urn:uuid:1a9b1198-a0d6-495f-952d-5dd42a93be56> | CC-MAIN-2017-04 | http://certmag.com/why-are-security-technologies-failing-us/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00090-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963875 | 941 | 2.828125 | 3 |
At some point in the evolution of humans, we began speaking. And while I always had my money on "run away" being the first words uttered by man, it turns out they didn't make the early list. A new paper in the Proceedings of the National Academy of Sciences argues that 23 specific words date as far back as 15,000 years, making them the oldest known words to date. Researchers say the words emanate from seven language families originating in Europe and Asia that may have originated from a common language. The 23 oldest known words are: I, Thou, We, Not, That, Who, This, What, Ye, Old, Mother, Hand, Fire, Bark, Ashes, Worm, Black, Man/Male, To hear, To give, To pull, To flow, and To spit.

The research was led by Mark Pagel of the University of Reading's School of Biological Sciences. In the PNAS abstract, Pagel notes that the "search for ever deeper relationships among the World’s languages is bedeviled by the fact that most words evolve too rapidly to preserve evidence of their ancestry beyond 5,000 to 9,000 years." So to get around that inconvenience, Pagel and his team "began with 200 words that linguists agree are common among all European and Asian languages. They then determined which sounded similar and had comparable meanings across the different languages," as Discovery.com explains. From there, the researchers analyzed the roots of these words, reducing the list to an original 23.
VLANs are a simple example. 802.1Q tagging supports a theoretical limit of 4,096 VLANs, with actual implementations typically supporting fewer. This means that in a multitenant environment, your scale is limited to about 4,000 tenants--in theory. In reality, the number is much lower because we tie IP subnets to VLANs, and tenants will typically require more than one subnet.
VRFs become another issue as tenants expand. Each tenant network is different and may require separate routing decisions, overlapping subnets, and so on. This leads to hardware limitations, as VRFs are typically run as separate instances of the routing protocol, requiring CPU resources.
Security is another example of unintended interdependency. Today's networks deploy security based on constructs such as addressing, location and VLAN. This has been necessary but is not ideal. The application or service dictates security requirements, so those requirements should be coupled there instead.
Layer 2 adjacency is another complex issue for modern networks. Many applications must exist in the same Layer 2 domain to support capabilities such as virtual machine motion, which causes a need for larger and larger L2 domains. This requires that the VLANs be configured on, and trunked to, any physical switches that a VM may end up on.
While each of these constructs has individual complexities, the real problem arises with the unintended dependencies. IP addressing is broken down into subnets traditionally tied to VLANs on a 1-to-1 basis. This means that an application's L3 communication is dictated by its broadcast domain needs and vice versa. Routing is then tied to the IP scheme, and security, load balancing and quality-of-service policy is often applied based on the VLAN or subnet. These are further tied to physical location based on device configuration (including VLAN, VRF and QoS settings).
There is a need for abstraction of these constructs to provide the originally intended independence that will allow networks to scale as required. This need is visible in current standards efforts: LISP, SDN and VXLAN, for example, are all aimed in some way at removing the tie to location, allowing the application rather than the infrastructure to dictate requirements.
Within the data center, overlays such as VXLAN are one possible solution. Overlay technologies allow for independent logical networks to be built on top of existing IP infrastructure. They provide some of the abstraction tools required, such as allowing L2 adjacency across L3 networks. Additionally, overlays greatly increase the scale of constructs such as VLANs, moving from 4,000-plus logical networks well into the millions.
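The scale jump comes straight from the header formats: an 802.1Q VLAN ID is 12 bits, while a VXLAN network identifier (VNI) is 24 bits. A quick sketch (the ip invocation in the comment is an illustrative Linux iproute2 command with made-up interface names and addresses):

```shell
echo $(( 1 << 12 ))   # 4096     possible 802.1Q VLAN IDs
echo $(( 1 << 24 ))   # 16777216 possible VXLAN VNIs

# On Linux, an overlay segment rides on existing IP infrastructure, e.g.:
#   ip link add vxlan10 type vxlan id 10 group 239.1.1.1 dev eth0 dstport 4789
```

That is roughly a 4,000x increase in the number of isolated logical networks the encapsulation can distinguish, before any per-platform limits apply.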
These overlays provide one piece of the puzzle of network abstraction. The next step is policy configuration. Rather than traditional methods of applying policy such as security, load balancing and QoS to underlying constructs, these policies should be applied to the applications themselves. Systems like OpenFlow are moving toward this through flow-level programmability, but still have a way to go.
The end goal of the modern network will be service-driven policies and controls. By removing the interdependencies that have been built into today's networks, we will gain the flexibility required by modern compute needs. The purpose of the data center is service delivery, and all aspects must be designed to accomplish that goal.
Disclaimer: This post is not intended as an endorsement for any vendors, services or products. | <urn:uuid:44a4f1dd-c432-498c-8236-9c13cc121078> | CC-MAIN-2017-04 | http://www.networkcomputing.com/networking/why-we-need-network-abstraction/515690215 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00576-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944466 | 707 | 2.53125 | 3 |
Workforce Preview: What to Expect From Gen Z
Some of them aren't old enough to drive yet, but you shouldn't overlook the inevitable influx of Gen Z workers within the next several years. Currently between the ages of 13 and 18, these future professionals may bring the highest level of tech connectivity experience of any prior generation. At least, that's what findings from a recent Wikia survey reveal. The report, "GenZ: The Limitless Generation," indicates that many of these young people spend virtually every waking moment connected to a computer, tablet, smartphone or other electronic device. Such exposure is helping them cultivate collaborative and intellectual exploration skills that will deliver dividends to their future employers. "This generation is using technology in a way that is smarter, more involved and beneficial to their future," says Jimmy Wales, co-founder of Wikia. "Everyone can learn from the ways in which this unbounded younger generation interacts with technology and is able to quickly adapt to the rapidly changing media landscape." These are all skills that bode well for the future workforce. More than 1,200 Wikia users ages 13 to 18 took part in the research, which was conducted in association with Ipsos MediaCT. | <urn:uuid:77a9431d-c2de-47fe-bad9-5bc8e042b7e0> | CC-MAIN-2017-04 | http://www.baselinemag.com/it-management/slideshows/workforce-preview-what-to-expect-from-gen-z | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00173-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967734 | 242 | 2.640625 | 3 |
Open Virtualization Blog from IBM - "On the Origin of KVM" by Dan Frye
IBM's Dan Frye writes about the evolution of KVM in his blog post: As virtualization becomes ubiquitous in enterprises, it's important to note that KVM in particular offers advantages that stem from its simple architecture. At its most basic, KVM is a feature added to Linux that allows Linux and other operating systems such as Windows to be virtualized - as opposed to a separate complete virtualized operating system à la the Xen model. This not only simplifies KVM, but allows KVM to leverage the world's largest and best development community (that's the Linux community in case you were confused...) instead of having to duplicate core operating system development work. Read on. | <urn:uuid:6ca2f4d5-5a22-4964-aa38-6f6ba22faeaf> | CC-MAIN-2017-04 | http://www.dbta.com/Editorial/Linux-News/-Open-Virtualization-Blog-from-IBM---On-the-Origin-of-KVM-by-Dan-Frye-81415.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957283 | 157 | 2.546875 | 3 |