text stringlengths 234 589k | id stringlengths 47 47 | dump stringclasses 62 values | url stringlengths 16 734 | date stringlengths 20 20 ⌀ | file_path stringlengths 109 155 | language stringclasses 1 value | language_score float64 0.65 1 | token_count int64 57 124k | score float64 2.52 4.91 | int_score int64 3 5 |
|---|---|---|---|---|---|---|---|---|---|---|
Hydrogen fuel cell electric vehicles may come to the market soon, according to the U.S. Department of Energy (DOE). As part of an effort to bring hydrogen-powered vehicles to consumer markets and lower costs, the DOE recently completed a seven-year project evaluating the viability of hydrogen fuel cell technology and infrastructure in real-world settings, the agency reported. The project was led by the DOE's National Renewable Energy Laboratory (NREL) and received funding from the DOE's Office of Energy Efficiency and Renewable Energy.
“The project results show that fuel cell electric vehicles have advanced rapidly,” said Keith Wipke, acting manager of NREL’s Fuel Cell and Hydrogen Technologies Program and the report’s lead author. “As vehicle manufacturers and other researchers worldwide continue to focus on the remaining challenges of balancing durability, cost and high-volume manufacturability, there is optimism that manufacturers will introduce FCEVs [fuel cell electric vehicles] to the market within the next few years.”
Over the seven-year project, NREL analyzed data from more than 500,000 vehicle trips comprising 3.6 million miles. The project's goals were to create vehicles that had a 250-mile driving range, 2,000 hours of fuel cell durability, and a $3 per gallon gasoline equivalent for hydrogen production costs. Of the four teams that worked on the project, at least one team exceeded the targets, with one team achieving a 254-mile driving range and a team showing projected average fuel cell stack durability of 2,521 hours. | <urn:uuid:4e65bb0a-b8ca-493b-bf95-10b409ea2d2e> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Hydrogen-Fuel-Cell-Electric-Vehicles-Coming-Soon.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00064-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95547 | 322 | 2.96875 | 3 |
Researchers find potential flu strain cure-all
Tuesday, Sep 24th 2013
Now that the weather is growing colder, many medical professionals are urging best practices to guard against colds and influenza. As flu season nears, a major breakthrough in flu vaccines could potentially fight against all forms of the disease in the future. Experiments at the Imperial College London yielded significant results that could give insight to more effectively beating the flu and preventing lives from succumbing to the illness.
Major pandemics like the Spanish flu of 1918 and the 2009 swine flu caused numerous deaths due to the rarity of the strains. However, researchers at the college monitored more than 300 students and staff during the swine flu outbreak and identified a stronger presence of CD8 T cells in those that were only minimally affected by the illness, according to the Mother Nature Network. This study has given medical professionals a potential key to offer a universal vaccine that would be effective against all flu strains due to the CD8 T cell properties to naturally fight viruses. The active cell presence will significantly help further vaccine development and possibly mitigate the need to receive shots every year to combat the most recent strain.
"We already know how to stimulate the immune system to make CD8 T cells by vaccination. … Now that we know these T cells may protect, we can design a vaccine to prevent people getting symptoms and transmitting infection to others," Professor Ajit Lalvani told the news source. "This could curb seasonal flu annually and protect people against future pandemics."
Enforce vaccine protocols
Many people rely on vaccines throughout the year, however, flu season may draw the biggest need for immunizations due to the pervasiveness of the virus. Therefore, it's important to keep vaccines in the proper environment in order to give patients effective treatment. Here are a few best practices for handling vaccines:
- Differentiate vaccine types
There are live and inactivated vaccines, both of which have specific requirements for their optimal conditions. Using environmental control systems, medical professionals should keep the live vaccines in a continuously frozen state at 5 degrees Fahrenheit due to its heat sensitivity, according to the National Center for Immunization and Respiratory Diseases. Inactivated vaccines, on the other hand, are easily affected by excessive heat and cold, putting their optimal temperature at 40 degrees Fahrenheit. Understanding these different vaccine types will help better protect them and keep them effective for patient treatments.
- Observe storage procedures
Cycling through vaccines by using the oldest supplies first will ensure that patients receive safe and effective products. Labeling improperly handled vaccines will help medical professionals identify them from the viable items and make them more efficient in observing proper disposal, according to the Centers for Disease Control and Prevention. The vaccines should also be kept in a separate refrigeration unit from food and drinks. Storing these items with the medical products could contaminate them and risk patient safety.
- Label everything
Although it may be easy to tell one product from another, it's always best to apply labels to each vaccine. By posting descriptions of each item, doctors can identify when the vaccine was opened or reconstituted, allowing better establishment of what products should be used first. This will help medical providers ensure that no patient receives the wrong vaccine and create a system for utilizing the product while it is still viable.??
"For example, multidose vials of meningococcal vaccine should be discarded if not used within 35 days after reconstitution, even if the expiration date printed on the vial by the manufacturer has not passed," according to the National Center for Immunization and Respiratory Diseases.
With flu season getting closer, the vaccine breakthrough could potentially protect people from numerous strains of the disease for years to come. With good temperature monitoring systems, medical professionals can ensure that they are keeping the products in a safe environment and will promote improved patient treatment. | <urn:uuid:21a25f02-4a9c-4bbb-8e18-50bac8f8ad63> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/healthcare/researchers-find-potential-flu-strain-cure-all-512334 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00550-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949277 | 782 | 3.65625 | 4 |
“At the root of the problem,” says the New Scientist, “are ‘two major gaps in the architecture of the internet’, according to a report from the New England Complex Systems Institute, compiled in 2008 for the US Navy and released to the public this week.” Those ‘gaps’ include firstly an inability to block malware as a whole rather than after recognizing individual instances, and secondly – although not made explicit in the article – the lack of IPv4 capacity for future internet expansion.
The two technologies that are best suited to solve these problems are SAVA for malware and IPv6 for space – both of which are being implemented in China’s next-generation internet project. But SAVA is hardly new, nor its use by China unknown. In 2007 Jianping Wu at China’s Beijing Tsinghua University published a paper, Source Address Validation: Architecture and Protocol Design, that explained, “This architecture is deployed into the CNGI-CERNET2 infrastructure - a large-scale native IPv6 backbone network of the China Next Generation Internet project. We believe that the Source Address Validation Architecture will help the transition to a new, more secure and sustainable Internet.”
Wu expanded on this in 2008, in Building a next generation Internet with source address validation architecture. In this he explains how SAVA can be implemented to make the internet more secure since every packet transmitted across the network will hold an authenticated source IP address. That address must be authorized, unique and traceable. “The packets that do not hold an authenticated source address will not be forwarded in network. Therefore it is impossible to launch network attacks with spoofed source addresses,” he wrote.
Other advantages he mentions include fine grained network management, where providers “can easily bill users based on their end-to-end usage, as is the case with telephony;” application authentication without the need for cryptography; and the acceleration of new internet applications. For the last, he notes, “P2P applications and other large scale multimedia applications (for example, VoIP using SIP), can be accelerated in deployment and improved in performance by using globally unique authenticated IPv6 addresses.”
That last point is important. “While SAVA is applicable for IPv4 networks it is designed for IPv6 networks,” he continues. The fundamental reason for China’s next-generation internet being more advanced than anything in the West is not some secret project but its more rapid deployment of IPv6, something the West is still struggling with. The New Scientist quotes Donald Riley, an information systems specialist at the University of Maryland: “If you are thinking about the future of the internet, anyone that explores that territory and maps it out first has a definite competitive advantage; especially with the resources available to China.” | <urn:uuid:d8cd3a6f-d5d3-4ffc-9d2c-75c90e929735> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/chinas-next-generation-internet-is-streets-ahead/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929106 | 594 | 2.953125 | 3 |
As a system administrator, you run across numerous challenges and problems. Managing users, disk space, processes, devices, and backups can cause many system administrators to lose their hair, good humor, or sanity. Shell scripts can help, but they often have frustrating limitations. This is where a full-featured scripting language, such as Python, can turn a tedious task into an easy and, dare I say it, fun one.
The examples in this article demonstrate different Python features that you can put to practical use. If you work through them, you'll be well on your way to understanding the power of Python.
A module is an important Python concept. Basically, a module is a resource
you import in order to use it. This process is comparable to taking a piece of
paper out of a file cabinet and putting it on your desk, ready for use. You import
modules using the
import command, which appears at the
top of each of the example programs. Modules are available for database
connectivity, network programming, operating system services, and hundreds of
other useful areas.
Put Python to work
Python is a full-featured, robust programming language and, as such, it has tons of features. Learning it could be a task of epic proportions. However, remember that many Python features, such as the GUI toolkits, are of limited value to system administrators. That's why this article uses specific examples: They demonstrate the skills you need to effectively write Python scripts to manage systems.
Notes on the examples
- Each example includes a
except:with a surrounding block of code. This is an implementation of rudimentary error handling. Python has extensive support for handling all types of exceptions but, for the purposes of these example programs, I've kept it simple.
- These examples were run on Python 2.5 running on a Linux® box, but they should work on any UNIX®/Linux machine.
- You'll undoubtedly think of ways these scripts can be improved. This is good! The nature of Python scripts is that they can be easily modified and customized without needing to recompile code.
Example 1: Search for files and show permissions in a friendly format
The first example program (see Listing 1) searches for files that match a pattern
(based on user input) and displays the results to the screen, along with the
permissions assigned to the particular files. At first, you might think this program
doesn't do much more than execute a
however, it displays results in a customized way, and your options for displaying this
enhanced find are limitless. The example shows you how to take a system command
and make it better (or at least more customized).
The script basically performs three tasks:
- Get the search pattern from the user.
- Perform the search.
- Present the results to the user.
In writing the script, constantly ask yourself this question, "Which task is this code supporting?" Asking yourself this question can increase the focus of your work and efficiency.
Listing 1. Search for files and list results with file permissions
import stat, sys, os, string, commands #Getting search pattern from user and assigning it to a list try: #run a 'find' command and assign results to a variable pattern = raw_input("Enter the file pattern to search for:\n") commandString = "find " + pattern commandOutput = commands.getoutput(commandString) findResults = string.split(commandOutput, "\n") #output find results, along with permissions print "Files:" print commandOutput print "================================" for file in findResults: mode=stat.S_IMODE(os.lstat(file)[stat.ST_MODE]) print "\nPermissions for file ", file, ":" for level in "USR", "GRP", "OTH": for perm in "R", "W", "X": if mode & getattr(stat,"S_I"+perm+level): print level, " has ", perm, " permission" else: print level, " does NOT have ", perm, " permission" except: print "There was a problem - check the message above"
The program follows these steps:
- Ask the user for a search pattern (lines 7-9).
- Print a listing of files found (lines 12-14).
- Using the
statmodule, get permissions for each file found and display them to the screen (lines 15-23).
When the program is run, the output looks like that shown in Listing 2.
Listing 2. Output of the first example
$ python example1.py Enter the file pattern to search for: j*.py FILES FOUND FOR PATTERN j*.py : jim.py jim2.py ================================ Permissions for file jim.py : USR R USR W USR X GRP - GRP - GRP - OTH - OTH - OTH - Permissions for file jim2.py : USR R USR W USR X GRP R GRP - GRP X OTH R OTH - OTH X
Example 2: Perform operations on a tar archive that is based on menu selection
The previous example prompted the user for a search pattern to use. Another way to get information from the user is through a command-line argument. The program in Listing 3 shows how to do that in Python: The code takes a tar filename as a command-line argument and then prompts the user with several options.
This example also shows a new way to attack the problem. The first example used the
command module to run the
find command and capture the
output. This approach can be clumsy and isn't very "Pythonic." This example uses
the tarfile module to open a tar file, which has the advantage of allowing you to
use Python attributes and methods as you manipulate the file. With many modules
provided by Python, you can do things that can't be done through the command line.
This is a good example of implementing a menu system in Python. The program performs different actions based on your selection:
- If you press 1, the program prompts you for the file name in the archive to extract the current directory to and then extracts the file.
- If you press 2, the program prompts you for the file name and then displays the file information.
- If you press 3, the program lists all the files in the archive.
Listing 3. Perform actions on a tar archive based on your menu selection
import tarfile, sys try: #open tarfile tar = tarfile.open(sys.argv, "r:tar") #present menu and get selection selection = raw_input("Enter\n\ 1 to extract a file\n\ 2 to display information on a file in the archive\n\ 3 to list all the files in the archive\n\n") #perform actions based on selection above if selection == "1": filename = raw_input("enter the filename to extract: ") tar.extract(filename) elif selection == "2": filename = raw_input("enter the filename to inspect: ") for tarinfo in tar: if tarinfo.name == filename: print "\n\ Filename:\t\t", tarinfo.name, "\n\ Size:\t\t", tarinfo.size, "bytes\n\ elif selection == "3": print tar.list(verbose=True) except: print "There was a problem running the program"
The program follows these steps:
- Open the tar file (line 5).
- Present the menu and get the user selection (lines 8-11).
- If you press 1 (lines 14-16), extract a file from the archive.
- If you press 2 (lines 17-23), present information about a selected file.
- If you press 3 (lines 24-25), present information about all the files in the archive.
The output is shown in Listing 4.
Listing 4. User menu for second example
$ python example2.py jimstar.tar Enter 1 to extract a file 2 to display information on a file in the archive 3 to list all the files in the archive
Example 3: Check for a running process and show information in a friendly format
One of the most important duties of a system administrator is checking on running
processes. The script in Listing 5 gives you some ideas. The
program takes advantage of UNIX's ability to run a
command on output generated by a command, which lets you automatically narrow the
data Python has to parse.
This program also uses the string module. Get to know this module—you'll use it often.
Listing 5. Display information on a running process in a friendly format
import commands, os, string program = raw_input("Enter the name of the program to check: ") try: #perform a ps command and assign results to a list output = commands.getoutput("ps -f|grep " + program) proginfo = string.split(output) #display results print "\n\ Full path:\t\t", proginfo, "\n\ Owner:\t\t\t", proginfo, "\n\ Process ID:\t\t", proginfo, "\n\ Parent process ID:\t", proginfo, "\n\ Time started:\t\t", proginfo except: print "There was a problem with the program."
The program follows these steps:
- Get the name of a process to check and assign it to a variable (line 3).
- Run the
pscommand and assign the results to a list (lines 7-8).
- Display detailed information about the process with English terms (lines 11-16).
The output is shown in Listing 6.
Listing 6. Output of the third example
$ python example3.py Enter the name of the program to check: xterm Full path: /usr/bin/xterm Owner: knowltoj Process ID: 3220 Parent process ID: 4308 Time started: 16:51:46
Example 4: Check userids and passwords for policy compliance
Managing security is a critical part of the job for any system administrator. Python makes this job easier, as the last example illustrates.
The program in Listing 7 uses the pwd module to access the password database. It checks userids and passwords for security policy compliance (in this case, that userids are at least six characters long and passwords are at least eight characters long).
There are two caveats:
- This program works only if you have full rights to /etc/passwd.
- If you use shadow passwords, this script won't work (however, Python 2.5 does have a spwd module that does the job).
Listing 7. Check userids and passwords for compliance with security policy
import pwd #initialize counters erroruser = errorpass = #get password database passwd_db = pwd.getpwall() try: #check each user and password for validity for entry in passwd_db: username = entry password = entry if len(username) < 6: erroruser.append(username) if len(password) < 8: errorpass.append(username) #print results to screen print "The following users have an invalid userid (less than six characters):" for item in erroruser: print item print "\nThe following users have invalid password(less than eight characters):" for item in errorpass: print item except: print "There was a problem running the script."
The program follows these steps:
- Initialize the counter lists (lines 4-5).
- Open the password database and assign data to a list (line 8).
- Check users and passwords for validity (lines 12-18).
- Output invalid users and passwords (lines 21-26).
The output is shown in Listing 8.
Listing 8. Output of the fourth example
$ python example4.py The following users have an invalid userid (less than six characters): Guest The following users have invalid password(less than eight characters): Guest johnsmith joewilson suejones
Other uses for scripts
You can use Python in a number of ways to manage systems. One of the best things you can do is analyze your work, determine which tasks you perform repeatedly, and explore whether Python modules are available to help you with those tasks—almost certainly they are.
Some specific areas where Python can be of great help are as follows:
- Managing servers: Checks patch levels for a particular application across a set of servers and updates them automatically.
- Logging: Sends an e-mail automatically if a particular type of error shows up in the syslog.
- Networking: Makes a Telnet connection to a server and monitors the status of the connection.
- Testing Web applications: Uses freely available tools to emulate a Web browser and verifies Web application functionality and performance.
These are just a few examples—I'm sure you can add useful ideas of your own.
With its easy-to-learn language; its ability to handle files, processes, strings, and numbers; and its almost endless array of helper modules, Python is a scripting language that looks like it was made for system administrators. It's a valuable tool for any system administrator's toolbox.
- The Python tutorial: This tutorial is a great source of basic language information.
- "Discover Python" (Robert J. Brunner, developerWorks): Read all the articles in the developerWorks "Discover Python" series.
- Official Python Web site: This site provides a wealth of information and downloads.
- The Python Cookbook: This site is maintained by ActiveState and is a user community with user-contributed scripts on virtually every programming topic.
- Planet Python: This site covers all things Python.
- AIX and UNIX: The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills.
- New to AIX and UNIX?: Visit the "New to AIX and UNIX" page to learn more about AIX and UNIX.
- AIX Wiki: A collaborative environment for technical information related to AIX.
- Search the AIX and UNIX library by topic:
- Browse the Python section of the Safari Technology Bookstore for books on these and other technical topics.
- Safari bookstore: Visit this e-reference library to find specific technical resources.
- developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts.
- Podcasts: Tune in and catch up with IBM technical experts.
Get products and technologies
- IBM trial software: Build your next development project with software for download directly from developerWorks.
- Download the latest version of Python.
- Participate in the developerWorks blogs and get involved in the developerWorks community.
- Participate in the AIX and UNIX forums:
- You can also visit www.jamesknowlton.com, the author's Web site. | <urn:uuid:622ca77a-be3e-4627-b030-e204a472bf3f> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/aix/library/au-python/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00000-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.848382 | 3,182 | 3.046875 | 3 |
iPad features enhance learning in special education
Apple Education has a long history of supporting students with special needs. Since their early days, they incorporated text-to-speech in their operating systems, a function that enables audio playback of text. And in Apple iOS, students can highlight text and listen to it – a function that allows them to self-check their work or listen as they read.
As iOS improved, so did the iPad features aimed at special education. Voice-over, zoom, speech, hearing aids and inverted colors are some of the options that allow students with special needs to more easily interact and learn in the classroom. Additionally, auto correct, define, keyboard shortcuts, predictive keyboard and spellcheck are features that help students – all managed in Casper Suite.
Other assistive technologies for the iPad include:
- Bluetooth input devices (Includes keyboards, switches, multiple switches and assistive touch accommodations.)
- Braille keyboards
- Guided Access (Allows use of an app without ability to leave or access specific iOS functions.)
- Mirrored handwriting (Used by dyslexic students to promote free-formed thinking. An enhanced experience with iPad Pro and Apple Pencil.)
Don’t want all of this functionality available all of the time? No problem. The groups functionality within JAMF Software’s Casper Suite allows school admin to decide which students receive which capabilities at pre-determined times.
See Apple’s current work around special education here.
JAMF extends Apple functionality in Casper Focus
JAMF Software extended Apple’s Guided Access to Casper Focus – a free feature added to Casper Suite. It’s available worldwide to those with or without VPP or DEP. By using Guided Access, teachers can engage with students’ iPads from across the room. This functionality allows educators to discretely guide a student to a website without alerting others of their need for help.
Through the Casper Focus feature, teacher-to-student messages, teachers have the ability to send messages to one, some or all of their students. The message appears on a locked home screen until the teacher releases student devices. This ability allows them to:
- Quickly gain students’ attention
- Unobtrusively prompt a student(s) to get on task
- Send a personal reminder to a student in order to help with transitions, i.e. “Five minutes before clean up.”
Additionally, teacher access to reset passcodes allows for a quick and easy method to resolve students accidently being locked out of their iPad, while also protecting students’ personal privacy. While some schools may block the use of passcodes, this leaves the students’ content accessible to anyone.
Coming soon from Apple
Apple’s iOS 10 will extend the iPad capabilities to support all users, regardless of ability. In their recent WWDC session, What’s new in Accessibility, Apple highlighted the following enhanced features coming in the fall release.
With enchanted typing feedback, iOS 10 will allow the last typed character or word to be read immediately after it’s typed. The feedback feature was especially designed for dyslexic students and adults. When paired with Speak Selection, which was hinted to have improved capabilities, it provides users self-check abilities, enabling them to proof-listen to their own writing.
The new Magnifier functionality utilizes the iPad’s native camera resolution to enable zoom controls, freeze frame capturing and the color application filters for increased contrast.
To further support hearing accessibility, iOS 10 will include a software version of Text Telephone (TTY), removing the barrier of access due to the need for specialized equipment. This will allow easy dialing of non-TTY phones and recognized standard TTY quick type predictions.
Creating an experience that’s customized for students’ individual needs is powerful. Discover how you can enhance student learning with personalized iPads. | <urn:uuid:8101c6a3-2020-4564-9a4c-e7e7b022dece> | CC-MAIN-2017-04 | https://www.jamf.com/blog/new-ipad-features-mean-brighter-futures-for-students-especially-in-special-ed/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00210-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924246 | 814 | 2.921875 | 3 |
In 2010 there was more than 580 million web-based attacks against users’ computers — nearly eight times more than the number of online attacks recorded in 2009, according to Kaspersky Lab’s annual Threat Evolution report.
This surge is related to the prevalence of exploits that allow hackers to infect website visitors’ computers without them noticing, using the notorious drive-by download technology. A single malicious program can penetrate a user's computer via dozens of vulnerabilities in browsers and other applications used to process web content, which has led to a proportionate increase in the number of online attacks.
In 2010, the total number of online attacks logged by Kaspersky Lab online antivirus products, and local virus incidents logged on user computers, exceeded 1.9 billion. Attacks launched via web browsers represented more than a third of this indicator, which is over 500 million attacks. Browsers became the primary route for infecting users’ computers with malware and Kaspersky Lab experts don’t expect that to change in the near future.
According to Kaspersky Lab, P2P networks are the second most commonly used channel for spreading threats. Cybercriminals are also actively using popular social networks such as Facebook and Twitter to spread their misery. The rapid advance of malicious code is aided by the numerous vulnerabilities in these sites, which means the number of social network-based attacks will continue to grow.
Although new malicious programs appeared in 2010 at the same rate as in 2009, their complexity and functionality — and thus the threat they pose to users — increased. Some of the most complex threats used new technologies to penetrate the 64-bit platform, and many others propagated using the zero-day vulnerabilities. Examples of the most sophisticated threats include the Mariposa, ZeuS, Bredolab, TDSS, Koobface, Sinowal and Black Energy 2.0 botnets, each of which brought together millions of infected computers and the TDSS backdoor, which infects the MBR and launches destructive activity even before the OS boots up.
The Stuxnet worm represents today’s technological peak in virus writing. This malicious program simultaneously uses several vulnerabilities in the Microsoft Windows operating system, bypasses system verification using legitimate digital certificates (that have since been revoked), and attempts to control programmable logic controllers and the frequency converters involved in critical engineering processes.
Malicious programs similar to Stuxnet could be used in targeted attacks against specific companies. The increased number of targeted attacks was another trend noted in 2010. Examples include some very narrowly-focused cyber attacks, such as Aurora, which was launched in order to steal user information and source code from software projects of several major companies, including Google and Adobe. It is possible that now, programs like Stuxnet will be more frequently included in the arsenal of some companies and secret services.
The detection of threats that have already penetrated users’ systems gives us a picture of the computer infection level of any given country. The dubious honour of leading positions in this category was shared by developing countries in Asia and Africa in 2010, due to the rapid pace at which Internet access is becoming available, combined with low levels of computer literacy among the users in those regions. The countries with the lowest percentage of infected computers in 2010 were Japan, Germany, Luxembourg, Austria and Switzerland.
For a complete version of Kaspersky Lab’s Threat Evolution report, please visit: | <urn:uuid:0f72f6dd-95da-4561-8c9a-1ec604cbe4a2> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2011/Number_of_online_attacks_soar_in_2010 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00118-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946032 | 699 | 2.984375 | 3 |
22.214.171.124 What is Cipher Feedback Mode?
In CFB mode (see Figure 2.4), the previous ciphertext block is encrypted and the output produced is combined with the plaintext block using XOR to produce the current ciphertext block. It is possible to define CFB mode so it uses feedback that is less than one full data block. An initialization vector c0 is used as a "seed" for the process.
ci = Ek(ci-1) Åmi mi = Ek(ci-1) Åci
Figure 2.4: Cipher Feedback mode (click for a larger image)
CFB mode is as secure as the underlying cipher and plaintext patterns are concealed in the ciphertext by the use of the XOR operation. Plaintext cannot be manipulated directly except by the removal of blocks from the beginning or the end of the ciphertext; see next question for some additional comments. With CFB mode and full feedback, when two ciphertext blocks are identical, the outputs from the block cipher operation at the next step are also identical. This allows information about plaintext blocks to leak. The security considerations for the initialization vector are the same as in CBC mode, except that the attack described in Question 126.96.36.199 is not applicable. Instead, the last ciphertext block can be attacked.
When using full feedback, the speed of encryption is identical to that of the block cipher, but the encryption process cannot be easily parallelized. | <urn:uuid:a322d3f6-cfc0-4ef0-b246-3349a25de029> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-cipher-feedback-mode.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00266-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918995 | 305 | 3.328125 | 3 |
The use of supercomputing to help maintain the US nuclear weapons arsenal is one of the more specialized applications of high performance computing. Simulating the behavior of these devices inside a computer has allowed the US to adhere to the Comprehensive Test Ban Treaty (CTBT), while maintaining some confidence that the country’s nuclear deterrence capabilities remain intact. The responsibility to support our nuclear arsenal virtually has fallen on the NNSA’s Stockpile Stewardship Program, under the Department of Energy.
But the ability of these supercomputing models to be able to replace actual nuclear testing is still somewhat controversial. A report by Chris Schneidmiller at Global Security Newswire weighs some of pros and cons of physical versus simulated nuclear testing and the ramifications of our CTBT obligations. In particular, Schneidmiller begins by pointing out that skeptics believe that “computer modeling cannot effectively replace actual testing in terms of ensuring the upkeep of today’s stockpile, nor for preparing new nuclear weapons that might one day be necessary to safeguard the United States from future threats.”
In addition new types of weapons might need to be developed to counter new types of threats. The Bush administration’s proposal for the so-called “bunker busting” nuke is one such example. Having to develop an entirely new bomb without ever being able to detonate it is problematic at best.
The problem is that without some sort of physical testing, there is no assurance that the real-world behavior of the weapons is being reflected in computer model. As former Defense Secretary Caspar Weinberger pointed out, the confidence that the weapons will work is the whole basis of our nuclear deterrence strategy. And the only way to demonstrate that is to test the devices.
Of course, the whole idea behind the Stockpile Stewardship Program is to demonstrate that confidence without the testing. According to Undersecretary of State for Arms Control and International Security Ellen Tauscher, the directors of national labs maintain that the program has “provided a deeper understanding of our arsenal than they ever had when testing was commonplace.”
A 2002 study from the National Academy of Sciences concluded that the US nuclear stockpile could indeed be maintained, given enough computing power and other technical resources. Particularly in the 1990s, whether supercomputers were capable of accurately simulating these weapon systems was an open question. Today, with petascale machines available, there is less concern about capability.
In March at the Carnegie International Nuclear Policy Conference, CTBT opponent Senator Jon Kyl said that Stockpile Stewardship Program offered “both good news and bad news” regarding our nuclear arsenal, but expressed reservations that the program was the ultimate answer to maintaining our nuclear deterrence. | <urn:uuid:e6b3b42f-d6cf-466e-b82c-89d6948fb4ce> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/07/15/nuclear_deterrence_in_supercomputing_we_trust/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00174-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932483 | 557 | 2.921875 | 3 |
US defence agency DARPA is planning to harness the power of the Internet of Things (IoT) for military purposes.
With the US looking for increasingly innovative ways to gain an advantage in the battlefield, Defense Advanced Research Projects Agency (DARPA) is investing in the development of sensors and artificial intelligence systems that could facilitate the extraction and analysis of information from enemy devices and communication.
With information and intelligence becoming more and more key in the guerrilla warfare of the 21st century, tailored IoT systems could arm the US with the data needed to stay one step ahead.
DARPA research and funding has been partly responsible for plenty of technologies that are commonplace today. It played a part in the development of the internet, the precursor of what we now know as virtual reality, and modern global positioning systems (GPS).
Read more: US Air Force Mulls IoT Deployment
But DARPA is also looking into ways the IoT can be used in security networks in the case of an attack on US soil. A research program aimed at preventing attacks involving radiological “dirty bombs” and other nuclear threats has successfully developed and demonstrated a network of smartphone-sized mobile devices that can detect the tiniest traces of radioactive materials, according to a news post on the agency’s website.
“Combined with larger detectors along major roadways, bridges, other fixed infrastructure, and in vehicles, the new networked devices promise significantly enhanced awareness of radiation sources and greater advance warning of possible threats,” it said.
The news post from August 2016 read: “Combined with larger detectors along major roadways, bridges, other fixed infrastructure, and in vehicles, the new networked devices promise significantly enhanced awareness of radiation sources and greater advance warning of possible threats.”
Fighters become a part of the IoT
Graham Grose, Industry Director of the IFS Aerospace & Defence Centre of Excellence, pointed out that fighter jets are becomingly increasingly connected, and able to gather huge quantities of data from a single flight.
“The military is no stranger to new technology,” he said. “Companies in the field have been taking advantage of 3D printing, wearable and virtual reality technology to improve efficiency and reduce operating costs – IoT included.”
“With IoT, inexpensive sensors can collect important flight data. For example, at the unveiling of the new Bombardier C series at the Paris Airshow last year, it was reported that the Pratt & Whitney PW1000G family engine has around 5000 sensors able to generate up to 10GB of data per second. This means a single twin-engine aircraft with an average flight time of 12 hours can be producing up to 844TB of data. To put this in perspective, it is estimated that Facebook accumulates around 600TB of data per day.”
Speaking to Internet of Business, Grose also highlighted the importance of separating useful data from the ‘noise’. “The next step is for the support of a maintenance system that can filter out the ‘noise’ and suggest actions that provide real business benefits,” he said. “In A&D, these include shortening flight times, optimising jet fuel consumption, improving engine efficiency and reducing maintenance time and cost.” | <urn:uuid:cb5a41c4-fbd8-4754-af0b-bc43d3ab82e8> | CC-MAIN-2017-04 | https://internetofbusiness.com/darpa-wants-militarise-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00082-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939246 | 676 | 2.890625 | 3 |
As the saying goes, to outcompete, a nation or business must out-compute. An explosion in the number of computationally-driven disciplines has created a huge demand for highly-trained scientists and engineers. Congress is currently considering a bill that would help bridge the skills gap and bolster national competitiveness.
The Computer Science Education Act (HR 2536) seeks to make computer science a core competency by strengthening elementary and secondary computer science education. While science and engineering were hallmarks of innovation over the 19th and 20th centuries, what sets the 21st century apart is the rise of information technology and the knowledge-based economy. As the bill’s authors point out, computer science drives the information technology sector of the United States, which is a key contributor to the economic output of the nation.
Last week, the bill hit at tipping point, reaching the level of 100 supporters from both sides of the aisle, making it the most broadly cosponsored education bill in the House. At last count, 60 Republicans and 52 Democrats had signed on as co-sponsors.
The news prompted Code.org COO Cameron Wilson to blog:
“Even in a polarized Congress, computer science education has momentum and bipartisan backing.”
The act, which was introduced by Representatives Susan Brooks (R-IN) and Jared Polis (D-CO) on June 27, 2013, makes computer science a core academic subject by amending title IX (General Provisions) of the Elementary and Secondary Education Act of 1965.
Under the amendment, “computer science” is defined as the study of computers and algorithmic processes, including the study of computing principles, computer hardware and software design, computer applications, and the impact of computers on society.
The folks at Code.org are working hard to bring attention to this bill, which is cost-neutral and doesn’t introduce new programs or mandates.
“[The bill] removes barriers that make it harder for states to use Federal funding for computer science education,” explains Wilson, and “clarifies that federal programs can fund computer science programs and can support local educators who want to put computer science in our schools.”
The full text presents a list of findings that argue in favor of stronger computer science education, including the following:
- The Bureau of Labor Statistics predicts that there will be 9,200,000 jobs in the fields of science, technology, engineering, and mathematics by the year 2020. Half of these, or 4,600,000 jobs, will be in computing.
- In the 2012-2013 school year, only nine states allowed computer science courses to count toward secondary school core graduation requirements, chilling student interest in computer science courses.
- While students who take the College Board’s AP computer science test are eight times more likely to major in computer science in college, in 2011, only 1 percent of all AP exams were in computer science. The test also highlighted the STEM gender gap with male test-takers outnumbering females by four to one.
A curriculum framework that would support the goals of this bill already exists. The Association for Computing Machinery and the Computer Science Teachers Association established a four-part, grade-appropriate framework of standards for computer science education to guide local and state efforts. The first part (Level I), intended for K-8, focuses on basic computer literacy skills. There is also a second and third level, and even an optional fourth level for advanced high school learners.
The backers of the “Computing in the Core” curriculum write:
“After completing any of these courses, students have useful and marketable knowledge and skills. The highest-level courses will impart very specific skills, including the ability to design and implement solutions to problems by writing, running, and debugging computer programs, the ability to use and implement commonly‐used algorithms and data structures, and to develop and select appropriate algorithms and data structures to solve problems. While these skills may sound highly technical, they teach core critical thinking skills that young people need to be successful – in computing or any field – in the 21st Century.” | <urn:uuid:6dc2ced8-d5fe-4ca7-be34-ea590c5c9386> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/08/04/computer-science-education-act-hits-critical-milestone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00202-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941779 | 851 | 2.859375 | 3 |
After the Stuxnet worm exploited a zero-day vulnerability in a popular industrial controller, it's clear that operators of large-scale infrastructure management systems need to work with the IT security community to better safeguard these critical systems.
Industrial Control Systems (ICS) are used by utility companies and manufacturers to manage critical infrastructures worldwide, including electric power plants, oil/gas operations, pipelines, mining operations and transportation. Today's security problems are like never before— which is why those working in ICS need help from those working in the IT security industry.
ICSs include Supervisory Control and Data Acquisition (SCADA); Distributed Control Systems (DCS); Programmable Logic Controllers (PLC); Remote Terminal Units (RTU); Intelligent Electronic Devices (IED); field controllers; sensors; emission controls; building controls such as fire suppression, thermostats and elevator controls; and automated business and residential meters.
ICSs measure, control and provide the operator a view of the process. The operator view is often Windows-based and appears to be traditional business IT technology. However, the field devices that measure and control the process use proprietary operating systems and communication protocols and have their own unique characteristics. These field systems do not look like business IT systems and are technically and administratively different from IT systems. Even security policies are different: ISO-27001 applies to IT, but ICSs utilize ICS-specific policies such as those from the International Society for Automation (ISA). ICSs used to be isolated – out of sight, out of mind.
But that's all changing. ICSs are being upgraded with advanced communication capabilities and networked (including to the Internet) to improve process efficiency, productivity, regulatory compliance and safety.
These networks can be within a facility or even between facilities that are continents apart. When an ICS does not operate properly, the resulting problems can range in impact from minor to catastrophic, including deaths and physical destruction.
Until recently, ICS were not specifically targeted by hackers and were only impacted by the law of unintended consequences when these systems were connected to the Internet.
That changed last month with the Stuxnet worm. The worm was directed at a very popular process controller (Siemens Simatic Programmable Logic Controller) and exploited a zero-day vulnerability in the PLC's WINCC SQL database.
The exploit lay bare the disconnect between the IT and ICS communities. This particular PLC (as well as many other ICSs) burned the default passwords in software. The hackers exploited this design to get access to the database.
The nominal response would be to change the default password. However, because of the controller software, a change to the default password would shut down the PLC since the applications depend on that password.
Now what's needed is for the IT community to help the ICS community secure these thousands of devices, even though the default passwords cannot be changed.
Cultural issues compound security
It can be argued that the ICS community is about 10 years behind the IT community in securing systems. We need help to catch up. However, cultural issues between the IT and ICS communities make this difficult.
Unfortunately, there are competing technical and administrative requirements between IT and ICS systems as well as inter-departmental conflicts because of scarce dollars. The IT community understands security, but not the technical domain of these systems. Conversely, the ICS community understands the technical domain but not security. We need to get both sides working together.
Moreover, the Stuxnet worm should once and for all dispel the notion that ICSs are not susceptible to targeted cyber attacks. I've written a book, Protecting Industrial Controls Systems from Electronic Threats that details the specific differences between IT and ICS systems and provides approximately 20 actual ICS cyber incidents (there have been more than 170 ICS cyber incidents to date including four that have killed people).
We also need the forensics community working with us as there are minimal ICS cyber forensics capabilities. Most of the 170 ICS cyber incidents were not identified as cyber. The 10th Applied Control Solutions ICS Cyber Security Conference will be held Sept. 20-23 in Rockville, Md. This conference is focused exclusively on cyber security of ICSs. As an example of the need for ICS cyber forensics, two ICS engineers spoke at last year's conference. Each had multiple ICS cyber impacts - one actually shut down a major coal-fired power plant. However, in both cases, the logging was not capable of identifying who or when.
The bottom line is that we need help and soon. A major ICS cyber incident can cause mind-numbing consequences as can be seen from the recent BP oil spill.
Joe Weiss is managing partner at Applied Control Solutions and author of "Protecting Industrial Control Systems from Electronic Threats."
Read more about wide area network in Network World's Wide Area Network section.
This story, "Opinion: IT Needs to Help Secure Industrial Control Systems" was originally published by Network World. | <urn:uuid:a1b12024-c59f-4f1e-9178-b408a91972f3> | CC-MAIN-2017-04 | http://www.cio.com/article/2416071/security0/opinion--it-needs-to-help-secure-industrial-control-systems.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00320-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955672 | 1,037 | 2.6875 | 3 |
One of the most enduring software components in the world is now celebrating a 25th birthday, the GNU C Compiler or GCC as it's commonly known.
The GNU C Compiler is a tool that compiles the even more enduring C programming language into binaries that can be executed on a variety of platforms. When Richard Stallman launched the first version in 1987, few have believed how far the open source tool would go.
GCC had a critical role to play in pioneering computing and it would also come to be synonymous with the open source operating system Linux, which makes continual use of the compiler in every day activities such as updating packages and applying updates.
In fact GCC is a critical software component in much of the computing infrastructure of the world. Even while commercial alternatives tend to be used to build Windows ecosystem applications, GCC itself is taught to every computer science student precisely because of the open source and cross platform nature of the compiler.
Far from being some piece of digital history, GCC is being improved with new versions just as rapidly today as at any time in the previous 25 years. On the eve of the anniversary milestone, the GCC developers unveiled version 4.7.0 with a raft of new features.
One such feature is an "improved link-time optimisation" system which reduces memory use considerable. With earlier versions of GCC, compiling the Firefox web browser needed 8GB of RAM to be optimised while on the new version it needs just 3GB.
GCC 4.7.0 also adds support for further elements of the proposed the next-generation version of C programming language dubbed C++11. A full list of changes for GCC 4.7.0 is here and Linux.com has a report featuring a brief history of the compiler. | <urn:uuid:a5b44914-1100-44d9-a87d-f56875bd6c75> | CC-MAIN-2017-04 | http://www.pcr-online.biz/news/read/gnu-c-compiler-turns-25/028123 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00376-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934866 | 357 | 3.71875 | 4 |
Wilson W.C.,Arthropod Borne Animal Diseases Research Unit |
Bennett K.E.,Arthropod Borne Animal Diseases Research Unit
Journal of Medical Entomology | Year: 2010
To determine which arthropods should be targeted for control should Rift Valley fever virus (RVFV) be detected in North America, we evaluated Culex erraticus (Dyar and Knab), Culex erythrothorax Dyar, Culex nigripalpus Theobald, Culex pipiens L., Culex quinquefasciatus Say, Culex tarsalis Coquillett, Aedes dorsalis (Wiedemann), Aedes vexans (Meigen), Anopheles quadrimaculatus Say, and Culicoides sonorensis Wirth and Jones from the western, midwestern, and southern United States for their ability to transmit RVFV. Female mosquitoes were allowed to feed on adult hamsters inoculated with RVFV, after which engorged mosquitoes were incubated for 721 d at 26°C, then allowed to refeed on susceptible hamsters, and tested to determine infection, dissemination, and transmission rates. Other specimens were inoculated intrathoracically, held for 7 d, and then allowed to feed on a susceptible hamster to check for a salivary gland barrier. When exposed to hamsters with viremias ≥108.8 plaque-forming units/ml blood, Cx. tarsalis transmitted RVFV efficiently (infection rate = 93%, dissemination rate = 56%, and estimated transmission rate = 52%). In contrast, when exposed to the same virus dose, none of the other species tested transmitted RVFV efficiently. Estimated transmission rates for Cx. erythrothorax, Cx. pipiens, Cx. erraticus, and Ae. dorsalis were 10, 8, 4, and 2%, respectively, and for the remaining species were ≤1%. With the exception of Cx. tarsalis and Cx. pipiens, all species tested had moderate to major salivary gland barriers. None of the C. sonorensis became infected and none of the An. quadrimaculatus tested transmitted RVFV by bite, even after intrathoracic inoculation, indicating that these species would not be competent vectors of RVFV. Although Ae. vexans from Florida and Louisiana were relatively efficient vectors of RVFV, specimens of this species captured in Colorado or California were virtually incompetent, illustrating the need to evaluate local population for their ability to transmit a pathogen. In addition to laboratory vector competence, factors such as seasonal density, host feeding preference, longevity, and foraging behavior should be considered when determining the potential role that these species could play in RVFV transmission. Source
Wilson W.C.,Arthropod Borne Animal Diseases Research Unit |
Romito M.,Onderstepoort Veterinary Institute |
Jasperson D.C.,Arthropod Borne Animal Diseases Research Unit |
Weingartl H.,Canadian Food Inspection Agency |
And 6 more authors.
Journal of Virological Methods | Year: 2013
Outbreaks of Rift Valley fever in Kenya, Madagascar, Mauritania, and South Africa had devastating effects on livestock and human health. In addition, this disease is a food security issue for endemic countries. There is growing concern for the potential introduction of RVF into non-endemic countries. A number of single-gene target amplification assays have been developed for the rapid detection of RVF viral RNA. This paper describes the development of an improved amplification assay that includes two confirmatory target RNA segments (L and M) and a third target gene, NSs, which is deleted in the Clone 13 commercial vaccine and other candidate vaccines. The assay also contains an exogenous RNA control added during the PCR setup for detection of amplification inhibitors. The assay was evaluated initially with samples from experimentally infected animals, after which clinical veterinary and human samples from endemic countries were tested for further evaluation. The assay has a sensitivity range of 66.7-100% and a specificity of 92.0-100% depending on the comparison. The assay has an overall sensitivity of 92.5%, specificity of 95% and a positive predictive value of 98.7%. The single-tube assay provides confirmation of the presence of RVFV RNA for improved confidence in diagnostic results and a "differentiate infected from vaccinated animals" (DIVA) - compatible marker for RVFV NSs - deleted vaccines, which is useful for RVF endemic countries, but especially important in non-endemic countries. © 2013. Source
Miller M.M.,University of Wyoming |
Bennett K.E.,Arthropod Borne Animal Diseases Research Unit |
Bennett K.E.,Colorado State University |
Drolet B.S.,Arthropod Borne Animal Diseases Research Unit |
And 5 more authors.
Clinical and Vaccine Immunology | Year: 2015
Rift Valley fever virus (RVFV) causes serious disease in ruminants and humans in Africa. In North America, there are susceptible ruminant hosts and competent mosquito vectors, yet there are no fully licensed animal vaccines for this arthropod-borne virus, should it be introduced. Studies in sheep and cattle have found the attenuated strain of RVFV, MP-12, to be both safe and efficacious based on early testing, and a 2-year conditional license for use in U.S. livestock has been issued. The purpose of this study was to further determine the vaccine's potential to infect mosquitoes, the duration of humoral immunity to 24 months postvaccination, and the ability to prevent disease and viremia from a virulent challenge. Vaccination experiments conducted in sheep found no evidence of a potential for vector transmission to 4 North American mosquito species. Neutralizing antibodies were elicited, with titers of > 1:40 still present at 24 months postvaccination. Vaccinates were protected from clinical signs and detectable viremia after challenge with virulent virus, while control sheep had fever and high-titered viremia extending for 5 days. Antibodies to three viral proteins (nucleocapsid N, the N-terminal half of glycoprotein GN, and the nonstructural protein from the short segment NSs) were also detected to 24 months using competitive enzyme-linked immunosorbent assays. This study demonstrates that the MP-12 vaccine given as a single dose in sheep generates protective immunity to a virulent challenge with antibody duration of at least 2 years, with no evidence of a risk for vector transmission. Copyright © 2015, American Society for Microbiology. All Rights Reserved. Source
Gaudreault N.N.,Arthropod Borne Animal Diseases Research Unit |
Indran S.V.,Kansas State University |
Indran S.V.,Institute of Life science |
Bryant P.K.,Arthropod Borne Animal Diseases Research Unit |
And 2 more authors.
Frontiers in Microbiology | Year: 2015
Rift Valley fever virus (RVFV) causes disease outbreaks across Africa and the Arabian Peninsula, resulting in high morbidity and mortality among young domestic livestock, frequent abortions in pregnant animals, and potentially severe or fatal disease in humans. The possibility of RVFV spreading to the United States or other countries worldwide is of significant concern to animal and public health, livestock production, and trade. The mechanism for persistence of RVFV during inter-epidemic periods may be through mosquito transovarial transmission and/or by means of a wildlife reservoir. Field investigations in endemic areas and previous in vivo studies have demonstrated that RVFV can infect a wide range of animals, including indigenous wild ruminants of Africa. Yet no predominant wildlife reservoir has been identified, and gaps in our knowledge of RVFV permissive hosts still remain. In North America, domestic goats, sheep, and cattle are susceptible hosts for RVFV and several competent vectors exist. Wild ruminants such as deer might serve as a virus reservoir and given their abundance, wide distribution, and overlap with livestock farms and human populated areas could represent an important risk factor. The objective of this study was to assess a variety of cell lines derived from North American livestock and wildlife for susceptibility and permissiveness to RVFV. Results of this study suggest that RVFV could potentially replicate in native deer species such as white-tailed deer, and possibly a wide range of non-ruminant animals. This work serves to guide and support future animal model studies and risk model assessment regarding this high-consequence zoonotic pathogen. © 2015 Gaudreault, Indran, Bryant, Richt and Wilson. Source
Britch S.C.,Center for Medical |
Binepal Y.S.,Kenya Agricultural Research Institute |
Ruder M.G.,Arthropod Borne Animal Diseases Research Unit |
Kariithi H.M.,Kenya Agricultural Research Institute |
And 8 more authors.
PLoS ONE | Year: 2013
Since the first isolation of Rift Valley fever virus (RVFV) in the 1930s, there have been multiple epizootics and epidemics in animals and humans in sub-Saharan Africa. Prospective climate-based models have recently been developed that flag areas at risk of RVFV transmission in endemic regions based on key environmental indicators that precede Rift Valley fever (RVF) epizootics and epidemics. Although the timing and locations of human case data from the 2006-2007 RVF outbreak in Kenya have been compared to risk zones flagged by the model, seroprevalence of RVF antibodies in wildlife has not yet been analyzed in light of temporal and spatial predictions of RVF activity. Primarily wild ungulate serum samples from periods before, during, and after the 2006-2007 RVF epizootic were analyzed for the presence of RVFV IgM and/or IgG antibody. Results show an increase in RVF seropositivity from samples collected in 2007 (31.8%), compared to antibody prevalence observed from 2000-2006 (3.3%). After the epizootic, average RVF seropositivity diminished to 5% in samples collected from 2008-2009. Overlaying maps of modeled RVF risk assessments with sampling locations indicated positive RVF serology in several species of wild ungulate in or near areas flagged as being at risk for RVF. Our results establish the need to continue and expand sero-surveillance of wildlife species Kenya and elsewhere in the Horn of Africa to further calibrate and improve the RVF risk model, and better understand the dynamics of RVFV transmission. Source | <urn:uuid:8fe30399-4a04-4f67-80bf-33e45cd02c8a> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/arthropod-borne-animal-diseases-research-unit-2054656/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00065-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916456 | 2,235 | 2.65625 | 3 |
Cybercriminals are getting better at creating fake emails
Not every fake e-mail is as obvious as those telling you that you won the lottery in a foreign country and that they need a small fee upfront for currency exchange. Spotting the difference between legitimate and fake e-mails is getting more difficult as criminals become more sophisticated in their efforts. We have included a few signs below to help you determine if the e-mail you received may be spoofed:
- Don’t open e-mails with attachments or links from people you do not know. These types of e-mails are often vehicles for malicious software.
- Examine e-mail addresses closely. Fake e-mails often use e-mail addresses with similar-sounding titles but from fictitious e-mail boxes. Quite often, a fake e-mail address will include signs, symbols, or strings of letters.
- Don’t respond to e-mails with deadlines, or those marked “urgent,” as they are often fake e-mails. The need to respond to a “limited time” offer, or a request to respond to avoid penalties, are often signs that the e-mail is not legitimate.
- Watch out for poor grammar – it is often a telltale sign of a fake e-mail. Many e-mail scams originate in foreign countries, which means the author of the e-mail doesn’t speak or write English fluently.
- Watch out for official-looking e-mail addresses that end with free e-mail services, such as support@gmail, cardservices@hotmail, or technicalsupport@yahoo. A quick glance may make you think it is a real e-mail, but it is not. Scammers use this approach because they know people are used to being contacted with similar e-mail addresses from trusted companies. | <urn:uuid:28c4bcab-7152-495a-9c84-e5a01e0f1381> | CC-MAIN-2017-04 | http://news.centurylink.com/blogs/security/blogpost-1455915001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00093-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970572 | 384 | 2.59375 | 3 |
Scientists at Stanford devised a “virtual earthquake” technique capable of predicting the effects of a major quake occurring along the southern San Andreas Fault.
The remarkable thing about the new technique is that it relies on weak vibrations generated by the Earth’s oceans to create these ‘virtual earthquakes’ in order to forecast resultant ground movement and shaking hazard in the event of a real quake.
The innovative research appears in the Jan. 24 issue of the journal Science. The results show that if a major quake occurs south of the city, Los Angeles will experience stronger-than-expected ground movement.
Lead author Marine Denolle recently received a PhD in geophysics from Stanford and now works at the Scripps Institution of Oceanography in San Diego. “We used our virtual earthquake approach to reconstruct large earthquakes on the southern San Andreas Fault and studied the responses of the urban environment of Los Angeles to such earthquakes,” she explains.
The research is based on the fact that even in the absence of a quake, there is still background seismic activity. “If you put a seismometer in the ground and there’s no earthquake, what do you record? It turns out that you record something,” notes study leader Greg Beroza, a geophysics professor at Stanford.
The instruments are perceiving a continuous signal called the ambient seismic field. The field is generated by ocean waves interacting with solid Earth. The crashing of the waves creates a pressure pulse that undulates through the ocean to the sea floor and into the Earth’s crust. Beroza explains that the waves are billions of times weaker than the seismic waves caused by earthquakes.
Although the scientific community has known about the ambient seismic field for a century or so, it was mainly viewed as a nuisance to conducting earthquake research. Because the field is weak and appears randomly in the earth’s crust, it was also difficult to isolate, but techniques have improved over the last decade. New signal-processing techniques are better able to track the waves as they travel through one seismometer to another.
The research group further refined these techniques and set up seismometers along the San Andreas Fault to measure ambient seismic waves. Using the data from the seismometers, the group employed mathematical techniques to make the waves appear as if they came from deep within the Earth, where real earthquakes occur.
The team confirmed the accuracy of their virtual earthquake approach by comparing the new predictions with supercomputer simulations from 2006. The promising aspect of the new technique is that it does not require the same level of computational power, so it is much more affordable.
The primary finding from the study was that Los Angeles is at a risk for increased ground movement if a large earthquake, magnitude 7.0 or greater, were to take place along the southern San Andreas Fault, near the Salton Sea.
“The seismic waves are essentially guided into the sedimentary basin that underlies Los Angeles,” Beroza said. “Once there, the waves reverberate and are amplified, causing stronger shaking than would otherwise occur.”
The next step for the group is to test their virtual earthquake technique in other cities around the world that are located on top of sedimentary basins. Examples of such locales include Tokyo, Mexico City, Seattle and parts of the San Francisco Bay area. | <urn:uuid:f740100a-cc46-44f9-a9b9-074339523ff7> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/01/28/virtual-earthquakes-predict-quake-risk/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00055-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950476 | 682 | 4.0625 | 4 |
A couple of weeks ago I wrote about NASA's commitment to get "back to the moon," as an agency official said. I was unenthused:
Maybe it's best to be cautious, but this strikes me as an unnecessary and wasteful space goal. There's nothing interesting on the moon. We've been there before. If we're going to do space, let's do it boldly. Anything less is pointless.
A couple of readers politely objected. Here's astropaz:
We are far from knowing all there is to know about the Moon, the Apollo missions brought back a wealth of knowledge about how the solar system has formed, a telescope on the far side of the Moon would out perform any land based or LEO based telescope and answer many more of the questions we have about the universe.
And JH commented:
If someone builds a giant sea going vessel, should they do the final assembly and launch it from Nebraska? or is it better to build a dry dock then assemble and launch the vessel from a port as close to the water as possible?
Both of these commenters are right. There's plenty more to learn about the moon (and, therefore, Earth, our solar system and beyond), plus the moon can serve as a useful base for scientific and logistical reasons.
But if I had written that post the day before or the day after, I easily could have argued that returning to the moon was a logical step. The truth is I go back and forth about how ambitious human space exploration should be.
(You also can read: Bizarre stuff you never knew about Venus and Mars and The (literally) unbelievable UFO war in Antarctica)
And that's because I'm objective about the technologically daunting, fantastically expensive and dangerous challenge of sending humans to live in space and on another planet. Space is an absolutely unforgiving environment for humans. And so are our nearest planetary neighbors.
Which leads to billionaire Elon Musk's ambitious goal to start what he hopes eventually could be a colony of 80,000 humans living on Mars.
Musk, founder of private spaceflight company SpaceX, laid out his plans in a recent speech to the Royal Aeronautical Society in London. As reported by Space.com's Rob Coppinger, Musk "wants to help establish a Mars colony of up to 80,000 people by ferrying explorers to the Red Planet for perhaps $500,000 a trip."
In Musk's vision, the ambitious Mars settlement program would start with a pioneering group of fewer than 10 people, who would journey to the Red Planet aboard a huge reusable rocket powered by liquid oxygen and methane. ...
Accompanying the founders of the new Mars colony would be large amounts of equipment, including machines to produce fertilizer, methane and oxygen from Mars’ atmospheric nitrogen and carbon dioxide and the planet's subsurface water ice.The Red Planet pioneers would also take construction materials to build transparent domes, which when pressurized with Mars’ atmospheric CO2 could grow Earth crops in Martian soil. As the Mars colony became more self sufficient, the big rocket would start to transport more people and fewer supplies and equipment.
What the article doesn't mention is how fast the "big rocket" will travel in space. Right now it takes from 150 to 300 days for spacecraft to reach Mars. I'm not sure whether the extra size of the rocket -- which is now in prototype and which Musk hopes is ready to use in five or six years -- is intended to cut the travel time or carry a large payload. If it's the latter, we're talking about humans traveling in space for anywhere from five to 10 months. There are a lot of unknowns involved in that alone, never mind trying to create a sustainable environment for humans on Mars.
The other question I have about Musk's vision is the time frame. Let's say SpaceX can get people to the surface of Mars by 2025. How long will it take to go from that initial handful of Mars colonists to a community of 80,000? By the end of the century? By 2075?
I really, really want to believe this is possible in my lifetime. You're reading the words of a guy who told his lunch tablemates in second grade that his father was an astronaut (a total lie, but unverifiable by a bunch of 7-year-olds in the pre-Internet era!), a guy who was riveted by Neil Armstrong's first steps on the moon, a guy who was thrilled by the initial reports (later dismissed) that the Viking landers had detected evidence of life on Mars in 1976, a guy who loved the exploits of the Robinson Family in Lost in Space. I want us to reach for the stars! Because space is the final frontier!
But I don't want humans -- even willing and daring ones -- to die a lonely, horrible death in space because we really weren't technologically, scientifically or physically advanced enough for survivable deep-space travel.
I'm curious to see what readers think about this whole topic. Specifically:
* Can a colony of humans survive indefinitely on Mars?
* If so, how soon could that happen?
* What types of people should be among the pioneers? (Let's assume price is not an object because money could be raised to cover the ticket price for certain people.)
* When will we see a rocket fast enough to make the trip to Mars in a matter of weeks or even days?
* What would the propellant be?
* Is there an upper limit to how fast humans can safely travel in space? | <urn:uuid:98e999ec-3247-4c31-9bfb-e1336ee8d9f8> | CC-MAIN-2017-04 | http://www.itworld.com/article/2716177/hardware/elon-musk-s-80-000-person-mars-colony--daring-dream-or-crazy-talk--tell-us-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00083-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960811 | 1,136 | 2.578125 | 3 |
Many industry observers have noticed that as each generation of Intel processors has delivered more compute power than its predecessor thorough a combination of faster clock rates and core multiplication, each generation of disk drives got not faster but bigger. In fact, this growing performance gap is frequently used as a justification for flash-based solid-state drives (SSDs). After all, if your disk drives can't keep your servers busy processing data, introducing some flash can speed up your applications.
Since the controllers on almost all of today's storage systems are based on the same processors as your servers, the processor/disk performance gap has empowered manufacturers to add CPU-intensive features like thin provisioning, snapshots and replication while also having each generation of controllers manage more capacity. A modular array with a petabyte of storage would have been unthinkable just a few years ago, but most vendors' products can do that today.
As vendors have added SSD support to their existing storage systems, they've discovered that for the first time in years the processors in those systems are running short on compute power. The problem is that the amount of processing power a controller needs isn't a function of the capacity it manages but the number of IOPS the storage it manages can deliver. The 1,000 disk drives a typical modular array can manage deliver a total of somewhere between 100,000 and 200,000 IOPS, while a single typical MLC SSD can deliver 20,000 to 40,000 IOPS. Put more than a handful of SSDs in an array designed for spinning disks, and the bottleneck will quickly shift from the disk drives to the controller.
Just as flash has forced us to start thinking about storage costs in dollar per IOP in addition to dollar per gigabyte, storage system designers have to think not about CPU cycles per gigabyte or CPU cycles per spindle, but CPU cycles per IOP when designing their systems.
If you look at the latest crop of all-solid-state or clean-slate hybrid array designs from companies like Pure Storage, Nimble, NexGen, Tegile or Tintri, they aren't traditional scale-up designs that support four or more drive shelves from a single set of controllers. Instead, these vendors have limited expandability to make sure they have enough CPU to manage the storage in each system. This also ensures that they have CPU cycles for features like compression and data deduplication that reduce the cost/capacity gap between flash and disk storage.
Clearly, if we're going to have all-solid-state or even substantially solid-state arrays that manage more than 50 or so SSDs, those systems are going to need more compute horsepower. The easiest way to deliver that is a scale-out architecture. The next-generation vendors that are supplying significant expansion like Kaminario, SolidFire, Whiptail and XtremIO are using a scale-out architecture that adds compute power as it adds storage capacity. Those that don't are relying on host storage management features like vSphere Storage DRS and Windows Server 2012's Storage Spaces to make managing multiple independent storage systems easier.
I have seen the future and it is scale-out. Not just for files and big data, but for everyone. | <urn:uuid:a803d859-89e9-452c-8677-c631cfa033ff> | CC-MAIN-2017-04 | http://www.networkcomputing.com/storage/i-have-seen-future-solid-state-storage-and-its-scale-out/1519498914 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945538 | 652 | 2.53125 | 3 |
Apple products have historically been thoughtfully designed so that people with disabilities can enjoy using them without any hindrance. User accessibility is so important to Apple that they even have a page on their website devoted to assistive technology, which it describes thusly:
Apple includes assistive technology in its products as standard features — at no additional cost. For example, iPhone, iPad, iPod, and Mac OS X include screen magnification and VoiceOver, a screen-access technology, for the blind and visually impaired. To assist those with cognitive and learning disabilities, every Mac includes an alternative, simplified user interface that rewards exploration and learning...
Apple continues to set a high standard for accessibility. Inventions such as braille mirroring, which enables deaf and blind kids to work together on the same computer at the same time; the world’s first screen reader that can be controlled using gestures; and captioning of downloadable digital movies are perfect examples of Apple innovation.
Recently, there have been a number of stories discussing the iPad's impact on the visually impaired. In one case, a woman suffering from an eye disorder called Macular degeneration was aboe to see the faces of her kids for the first time in more than 30 years.
With Apple's attention to assistive technology as a backdrop, Stevie Wonder recently thanked Steve Jobs for all he's done with the iPhone and iPad. The pertinent portion of the video starts at 4:37.
"And I want you all to give a hand to someone that you know whose health is very bad at this time... his company took the challenge in making his technology accessible to everyone. In the spirit of caring and moving the world forward - Steve Jobs. Because there's nothing on the iPhone or iPad that you can do that I can't do. As a matter of fact, i can be talking to you, you can be looking at me, and I can be doing whatever I need to do and you don't even know what I'm doing. Yeah!
via The Next Web | <urn:uuid:4835e974-add3-4547-9b7c-d1349fa0f9ab> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2220656/wireless/stevie-wonder-thanks-steve-jobs-for-making-the-iphone-and-ipad-so-accessible.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00019-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963138 | 410 | 2.921875 | 3 |
The ability to explore the ramifications of disaster-level events is a critical skill that all IT departments should develop. Use scenario planning as a valuable tool for testing the organization's disaster recovery plan (DRP).
What Is Scenario Planning?
The purpose of scenario planning is to determine if a company's strategies are strong enough to ensure IT continuity in the face of a disaster or unplanned downtime. It is an exercise in speculation, where multiple worst-case situations are imagined and response strategies for dealing with them are mapped out. These can be scenarios for technological, economic, political, or environmental calamities. | <urn:uuid:120e8926-9850-47db-a960-24f710ccd626> | CC-MAIN-2017-04 | https://www.infotech.com/research/act-out-the-worst-case-with-scenario-planning | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00321-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911524 | 122 | 3.234375 | 3 |
When well-known lawyer and Stanford law lecturer Jonathan Mayer was invited to teach a course on government surveillance on Coursera, the popular online website offering free online university-level courses, he was excited.
But being also a computer scientist, he didn’t resist analyzing and poking around the platform that enables the teachers to teach and the course-takers to learn, and he found some issues that can be exploited to compromise the privacy of the students, namely to:
- Make a complete list of all the students (names and email addresses),
- Reveal information about the courses they take to random websites, and
- Undo the protection (supposedly) provided them by the use of external and internal IDs.
To prove the exploitation potential of his findings, he created PoC code for the first two vulnerabilities. He has managed to fetch 1,000 user names and email addresses from the student database, and for extracting course information about the users, he implemented code in a test page that retrieves it.
The last issue had to do with the fact that external IDs were easily reversible hashes of either a small number or the internal ID and, knowing this, it is trivial to build a dictionary of internal and external IDs, Mayer noted. But this particular problem can be easily solved by removing external IDs altogether, as their existence and use does not bring any security or privacy benefit, he pointed out.
He notified Coursera of all of these pitfalls, and the company has partially solved the first one but has yet to address the second one. Luckily, changes to solve these problems should be easy to implement.
For more information about the flaws, check out the original blog post. | <urn:uuid:b2cf8dd9-6779-49cd-9110-5903ce37abab> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/09/05/coursera-privacy-issues-exposed/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00193-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954765 | 346 | 2.65625 | 3 |
The URL is kind of important in the browser, but a recent update to the Google Chrome Canary build has removed it. So instead of seeing the complete address you're visiting, you'll only see the domain name and a prompt to search Google or type in another URL. This is a horrible idea, for a number of reasons.
Savvy tech users sometimes edit parts of the URL in the address bar to achieve different things or more quickly access other pages on a website. For example, add "deturl.com/" to the beginning of a URL to unlock image editing, YouTube and other video tools, and more. Or get real-time search results from Google by tweaking the URL. With the full URL hidden, those power tools are no longer easily available.
More importantly, though, burying the URL could weaken users' security. Security company PhishMe discovered a flaw that would make it easier for attackers to trick users. On their blog post, PhishMe researchers wrote: "We’ve discovered that if a URL is long enough, Canary will not display any domain or URL at all, instead showing an empty text box with the ghost text “Search Google or type URL.” While Canary is intended to help the user identify a link’s true destination, it will actually make it impossible for even the savviest users to evaluate the authenticity of a URL." (The full blog post is 404ing, but you can still read the intro on PhishMe's blog.)
Even without this security flaw, hiding the URL (ugly or not) just seems wrong. As Allen Pike points out in his post: "I realize that URLs are ugly to look at, hard to remember, and a nightmare for security. Still, they are the entire point of the web."
Thankfully, the change is only in the experimental version of Chrome--and still just an experiment.
[h/t Ars Technica]
Read more of Melanie Pinola’s Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:c2f4233d-a9bc-44ed-abd0-fc2671e72282> | CC-MAIN-2017-04 | http://www.itworld.com/article/2698679/consumerization/burying-the-url-in-the-address-bar-is-a-very-bad-idea.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00495-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.888599 | 453 | 2.5625 | 3 |
Technological Singularity: What’s the Future of Artificial Intelligence?
While it may sound a bit too much like science fiction, Technological Singularity is a term used to describe the change that would occur when humans, technology, and artificial intelligence would intersect to such an extent that we are incapable of comprehending or predicting what the new race would be like, and humans after the change would no longer be able to fully relate to the previous race. Author Ray Kurzweil, a leading inventor and futurist who has made accurate predictions about technology in the past, describes The Singularity as “an era in which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today – the dawning of a new civilization that will enable us to transcend our biological limitations and amplify our creativity.”
The future of technology and artificial intelligence is hard for anyone to predict, as technology is enhancing at such rapid rates today. Will we have the ability to create superhuman intelligence to the point that human era will end? Some mathematicians, technology experts, and computer science experts think it is a possibility.
The History and Evolution of Technological Singularity Theory
Over the course of time, Singularity has evolved and changed as various experts have made different interpretations. The term “Singularity” in the sense of technology being responsible for a fundamental change in humans, was first used in 1958 by John von Neumann, a Hungarian mathematician and physicist. Stanislaw Ulam, also in 1958, imagined technology accelerating at such a rate that it would change human life to the point that life as we know it would forever change. Statistician I.J. Good coined the term “intelligence explosion,” rather than using the term “Singularity.” Good envisioned “a positive feedback cycle within which minds will make technology to improve on minds which once started will rapidly surge upwards and create super-intelligence.” Good influenced mathematician, computer scientist, and science fiction author Vernor Vinge, who was the first to use “Singularity” in a technological sense, in 1986. He cited various potential causes of Singularity, including artificial intelligence, human biological enhancement, or brain-computer interfaces.
Technological Singularity Considerations
Depending on your personal views of technology, you may think that computers already have replaced humans. Factories employ robots, smartphones communicate for us, computers control dangerous weapons, cars can drive themselves, and so on. Yet, with all of this complexity of technology, these computers and machines continue to rely on human ingenuity and control. Humans program them, and they are not intuitive, so they cannot truly think or be self-aware.
Yet, as Jonathan Strickland points out in his article, Vernor Vinge warns that humans could “evolve beyond our understanding through the use of technology,” and achieve Singularity. In his essay, The Coming Technological Singularity: How to Survive in the Post-Human Era, Vinge predicts that superhuman intelligence will be developed prior to 2030. He envisions this happening in one of four ways: scientists will develop advancements in artificial intelligence, computer networks may become self-aware, computer-human interfaces will become advanced enough that humans will evolve into a new species, or biological advancements will allow humans to physically engineer human intelligence.
Of his four scenarios, Vinge discusses the first in greatest detail in the essay. Strickland breaks down Vinge’s theory by relating it to Moore’s Law, “which states that transistors double in power every 18 months.” According to Vinge, at that rate, it’s inevitable that humans will build a machine that can think like a human. This takes care of the hardware aspect, but Strickland reminds us that software will need to be developed that allows machines “to analyze data, make decisions, and act autonomously” if machines are truly going to begin to design and build better versions of themselves. In this scenario, which may seem like a movie, humans would be taken out of the equation as superhuman intelligence takes over and we would reach Singularity.
It is Kurzweil, though, who is regarded as having the most plausible theory of Technological Singularity, which often is referred to as “the accelerating change thesis.” Kurzweil’s book The Singularity Is Near: When Humans Transcend Biology defines Technological Singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.”
Further Reading and Information on Technological Singularity
Have we reached Singularity yet? No. Are we on the cusp? It’s possible, given that technological advances are made at such a lightning-fast pace today. If you’re intrigued by the possibility (or plausibility) of Singularity, you may want to check out some of the following links for additional information. | <urn:uuid:550200bd-3f77-44b6-bc47-9359d2e4040d> | CC-MAIN-2017-04 | https://www.clicksoftware.com/blog/technological-singularity-whats-the-future-of-artificial-intelligence/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00495-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947372 | 1,071 | 3.265625 | 3 |
Deep learning efforts today are run on standard computer hardware using convolutional neural networks. Indeed the approach has proven powerful by pioneers such as Google and Microsoft. In contrast neuromorphic computing, whose spiking neuron architecture more closely mimics human brain function, has generated less enthusiasm in the deep learning community. Now, work by IBM using its TrueNorth chip as a test case may bring deep learning to neuromorphic architectures.
Writing in the Proceedings of the National Academy of Science (PNAS) in August (Convolutional networks for fast, energy-efficient neuromorphic computing), researchers from IBM Research report, “[We] demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, perform inference while preserving the hardware’s underlying energy-efficiency and high throughput.”
The impact could be significant as neuromorphic hardware and software technology have been rapidly advancing on several fronts. IBM researchers ran the datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per watt). They report their approach allowed networks to be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. Basically, the new approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors.
“The new milestone provides a palpable proof of concept that the efficiency of brain-inspired computing can be merged with the effectiveness of deep learning, paving the path towards a new generation of chips and algorithms with even greater efficiency and effectiveness,” said Dharmendra Modha, chief scientist for brain-inspired computing at IBM Research-Almaden, in an interesting article by Jeremy Hsu on the IBM work posted this week on the IEEE Spectrum (IBM’s Brain-Inspired Chip Tested for Deep Learning.)
Shown here are dataset samples the researcher worked with.
As Hsu points out in the IEEE Spectrum article, “Deep-learning experts have generally viewed spiking neural networks as inefficient – at least, compared with convolutional neural networks – for the purposes of deep learning. Yann LeCun, director of AI research at Facebook and a pioneer in deep learning, previously critiqued IBM’s TrueNorth chip because it primarily supports spiking neural networks. (See IEEE Spectrum’s previous interview with LeCun on deep learning.)
“The IBM TrueNorth design may better support the goals of neuromorphic computing that focus on closely mimicking and understanding biological brains, says Zachary Chase Lipton, a deep-learning researcher in the Artificial Intelligence Group at the University of California, San Diego. By comparison, deep-learning researchers are more interested in getting practical results for AI-powered services and products.”
IBM is trying to widen that perspective. Clearly, understanding brain function better is an important element neuromorphic computing research but so, increasingly, is developing real-world applications. Lawrence Livermore National Laboratory has purchased a True-North-bases system to explore and in Europe the Human Brain Project has opened up its two big machines, SpiNNaker at Manchester University, U.K., and BrainSaleS in Germany to researchers to develop applications and explore neuromorphic computing.
The IBM paper authors describe the traditional deep learning challenge well: “Contemporary convolutional networks typically use high precision (32-bit) neurons and synapses to provide continuous derivatives and support small incremental changes to network state, both formally required for back-propagation-based gradient learning. In comparison, neuromorphic designs can use one-bit spikes to provide event-based computation and communication (consuming energy only when necessary) and can use low-precision synapses to co- locate memory with computation (keeping data movement local and avoiding off-chip memory bottlenecks).”
By introducing two constraints into the learning rule – binary-valued neurons with approximate derivatives and trinary-valued synapses – the researchers say it is possible to adapt backpropagation to create networks directly implementable using energy efficient neuromorphic dynamics.
“For structure, typical convolutional networks place no constraints on filter sizes, whereas neuromorphic systems can take advantage of blockwise connectivity that limits filter sizes, thereby saving energy because weights can now be stored in local on-chip memory within dedicated neural cores. Here, we present a convolutional network structure that naturally maps to the efficient connection primitives used in contemporary neuromorphic systems. We enforce this connectivity constraint by partitioning filters into multiple groups and yet maintain network integration by interspersing layers whose filter support region is able to cover incoming features from many groups by using a small topographic size,” write the researchers whose project was funded by DAPRA as part of its Cortical Processor program aimed at brain-inspired AI that can recognize complex patterns and adapt to changing environments,” write the researchers.
Shown below is a figure of both conventional convolutional network and the TrueNorth approach.
In the IEEE article, Modha notes TrueNorth’s general design as an advantage over those of more specialized deep-learning hardware designed to run only convolutional neural networks because it will likely allow the running of multiple types of AI networks on the same chip. He’s quoted saying, “Not only is TrueNorth capable of implementing these convolutional networks, which it was not originally designed for, but it also supports a variety of connectivity patterns (feedback and lateral, as well as feed forward) and can simultaneously implement a wide range of other algorithms.”
In their paper, the authors emphasize that their work demonstrates more generally that “the structural and operational differences between neuromorphic computing and deep learning are not fundamental and points to the richness of neural network constructs and the adaptability of backpropagation. This effort marks an important step toward a new generation of applications based on embedded neural networks.” It’s bet to read the paper in full for details of the work.
Link to Paper: http://www.pnas.org/content/early/2016/09/19/1604850113.full
Link to Jeremy Hsu’s IEEE Spectrum article: http://spectrum.ieee.org/tech-talk/computing/hardware/ibms-braininspired-chip-tested-on-deep-learning
Link to related HPCwire coverage: Think Fast – Is Neuromorphic Computing Set to Leap Forward? | <urn:uuid:70c097ef-bee5-4fd0-83ad-9a0a7754736e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2016/09/29/ibm-advances-neuromorphic-computing-deep-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00221-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918683 | 1,364 | 3.296875 | 3 |
Part Three: The Identity Context
All attacks involve some form of stolen identity.
According to Mandiant’s threat landscape study, 100 percent of breaches they investigated involve stolen credentials.
In our own studies — where we reverse-engineered malware and studied the source code of viruses such as Zeus — much of the functionality is designed to steal usernames, passwords and even SMS-delivered one-time passcodes.
As an example, Eurograbber redirected Android SMS messages, often referred to as TANs (Transaction Authorization Numbers). Another common example of stolen credentials is hashed Windows desktop credentials, which is known to be how hackers laterally move within a corporate network.
Kill Chain – Identity
The word identity only appears once in Lockheed Martin’s paper on the Kill Chain, and this was in relation to the identity of the intruder, not a stolen identity of the victim.
It could be argued that identity is at the core of every item on Lockheed Martin’s kill chain. From earliest reconnaissance to final actions on the target, an identity is either being actively searched, stolen or utilized.
Simplistic Identity Context
Some security vendors refer to the identity context and offer a customer the ability to create blacklist and whitelist rule sets such as:
– A finance employee should never have login access to computers containing sensitive intellectual property.
– Employee logins should not be authorized to access sensitive computers past 7 p.m., or on weekends.
– Never let an IT administrator access central financial servers (a rule which may often get overruled)
These rule sets (simplified from the complex levels they can attain), whether enforced at endpoints or by an internal firewall or IDS/IPS system, are all useful. But this does not take in to account inevitable flaws in the operating systems on the network. Nor does it address the biggest problem of securing identities, which is when they are ambiguous.
Identity-based security that does not address identity ambiguity is a way to give the advantage to the adversary.
– Privilege Escalation. Malware on a desktop PC assumes the identity of the person logged in. Privilege escalation can enable a hacker to move further toward their goal.
– Pass the Hash. Stolen hashes can assume the identity of anyone who has logged in to compromised PC. Adversaries can collect credentials as they move laterally in your network.
– Spoof. It is possible to spoof MAC addresses, IP addresses, etc. Protecting the device identity is very important. This is where device certificates can be very effective.
– Session-Riding. Authentication is vital, but insufficient for more sophisticated forms of malware.
Too Much Trust: Flat Networks
A critical infrastructure penetration tester once told me an anecdote about an enterprise that was faced with a unique situation. Parts of the organization’s network were inoperable at certain parts of the day, intermittently. It turned out that a person, who was tasked with mapping their network resources in a Visio diagram, was unknowingly taxing the entire corporate network — sometimes over very limited bandwidths to remote locations.
Network isolation and segmentation is not a hallmark of most corporate networks. Technologies like Active Directory were originally offered as a way to both simplify deployment and manage large numbers of network-connected objects and identities.
The lack of segmentation of these networks is patterned on the selling point of large-scale management. End-point security and network perimeters were good enough to protect us, right? We now know better. Networks are too flat, and too trusting.
Worst-Case scenario: When Critical Functions Trust Everyone
Imagine for a moment a programmable logic controller (PLC) device, which controls pumps and valves in a critical infrastructure plant. That PLC device has firmware that controls all of its functions and is connected to an Ethernet jack where it listens to commands and sends status reports. One of the commands it listens for is a firmware reprogramming.
If the PLC device receives a valid command to reprogram itself, it will. And it will never question the identity of who sent the command. If anyone remembers Stuxtnet, this may all sound very familiar.
Was the command authorized by the plant manager? Or could the command have been sent by malware? Or, maybe the command was sent by an insider threat, unbeknownst to the plant manager.
The PLC device trusts everyone on the network and will act on the command, regardless of the origin. It had always been assumed that perimeter defenses would never allow malicious activity inside the perimeter, but we now know this assumption to be a fallacy.
Internet of Things
Some people on the bleeding edge of technology may like the idea of their refrigerator automatically ordering groceries when supplies get low. And now Google has purchased the company that created the always-connected Nest smart thermostat. More and more things in our daily life will be connected to the Internet.
The explosion of connected devices now makes identity context even more important. Some may not be worried about refrigerator malware. And some may even trust Google to do non-evil things with the data they could potentially mine from connected thermostats and sensors. But I suggest that there are other things to worry about.
What about an Internet-connected pacemaker? The benefits of having a stream of data being sent from a pacemaker to a hospital would be very useful. But Dick Cheney’s doctors disconnected his pacemaker from the Internet for fears that it could be used to hurt him.
So, what is the underlying problem? These connected devices quite often trust the network too much. Sometimes their authentication strength isn’t strong enough and, even after mutual authentication, they continue to trust transactional commands being issued.
It isn’t Just Authentication Strength
Malicious weaponization has the capacity to sit and wait for strong authentication to occur. This is true on a desktop PC or an Internet-of-Things-connected device.
Commands that occur after strong authentication may not originate from you. During a session-riding attack, your identity, for the duration of the authenticated session, has essentially been stolen. This does not have to be a cookie-stealing event on a website, but complete PC takeover, either by remote control or by ‘logic bomb.’
Remote control would necessarily require some kind of command and control communication — part of the Kill Chain — and potentially can be detected. Other types of malware have shellcode loaders that enabled cloaked instruction transfer. Some malware comes pre-packaged with all the instructions it requires to achieve its goal — all without requiring external commands.
The sophistication of most of the attacks we have seen has been low to moderate. Attackers are bound by the same economic laws as the rest of us and will only spend as much resource as necessary to achieve their goal. The technology pipeline of the attackers is also stacked with things we have not seen yet.
If an inventory list of critical transactions for your corporate, government or critical infrastructure environment were in front of you, it would be frightening to consider that a malicious actor could enact those transactions without your knowledge or permission. The silence of the malicious toolsets is a reality, but it can be reduced by identity assurance. | <urn:uuid:e61b0f5a-51c0-47d6-8b1e-3e92007ca372> | CC-MAIN-2017-04 | https://www.entrust.com/identity-context/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00037-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944508 | 1,491 | 2.796875 | 3 |
SSL is the Secure Socket Layer. It is a protocol that encrypts a single TCP session. Using this Asymmetric Encryption, all data exchanged over a TCP socket can be cryptographically protected. SSL is the base of HTTPS - the secure World-Wide Web protocol.
SSL was designed by Netscape using algorithms invented by RSA (Rivest-Shamir-Adelman). Commercial implementations may be purchased from RSA. A free and robust implementation called SSLeay is also internationally available. Check your local legislation about Encryption to see if your government will let you download and use this software. | <urn:uuid:fce2edbd-b03c-4087-bbd7-e4b9cb34b37a> | CC-MAIN-2017-04 | http://hitachi-id.com/concepts/ssl.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00158-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.888249 | 121 | 3.515625 | 4 |
ZIFFPAGE TITLEA Hex in ComputingBy Tom Steinert-Threlkeld | Posted 2005-07-08 Email Print
Decades before the idea took hold in the dot-com era, Reader's Digest kept a "360-degree view" of each of its customerstracking every contact it ever had with a subscriber to its magazine or a purchaser of any of its condensed books or o
A Hex in Computing
The practical reality, in the early days, was much more prosaic.
There wasn't much discussion of "data warehousing" or "data mining" or anything you might want to call "business intelligence.''
In fact, before the Univac II came along, Reader's Digest's data warehouse consisted of 18 million stencils, little metal plates with subscribers' names, addresses and expiration dates on the front. They were used to create mailing labels, by pressing ink on them. Their edges were notched, to add marketing information to stencils selected for marketing campaigns.
Several rooms in the company's headquarters were devoted to this "prehistoric" system, as Otten terms it. About 100 women and men would toil in a stencil room, making sure each stencil was in the right sequence in the right tray for the right postal code. And, once removed, returned to its right place.
What they couldn't do was easily put customers in buckets they could do something with, like sell customers a new book or record. Or just simply put names in alphabetical order without shuffling cards by hand.
With the new file system and the battery of IBM 360s, they finally had a way of putting order into the universe.
"It was wonderfully awesome to do a sort of 10 million names,'' Burns says.
Being able to sort millions of records by name, state or street address was not the point. Figuring out what prompted each customer to buy more products was the mystery worth solving.
For Burns and 29 other programmers, that meant devising a schema that would compact a record of any offer made to any customer in an "atomic record" of four bytes per event. One byte would record the name of the product; a second the action that resulted (promoted, paid bill, canceled, etc.); a third the type of marketing effort (direct mail piece, house ad, etc.) that spurred the action; and another the month and year of the mailing or other marketing campaign.
Each byte mattered in an era of expensive hardware and expensive memory. IBM had spent $5 billion in 1964 ($28 billion in today's dollars) just to launch the 360.
In fact, Reader's Digest programmers had to figure out how to squeeze lots of information into fields that might only be one byte long. That meant writing in hexadecimal code, an approach taken to maximize use of the limited memory of the IBM machines. "That was the nature of the beast,'' Otten says.
Everyday math is based on decimal code: the numbers 0 through 9. The base is 10: the characters you know as numbers.
Hexadecimal code takes those 10 numbers and adds six letters. Readers' Digest chose A through F.
With decimal code, you can store only 100 different values in two digits: 10 times 10. With hexadecimal code, you can store 256 different values: 16 times 16.
Which happens to be the maximum amount of information that can be stored in a byte. IBM, with the 360, established the standard in the computing industry that a byte would be the equivalent of eight bits of information fed to a machine at a time. Those bits were ones or zeroes. Two different values, eight digits, multiplied into all their possible permutations equals256.
So, November 1987 would become 1C. November 1994 would be 70 and November 2004 was E8. More than two decades of months and years could be captured in 256 two-character combinations.
How could you tell what E8 meant? By looking it up in a table, kept on paper. Or in one's head.
Kahrs and Ritchie, the two lead developers, would define much of the foundation of the system, such as what values to put in the compacted fields. Did you need hex codes for active customers? Expired? Deadbeat? Temporary? Gift recipient?
But everyday "users" of the system wouldn't have to know hex code, project manager Otten decided. Key Reader's Digest policies, such as when to stop shipping products to a particular customer, subscription rates and who was entitled to which rate, would be kept in tables that could be pulled up on screen, altered and fed back into the system.
That separation of business purposeand putting it on screen in a form an everyday worker could see and deal withwas "unique" in a period when only gods working in air-conditioned rooms with raised floors could be experts in computing, according to Burns.
"To think of the system user was quite advanced,'' he says. Until then, dabbling in hex code or putting it to any kind of use "was strictly up the Mr. Wizards and the Mrs. Wizards." | <urn:uuid:02e66c8b-527e-491d-a00a-1c2dbc4e3088> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Projects-Data-Analysis/Readers-Digest-The-Longest-Goodbye/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00304-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963205 | 1,063 | 2.65625 | 3 |
Author: Richard Staron
Are you frustrated by your attempts to learn Oracle or improve your Oracle skills because of the sheer amount of technical documentation you have to wade through? This tutorial walks you step-by step through the process, showing you exactly what you need to know to install, create, and support a successful Oracle8i or 9i environment with Web capabilities.
About the author
Richard J. Staron is manager of IS applications at Eastern Connecticut State University. He has more than twenty years of experience in the data processing field, including hierarchical and relational database design. Richard has extensive experience managing Oracle database installations, supervising Oracle programmers, and teaching Oracle programming.
Inside the book
Are you an Oracle newbie? If your answer is yes, read on. You want to learn Oracle, right? After all, it’s not that hard, if you have a good guide. Here comes Guerrilla Oracle. The book consists of 15 chapters. After the introduction, chapter 2 offers a concise overview of the most significant dates and events in the history of databases.
Chapter 3 starts with a discussion on the key concepts behind the relational databases (tables, relationships and data integrity enforcement with constraints). The discussion of the key concepts is extended from theoretical aspects at the beginning of the chapter to more “practical” aspects. Here the author describes the Oracle Server environment, what SQL is, what the basic SQL commands are and what PL/SQL is. In addition, an overview of SQL*Plus, Oracle Forms and Reports builder, and some of the other development tools is also given.
Chapter 4 and chapter 5 are organized as Oracle schema design case study. A set of database objects (tables, etc.) are designed from the ground up.
Chapter 6 covers some of the most important components of an Oracle server. Here you will find what is an Oracle instance, database physical layout and tablespaces and what is a user schema. This knowledge is essential to understand and successfully accomplish the process of installing an Oracle server, as described in chapter 7. How to configure a fresh installed Oracle server, mostly from a security standpoint is discussed in chapter 8. Topics such as system security, data security, user security, password management policy, auditing, creating users, roles, granting privileges and finding all the relevant information from the data dictionary are described here.
In chapter 9 we could find how to create a table space, and how to populate a user schema with objects (database tables, etc.) designed from the ground up in chapters 4 and 5. When the user account is populated with objects, the Oracle data dictionary is the right place to find and access all the relevant information about database objects. Oracle provides views and tables of the database, all its users, and its various structures. Here you’ll find how the dictionary works, what the data dictionary views are, and how to usFor those who would like to know how to publish forms on the web the requirements list i.e. the available information.
In chapter 11 the process of installing an Oracle client is described. This is a very important step for connecting from client machines to an Oracle instance on the server machine.
Chapter 12 aimed to teach the users how to perform some of the common and very important tasks usually performed by a real DBA (database administrator). Tasks such as how to upgrade a system, Oracle licensing policy, how to startup and shutdown an Oracle server, how to perform logical and physical backups and how to tune the Oracle instance are described here.
The rest of the book is focused on the most used development tools in Oracle projects today. In chapter 13, firstable, you’ll learn how to install the Forms6i builder. Then, the author guides through the various components of an Oracle form, and finally how to build a real GUI form. When building a form, there is a need to display the data loaded in the database. Chapter 14 explains how to do simple reporting, perform calculations, how to ask the database (build queries), etc.
Steps on how to move forms from a client/server based model to the Web and HTTP access is described in the rest of the chapter 15. How to acquire a very basic understanding of Oracle9i Application Server, what is a three-tier system, what is iSQL*Plus and how to install Oracle9i Application Server is described here.
What I think of it
As already stated, this book is organized as a concise step by step tutorial. It is certainly not meant to be a definitive guide to the Oracle technology world as only the official Oracle products documentation has more than 40,000 pages. For an absolute beginner, this book offers deeper understanding of the Oracle RDBMS, and will certainly help to learn concrete skills, strategies and techniques you need to make your first steps in the Oracle world and begin exploring more complex technical topics. | <urn:uuid:5286e843-a0c5-451c-bf12-78076a889439> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2003/06/20/guerrilla-oracle-the-succinct-windows-perspective/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00516-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921939 | 997 | 2.734375 | 3 |
We’ve already discussed how the P (Provider) routers swap or exchange labels at each hop within the WAN cloud. The sequence of routers and labels used for a particular path is referred to as the LSP (Label-Switched Path). In general, the LSP going between the sites in the reverse direction does not use the same label values. In fact, unlike a Frame Relay PVC, with MPLS there isn’t even a requirement that the same physical path be used in both directions. In other words, an MPLS LSP is unidirectional, whereas a Frame Relay PVC is bidirectional.
You might be wondering how the PE (Provider Edge) and P routers know which label values to use when doing a “push” or a “swap”. There are three protocols that can be used to advertise LSP labels between routers (TDP, LDP and RSVP), and we’ll discuss them in a later post.
Congratulations … you’re now doing MPLS, or Multi-Protocol Label Switching!
It gets its name from the fact that the P routers are “Label Switching”, and therefore don’t care about the “Multi-Protocols” used by the customer (and therefore should support any routed protocols). The PE routers only need to know the routes for customers to which they are directly attached, and the P routers do not need to know any customer routes, for any protocol. Finally, the CE (Customer Edge) routers know nothing about labels at all, because they never see one.
Now that we have an idea of how MPLS works, we can define some additional terms. We know that a CE router is located at a customer site, and thus is CPE (Customer Premises Equipment). A CE generally deals with unlabeled packets, sending to and receiving from, a PE router.
A PE is located at one of the provider’s POPs (Points of Presence). A PE pushes labels onto packets it receives from a CE before forwarding the packets to a P router, and “pops” labels from packets received from a P router before forwarding the packets to a CE.
The P routers are located within the core of the provider’s cloud. Because P routers primarily do label swaps, a P router can also be referred to as a LSR (Label-Switch Router). Likewise, a PE can be called an Edge LSR, or LER (Label Edge Router).
Here’s a summary of the terminology when it comes to the provider routers involved with MPLS:
- PE = POP = Edge LSR = LER — they “push” and “pop” labels
- P = LSR = Core — they “swap” labels
Next time, we’ll discuss MPLS in more detail, and see how it deals with LSP labels.
Author: Al Friebe | <urn:uuid:1e142ad0-4a82-4709-88b8-cb8c67208f08> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/05/20/mpls-part-7/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00240-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93376 | 622 | 2.75 | 3 |
Originally published January 12, 2006
*Software licensing and copyright are legal issues, and if you have specific questions about their practice and application you should consult an attorney. This article should not be considered legal advice as I am not a lawyer.
Copyright can be applied to software to limit how it may be copied and distributed. As I examined in my last article, Open Source Software and the Myth of Viral Licensing, copyright alone is not enough to define what a software purchaser may do. Thus, in addition to being copyrighted, virtually all software is distributed under the terms of some license that spells out what the user is and is not permitted to do with the code.
Proprietary commercial licenses, like those under which Microsoft and Oracle offer their packages, generally forbid customers from examining the source code or even decompiling executable files; users may not make copies or otherwise share the software with others; finally, users are not permitted to distribute any modifications to the licensed software.
In other words, proprietary software vendors use licenses to maintain complete control over what goes into their software, who can use the software, who sees how it works, and who patches or improves it. Even when you are permitted to pass along copies of proprietary software, as with run-time licenses, those copies are still the property of the vendor. And the recipients are still bound by the terms of the license.
Proprietary Software Problems
When you buy a book, you own it: you can dog ear the pages, write in the margins and highlight portions of the text. You can loan it to your friends, give it away as a gift or donation, sell it or even take it apart and hang the individual pages on the wall. You can read it in bed, on a plane, at work or at home.
But proprietary software is different: the terms of the license forbid you from doing almost anything beyond executing the binary instructions on your computer. You do not have access to the original source code, you can not make copies to run on other computers, and you are not permitted to make any changes to the code and then pass them along to anyone else. This raises some issues, mostly about trust. When you choose proprietary software, you trust that the vendor will:
All of these problems derive from so-called "vendor lockin," the commitment a customer makes when purchasing proprietary software. These problems can be solved when source code is made freely
This is what the open source software movement is all about: making sure that software does what you want it to, and giving you the right to fix it if it doesn't.
Open Source Software
The alternative to proprietary—or closed source—software is open source software. Just as every proprietary software vendor creates the terms for its own licenses, sometimes it seems as if every open source software project comes up with its own, unique license. In fact, there are a handful of dominant open source licenses and a few dozen more that are used less often.
Open source software is sometimes referred to as "free"; a word loaded with political, philosophical and emotional baggage that creates further confusion with its many different meanings. There are several forms of software licensing that are more or less "free" in some way, but not truly "open.” There is also "Free Software,” which we'll discuss further below.
In particular, there is a family of "freeware" software released under licenses that allow copying and use without fees. Such freeware otherwise restricts access to source code, allows only educational or personal use, or places some other restrictions on the code. Software isn’t “free” simply because you don’t have to pay for it.
Similarly, just because you have access to the source doesn't make it "open." Some proprietary software vendors, including Microsoft, offer licenses under which you can have access to source code. However, you still may not copy, modify, or redistribute that code: you can just look at it.
The Open Source Initiative (OSI) is a compliance manager for the open source community, defining and promoting the ideas of open source. There, you can find the Open Source Definition, a list of ten criteria defining open source licenses.
Most of the attributes are encapsulated in the first three criteria:
Open source licensing has its origins in the academic world. Regulations sometimes mandate software developed in state universities under government funding be released in a form that makes it accessible to those who paid for it: the public.
By the early 1980s, the University of California, Berkeley, was distributing their UNIX-like OS, the Berkeley Software Distribution (BSD) under the BSD License.
The BSD License grants permission to do whatever you want, as long as you incorporate the copyright and a disclaimer of liability. This is one of the earliest open source licenses. Under its terms you may do virtually whatever you want. For example, you could adapt BSD-licensed source code to create a proprietary, commercial product.
Originally, the BSD license required that advertising for any products include a line crediting the original authors of the software. Though not an issue where the author is "the University of California, Berkeley and its contributors," the clause causes serious problem when there are dozens of contributors, or when the product includes work based on dozens of different BSD-licensed projects, all of which would have to be identified in any ads. As a result, the BSD license has been amended to eliminate this problematic clause.
Even more permissive is the MIT License, reproduced here:
The MIT License
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
In other words, as long as you retain the copyright and permission notice and the disclaimer of warranty, you can do whatever you like with the software.
Are these licenses "viral"? Yes, in the sense that they restrict the ways in which you can use licensed software. The fact that they allow the software to be modified and redistributed makes them far more permissive than proprietary licenses, but simultaneously allows for greater transmissibility of the license "virus." Every program that uses a piece of BSD-licensed code, including early Microsoft Windows TCP/IP networking stacks and Apple's Mac OS X, must be acknowledged with the appropriate copyright and warranty disclaimer.
When open source software is released under BSD-style licenses, proprietary software vendors (like Microsoft and Apple) are free to adapt the source to improve their own products. So even if such open source software gains market share, its threat is not as great to proprietary software vendors.
It's the Free Software movement that complicates matters.
Free Software and GNU
While the BSD license was introduced, Richard M. Stallman was working at MIT and examining how people use software and how it is published. Then, most software was proprietary, particularly software sold for use on then-novel personal computers. But Stallman concluded that keeping software proprietary benefited only software sellers which resulted in the production of inferior software.
But Stallman also turned his conception of Free Software into a philosophical and political movement, in which free distribution, open access to source, and the freedom to modify and redistribute the code are not enough to insure that software is Free (with a capital "F"). It's not enough, for Stallman and the Free Software movement that a program starts out as open source. If it is modified and redistributed, it must continue to be open source. The problem with the BSD and MIT licenses (and other similar open source licenses) is that they allow redistribution of modified code in binary-only form, without the source. According to Stallman, this was a fatal flaw.
At the time, Stallman was working on a UNIX-like operating system that he recursively christened GNU is Not UNIX (GNU), and started publishing the code under the GNU General Public License. Eventually, the Free Software Foundation was founded, to help support the movement; you can find a treasure trove of Free Software philosophy links. You can also read a recent interview with Richard Stallman for more.
Despite being viewed by many in the open source community as extreme, the Free Software movement is extremely successful. The vast majority of open source software is released under some form of the GPL.
The GPL is much longer than most open source licenses. This is because it incorporates much of the Free Software philosophy, as well as the license terms itself. What makes the GPL different is that not only while users are permitted to modify and redistribute source code, if they choose to do so they must do so under the same license as the original code.
Stallman set out to recursively hack the concept of "copyright" by using it to create a concept of "copyleft": you can do whatever you want with the content, including change and redistribute it, as long as you use the same license terms as the originator of the content.
Under “Copyleft,” the author copyrights the material, a process normally used to reserve all rights in the work to the owner. Under the GPL, the owner then grants the rights to copy, change, and distribute modifications to the material—but ONLY if you do so under the same terms as the original license.
The Question of Virulence
Is the GPL a virus that can infect your organization? Although it makes a large body of good source code available for anyone to use in (almost) any way, it imposes obligations for its use that are unacceptable to some users. However, GPL’ed software works well for most uses and most users. Here are some examples:
While BSD-style licenses say, "Here is some software, use it as you like," the GPL says "Here is some software, use it as you like, but if you modify and/or redistribute it you must give your users the same rights you are being given."
Things get fuzzier when dealing with tools like database servers, where you build your own applications to use with the open source tool. Here is another example:
MySQL is distributed under the GPL in this case. Is there a way for you to keep your own software proprietary while still using the GPL'ed MySQL database server?
Dual Licensing with Open Source
If you create software, you get to choose how it is distributed. You may want to distribute it in more than one way. That's why MySQL AB dual-licenses its database server. Virtually identical software can be downloaded, used, modified and redistributed freely under the GPL. Such software can be purchased directly from MySQL AB.
MySQL AB offers a proprietary license so developers can distribute applications built with MySQL under proprietary software licenses. This is similar to what can be done for applications built on proprietary database servers sold by Microsoft, Oracle and others.
This approach allows MySQL to continue as a robust open source software project, with significant corporate support from MySQL AB. At the same time, it allows developers and resellers to use MySQL as a platform for their own businesses, as well as any users who need direct corporate support from the vendor.
MySQL AB publishes this overview to their dual-licensing approach; increasingly, other open source businesses are following suit.
The growth of the open source community would not hurt proprietary vendors so badly if it were not for the GPL. It restricts how software may be used that makes it impossible for traditional proprietary software vendors to benefit.
As Apple has shown with Mac OS X, proprietary vendors can provide significant user benefits by building their products WITH established open source software; one wonders whether OS X would have been as good if Apple had been forced to write the whole thing from scratch. If spreading great software of the type so often found in Linux distributions is the point, then more permissive licensing terms might be preferable. But if keeping the software free and accessible to everyone is the point, then USING THE stricter GPL makes more long term sense.
Recent articles by Pete Loshin | <urn:uuid:ecb9221b-1652-491f-9c3b-1879075e03ca> | CC-MAIN-2017-04 | http://www.b-eye-network.com/view/2205 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00148-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93561 | 2,693 | 2.515625 | 3 |
The Internet of Things: a silly term for a very important concept. Originally coined back in the 1990s, the phrase “Internet of Things” (IoT) describes a scenario – at the time theorized as something of a technological pipe dream, but now quickly becoming an everyday reality – in which objects, animals, or people (the titular “things”) are able to transfer data over a network without having to interact with a computer as an intermediary. This core concept serves as the foundation for an exciting new type of computing, one in which devices are so seamlessly interconnected that the way we interact with them in a way that was never before possible.
But what does that mean in layman’s terms? Examples of connected devices you’re probably familiar with include streaming TV players like Roku and AppleTV, de rigueur learning thermostat Nest, digital media player Chromecast, and fitness device Fitbit, to name just a few. Users love these devices because they’re intuitive, and that intuitiveness is powerful. The same people who ten years ago were pulling their hair out in frustration because they couldn’t get the cable box to work are binge-watching Netflix’s Orange Is the New Black on their Rokus while their Fitbit tracks their workout stats and their Nest – which knows that 6:30 is workout time on Tuesdays – automatically lowers the temperature so they don’t get too sweaty. The Jetsons ain’t got nuthin’ on that.
Given the sheer functionality of connected devices, it’s no surprise that the user base is rapidly expanding, or that investors are lining up to get in on the ground floor of the next big thing. Last year, for example, investors fought each other tooth and nail for the privilege of giving Roku an incredible $130 million in investment capital; now, IT research agency IDC is estimating that by 2020, IoT will be a $7.1 trillion industry. We don’t need to tell you that that’s a truly staggering number. What’s more, ABI Research estimates that by the same year, there will be more than 30 billion individual devices connected through the IoT.
The Internet of Things is already having a big impact on businesses, which are now able to afford to outsource IT services thanks to “the Cloud.” The Internet of Things for business has changed what success looks like in communication, collaboration and connection. Technology is finally working around users, rather than the other way around. And these advances let users forget about worrying how the technology works, ever again.
There’s no doubt about it: our first steps towards a true Internet of Things offers us a glimpse at a brave new world of computing possibilities. As more and more things go on the Cloud, the barriers between our lives and our technology will increasingly fade, until our devices are extensions of ourselves. It’s an exciting time to be in the technology business, as we’ve proven with our Cloud-powered Lifesize Icon + Lifesize Cloud video conferencing solution, and we look forward to seeing how this trend will grow in the next five to ten years. Now if we could just do something about those flying cars The Jetsons promised us… | <urn:uuid:f630089e-c24f-4046-918d-b8e24263639c> | CC-MAIN-2017-04 | http://www.lifesize.com/video-conferencing-blog/the-internet-of-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948628 | 682 | 2.6875 | 3 |
Four years ago, Microsoft Founder Bill Gates predicted that technology and the Internet would make “place-based colleges” less relevant and bring down the cost of a college education to just $2,000.
Now, students around the world tend to agree, with many citing a belief that the university of the future will be accessible, flexible, innovative and job-focused, with a particular emphasis on lifelong learning.
That’s according to a new survey commissioned by Laureate International and performed by Zogby Analytics, which found that students predict a future where classes will be offered at various times throughout the day and year. Courses will be more affordable and virtual, and lifelong learning through certificate programs, refresher courses and online mentoring will replace traditional college degrees.
“Familiar institutions which have provided stability, security and opportunity for a millennium are withering amidst rapid technological change,” the report states. “It is an era the world has not seen since the end of the Middle Ages and the rise of the Renaissance, the New World and the Enlightenment. New institutions, driven by the needs of the actual prosumers, are changing the landscape of politics, nongovernmental organizations, economies and finance, and education.”
The survey of more than 20,800 students worldwide found that 43 percent believe future course content will be provided for free online, while 59 percent believe it will be more common for students to use social media to learn and teach other students. Course materials, books and other resources will also be available in free online libraries, according to 68 percent of respondents.
As the workplace is changing to becoming more flexible and innovative, so will education, according to the survey. More than half of students (52 percent) believe courses will be offered at all times day or night. Forty-one percent said they believe traditional two- or four-year degrees will be replaced by specialized certificates that enable students to take courses at their own pace.
And as the workplace looks to foster greater innovation, students believe higher education will evolve in the same way. More than half (54 percent) say future courses will focus on collaboration between students, and 43 percent say personalized online instruction or tutoring will render traditional classrooms less important.
Finally, course content and requirements will be more market driven in the future, preparing students to excel in in-demand fields. Nearly two-thirds (61 percent) of students believe future course offerings will be designed by industry experts, and more than seven in 10 think career-oriented skills will be more of a focus than subject matter in future university programs.
With the rapid pace of technological change, IT is one area already moving in this direction, particularly with the field’s emphasis on certifications and continuous learning so most workers simply can keep up.
Computer science degrees also are moving to more affordable, online formats. Last year, the Georgia Institute for Technology announced the launch of its first professional online Master of Science degree in computer science that can be earned completely through a massive open online course, or MOOC, format. The degree, provided in collaboration with Udacity Inc. and AT&T, takes three years to complete at a total cost of just $7,000. | <urn:uuid:a23e6a89-9967-45e8-b1be-3d87eb1a6bfe> | CC-MAIN-2017-04 | http://www.nextgov.com/cio-briefing/wired-workplace/2014/06/what-university-future-will-look/86275/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938248 | 657 | 2.671875 | 3 |
Changing the game: First DNA, now info sharing
In the past two decades, one of the most significant developments in solving violent crimes has been DNA evidence. Where once investigators were hamstrung by more primitive forms of physical evidence and less-than-reliable eyewitness accounts, today DNA evidence provides a more scientific and verifiable means of linking specific perpetrators to their crimes.
Of course, there have always been concerns about whether law enforcement’s use of DNA evidence violates suspects’ rights. But in practice, it has actually proven invaluable in confirming the innocence of the wrongly accused. In short, DNA testing helps protect the liberty of the innocent while sealing the fate of the guilty.
Today, a new technology is providing similar benefits to law enforcement -- and meeting with similar public resistance. But this time, the technology revolves around the sharing of data, information and intelligence between agencies and jurisdictions.
Such initiatives give law enforcement officials access to large volumes of data, thereby improving their ability to conduct analyses and detect patterns of criminal activity. The more information that police officers have to discern such patterns, the more likely they are to interdict, prevent and solve crimes that occur in multiple jurisdictions. These are all positives that result from sharing and accessing information across jurisdictional boundaries.
The FBI’s new National Data Exchange program for sharing criminal justice information is facilitating that detection by enabling police jurisdictions nationwide to share incident-based data with one another -- information to which they would normally not have automated access.
However, data- and information-sharing have not met with universal acclaim. Privacy and civil rights groups want to place limits on how much data police save, store and search. They worry that access to the information will be abused, with the result that innocent people will be made into suspects.
There are certainly validity and precedent to those concerns. Yet the fact is that privacy rights and individuals’ civil liberties are much less likely to be violated when information is automated, as it is with the new systems.
Automated systems can protect civil liberties by being set up to search in a way that mitigates the bias of the person performing the search. Rather than relying on a personal relationship with a counterpart in another jurisdiction that determines who or what an investigator looks at, an officer using an automated search seeks objective data and facts. When they are coupled with a valid, logical and competent police investigation, the human element is removed, and the result is the best practice for solving crimes.
It’s no different from substituting DNA evidence for eyewitness accounts. Speculation and opinion are replaced with hard, verifiable facts – a methodology that is far more likely to return accurate results.
With proper controls in place to make sure information sharing adheres to the relevant regulations, it is poised to do for informational evidence what DNA testing has done for physical evidence.
Stephen G. Serrao is a former New Jersey State Police Counterterrorism Bureau Chief. He now serves as director of Law Enforcement Solutions on the Memex Solutions Team at SAS. | <urn:uuid:d9fb640d-fd70-4073-8a9d-229b42b131b7> | CC-MAIN-2017-04 | https://fcw.com/articles/2009/12/03/serrao-law-enforcement-info-sharing.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00478-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941042 | 614 | 2.734375 | 3 |
Why are transportation problems popular applications for
In the early 1970s, many researchers were trying to apply mathematical
programming to business problems. The transportation problem was often
discussed as an application that would benefit from computerization.
Why? I think it is because this type of problem can be formulated
quantitatively and because such problems are often complex enough to
benefit from using a model. Also, the allocation of transportation
resources among competing uses is of interest to business
decision-makers in a number of different industries. In general,
real-world transportation problems are often important!
We have seen many different software programs for solving transportation
problems, but the basic need remains the same. Managers want help in
allocating a scarce resource. The basic problem formulation (cf.,
Hitchcock, 1941) has been adapted and expanded to a number of
situations. A major application is scheduling airline routes. The
following examples help explain why solving transportation problems are
important to airlines.
David Field in USAToday on April 19, 1999 explained briefly how airlines
make decisions about adding flights. Continental Airlines bases its
route and schedule decisions on daily ticket data. Continental uses a
computer program developed by American Airlines' Sabre unit. Field
quoted Robert Merz, director of network operations at United, "You
schedule to maximize profit ..."
At about the same time, Jessica Davis reported in InfoWorld that using
the "Broadbase data mart, United's staff of 60 analyst/schedulers,
typically MBA/economists, can load 'what if' scenarios -- testing
whether a new flight to Chicago would be more profitable using a larger
or a smaller aircraft". She noted schedulers take into consideration
passenger demand, constraints of airports, the maintenance needs of the
aircraft, the cost of flying individual aircraft, crew resources, and
Davis quoted Bob Bongiorno, United Airline director of research and
development, "Scheduling is the single most important thing we do at
this airline." Bongiorno said "We've got to fly to the right places with
the right frequency at the right times to make money."
Recently, Southwest Airlines implemented CALEB(TM) Technologies'
CrewSolver DSS to reduce the cost from traffic control delays and
mechanical and weather-related disruptions. For more information, check
the April 9, 2001 press release from CALEB Technologies at
So using Model-Driven DSS to solve transportation problems can improve
profitability!! On a cautionary note Professor N. K. Kwak noted almost
30 years ago that "mathematical programming provides quantitative bases
for management decisions -- bases with which management manipulates and
controls various activities to achieve the optimal outcomes of business
problems. Management can make better and more effective judgment by use
of mathematical programming. However, it is no substitute for the
decision maker's ultimate judgment." (p. 6)
Davis, J. L. "United overhaul brings decision-making down to earth",
InfoWorld, March 1, 1999.
Field, D. "Airlines pursue the trail of bucks", USAToday, April 19, 1999
at URL http://www.usatoday.com/life/travel/business/1999/t0419ad.htm.
Hitchcock, F. L. "Distribution of a Product from Several Sources to
Numerous Localities", The Journal of Mathematics and Physics, vol. 20,
August 1941, pp. 224-230.
Kwak, N. K. Mathematical Programming with Business Applications. New
York: McGraw-Hill, Inc., 1973. | <urn:uuid:914ed593-ee82-43c4-9081-06587ace016a> | CC-MAIN-2017-04 | http://dssresources.com/subscriber/password/askdan/transprob.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00112-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.90286 | 776 | 2.875 | 3 |
If you're using the same presentation materials and approach in front of a small audience that you do when you're working a much larger room (like for a keynote speech), you're probably going about things the wrong way.
"Many meetings or pitches involve fewer than 10 participants in a room, where everyone remains seated and walks through the same slide deck together," writes J.D. Schramm, a communications expert who teaches at the Stanford Graduate School, in an essay for the Harvard Business Review. "This is quite a different scenario with greater constraints on the presenter and fewer tools to engage the audience."
Schramm is the founder of the school's Mastery in Communications Initiative. Here are his primary suggestions:
- Don't rely too much on extra handwritten notes that will take your focus away from making eye contact with the audience.
- If you're printing out slides, leave space for notes or for participants to fill in more details that you're sharing verbally. When they refer to the materials later, this will make a more lasting impression.
- Make an impact with other "props," if appropriate. (Schramm notes that many of his clients that work in architecture often bring floor plans that everyone can look at together.)
- Stand up when you can, usually during the formal part of the remarks, but keep it conversational when you can.
- Pick your seat wisely, preferably next to or adjacent to the decision-maker. (Note: sitting across from someone could be considered "adversarial.")
- Delay distributing anything but the most vital presentation materials, in order to keep their attention longer.
When you're in a small room, it's especially important to remember and control not just what you say, but what you do. "Knowing our personal predisposition in terms of space use, gestures and nonverbal communication can be a great start to maximizing your non-verbal power," Schramm writes. | <urn:uuid:a838c3f5-9e63-4934-9149-05d5a7bfdfe4> | CC-MAIN-2017-04 | http://www.itbestofbreed.com/article/6-things-consider-small-room-presentation | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00258-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957973 | 393 | 2.578125 | 3 |
Five years after it was first introduced, Google’s Safe Browsing program continues to provide an invaluable service to the 600 million Chrome, Firefox, and Safari users, as well as those searching for content through the company’s eponymous search engine.
According to Google Security Team member Niels Provos, the program detects about 9,500 new malicious websites and pops up several million warnings every day to Internet users.
“Approximately 12-14 million Google Search queries per day show our warning to caution users from going to sites that are currently compromised. Once a site has been cleaned up, the warning is lifted,” he pointed out, and added that they provide malware warnings for about 300 thousand downloads per day through their download protection service for Chrome.
Webmasters, ISPs and CERTs are also the beneficiaries of the program, and receive warnings about compromised websites if they sign up for them.
“By protecting Internet users, webmasters, ISPs, and Google over the years, we’ve built up a steadily more sophisticated understanding of web-based malware and phishing. These aren’t completely solvable problems because threats continue to evolve, but our technologies and processes do, too,” he says.
Phishing pages have become more diverse and extremely targeted, remain online for a lesser period than before, and have also become a way to distribute malware. But most worryingly, their number rises with each passing month.
Websites leading to malware are still often legitimate websites that got compromised and redirect the users to other attack sites, but websites that are specifically built to distribute malware are also used in increasing numbers. Still, the total number shows a downward trend.
“As companies have designed browsers and plugins to be more secure over time, malware purveyors have also employed social engineering, where the malware author tries to deceive the user into installing malicious software without the need for any software vulnerabilities,” provos says.
“While we see socially engineered attacks still trailing behind drive by downloads in frequency, this is a fast-growing category likely due to improved browser security.”
He concluded by saying that even though Google is doing the best it can do to protect its customers, the users can also help themselves by not ignoring the warnings that they are faced with, by flagging bad sites, and by registering their website with Google Webmaster Tools in order to receive warnings if their sites get compromised. | <urn:uuid:9bcfcbc4-9580-48a4-a524-832aae9f5509> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/06/20/google-detects-9500-malicious-sites-per-day/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00340-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956348 | 502 | 2.578125 | 3 |
Yesterday, we brought you a story about the iconic CDC 6500 supercomputer, which is currently undergoing restoration at the Living Computer Museum in Seattle. The CDC 6500 system, built by Control Data Corporation in 1967, was part of the CDC 6000 line, designed by Seymour Cray in the 1960s. The most famous of these was the CDC 6600. When it was released in 1964, the 6600 surpassed the competition by a factor of ten – earning it a place in history as the first successful supercomputer.
With a performance of about 1 megaflops, the 6600 was the fastest computer in the world until the introduction of the CDC 7600 in 1969. It was also the first computer designed in CDC’s Chippewa Falls, Wisc., lab – the birthplace of Seymour Cray and the future home of Cray Research. With a base model price of $6,891,300, the 6600 went for between $6 and $10 million, depending on options. Control Data Corp. sold more than one hundred of these machines mainly to government and university labs. Among the earliest customers was CERN, the European Organization for Nuclear Research, based in Geneva, Switzerland.
Striking video of the CERN installation emerged this week, bringing to life the big event that was the arrival of a new supercomputer. The 18-minute film below was shot in January 1965.
The high-quality archive footage details the role of this ground-breaking supercomputer at CERN, where the system was employed in the analysis of 2-3 million photographs of bubble-chamber tracks that CERN experiments were producing each year.
With 400,000 transistors and a clock speed of 100 nanoseconds, the CDC 6600 was a trend setter. In addition to being by far the fastest of its era, it was also one of the first to cool refrigerator-style with Freon and the first to use a CRT console. And while most computers of the day used a single CPU, the 6600 had a remarkable peripheral design, which the video recalls in detail.
“Because of its unique organization, the 6600 can accept information simultaneously from a wide variety of sources,” the narrator tell us. “It is a combination of an extremely high-speed central processor and ten peripheral processors that work together as a single system. Information entering the data channels is controlled by and passed into the peripheral processors, two of which are used by the SIPROS control system. These perform housekeeping chores to keep the central processor free to perform arithmetic calculations only.
“The central memory has 131,000 60-bit words of very fast access storage. The arithmetic section of the central processor is divided in ten parts and is designed for concurrent or parallel operation. This means that ten arithmetic operations can be performed at the same time at speeds approximating 3 million operations per second. Each of the ten peripheral processors also has computational capability with its own storage capacity of 4,000 12-bit words. Once calculations have been performed, information may be sent remotely to paper tape punches, consoles, typewriters or plotters via teletype, datalink or data phone – or output directly to online printers.” | <urn:uuid:40f8cc2b-fda2-4db8-9378-ed2fef550327> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/01/14/window-time-opens-cerns-supercomputing-history/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00305-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956827 | 665 | 3.4375 | 3 |
1.2 What is cryptography?
As the field of cryptography has advanced, the dividing lines for what is and what is not cryptography have become blurred. Cryptography today might be summed up as the study of techniques and applications that depend on the existence of difficult problems. Cryptanalysis is the study of how to compromise (defeat) cryptographic mechanisms, and cryptology (from the Greek kryptós lógos, meaning ``hidden word'') is the discipline of cryptography and cryptanalysis combined. To most people, cryptography is concerned with keeping communications private. Indeed, the protection of sensitive communications has been the emphasis of cryptography throughout much of its history [Kah67]. However, this is only one part of today's cryptography.
Encryption is the transformation of data into a form that is as close to impossible as possible to read without the appropriate knowledge (a key; see below). Its purpose is to ensure privacy by keeping information hidden from anyone for whom it is not intended, even those who have access to the encrypted data. Decryption is the reverse of encryption; it is the transformation of encrypted data back into an intelligible form.
Encryption and decryption generally require the use of some secret information, referred to as a key. For some encryption mechanisms, the same key is used for both encryption and decryption; for other mechanisms, the keys used for encryption and decryption are different (see Question 2.1.1).
Today's cryptography is more than encryption and decryption. Authentication is as fundamentally a part of our lives as privacy. We use authentication throughout our everyday lives - when we sign our name to some document for instance - and, as we move to a world where our decisions and agreements are communicated electronically, we need to have electronic techniques for providing authentication.
Cryptography provides mechanisms for such procedures. A digital signature (see Question 2.2.2) binds a document to the possessor of a particular key, while a digital timestamp (see Question 7.11) binds a document to its creation at a particular time. These cryptographic mechanisms can be used to control access to a shared disk drive, a high security installation, or a pay-per-view TV channel.
The field of cryptography encompasses other uses as well. With just a few basic cryptographic tools, it is possible to build elaborate schemes and protocols that allow us to pay using electronic money (see Question 4.2.1), to prove we know certain information without revealing the information itself (see Question 2.1.8), and to share a secret quantity in such a way that a subset of the shares can reconstruct the secret (see Question 2.1.9).
While modern cryptography is growing increasingly diverse, cryptography is fundamentally based on problems that are difficult to solve. A problem may be difficult because its solution requires some secret knowledge, such as decrypting an encrypted message or signing some digital document. The problem may also be hard because it is intrinsically difficult to complete, such as finding a message that produces a given hash value (see Question 2.1.6).
Surveys by Rivest [Riv90] and Brassard [Bra88] form an excellent introduction to modern cryptography. Some textbook treatments are provided by Stinson [Sti95] and Stallings [Sta95], while Simmons provides an in-depth coverage of the technical aspects of cryptography [Sim92]. A comprehensive review of modern cryptography can also be found in Applied Cryptography [Sch96]; Ford [For94] provides detailed coverage of issues such as cryptography standards and secure communication.
Top of the page
- 1.1 What is RSA Laboratories' Frequently Asked Questions About Today's Cryptography?
- 1.2 What is cryptography?
- 1.3 What are some of the more popular techniques in cryptography?
- 1.4 How is cryptography applied?
- 1.5 What are cryptography standards?
- 1.6 What is the role of the United States government in cryptography?
- 1.7 Why is cryptography important? | <urn:uuid:ed34e7cd-fc8c-4252-ad59-8a9239a9b5dc> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-cryptography.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00241-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936853 | 822 | 3.484375 | 3 |
LLDP – Link Layer Discovery Protocol is an industry-standard, vendor-neutral method to allow networked devices to advertise capabilities, identity, and other information onto a LAN. LLDP is Layer 2 protocol described in IEEE 802.1AB-2005 standard. It replaces several proprietary protocols implemented by individual vendors for their equipment like the most known protocol of this kind, CDP – Cisco Discovery Protocol.
LLDP allows network devices like bridges and switches that operate at the lower layers of OSI model to learn some of the capabilities and characteristics of LAN devices available to higher layer protocols, such as IP addresses. The information that they get through LLDP operation is stored in a network device and is queried with SNMP. Topology information can also be gathered from this database.
|If you are interested in the Cisco proprietary solution for LLDP functionality check out CDP:|
Some of the information that can be shown by LLDP:
- System name and description
- Port name and description
- VLAN name and identifier
- IP network management address
- Capabilities of the device
- MAC address and physical layer information
- Power information
- Link aggregation information
LLDP frames are sent at intervals on each port on the device that runs LLDP. LLDP protocol data units (LLDP PDUs) are sent inside Ethernet frames and identified by their destination Media Access Control (MAC) address (01:80:C2:00:00:0E) and Ethertype (0x88CC). Mandatory information supplied by LLDP is chassis ID, port ID, and a time-to-live value for this information.
LLDP is a powerful way to allow Layer 2 devices to gather details about other network-attached devices. | <urn:uuid:2fe073e0-8097-469c-824f-4b77eef271f6> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2012/lldp | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00149-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.891779 | 358 | 2.578125 | 3 |
CrossNodes Briefing: BIND
The Berkley Internet Name Domain (BIND) offers Domain Name Services (DNS) for many Internet servers. The basic functionality of the open software remains deceptively simple. When a server receives a request for an Internet site, for example, www.crossnodes.com, it checks a database of names to find the appropriate IP address. If the name is not found, the server forwards the request to known servers on the network. This process repeats until a server that recognizes the name provides the connection.
Although the basic functionality seems simple, BIND remains a complex software program. Multiple versions exist, and recently, reports noted security holes in the various versions. Updates to the code are now available.
The software, which is distributed as open source by the Internet Services Consortium (ISC), runs on Unix systems. Some estimate that BIND enables up to 90 percent of all Internet connections, but ISC reports that users run BIND on such systems as AIX, HP-UX, Linux, Solaris, and Windows NT and 2000.
A Problem of Versions
University of California, Berkeley graduate students developed the first version, but the ISC released several versions in the intervening years. In addition, it is open source software, and users have customized the software in the field. This means that several sites still use earlier versions to preserve their customized code. This makes it difficult to think about BIND as a single product. The most popular versions include:
- Version 4.X: an early version, BIND 4 establishes primary, secondary, and cached servers. It does not support dynamic updates to its database of sites, and it lacks any method of collecting change notices from other BIND servers. In addition, it sends a single message each time it forwards a request. ISC recommends using the latest version of BIND and warns that some exchanges between Version 9.X and Version 4.X are unpredictable.
- Version 8.X: based on the core code used in early versions, this software supports dynamic updates to the DNS and accepts change notifications from other servers running BIND 8.x. It also extends logging and security, and it improves performance. Version 8.X uses a master-slave model that allows one server to control a zone, while the other servers in the zone use copies of the DNS. Version 8.X bundles requests to other servers to better utilize communications links, and it supports Internet Protocol version 6.
- Version 9.X: created from scratch, Version 9.x represents a more robust implementation of BIND. The software supports Internet Protocol version 6, a user-configurable cache, improved performance, and enhanced auditing capabilities. It adds a level of security with its support for DNSSEC, which supports signed zones, and TSIG for signed DNS requests. Version 9.X also supports multiprocessor servers.
Global Load Balancers (GLB)
Communications managers also use BIND or an add-on product to help balance processing requests between servers. BIND servers use GLB to re-route traffic to preferred servers or servers with a lighter workload. Vendors provide three approaches to balance processor loads. The BIND DNS can route requests to the GLB, which in turn, routes the request. As an alternative, some users implement the GLB to monitor traffic and change addresses as needed. Other servers integrate the GLB with the DNS.
Some ISPs use blocking as a security measure, and this can disrupt the GLB. Communications managers, therefore, must confer with their ISP and verify that their firewall permits readdressing before they install the GLB.
Communication managers who want to implement an Internet server need to consider BIND. It is best to use the latest versions and to monitor the ISC web site for upgrades after the program is installed. Managers also need to ensure that their ISP, firewall, and other security components support BIND, especially if they plan to implement GLB with the software. They also need to realize that this is open source software. With a little searching, managers may find a customized BIND implementation that eliminates their investment in getting the software to work the way they need it to operate. Taking the time now to investigate BIND can save time and money later.
Gerald Williams serves as Director of Quality Assurance for Dolphin Inc., a software development company. Williams has extensive background in technology and testing, previously serving as Editorial Director with National Software Testing Labs (NSTL), Executive Editor with Datapro Research, and Managing Editor of Datapro's PC Communications reference service. | <urn:uuid:d82c51f6-659e-4912-b2a9-bc1d702258ec> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/942551/CrossNodes-Briefing-BIND.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00021-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91493 | 944 | 2.65625 | 3 |
It may not be surprising for you to learn that email is not a secure medium of communication; however, it may surprise you to learn just how inherently insecure it really is – how messages you thought deleted could be sitting on servers half way around the world years after being sent, how people can read and modify your messages in transit, and how the very username and password that you use to login to your email servers can be stolen and used by hackers!
This non-technical article is designed to educate you about how email really works, what the real security issues are, what the solutions are, and how you can mitigate your exposure to these security risks.
Security and information integrity is increasingly important. More and more business is done strictly over email. While reading this article, imagine how these problems could affect your business or personal life…. they can.
Download the paper in PDF format here. | <urn:uuid:c31b9194-d978-4177-93c5-df72659a07ac> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2003/03/30/the-case-for-secure-email/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00021-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95116 | 179 | 2.515625 | 3 |
In this interview, Julie Peeler, the Director of the (ISC)2 Foundation, discusses the biggest online threats to kids and provides tips on how to teach children to stay safe online. Peeler also introduces the work of the (ISC)2 Foundation and shows how you can get involved, spread the positive message and teach Internet safety to kids.
What do you see as the most significant threat to young net users today?
The biggest threat to young Internet users is a lack of education. From the time that children are born, they’re taught how to access the world through technology – how to manipulate technology to finish their school work, how to use technology for social development. They’re online constantly but are not taught anything about online security. You witness these children doing all kinds of actions that lack traditional good sense. As an adult, you can apply common sense to scenarios that you’re faced with online but children, however, simply don’t have any of the background information needed to understand where and why they are vulnerable.
Kids tend to be more tech savvy than their parents and adopt new methods of social networking quickly. What should parents do in order to stay on top of the fast paced IT landscape?
As a parent, it’s up to you to keep a very close eye on your child’s online behavior and online consumption. A lot of times parents will say to us: “aren’t there privacy issues with monitoring what my child is doing?” My response has always been, it’s your house, your technology, your Internet access, your electricity; therefore, you should be overseeing what’s happening on all of these online devices your child has access to.
The need to secure someone under the age of consent who you are tasked with raising to adulthood is greater than that child’s very short term need for privacy. When your child goes out to go play with his or her friends, you ask them a series of questions primarily so you have a wealth of information should you need to find them quickly. The same technique that you use to secure them in the “real world” should apply to the cyber world, as well.
It’s important to note that there are just as many, if not more, threats in the cyber world as there are in the real world. Parents have to think about what devices their children have access to and for how long they are on these devices. The longer they’re logged in to these differing online channels without the proper knowledge of how to protect themselves, the more vulnerable it leaves them to becoming the targets of identity theft, a child predator and cyber bullying.
Based on your experience, what are the most effective ways to teach children how to stay secure online?
Using good analogies and examples are the best and most effective way to teach children how to stay secure. With kids, it’s not about telling them what they should and need to be doing, it’s about why. Children need to seriously understand the consequences of their online actions and not just understand them in a vague way. They need to know that they’re just as susceptible to targeted online attacks as those who are profiled in the media. We unfortunately have so many examples of this in the news media. The more real-world examples you can utilize in an education program, the more children will understand the exact repercussions of their behavior online.
Every child will be saying in their head: “that will never happen, let alone happen to me.” To be able to stand in front of them and give them an array of examples is extremely impactful. Real world examples scare children, and scare tactics work sometimes too.
In addition to providing them with examples, involving them in their own learning has shown to be an effective way to teach children, as well. Asking them questions like: “how would you feel if this happened to you,” “why do you think this happened,” or “how could this situation have been avoided,” is a great way to get them to think and apply these scenarios to their own life.
Kids don’t always listen to their teachers, and they don’t always listen to their parents, either. Through the (ISC)2 Safe and Secure Online Program, we’re able to bring in an objective third party who is highly trained in online security and safety, to kids in classrooms all over the world. They provide credibility and knowledge about online safety that kids can really learn from. To date, the program has helped educate over 100,000 children globally on how to become, and remain, secure online.
The (ISC)2 Foundation connects highly skilled IT pros with students, teachers and the general public so that they can learn how to secure their online life. What is your mission? What type of global progress are you seeing?
Our mission is to empower people to secure their online life through awareness and education programs. When I started two years ago, there were 200 volunteers in the Safe and Secure Online program. Now we have over 1000 volunteers. We were in four countries initially, and now we are in seven. In addition to the classroom program, our volunteers are working on developing useful online resources that will be available next year and we are expanding our program to include senior citizens as well, since they are also highly vulnerable.
Additionally, since the inception of the program, we’ve seen quite an upswing of children interested in pursuing careers in the IT profession which is extremely important in developing a strong security workforce of the future.
I’m sure many would like to spread the positive message and teach Internet safety to kids. How can they get involved? What are the prerequisites?
In order to get involved in Safe and Secure Online, you have to first be a member of (ISC)2 and then register on the (ISC)2 Safe and Secure Online web page. Once you’ve registered as a volunteer, you’ll be enrolled in a training program and may be subject to a background check, depending on the rules of your home country. But if you’re not a member of (ISC)2 we encourage you to go to the web site and request a presentation for your child’s school, sports club, scouting group, etc., or to tell a teacher about the program. | <urn:uuid:3f94b212-63f6-4220-8f3d-daa0ce7cc2cd> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/12/18/teaching-children-information-security-skills/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00507-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966697 | 1,320 | 2.796875 | 3 |
Making Things Simpler
“The way to build a complex system that works is to build it from very simple systems that work.” – Kevin Kelly, founding editor of Wired Magazine
In many ways, we are most comfortable with complexity. As engineers, we enjoy the process of creating, modifying, re-creating, and maybe at some point in time actually producing something that works. We find that more often than not we enjoy the process more than the creation itself. So we undertake the analysis, re-analysis, re-re-analysis, of solving a given problem and often find the most complex, sometimes most costly, often difficult, solution as it appeals to the problem solver in us all.
I often remember the following cartoon when I am faced with solving a technology problem…
Back in the good old days when I was involved in developing UNIX kernels, we often played games with code trying intentionally to make it as obfuscated as possible. We even gave prizes out for whoever could write the most unreadable code. We considered it a fun way to solve problems and also impress our peers with our programming skills.
What we lost in the process was that there were people actually using our software and that others in the future would have to maintain it. As I look back at code I wrote in the 80s, I have no idea what I was trying to do unless there were some semblance of comments scattered haphazardly throughout the code. It was fun back then, but today I would not be happy having to need to make a code change.
As engineers, we need to maintain the balance between simple solutions and complex answers. Occam's Razor has proven true all too many times for us to ignore it and yet at times we do. We become so enamored of our favorite shiny object that we develop an amazingly transparent blindness to others' blind spots, to anything other than what has become the new toy in our toy box. However, we do know we cannot completely depend on it. We may build large, complex, and even unwieldy, solutions to posed problems, but very often we are solving for things that have never even been presented as a concern.
For my architecture team, we maintain five simple rules:
1. Simple, modular architectures always win
2. Centralize what you can, distribute what you must
3. Silicon matters for scale, availability, and resilience
4. Automate anything that can be automated
5. Support open standards
The more complexity we introduce into an already complex ecosystem creates a difficult road to navigate over time. Each change creates a ping-pong effect that often touches remote pieces of the design in our minds, never to be impacted. But in some small way they are changed enough to cause havoc that we end up spending much time troubleshooting.
Back in my last column, I spent time discussing the delineation between network layers in the cable modem termination system (CMTS) functional block diagram that I've been using for a few articles to show how this simplification may help. Putting the physical layer components (and possibly MAC) into a remote device helps with the scale issues at hand while also simplifying the architecture. (See Embracing Technological Change and Learning From Mistakes.)
There are many ways to solve the same problems using monolithic architectures that are completely sound both technically and financially. But do they get us where we want in the long run? How do we simplify even more? What can we do to not only break the functional blocks and layers apart further, but also provide a communication path between them?
Enter SDN and NFV…
While they are our industry's current shiny objects, if we treat them as another tool in our toolbox they do provide a framework for achieving this goal. I am often involved in discussions about how they can be used to solve almost every imaginable issue simply by decomposing functions or using OpenFlow (or other protocols) as a standard communication mechanism. Realistically, both SDN and NFV are finding their way through the complex organism we call our network. But, in order for them to flourish in our technological world, we need some quick wins to show how they may help.
So what are they? I like to look for ways to reduce the complexity both in design and configuration. As we add more devices, paths, circuits, flows, routes, etc, we make things more complex. As we are required to configure more equipment with device-specific configurations using unique command line interfaces, we simply increase the complexity.
So how can we simplify things? One way is to use an abstraction that allows us to define things in such a way that it is applicable to multiple physical manifestations. Rather than force us to integrate device and service-level provisioning, is there a way to focus on the services and let the devices provision themselves? To me, it is a holy grail in network design and management; but through standardization we are getting much closer.
In the excellent work done at CableLabs, with MSO and vendor support, on Converged Cable Access Platform (CCAP) and DOCSIS 3.1, we are seeing a real-world impact through the use of YANG models for device and service abstraction, and NETCONF for configuring devices. In many ways, this is the beginning of a whole new way to view the cable ecosystem. We are no longer encumbered with doing things as we have always done them; it is a completely new way to envision how we may be able to manage our networks.
Stay tuned for future blog posts and ruminations on ways to think about N2GCable, i.e. the next-next generation of cable we are now entering…
— Jeff Finkelstein, Executive Director of Strategic Architecture, Cox Communications | <urn:uuid:6fdb9056-6875-4402-a207-93ab33d2bd79> | CC-MAIN-2017-04 | http://www.lightreading.com/cable-video/ccap-next-gen-nets/making-things-simpler/a/d-id/709463?_mc=RSS_LR_EDT | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00103-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963488 | 1,188 | 2.671875 | 3 |
Java started its life in the early 1990s as an attempt to develop an architecture-independent language that could be used in consumer electronics and other embedded contexts. It found itself in the right place at the right time when the web exploded in the mid-1990s and over the next 10 years became one of the mainstays of web development. Today, Java remains as popular as ever. It’s arguably the most popular programming language of our generation.
After all this time, it’s tempting to consider the language “finished” or at least think that the baggage of backward compatibility makes innovation difficult. But commercial owner Oracle continues to invest heavily in Java, and the now widely adopted Java 8 and upcoming Java 9 release includes significant new goodies for the Java developer.
Java 8 Lambda expressions allowed for a form of functional programming and in many cases radically simplify program structure. Lambda expressions allow functions to be passed as arguments to methods. Furthermore, they can represent anonymous functions that do not require explicit naming.
One of the advantages of functional programming is the ability to more easily implement parallelisable programs. Java 8 Streams builds on the Java 8 functional primitives to provide more elegant and parallelizable methods for processing Java collections (Java structures such as Lists and Maps).
Java 9 is still at least 6 months away but it has been feature complete for quite a while. One of the most noticeable new features is the JShell utility, which runs a simple Java shell. This shell can be used to execute arbitrary Java commands without having to create a whole runtime class. JShell can be used to test language structures, perform simple calculations or as a simple interpreter for automation tasks. The JShell is an example of a REPL (Read-Eval-Print-Loop) utility.
More fundamentally, Java 9 completely refactors Java’s modularity paradigm.
Anyone who’s programmed in Java will know that typically you import dozens of packages which contain library and other necessary classes. Each package is represented by a “JAR” file which contains package interface and implementation. Java searches for JAR files on a “classpath” analogous to the command line PATH used since time immemorial in DOS and UNIX.
Unfortunately, the JAR file/classpath mechanism allows for ambiguity when resolving dependencies, and creates security issues and performance problems. For instance, when one package needs an API the java runtime simply scans the classpath looking for that API. A hacker could arrange for malicious code to run by inserting a new package into the classpath. Furthermore, multiple versions of a JAR file might exist within the classpath – Java will simply load the one it encounters “first.”
Java 9 modules combine all the required packages into a single unit with explicit dependencies. Furthermore, the module explicitly defines which classes will be exposed to other modules. This further tightens encapsulation since a JAR file within a module might export an interface required by other JAR files within the module, but the module itself need not export all of those interfaces.
Oracle recently slipped the expected delivery date of Java 9 from March to September 2017 – and this has not been the first such delay. The modules implementation is so core to the Java framework and the Java framework has become so mammoth, that it’s becoming increasingly hard to implement major enhancements.
It’s often tempting to think of Java as a modern-day COBOL – dominant through sheer mass of existing implementations and trained programmers, but representing the past rather than the future. As we’ve seen, it’s getting harder for Java to maintain forward motion – the sheer mass of legacy creates an inertia that is hard to overcome. It’s also true that Java is not what the cool kids like to code in these days - more modern languages such as Go and Scala have that honour. Still, there’s life in the old girl yet…. | <urn:uuid:ead766d6-0a2c-4ae4-bcf9-f3dbe954eddf> | CC-MAIN-2017-04 | http://www.dbta.com/Columns/Applications-Insight/Whats-New-in-Java-115503.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00315-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909487 | 806 | 2.90625 | 3 |
Ameen Pishdadi is the CTO at GigeNET. In this interview he discusses the various types of DDoS attacks, tells us who is at risk, tackles information gathering during attacks, lays out the lessons that he’s learned when he mitigated large DDoS attacks, and more.
While most have heard of DDoS attacks, not everyone knows that there are several types of such attacks. Can you provide an overview of the different kinds and illustrate their severity? What kind of damage can a DDoS attack do?
Well the easiest way to define DDoS is to discuss what it stands for. It really originated from DoS which was short for Denial of Service. The 2nd D stands for “Distributed.” In the late 90’s to early 00’s, the first true Distributed DoS attacked occurred. If I remember correctly, one of the first publicized tools for executing a DDoS was called “trinoo.” It was the first of its kind, where infected machines were able to receive commands from a central location which is called a botnet C&C (command-and-control). Botnet makers got smarter and instead of hosting the C&C from a single host, they started to use IRC (Internet Relay Chat). The compromised machine would connect to a hostname and port that were hardcoded in the botnet code and connect to a channel where a single chat entry need only entered once, but then be seen by tens of thousands of compromised machines and then execute their attack.
The first widely publicized attack was early 2000’s when internet giants such as Yahoo! were taken down. The amount of bandwidth that was required for this would have to have been enormous in those days. This is when the botnet / DDoS scene began to take off.
The goal of a DDoS is to cause a ‘denial of service’ to the user or end users of whatever is being attacked. This can be done in a few different ways. The three most common are as follows:
1. Saturate the connections that the target has to the internet, thus preventing real users from being able to connect. This is usually done with a UDP flood, and lately a UDP reflection flood.
2. Saturate the CPU of the router or host machine by sending more packets per second then it can handle. When this occurs, pretty much anything trying to connect does not get processed by the CPU of the device nor forwarded to the destination. This is usually done with a synflood.
3. Overload the application with requests that look like real users. An example would be having a thousand servers making a request to your website’s page all at the same time. These days, since websites are primarily database driven, this effect is even greater. The webserver and database servers become overloaded quickly.
We’ve seen a significant rise in DDoS attacks in the past year. What are the reasons behind this trend? What type of organization is most at risk?
The significant increase is a direct result of the misuse of information for marketing purposes. While the method that was used to take down Spamhaus was fairly well known and had been around for awhile, media attention was purposefully exploited by CloudFlare for its own gain. This exposed this type of attack to a much wider audience. It basically laid out the blueprints and also broadcast how massive the DDoS could be if done right and with the proper resources.
Most of the time, people who DDoS have no idea how large a reaction they are generating. They are merely trying to achieve their goal of taking down the target. Well, CloudFlare publicized the size of this attack on a daily basis and it enlightened a lot of new crowd to the method. Although it is believed that the CloudFlare final number of 300Gb/s was quite padded and that the real number was more believably around 100Gb/s, this was still a massive amount of bandwidth. As a result, tools popped up all over the place for scanning host machines to add to your database along with tools to execute the attack. This made the process so simple that a 10 year old with Windows had the ability to point and click and in seconds, generate a few Gb/s of UDP traffic.
The unfortunate truth is that EVERYONE is at risk. Sometimes people get attacked and they have no idea why! The source is often someone who doesn’t like one’s business, perhaps a competitor or someone trying to extort money.
How important is intelligence gathering when it comes to mitigating the effects of a massive DDoS attack? What type of information are you looking for?
It is extremely important for the entire online community. Mitigating the attack only stops the attack from hurting one specific target, but if you can find the information that will lead to the C&C, this can be reported to several “white hat” groups who volunteer their time into dismantling these botnets so they cannot attack anyone else. It is also important to figure out who the attacker is, in the event that criminal prosecution can be pursued.
What are some of the lessons that you’ve learned when you mitigated large DDoS attacks impacting your clients?
I learned quickly that no attack is the same. There is no “one size fits all” device out there that will stop every attack. To be responsible, a person needs to have many different tools in his or her arsenal, sometimes used together along with some manual work, to stop some of the more intelligent attacks.
Never assume that you have seen an attack as big as it would ever get. But also, it is worth noting that size isn’t everything. It can actually be the smaller attacks, the ones which look quite similar to normal traffic, which are the hardest to stop.
What advice would you give to organizations interested in getting DDoS protection? How can they make sure that they make the right choice when evaluating providers?
When evaluating any potential provider, look at their history. See how long they have been around and ask for some proof. Check there website for original content. There is smaller company out there who is decently known, but their entire site is plagiarized from different companies who sell DDoS mitigation devices. If they cannot write original text on their own site, then I really would not have too much faith in them protecting my interests as a client.
What are the advantages of using GigeNET DDoS protection? What makes you stand out from the competition?
Without a doubt, our best asset is our experience. We are tried and true. I began defending DoS attacks in 1998 when we used to run a shell server and attackers would DoS other people off of IRC chats.
Paul, our network engineer, started the first fully dedicated DDoS protection company in the late 90’s and pioneered many of the methods of protection. We joined forces in 2005 and have been at the forefront of the industry ever since. | <urn:uuid:87b81078-2dd9-4953-b6cc-16f07815aa2d> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/06/24/ddos-attacks-what-they-are-and-how-to-protect-yourself/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00433-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9749 | 1,452 | 2.953125 | 3 |
Blogmaster Note: This was originally posted on April 23, 2012 to ComputerWorld UK’s Security Spotlight Blog.
Are there any insights left to be wrung from the code breaker’s papers?
Chris Vallance of the BBC reports that GCHQ has released some of Alan Turing’s papers on the theory of code breaking. They’re not on display at the National Archives at Kew. I’ve checked the web pages of the Archives and GCHQ, and there is as of my writing nothing up there, yet.
The two papers are titled, The Applications of Probability to Crypt” and Paper on the Statistics of Repetitions. They discuss the use of mathematics to cryptanalysis. This might seem a bit obvious now, but at the time cryptanalysis was largely done by smart people and not by machines. A code-breaker was more likely someone who was good at solving complex crossword puzzles than working with numbers. It was unusual to bring in someone like Turing to a cryptology lab.
It wasn’t until machine cryptography was developed after WWI that codes were developed that were so complex humans couldn’t break them. The Enigma machine is the most famous, but there were others used all around the world.
However, using statistics has been a staple of code-breakers for centuries. It was used by British code-breaker George Scovell, to break Napoleon’s codes back in the early 1800s.
The BBC quotes a GCHQ mathematician that the papers discuss “mathematical analysis to try and determine which are the more likely settings so that they can be tried as quickly as possible.” Indeed, we know that the Engima codes were broken daily through flaws in distributing daily settings for the code machines themselves as much as breaking the actual cryptography.
It will be interesting to see what is in those papers. GCHQ says they have squeezed all the juice out of them, and therefore they are not likely to hold surprises for us in the private sector. Nonetheless, many of us will be interested in reading Turing’s words on the subjects. | <urn:uuid:60b352c2-3c15-4994-b2be-fce656aba35b> | CC-MAIN-2017-04 | https://www.entrust.com/alan-turing-notes-on-cryptography-released/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00518-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973252 | 437 | 2.6875 | 3 |
HOW IT WORKS
The return of the sneaker net
- By John Breeden II
- Sep 13, 2012
The other day I happened to use the term “sneaker net” and found that a young colleague didn’t know what I was talking about. True, it’s been a while since the heyday of sneaker nets, but they’re still around and, in fact, are even starting to come back. So for the young folks out there who never had to deal with dial-up, here’s a primer on the term.
What it is: "Sneaker net" is a somewhat comical term that came into fashion in the 1980s when bandwidth was low. It became easier and quicker to simply put a file onto a disk and then walk it over to a new computer to transfer it. This was mostly done by techies wearing sneakers, hence the term. To the ear, it sounds a little bit like Ethernet.
Once bandwidth got plentiful, sneaker nets fell out of fashion. Even the Usain Bolt of techies couldn’t outrace a 10/100 megabits/sec connection for short distances and small files.
But now that files are getting to be hundreds of gigabytes or even terabytes in size, the sneaker net is back. So get those running shoes ready again.
Examples: The SETI@home project, which searches for signs of extraterrestrial life, uses a form of sneaker net to transport massive amounts of data gathered by the radio telescope in Arecibo, Puerto Rico. Data is put onto magnetic tapes and then mailed to Berkeley, Calif., for processing. So it’s the mailman’s sneakers being used.
One of the most radical examples of sneaker net came from employees of a South African company who got tired of the slow transmission speeds they were getting from their provider. In 2009, they tried to transfer 4G of data 60 miles between cities using an ASDL line. At the same time, they put the data on a key drive and used a carrier pigeon to carry it the same distance, so it was sort of of a pigeon toe net. The bird made the flight in 1 hour, 8 minutes and it took another hour to transfer the data off the memory stick. Only 4 percent of the data had been transferred the traditional way by the time the pigeon was done.
Bottom Line: Although they went out of fashion as bandwidth increased, sneaker nets started to come back into vogue as file sizes rose. As long as a techie with a pair of sneakers, or a pigeon in some cases, can get the job done faster than a digital transfer, sneaker nets will live on.
John Breeden II is a freelance technology writer for GCN. | <urn:uuid:67c5bc8b-1992-4be2-9659-b880cac81c77> | CC-MAIN-2017-04 | https://gcn.com/articles/2012/09/13/how-it-works-sneaker-net.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00050-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960185 | 579 | 2.6875 | 3 |
What is NetFlow?
NetFlow is a network protocol developed by Cisco in order to collect and monitor IP network traffic. By utilizing NetFlow, IT teams can analyze traffic flow and determine the traffic source, traffic direction, and how much traffic is being generated. To help you better understand the NetFlow process, I like to use the following analogy from our Product Manager, Ulrica de Fort-Menares.
Think of NetFlow the way you think of a phone bill. When you get your phone bill, you usually see a record of conversations listed. The information regarding these conversations includes the time the call occurred, who was called, how long the conversation was, the actual metadata from the phone call–but not the actual audio data packet.
Why is this concept like NetFlow?
Similar to NetFlow, the header information for data packets that traverse through a device are stored in the device’s cache and then exported to a collector. A collector is very important to analyzing NetFlow data. Without one, you could attempt to verify the cache for what data is currently traversing through a device, but as you can see below that is highly ineffective and time consuming.
What about other flow types–sFlow, jFlow?
While NetFlow is a commonly used name for flow export, NetFlow is vendor specific to Cisco. jFlow is vendor specific to Juniper and sFlow is an industry-standard flow. The key difference between sFlow and NetFlow is that sFlow is sampled flow and NetFlow is not sampled. Fortunately, our network management platform, LiveNX, is vendor agnostic when it comes to our flow collection and if your device supports any type of flow export the data can be collected by LiveNX. Please see our specifications page for more information.
What do I do with the flow data?
You could attempt to analyze a pcap if you had plenty of time, or more realistically you could use a flow collector to store and analyze the metadata to make sense of the information. For example, in the image below, I have a real-time view of a Palo Alto firewall being monitored by LiveNX. In the data set, I see a blue highlighted row that represents a conversation traversing through the firewall. Notice the information contained in this flow includes source and destination IP address, source and destination ports, TOS, utilization and even an application name—all of this is derived from flow!
You can learn more about our real-time flow monitoring here.
Using LiveNX you are able to take that flow metadata and visualize it across a topology to track a conversation through the network. For example, in the image below, I’m focused on user voice calls between the LA and Toronto offices utilizing a filter based on subnets and ports. Notice anything strange about the DSCP markings?
Watch more on how we visualize flows here.
Using flow data, it’s also possible to better understand and manage WAN bandwidth (BW). In the example below, I’m able to see that most of the outbound data on the GE0/0 is video-over-http. I can also see the total utilization for a specified time range, as well as average and peak-rate information.
Watch more about WAN BW Management here.
As more and more applications fight for expensive BW, flow data becomes the path of enlightenment in the network. In the past, if you were to traditionally derive this information, it would take the deployment and management of probes. Now, just by enabling features already available on your devices, you can export flow data to a solution like LiveNX—ultimately helping you to analyze and make sense of the collected metadata.
Read more about sFlow here: http://www.sflow.org/about/index.php
View the NetFlow RFC here: https://www.ietf.org/rfc/rfc3954.txt
Date: October 26, 2016
Author: Alex Cameron | <urn:uuid:8b183444-3343-4d9f-a16b-8c83fc1db269> | CC-MAIN-2017-04 | http://www.liveaction.com/what-is-netflow/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00104-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917512 | 823 | 2.5625 | 3 |
A Security researcher, Riyaz Ahemed Walikar, has posted evidence of a serious persistent Cross Site Scripting(XSS) vulnerability on Tumblr, the popular microblogging platform.
XSS flaws are highly common on websites these days, but most of them are non-persistent and implicitly less dangerous.
"XSS can cause a lot of serious problems. An attacker can steal cookies, redirect users to fake or malicious sites, control a user's browser using automated frameworks like BeEF and download and execute exploits on the victim's computer," Researcher said in the blog post.
"Stored XSS is even more dangerous since the script is stored on the server and is executed everytime user visits an infected page."
Researcher found vulnerability on the 'Register Application' page at http://www.tumblr.com/oauth/apps. The application was not sanitizing user input when a user would create a new application. An XSS attack vector like tester "><img src='x' onerror="alert(document.cookie)"/> would trigger an alert box, displaying the user's cookie, in the browser.
Tumblr were notified more than three weeks ago on the issue. Finally, they fixed the vulnerability Today(july 14).
If you don't know what XSS is, you can read this article "Xss For Beginners". | <urn:uuid:5744880f-5ac4-4f99-ba2f-ca7dc063e5c3> | CC-MAIN-2017-04 | http://www.ehackingnews.com/2012/07/tumblr-patched-critical-persistent-xss.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00187-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910948 | 283 | 2.515625 | 3 |
The HPC community has turned out supercomputers surpassing tens of petaflops of computing power by stringing together thousands of multicore processors, often in tandem with accelerators like NVIDIA GPUs and Intel Phi coprocessors. Of course, these multi-million dollar systems are only as useful as the programs that run on them, and developing applications that can take advantage of all those cores requires the concerted efforts of highly-skilled programmers.
Current HPC programming tools are failing to meet the challenges presented by large-scale, heterogenous architectures and the demands of big data. Frameworks like MPI can be difficult to learn and use and time-consuming even for established experts. A new open source collaboration called “Julia” aims to simplify the coding process by providing “a powerful but flexible programming language for high performance computing.”
“In recent years, people have started to do many more sophisticated things with big data, like large-scale data analysis and large-scale optimization of portfolios,” says Alan Edelman, a professor of applied mathematics who is leading the Julia project. “There’s demand for everything from recognizing handwriting to automatically grading exams.”
Edelman, who is affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory, points to a lack of professionals capable of coding at this level, noting that it’s not just difficult, it’s time-intensive.
“At HPC conferences, people tend to stand up and boast that they’ve written a program so it runs 10 or 20 times faster,” Edelman says. “But it’s the human time that in the end matters the most.”
The origins of Julia can be traced back to an HPC startup that Edelman was involved in, called Interactive Supercomputing. After the business was acquired by Microsoft in 2009, Edelman launched a new project with the goal of developing a novel, high-level programming environment that was both fast and efficient and suitable for domain experts as well as expert coders.
The development group includes Jeff Bezanson, a PhD student at MIT, and Stefan Karpinski and Viral Shah, both formerly at the University of California at Santa Barbara. They had all tried MPI (message-passing interface), the popular parallel processing tool, but found it was not the easiest interface to work with.
“When you program in MPI, you’re so happy to have finished the job and gotten any kind of performance at all, you’ll never tweak it or change it,” Edelman says.
The group made it their mission to develop a new language with the parallel-processing support of MPI that could generate code that ran as fast as C. It also had to be as easy to learn and use as Matlab, Mathematica, Maple, Python, and R, and it should be open-source, like Python and R.
The effort led to the launch of Julia in 2012, released under an MIT open-source license.
Edelman reports that Julia, while still a work in progress, has surpassed the group’s expectations.
“Julia allows you to get in there and quickly develop something usable, and then modify the code in a very flexible way,” he says. “With Julia, we can play around with the code and improve it, and become very sophisticated very quickly. We’re all superheroes now — we can do things we didn’t even know we could do before.”
The language uses a “multiple dispatch” approach which enables users to define function behavior across combinations of argument types. A dynamic type system enables greater abstraction, which bolsters performance and supports large data. Programs can be created quickly; when equally good programmers compete, the Julia programmer always wins, according to Edelman.
Edelman is not only a Julia creator and developer, he uses the language for Monte Carlo simulations for his “other” job as a theoretical mathematician.
“I love using Julia for Monte Carlo because it lends itself to lots of parallelism,” he explains. “I can grab as many processors as I need. I can grab shared or distributed memory from different computers and put them altogether. When you use one processor, it’s like having a magnifying glass, but with Julia I feel like I’ve got an electron microscope. For a little while nobody else had that and it was all mine. I loved that.”
Perhaps the coolest thing about Julia is that it the spirit of collaboration and extended community that is being enabled by the combination of ease-of-use and open-source licensing. Edelman says that people from all over the world working on the project. Geographically separate parties can even work on the same piece of software in real time. | <urn:uuid:5bb37027-f77b-476e-8536-26a7b5e73169> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/06/18/easier-faster-programming-language/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00335-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951478 | 1,010 | 2.59375 | 3 |
Program Provides High Schoolers Access to IT Careers
While some kids spent the past few months skiing and bowling in front of their Wiis or walking through the mall picking out the latest spring attire, more than 100 Chicago Public Schools students were preparing for an IT certification exam. On May 5 and 6, these students sat for a rigorous CompTIA exam, a move that will help them find gainful employment or give them valuable experience before moving on to higher education.
Roughly 650 students in six high schools participate in CompTIA’s Education to Careers (E2C). The program takes place as an elective during the regular school day and students can gain IT certifications. The program at Chicago Public Schools (CPS) engages students in two courses, A+ and Network+, teaching them the skills to pass the respective certification exams.
The program has been in place at CPS for about a decade. CompTIA began collaborating with the district several years ago to encourage new workers to enter IT. “CompTIA is interested in [getting new entrants into the field] for our corporate members who are looking for an ongoing pipeline [to] bring on board successful employees [for] the future,” explained Gretchen Koch, director of skills development programs at CompTIA.
“We work very actively with these communities and with these schools to [develop] an interest in young people in IT as a wonderful profession to pursue,” she said.
Charles Willard, CPS’ career cluster manager for IT, identified a number of benefits the program provides. First, it gives the students a meaningful skill. “As we know, computers are widely used across the board in many levels of industry,” he said. “[E2C] has afforded [students] the opportunity to step up into the world of employment at a higher level.”
Students acquire jobs while in high school with companies such as Best Buy and Circuit City, practicing the skills they learn in the classroom. For example, the students might work on diagnosing problems on PCs and then repairing them.
Not only does the program allow students to use the classroom-learned skills to find employment, it engages them in higher reasoning. “Eighty percent of resolving an [IT] issue is all mental diagnostics,” said Willard. “You’ve got to be able to walk mentally through the PC or the network and eliminate factors that might be causing the problem, and that takes higher reasoning.”
CPS perpetuates this engagement in higher reasoning through its articulation agreements with higher education institutions. “We do prepare them for a career if they so desire,” said Willard, “but at the same time, we have articulation agreements with the city colleges [and] universities, so that if a student wishes, they can go on to post-secondary school and enhance their skills.”
Students in the E2C program, as well as other similar programs, perform better academically than those who don’t get involved. “What we’ve found, and what research has found — in the career and technical education area across the U.S. and also in Illinois — is that our students tend to stay the course longer,” said Willard. “That is, they stay in school longer, they have better attendance and they have a higher graduation rate.”
The program gives students “opportunities they would not otherwise have,” he stated. And CompTIA has offerings in place to help ensure these opportunities are a bit easier to attain.
One accommodation the IT education provider makes is providing free vouchers for the teachers of E2C programs. The free vouchers “encourage the teachers to be certified themselves,” said Koch, “so they have a good indication of what it takes to pass the exam for their students.”
The company also offers discounted exam tickets to its members to help students pay for the exams. “[CompTIA] looked at the price of [its] certifications and saw that pricing could potentially be an impediment, particularly in publicly funded institutions,” said Koch. To help with this, “member institutions can purchase vouchers for their students at a significant discount for [CompTIA] certifications.”
Even with these features of the program, CPS recognizes it can still improve. One area in which CPS is pushing for improvement is community involvement, said Willard. “[CPS is trying] to get the business community around Chicago to embrace the Chicago Public Schools and give our students more opportunities for internships, paid and unpaid. We’re pushing towards the business world to reach back into the community, open up the doors and give our kids opportunities to really put their skills to work.”
“Internships can lead to full-time employment for these kids,” said Koch. “And internships give them real-life experience that they can add to their resumes.
“These children are so impressive; they are so smart,” she added. “They just love computers. This is the computer generation, [and] they’ve grown up with this stuff. These kids are really into it, and they’re really doing a good job.” | <urn:uuid:f795f805-b977-44f7-9af5-ced2a634ecbd> | CC-MAIN-2017-04 | http://certmag.com/program-provides-high-schoolers-access-to-it-careers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00243-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968519 | 1,111 | 2.625 | 3 |
Flight demonstrations may be pricey, but when industry partners are willing to pay for them they can help speed new NASA technology from the lab into real-life use, experts said in a report issued on Wednesday.
The report scolds NASA's reliance on 40-year-old technology and says the agency needs to focus on 16 areas that include better rockets, better ways to propel spacecraft in space, safer ways to land, and new ways to feed and protect astronauts on long missions.
"To send humans to the moon, Mars, and other destinations beyond low Earth orbit, new technologies are needed to mitigate the effects of space radiation ... advance the state of the art in environmental control and life support systems so that they are highly reliable ... and provide advanced fail-safe mobile pressure suits, lightweight rovers ... and other mechanical systems that can operate in dusty, reduced-gravity environments," stated the report from the National Research Council.
"It has been years since NASA has had a vigorous, broad-based program in advanced space-technology development," said Raymond Colladay, president of Golden, Colo.-based RC Space Enterprises and chair of the committee that wrote the report. "Success in executing future NASA space missions will depend on advanced developments that should already be under way."
President Obama shut down months of vigorous debate on where NASA should go after ending the shuttle program last summer by cancelling predecessor George W. Bush's Constellation program aimed at putting people back on the moon. President Obama instead sent the space agency further out, directing it to develop robotic and science missions and to eventually aim to land astronauts on Mars and, perhaps, an asteroid.
The report from a team of space experts, including private NASA contractors, engineering experts, and others, said problems range from protecting astronauts from radiation to finding cheaper ways to get a spaceship up into orbit - and paying for it with a projected technology budget of $500 million to $1 billion a year.
"Lifetime radiation exposure is already a limiting assignment factor for career astronauts on the International Space Station," they wrote. Current models suggest that astronauts cannot spend more than three months beyond low Earth orbit because of health worries, the report said. Yet not much work has been done on space suits, for instance, since the Apollo missions 40 years ago.
And venerable old rocket systems work well but they are expensive, the report noted.
"In spite of billions of dollars in investment over the last several decades, the cost of launch has not decreased. In fact, with the end of the Space Shuttle program and uncertainty in the future direction of human spaceflight, launch costs for NASA science missions are actually increasing. This is because without the space shuttle or human space-flight program, the propulsion industrial base is at significant overcapacity."
NASA Chief Technologist Mason Peck said he welcomed the report.
"The report confirms the value of our technology development strategy to date. NASA currently invests in all of the highest-priority technologies and will study the report and adjust its investment portfolio as needed," he said in a statement. | <urn:uuid:97e3e781-89da-438f-813e-3b09bc0c57dd> | CC-MAIN-2017-04 | http://www.nextgov.com/mobile/2012/02/nasa-is-too-reliant-on-old-technology-report-says/50557/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00059-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953379 | 617 | 3.375 | 3 |
By Balaji. K.
Coronary Obstructive Pulmonary Disease (COPD) consisting of chronic bronchitis and emphysema, which affect the lungs, is expected to become the fourth most prevalent killer disease in the world by 2020. The lung capacities of patients are reduced by as much as 40 percent in most cases which makes basic physical activities extremely difficult, thus, incapacitating patients.
Smoking has been found to be the primary cause of COPD, while air pollution has been pegged as the second causative agent. Occupational pollution, particularly from cadmium and silica, has proved to be another major risk factor among mine and construction workers. A rare genetic disease, alpha-1 antitrypsin (AAT) deficiency, can also cause COPD. This is usually treated by supplementing the deficient AAT through injections. Infectious diseases are the fifth common cause of COPD.
Current Treatment Trends
The first step in the treatment of COPD has been to ensure that the progression of the disease is slowed and premature death prevented. This involves removing or minimizing the exposure to the causative agent responsible for COPD; that is, if the patient is a smoker, he is advised to stop smoking. This is followed by a wide range of drug therapy that depends on the diagnosis and efficacy of the treatment. Long- and short-acting beta-agonist bronchodilators have played a crucial role in the treatment of COPD. Anti-cholinergic agents, oral corticosteroids, and oral glucocorticoids are also used if needed. Mucokinetic agents and antibiotics to treat chest infection, expectorants, and oxygen therapy (non-pharmacological therapy) are also used in conjunction with a pulmonary rehabilitation program. | <urn:uuid:f73506f5-26b0-48ad-990e-9f05dd6cb808> | CC-MAIN-2017-04 | http://www.frost.com/sublib/display-market-insight-top.do?id=MKEE-58PTSV | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00363-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95191 | 365 | 3.09375 | 3 |
In my last post, I wrote about collecting data to provide information for decision-making. There is no question that data is where we begin, but facts are not enough. What we really need is information—a meaningful interpretation and presentation of the data that gives us insight into a condition or situation.
For example, when I manage a project, I collect data about task performance, such as “Task A” started on January 5 and finished on January 12, it took 27 hours of effort and we spent $3000 on travel. Interesting facts, but they really don’t tell me enough to evaluate the task’s performance, its impact on the project, or help me make decisions about the remaining work or cost.
It turns out that “Task A” was scheduled to complete on January 14, making it ahead of schedule. Unfortunately, it isn’t on the critical path, so there’s no impact to the project schedule. We had planned to use 25 hours to complete the work, but even though we were two hours over our plan for the task, we’ve been running significantly below our labor estimates on the work performed to date, and we’re already about halfway to completion. Now for the bad news: the total travel budget for the project is $5000 and we still have two more trips planned. What was supposed to be a one-day trip, costing $1000 for three team members, turned into a three-day trip, costing $3000 because the team got stranded in a Chicago blizzard.
So what makes the second paragraph more useful than the first? In general, the second paragraph tells a complete story that informs, highlights what’s important and clearly identifies the actions or decisions needed. If we further dissect that paragraph, we can see a few specific attributes that make it more meaningful and useful: content, context, contrast and consequence.
Content: In the first paragraph, we know the facts, but the second paragraph takes the time to combine the facts into a story. By adding a little more detail, or content, the reader has a better grasp of “the five W’s”: who, what, where, when and why. Our desire to be brief and direct often results in repeated question-answer cycles, leaving the decision-maker to ferret out the relevant and important information before taking an appropriate action, delaying the process.
Context: While the first paragraph describes the amount of money that was spent on travel, it does nothing to explain the circumstances or context behind the expenditure. By providing the information about the blizzard, the decision-maker better understands why the expenditure was high and in a better position to make an informed decision regarding future travel expenditures.
Contrast: In the first paragraph, we know what date the task finished, but in the second paragraph, we know that it finished late. By including data from the project plan, the second paragraph is able to compare what was supposed to happen with what actually happened. This provides the decision-maker with a much better sense of the problem and apply an appropriate decision.
Consequence: In addition to understanding that a variance exists, we also need to be clear about the impact or consequence of the difference. Sure, the first task took a couple more hours than we had originally planned, but in the overall scheme of things, it really doesn’t require action. The budget variance on the other hand is significant since it pretty much blows our travel budget out of the water.
Turning data into meaningful information to drive a decision isn’t rocket science, but it does require some thought. Next time you put together a report, ask yourself: does this tell the full story? If not, it’s probably just data.
In my next post, I’ll write about presentation in the context of decision-making, including some thoughts on dashboards. | <urn:uuid:0d9e9f36-11dd-484c-93ee-e0d936734193> | CC-MAIN-2017-04 | http://blogs.daptiv.com/tag/actionable-information/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00179-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959704 | 803 | 2.75 | 3 |
Everyday Science Quiz Questions & Answers - Part 1
This everyday science quiz contains collection of free quiz questions and facts about everyday science. It is a place where you find answers to the most commonly asked everyday science questions. Test and increase your science knowledge with these quiz questions.
Everyday Science Quiz Questions & Answers
1. Question: A man with a load jumps from a high building. What will be the load experienced by him?
Answer: Zero, because while falling, both the man and the load are falling at the same acceleration i.e. acceleration due to gravity.
2. Question: A piece of chalk when immersed in water emits bubbles. Why?
Answer: Chalk consists of pores forming capillaries. When it is immersed in water, the water begins to rise in the capillaries and air present there is expelled in the form of bubbles.
3. Question: Why does a liquid remain hot or cold for a long time inside a thermos flask?
Answer: The presence of air, a poor conductor of heat, between the double glass wall of a thermos flask, keeps the liquid hot or cold inside a flask for a long time.
4. Question: Why does a ball bounce upon falling?
Answer: When a ball falls, it is temporarily deformed. Because of elasticity, the ball tends to regain its original shape for which it presses the ground and bounces up (Newton's Third Law of Motion).
5 Question: Why is standing in boats or double-decker buses not allowed, particularly in the upper deck of buses?
Answer: On tilting the centre of gravity of the boat or bus is lowered and it is likely to overturn.
6. Question: Why is it recommended to add salt to water while boiling daal?
Answer: By addition of salt, the boiled point of water gets raised which helps in cooking the daal sooner.
7. Question: Why is it the boiling point of sea water more than that of pure water?
Answer: Sea water contains salt, and other impurities which cause an elevation in its boiling point.
8. Question: Why is it easier to spray water to which soap is added?
Answer: Addition of soap decreases the surface tension of water. The energy for spraying is directly proportional to surface tension.
9. Question: Which is more elastic, rubber or steel?
Answer: Steel is more elastic for the same stress produced compared with rubber.
10. Question: Why is the sky blue?
Answer: Violet and blue light have short waves which are scattered more than red light waves. While red light goes almost straight through the atmosphere, blue and violet light are scattered by particles in the atmosphere. Thus, we see a blue sky.
11. Question: Why Does ink leak out of partially filled pen when taken to a higher altitude?
Answer: As we go up, the pressure and density of air goes on decreasing. A Partially filled pen leaks when taken to a higher altitude because the pressure of air acting on the ink inside the tube of the pen is greater than the pressure of the air outside.
12. Question: On the moon, will the weight of a man be less or more than his weight on the earth?
Answer: The gravity of the moon is one-sixth that of the earth; hence the weight of a person on the surface of the moon will be one-sixth of his actual weight on earth.
13. Question: Why do some liquid burn while others do not?
Answer: A liquid burns if its molecules can combine with oxygen in the air with the production of heat. Hence, oil burns but water does not.
14. Question: Why can we see ourselves in a mirror?
Answer: We see objects when light rays from them reach our eyes. As mirrors have a shiny surface, the light rays are reflected back to us and enter our eyes.
15. Question: Why does a solid chunk of iron sink in water but float in mercury?
Answer: Because the density of iron is more than that of water bus less than that of mercury.
16. Question: Why is cooking quicker in a pressure cooker?
Answer: As the pressure inside the cooker increases, the boiling point of water is raised, hence, the cooking process is quicker.
17. Question: When wood burns it crackles. Explain?
Answer: Wood contains a complex mixture of gases and tar forming vapors trapped under its surface. These gases and tar vapors escape, making a cracking sound.
18. Question: Why do stars twinkle?
Answer: The light from a star reaches us after refraction as it passes through various layers of air. When the light passes through the earth?s atmosphere, it is made to flicker by the hot and cold ripples of air and it appears as if the stars are twinkling.
19. Question: Why is it easier to roll a barrel than to pull it?
Answer: Because the rolling force of friction is less than the dynamic force of sliding friction.
20. Question: If a feather, a wooden ball and a steel ball fall simultaneously in a vacuum, which one of these would fall faster?
Answer: All will fall at the same speed in vacuum because there will be no air resistance and the earth?s gravity will exert a similar gravitational pull on all.
21. Question: When a man fires a gun, he is pushed back slightly. Why?
Answer: As the bullet leaves the nozzle of the gun's barrel with momentum in a forward direction, as per Newton's Third Law of Motion, the ejection imparts to the gun as equal momentum in a backward direction.
22. Question: Ice wrapped in a blanket or saw dust does not melt quickly. Why?
Answer: Both wood and wool are bad conductors of heat. They do not permit heat rays to reach the ice easily.
23. Question: Why do we perspire on a hot day?
Answer: When the body temperature rises, the sweat glands are stimulated to secrete perspiration. It is nature's way to keep the body cool. During the process of evaporation of sweat, body heat is taken away, thus giving a sense of coolness.
24. Question: Why does ice float on water but sink in alcohol?
Answer: Because ice is lighter than water it floats on it. However, ice is heavier than alcohol and therefore it sinks in alcohol.
25. Question: Why do we perspire before rains?
Answer: Before the rain falls, the atmosphere gets saturated with water vapors; as a result, the process of evaporation of sweat is delayed.
26. Question: How do birds sit safely on electric power lines?
Answer: This is possible because a bird only touches one line. If the bird were to touch another line or pole the electricity would travel through the bird, either to the ground or another wire.
This list is based on the content of the title and/or content of the article displayed above which makes them more relevant and more likely to be of interest to you.
We're glad you have chosen to leave a comment. Please keep in mind that all comments are moderated according to our comment policy, and all links are nofollow. Do not use keywords in the name field. Let's have a personal and meaningful conversation.comments powered by Disqus | <urn:uuid:5d81e170-7c26-4b4d-ae2c-ff68da41c141> | CC-MAIN-2017-04 | http://www.knowledgepublisher.com/article-645.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00207-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932413 | 1,528 | 3.53125 | 4 |
The Device Statistics page is not documented anywhere because it should be self-explanatory; however, there may be confusion regarding some of the information.
This article identifies some possible sources of confusion and provides clarification.
"Sheets" refers to physical sheets of media or paper, whereas "sides" refers to printed sides. For example, one sheet of paper can have two sides of printed output.
"Picked" refers to paper that has been successfully pulled from an input tray, whereas "printed" refers to paper that actually exited the output bin.
For example, you can calculate the number of pages jammed in the printer by taking the number of pages picked and subtracting the printed number of pages from that number. In brief:
quantity (#) of pages picked - (minus ) # of pages printed = # of paper jams.
If you need additional assistance, please contact Lexmark Technical Support. NOTE: When calling for support you will need your printer machine/model type and serial number.
Please call from near the computer and printer in case the technician requires you to perform a task on one of these devices. | <urn:uuid:15c9beab-fa36-47bf-bc50-729bc3810994> | CC-MAIN-2017-04 | http://support.lexmark.com/index?page=content&id=HO3425&locale=en&userlocale=EN_US | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00051-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917835 | 231 | 2.546875 | 3 |
In attempting to relieve the increasing bottlenecks and performance issues plaguing cloud applications, IBM inventors have come up with a method for dynamically managing network bandwidth in cloud environments. The invention, dubbed Dynamically Provisioning Virtual Machines, automatically determines the best way for users to access cloud computing resources based on the availability of network bandwidth.
The way it works is this: As the virtual machines that serve as gateways to cloud services become overtaxed by growing numbers of users requesting access, thus constraining applications, IBM’s new methodology reassigns workloads from one system node to another based on bandwidth requirements and availability. The result, IBM says, is the removal of a major roadblock to cloud efficiency.
Today’s users “have zero tolerance for network bandwidth bottlenecks,” said Ed Suffern, IBM systems engineer and lead inventor, in a statement announcing the breakthrough. “IBM’s patented dynamic provisioning invention will help cloud service providers increase network performance and improve customer satisfaction.”
The new methodology should provide a significant improvement over existing approaches, said Cliff Grossner, directing analyst at Infonetics Research, via email.
“This innovation allows adjustment of workloads based upon new capabilities to determine network state, rather than rebalancing workloads only on server state, which can produce non-optimal results,” Grossner said.
[Startup ThousandEyes monitors the performance of cloud applications. Read how it can help IT identify and troubleshoot problems in "ThousandEyes Peers Into Cloud Performance."]
IBM said its new method is ideally suited for cloud apps that commonly experience dramatic or unexpected peaks and valleys in demand for services. Examples include online retailers facing crushing holiday traffic, search engines contending with surges in response to current events, and government and news media sites that are overrun in response to events such as elections, conflicts and natural disasters.
“This capability would enable an automated data center to quickly react and rebalance workloads to remove network bottlenecks,” said Grossner.
In addition to immediate practical applications, IBM maintains that its approach provides a foundation for software-defined networking (SDN). The invention relies on software to manage network resources by obtaining data from network switches to determine the amount of bandwidth being used by IP addresses assigned to VMs. As network bandwidth becomes constrained on one node, the system automatically reassigns VMs to another node with available bandwidth.
Grossner said IBM’s invention could prove beneficial to both traditional and SDN-architected networks in analyzing and reacting to the needs of virtualized workloads.
“This invention can be used by traditional L2 and L3 switches, and also be included in SDN controllers going forward,” he said. “This will improve the capability of both traditional and SDN-architected networks to analyze the needs of virtualized workloads and react.”
IBM’s method can run on a variety of operating systems, including Windows, Linux, UNIX and CentOS. | <urn:uuid:cb66cca1-1b89-44af-a429-49475f750440> | CC-MAIN-2017-04 | http://www.networkcomputing.com/networking/ibm-invention-aims-fix-cloud-bottlenecks/1437329047 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00013-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912308 | 639 | 2.671875 | 3 |
How Cyber Attacks Compromise Your Network
Attackers know exactly what they want and how traditional network security fails
Cyber attacks have changed. Broad, scattershot attacks designed for mischief have been replaced with advanced persistent threats focused on acquiring valuable data from an organization. Modern cyber attacks are often conducted across multiple vectors and stages. The have a plan to get in, signal back from the compromised network, and extract valuable data despite network security measures.
Traditional defense-in-depth security measures, such as next-generation firewalls, antivirus (AV), web gateways and even newer sandbox technologies only look for the first move—the inbound attack. Advanced cyber attacks are designed to evade traditional network security.
Cyber Attacks Exploit Network Vulnerabilities
Next-generation cyber attacks target specific individuals and organizations to steal data. They use multiple vectors, including web, email, and malicious files and dynamically adapt to exploit zero-day and other network vulnerabilities.
Advanced cyber attacks succeed because they are carefully planned, methodical and patient. Malware used in such attacks:
- Settles into a system
- Tries to hide
- Searches out network vulnerabilities
- Disables network security measures
- Infects more endpoints and other devices
- Calls back to command-and-control (CnC) servers
- Waits for instructions to start extracting data from the network
By the time most organizations realize they've suffered a data breach, they have actually been under attack for weeks, months, or even years. Most traditional defense-in-depth cyber security measures, such as AV or next-generation firewalls, fail to use signature- and pattern-based techniques to detect threats, and don't monitor malware call backs to CnC servers.
Advanced cyber attacks take many forms, including virus, Trojan, spyware, rootkit, spear phishing, malicious email attachment and drive-by download. To properly protect against these attacks, defenses must monitor the entire life cycle of the attack, from delivery, to call backs and reconnaissance, to data exfiltration.
With Adaptive Defense, FireEye monitors the entire life cycle of advanced attacks to help organizations detect, analyze, and respond to cyber attacks. | <urn:uuid:5aaefde3-0df4-4671-8df9-8cebd6f68132> | CC-MAIN-2017-04 | https://www.fireeye.com/current-threats/how-cyber-attackers-get-in.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00435-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92313 | 457 | 3.015625 | 3 |
Definition of SOA
Can you pull back a bit and give me a basic definition of what is an SOA? Essentially, in brief, an SOA is distributed computing where you identify the different units of work or units of activity as services. So a service is some piece of software that you can issue queries to, issue commands to in some way, basically tell it to do something, and it responds back to you. Its critical that there is a large degree of standardization in how you actually define these services. That is, we cant have one language for talking about this service and another language for talking about that service. The key is to try to make what is essentially an extremely heterogeneous implementation to look as homogeneous as possiblethat is, your service or another service can be described in exactly the same terms and therefore processed by exactly the same tools.So it basically boils down to distributed computing with standards that tell us how to invoke different applications as services in a secure and reliable way and then how we can link the different services together using choreography to create business processes. And then finally so that we can manage these services so that ultimately we can manage and monitor our business performance. Why all of a sudden has it become a buzzword? I think its become a buzzword in large part because Web services is taking off. Web services is really the best pure way we know of doing service-oriented architecture right now. Web services is taking off because its finally sinking into people that the value that we saw in the Internet around Web pages in the last decade is going to translate to the ability for businesses to really connect with each other and do transactions across the Internet. So once you get to a point where you realize something is inevitable, you then go down a little bit deeper into it. When you talk about new technologies like Web services you have to say but how does the rest of my world play? The last thing people want to say is you have to rip out everything. So therefore once there are new technologies that get a sufficient following, people start thinking about the bigger picture. And they start thinking about how they can evolve their existing implementations to take advantage of these things. And they start thinking about roadmaps. And in the IT world when you talk about bigger pictures youre talking about architecture. So the reason is Web services was enabled because of the success of the use of XML on the Internet. Once people started to get enough success around Web services for projects, and enough understanding on what the point was, they wanted to know what was the larger picture and how did it relate to the infrastructure they already put in place. That means a move to architecture. The architecture that Web services is a part of is a service-oriented architecture and therefore that now brings in the much larger consideration of what youre doing with IT. There is really tremendous acceptance about what weve been talking about with on demand. On demand is an IBM phrase, but the things I said before about becoming more flexible and responding to business opportunities, thats very straightforward. When people say, "How do I do that?" fundamentally theyre saying, "How do I build my business processes to get the business value and to do the appropriate connections with my customers, my suppliers and my partners to make this work?" It turns out that SOA maps very, very closely to this notion of business processes and to business requirements. So therefore you come up from below from a technological perspective of looking at the bigger IT picture, and SOA makes sense. When you come down from the business-level discussion of what youre trying to accomplish and what the actual business processes are that you need to run your business, theres a very close mapping between the notion of services and the components of business processes that gives an extra little kick as well. Do you think the term is overused? No, I dont. I think its actually nice to settle in. And in fact if you go back 15 years when people were looking at object-oriented systems, well before Java, the notion of things that are in Java would have been considered absolutely radical 15 to 20 years ago. And its just obvious now. But at that time there was a major move away from building applications in a structured manner to building them in an object-oriented way. It came out of academics. And people really started examining how you structured applications, how you structured data and how you could start to share these things. And a lot of the seeds of SOA were in that. I was in IBM Research for 15 years, and there were things that we did in the late 1980s that were more sophisticated and are not yet in the object-oriented systems that were doing now. We were dealing with mathematics in the sense of modeling things. So a lot of the seeds of what was eventually SOA came out of the object-oriented view of the world. So its a long time in coming. And its not a term thats hyped because its brand new and its the latest fad. Its something people have been thinking about for a very long time. Next page: Customer reaction to the SOA message.
Given this notion that I can describe services, I can get those descriptions, I then need to connect to them. And I have certain requirements about that connectivity. So I have requirements about reliability, that is I know if I invoke a service Id like to know that something actually happened. That it got the message and responded back to me. | <urn:uuid:0fe0d3e4-cf52-4e74-9fef-bc3b77a0056d> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Web-Services-Web-20-and-SOA/IBMs-Sutor-SOA-Is-So-Necessary/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00162-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.978719 | 1,106 | 2.890625 | 3 |
Build a line of defense with these network security tips
This feature first appeared in the Summer 2014 issue of Certification Magazine.
Criminals lurk in many of the Internet’s dark corners, scouting out victims and eager to pounce whenever they identify a network vulnerability. Organizations that fail to apply basic security controls to their networks face a multitude of risks from anonymous attackers, including theft of sensitive information, destruction of critical resources and interference with business processes.
Network security is a broad field that includes a variety of controls designed to protect both the confidentiality of information transmitted over networks, and the availability of those networks to authorized users. Certified network security professionals work to protect networks from potential attackers, detect attacks in progress, and react swiftly and surely to successful network intrusions. They have a variety of technology at their disposal to assist with these challenges. Deploying a few simple advance precautions, however, can tilt the playing field in a good network security manager’s favor. Consider taking these five steps to protect your network against attackers.
1. Watch for intruders and prevent common attacks.
One of the most important activities undertaken by network security professionals is monitoring their networks for the signs of attacks in progress. To achieve this, they rely on a technology known as an Intrusion Detection System (IDS). These systems sit at critical points on an organization’s network and monitor all of the traffic passing that point. The IDS contains a signature database with information about thousands of known attacks. It compares the network traffic that it sees to those signatures, looking for potential attacks on the network. Once an IDS detects a potential attack, it alerts network administrators to the possible intrusion so that they may take appropriate action. Security professionals often describe intrusion detection systems as the “burglar alarms” of the network.
Once you’ve wired your network with an IDS “burglar alarm,” you can take things to the next level by giving the system the ability to proactively respond to detected security threats. Systems run in this mode, known as Intrusion Prevention Systems (IPS) are able to block suspicious traffic before it enters the protected network. One word of caution — make sure that you test this functionality rigorously before deploying it on your network. A misconfigured IPS that accidentally blocks legitimate network traffic can cause serious issues in your data center!
2. Implement consolidated network, server and application logging.
In the event of a security incident, organizations shift into incident response mode, where they are seeking to contain and assess damage to the networks and restore operations to a normal state as quickly as possible. Successfully completing these steps requires reconstructing the events surrounding a security incident. That reconstruction is often only possible if your organization has been maintaining adequate network logs. These logs should contain not only error and security messages created by network devices, but also a record of activity that took place on the network.
While space constraints normally prevent logging the detailed content of all network communications, many organizations do maintain network flow data. These flow records provide a level of detail often compared to that found on a telephone bill — which systems talked to each other, the date and time of the communication and the amount of data exchanged. Those records can be extremely useful when trying to identify the source(s) of an attack, or determine the amount of data that left an organization’s network.
In addition to ensuring that network devices generate adequate logs, you should also take steps to protect those log entries in a safe location. It is not sufficient, for two reasons, to keep the logs on the device that generated them. First, an intruder who gains access to a network device may be able to delete or modify the logs stored on that device. Second, the event may render the device itself inaccessible or nonfunctional. You can work around these limitations by creating a centralized log server that provides a protected refuge for log data — it’s easy to send data in but difficult to remove or modify existing log entries. This centralized server then acts as a single point for the collection and analysis of log records from a variety of sources, including network devices, servers and applications.
3. Encrypt sensitive information in transit on the network.
Networks often carry sensitive information that requires added protection against eavesdropping. This is especially true when the data crosses the public internet. The use of encryption technology to protect this information allows administrators to rest easy, knowing that their data is safe from prying eyes, inaccessible to anyone lacking the correct decryption key.
The first way that you can implement encryption is at the application level — through the use of Secure Sockets Layer (SSL) and/or Transport Layer Security (TLS). These protocols are used to add encryption to other application protocols. For example, the HTTPS protocol uses SSL and TLS to add encryption to standard HTTP-based web communications.
The second way encryption can secure your network is by creating encrypted links between locations that are geographically separate. Virtual Private Networks (VPNs) use encryption to connect remote users and sites to a central location over an encrypted tunnel. Once set up, the tunnel is transparent to the end user, but protects all traffic sent over the encrypted link.
4. Build redundancy into your network.
One of the biggest mistakes that organizations make when thinking about network security is focusing exclusively on the confidentiality of data. While it is certainly important to protect sensitive information, network security strategies must also include controls that preserve the availability of the network to authorized users. Network outages can cause significant losses in productivity, sales and efficiency.
An important way to improve the availability of your network is to identify any single points of failure in your network, and implement redundancy where practical. For example, if you have only a single device routing traffic at your network border, are you prepared in the event that the router fails? If it’s financially practical, adding a second router can protect you in the event of a device failure. If that’s not in the cards, then consider improving the redundancy of critical components in that device. For example, power supplies are one of the components most likely to fail. Most network devices are now available with either standard or optional dual power supplies — that’s a good investment!
5. Scan networks for vulnerabilities regularly.
The last tip that you should follow when securing your network is to regularly scan it for vulnerabilities. Remember, your network is a changing environment and new vulnerabilities are introduced every day. You should have a network vulnerability scanner installed on your network and use it to test the security of devices connected to your network on a regular basis. System and device administrators should review reports quickly and address any vulnerabilities they identify.
Most organizations choose to run vulnerability scans on either a daily or weekly cycle. You’ll need to choose the interval that makes sense and balances your security requirements with the resources available to both conduct scans and act upon the results. Remember, a scan is not a helpful security tool if nobody reads the report and addresses security issues! One helpful tip is to configure your scanner to only inform administrators when a new vulnerability is detected. This stems the tide of “everything is OK” reports and increases the likelihood that administrators will take actual reports seriously when they occur.
Securing your network is an important way that you can protect the confidentiality and availability of your organization’s computing resources and sensitive information. Taking the time to follow these five network security tips will put you well on the way to providing a secure network infrastructure. | <urn:uuid:7cb84384-5610-43d8-a698-d9613b4b8242> | CC-MAIN-2017-04 | http://certmag.com/build-line-defense-network-security-tips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00280-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924636 | 1,528 | 2.5625 | 3 |
FileGen is a command-line program to create test files of different lengths.
FileGen takes 2 or 3 parameters: filegen file size [byte]
- filegen test 1000
this will create a file named test and 1000 bytes long, the bytes are random
- filegen test 1000 0
this will create a file named test and 1000 bytes long, and all the bytes will be zero
The size and byte can be specified in hexadecimal notation, like this:
filegen test 0x100 0xA0
When you create a random byte sequence, C’s pseudo random number generation function rand is seeded with the current time (srand(time(NULL))). This means that the generated byte sequence is different each time you run the command.
The algorithm is not optimized for speed.
FileGen doesn’t test if the generated file already exists, it will be overwritten without warning. And it will not test if you have the required disk space to create the file.
Generating a 1.000.000.000 bytes long random file takes 95 seconds on my 2GHz machine.
Compiled with Borland’s free C++ 5.5 compiler. | <urn:uuid:a7a7ce0d-0a0d-444b-802b-a4027c8d8871> | CC-MAIN-2017-04 | https://blog.didierstevens.com/programs/filegen/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00280-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.656326 | 251 | 3.3125 | 3 |
IT, operational procedures have to adjust to big data
The movement surrounding big data involves aligning information from unstructured, structured and transactional sources into usable knowledge that can provide strategic guidance for businesses. Accomplishing this involves storing and analyzing customer-created content from social media sites, data gathered by monitoring systems in mobile devices, application information and other forms of content that can be difficult to coherently track. According to a recent J.P. Morgan study, big data is becoming a strategically important movement in the enterprise, but is incredibly difficult to deal with.
J.P. Morgan found big data creates major challenges because it combines issues of scale and complexity. On one hand, businesses have to manage extremely large quantities of information that has to be stored in a cohesive way. At the same time, that data is emerging in a variety of formats, from a diverse range of sources and is needed for different purposes. Because of this, organizations have to find a way to not only develop a working system to organize and store information effectively, they also have to develop metadata systems that provide context for the information. Otherwise, analysis can be overwhelming.
Essentially, big data systems look a lot like an insect colony, the news source explained. The incredibly large quantity of data combines with the diverse ranges of sources to make the storage system look like an overly complex swarm of information that does not make any sense. Similarly, an anthill can look like chaos as bugs walk on top of each other and move haphazardly through tunnels and chambers trying to reach a destination unknown to the casual observer. But a close analysis of insect life reveals that there is actually a staggering amount of order to life in the colony. Similarly, a big data system needs solutions that impose order on the chaotic storage and analysis environment.
One way to enable big data to work effectively from an operational standpoint, not just at the storage level, is to use business process management software. The technology automates many of the tedious processes involved with adding context to information, accessing the right databases for certain functions and organizing data in light of use requirements. As a result, employees using mobile devices, operating out of the cloud or running applications on traditional desktop computers in the office can access the data they need in such a way that the technology aligns with their process requirements, leading to major business gains. | <urn:uuid:30b3eda4-51ad-451e-8499-919e81db5ecf> | CC-MAIN-2017-04 | http://www.appian.com/blog/bpm/it-operational-procedures-have-to-adjust-to-big-data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00546-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934626 | 476 | 2.5625 | 3 |
If you are thinking about a change in careers, or looking for a career that has plenty of jobs available, you should consider a career as a programmer. Learning to program is much easier than it seems, and taking online programming classes makes it easy and affordable to learn. Here are the top 5 reasons learning to program and code is the way to go.
- Programming jobs are abundant. Unlike other career fields where demand can quickly dry up during poor economic conditions, programmers are in high demand all of the time. This means you can have a wide range of potential jobs as you become more proficient and experienced in programming.
- Programming can open up new job opportunities at your current employer. Most every company has a need for programmers at some level, from writing code, to automating certain processes, to developing new products and solutions. In addition, programmers have much more telecommuting and flex-time opportunities than other careers.
- Training and educational requirements are fewer compared to other decent paying careers. Learning programming can be accomplished fairly quickly, depending on your desire to start or change your career. It is possible to complete training in six months or less. After starting a new job as a programmer, you can continue to complete additional training courses to stay current on the latest technologies and further enhance your career opportunities.
- Programmers’ earnings are higher than most other career fields. You will discover programming jobs tend to pay much more than other positions. In addition, programmers can work for a specific company, on a contract basis, or as freelancers working on multiple projects for several organizations. According to the Bureau of Labor Statistics, the 2014 median salary for programmers was $77,550, with salaries ranging from $44,140 to $127,640.
- Programming helps you develop better creativity, critical thinking, reasoning, and problem solving skills. Programmers are responsible for developing new ways to solve problems, which requires the ability to essentially “think-outside-the-box” to develop solutions.
For more information about online IT and programming courses, or to get started on your training today, enroll online at GogoTraining, or contact us at 877.546.4446. | <urn:uuid:e65406be-2b62-4a8d-90ba-626357b03666> | CC-MAIN-2017-04 | https://gogotraining.com/blog/2016/10/5-reasons-to-become-a-computer-programmer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00272-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956918 | 445 | 2.65625 | 3 |
Since the mid 90′s, the internet has had a profound impact on our personal and business lives. It’s transformed how we communicate, how we shop, even how we heat our homes. In an enterprise setting, the internet is the lifeblood of the organization, the engine which keeps everything moving. And yet the World Wide Web, on which we’ve come to depend, can also pose a significant threat if we don’t take the appropriate steps to mitigate its risks.
Now in its 11th year, National Cyber Security Awareness Month (NCSAM) was developed by the US Department of Homeland Security and the National Cyber Security Alliance to raise awareness of online security for businesses, consumers, educational establishments and young people across the country, each October.
Throughout the month, businesses are encouraged to take part in spreading the cybersecurity message through posting safety and security tips on social networks, educating their customers and employees, displaying posters, holding events and much more.
Here at Avecto, we’ve signed up as a NCSAM Champion, helping our employees, customers and partners to understand the importance of securing their online environment.
Organizations face malware and Advanced Persistent Threats every day, with more and more businesses feeling the effects of a breach. The potency and regularity of these attacks, has led to many questions about the effectiveness of antivirus software, with some even claiming “Antivirus is dead”.
So how can businesses stay safe online?
Despite the threat, the internet needn’t be feared. Simple, proactive steps can provide a much more robust environment for end users to get on with their jobs and their lives. By taking a Defense in Depth approach to security, with a multi-layered strategy that incorporates solutions like patching, application whitelisting and privilege management, organizations can more effectively protect against cyber-borne threats and keep the engine running.
For further information about National Cyber Security Awareness Month, please visit the website at www.staysafeonline.org. | <urn:uuid:03b9f91c-495a-4dba-a84a-d0d5899065e2> | CC-MAIN-2017-04 | https://blog.avecto.com/2014/10/keeping-cyber-security-front-of-mind-this-october/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00180-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943672 | 417 | 2.640625 | 3 |
A computer worm is a type of Trojan that is capable of propagating or replicating itself from one system to another. It can do this in a number of ways. Unlike viruses, worms don’t need a host file to latch onto. After arriving and executing on a target system, it can do a number of malicious tasks, such as dropping other malware, copying itself onto devices physically attached to the affected system, deleting files, and consuming bandwidth.
Robert Tappan Morris, a computer science student from Cornell University, created the first known worm and unleashed it in November 1988. Now known as the Morris worm, it was initially designed to count the number of computers connected to a network. However, the worm exhibited unwanted side effects.
The term “worm” actually originated from a fictional novel entitled The Shockwave Rider, which was written by John Brunner in 1975. In the novel, this worm is capable of gathering data, suggesting that it’s acting more like an information stealer than a self-replicating program.
Common infection method
Worms can spread themselves in a number of ways:
- Via software vulnerabilities. Some worm variants look for security holes on systems via installed unpatched software. Once a hole is detected, it infiltrates that system, and then it performs its malicious duties.
- Via email. Some worm variants can drop other malware, such as backdoors. This payload transforms the affected system into a zombie/bot machine and connects it to a botnet. These machines are capable of spamming messages to random or targeted recipients, with the worm file as its attachment. Spam mail sent usually involve some social engineering tactics for greater chances of infection.
- Via external devices. Some worm variants can copy themselves onto devices, such as USB sticks and external hard drives, which are attached to an already affected system. This way, systems where these devices can be connected to will be affected as well.
- Via peer-to-peer (P2P) file sharing networks. Some Internet users have been known to use P2P applications like eMule and Kazaa to share files with friends and family. However, such an activity is exploited by worms. Worms in P2P file networks have been difficult to detect.
- Via social networks. Some worm variants have propagated within known social sites. For example, MySpace has been affected by an XSS-type worm.
- Via compromised sites. Compromised sites may harbor certain variants of worms that are capable of looking for security holes.
- Via instant messengers (IMs) networks. This is another popular method used by worms. Not only can it spread copies of itself via text messages, IMs can also be used to spread this malware via its P2P sharing capabilities. This method tend to spread worms a lot quicker.
From the late 20th century to the early 21st century, worms have been one of the most notorious pieces of software that is found in user systems. Although this type of malware is not as prominent a threat on the Internet as it was before, it remains one of the most dangerous. Below is a short list of reputable and prolific worm families that have caused havoc and is now part of computing history:
- Bagle (aka Beagle, Mitglieder, Lodeight)
- Blaster (aka MSBlast, Lovesan, Lovsan)
- Conficker (aka Downup, Downadup, Kido)
- ILOVEYOU (aka Love Letter)
- Mydoom (aka W32.MyDoom@mm, Novarg, Mimail.R, Shimgapi
- SQL Slammer
- Stration (aka Stratio, Warezov)
Running antivirus and/or anti-malware software usually cleans up affected systems automatically. It is also advisable to call legitimate computer technical support services should one encounter complications or the worm is highly sophisticated and needs additional steps to cleaning the system.
The aftermath of a worm infection is dependent of the variant itself and its payload(s). Here are just some notable side effects observed on affected systems:
- Further infection from other malware types
- Affected system may become part of a botnet due to other malware infection, such as backdoors
- Some highly sophisticated worm variants are capable of stopping or crashing the affected systems. This happened to the nuclear centrifuges at Iran before the Stuxnet worm was discovered.
- Systems slow down.
- Defenses on the system are disabled, including Safe Mode.
- Files are missing from the affected system, causing it to not operate or function normally.
- Systems are compromised, opening them for spying by bad actors and other malicious activities.
Prevention is best when it comes to dealing with malware like worms. Here are practical ways one can avoid getting affected by worms:
- Download and install an antivirus or anti-malware software, if you haven’t already. The majority of worms are detected by these software.
- Keep you OS and other software installed on your system updated.
- Make sure that all firewalls on your system and router(s) are always enabled. Also, configure your firewall to make it more secure.
- Restrict access to computer users also helps by not giving administrator access to every user of a computer device.
- Exercise basic computer security 101 protocols. For example, not clicking links or opening attachments on emails without verifying from the senders that they legitimately sent those messages. | <urn:uuid:c5b377a1-38f8-40bf-8bb2-9f475934a4ad> | CC-MAIN-2017-04 | https://blog.malwarebytes.com/threats/worm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00208-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933446 | 1,138 | 4.0625 | 4 |
Use this DNS lookup tool to easily view the standard DNS records for a domain.
What is a DNS lookup?
A domain has a number of records associated with it, a DNS server can be queried to determine the IP address of the primary domain (A record), mail servers (MX records), DNS servers (NS nameservers) and other items such as SPF records (TXT records).
Different tools provide this functionality, a common one being
nslookup which is available on many operating systems including Microsoft Windows and most Linux distributions. Another tool found on Linux based systems is the dig tool. This is generally a more advanced tool that has a number of features that
nslookup does not.
The DNS lookup tool uses the
dig command line to show the response from a query of type
Security implications of DNS queries
By its nature external facing DNS is an open and public service, while the information is openly available you should be aware of what information is being revealed. Security penetration testers and attackers will use information collected from DNS to expand their knowledge of an organizations information technology infrastructure and from that knowledge begin to understand the attack surface.
For example, the SPF records that an organization can publish in order to improve email security can also reveal the IP addresses or hostnames of systems with the ability to send email. These services can all then become targets to be assessed and attacked.
DNS Lookup API using dig
In addition to the web form on this page there is another way to grab the DNS records for a domain. Use this simple API using
curl or any other HTTP based tool or software. Output is of content type text.
The API is designed to be used in an ad-hoc fashion not for bulk queries and is like all our IP Tools is limited to 100 (total) requests from a single IP Address per day. | <urn:uuid:3d1042a5-57e6-4a8a-a1a6-0b841ffe197a> | CC-MAIN-2017-04 | https://hackertarget.com/dns-lookup/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00052-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925371 | 381 | 3.109375 | 3 |
Windows Error leaves user unable to access their data
Recovery Type: Laptop
Drive Capacity: 500GB
Manufacturer: Western Digital
Model Name: Blue
Model Number: WD5000LPVX-75V0TT0
Operating System: Windows
Main Symptom: Windows Error
Type of Data: Homeschooling program files
Data Recovery Grade: 9
Binary Read: 100%
For many people, there are few things in the world more terrifying than a computer that won’t boot up. Computers and hard drives can seem frustrating to anyone not in IT-related fields, and so when they fail without any warning, popping up error messages, their failure can often seem confusing and unprovoked. That heart-stopping moment when your computer spits out an error message can put at risk your day, your business… and in this particular incident, your child’s education. In this case, the client homeschooled their child, and when they received a “Windows error” when attempting to boot up their computer, they could not access the important educational files, documents, and records they kept on their hard drive. Perhaps for the child this meant a vacation—but for the parent, it meant disaster. In need of our recovery services, the client sent their hard drive to us.
Upon evaluating the client’s hard drive, our engineers found no evidence that the drive had suffered any sort of mechanical failure. However, while imaging the drive, Andy, our logical data recovery engineer, noticed a large number of files which had been flagged as “deleted”, and suspecting a logical issue, approached this case as a deleted file recovery case. In many of the cases where a data storage device fails, our engineers only need to read enough of the drive’s raw binary sectors to recover the used areas on the drive (or, given how much physical damage the drive has sustained, as much of the used area as possible).
However, in situations where the drive has sustained logical damage, our engineers must read and make an image of the entire drive for careful logical analysis, as there may be critical data dwelling in the spaces the drive claims to be “blank”. This is because when data is initially deleted or a drive is reformatted, the lost data is not immediately and irrevocably erased, but rather tagged as “unused” space so that any new data written to the drive can use that space. For Windows drives formatted using the NTFS filesystem, the bitmap is the part of the drive which keeps a record of “used” and “unused” sectors.
Andy, curiosity piqued by the suspicious abundance of deleted files, had the entire drive imaged without encountering any bad sectors and began logical analysis, sifting through all of the “unused” sectors of the drive for any data that seemed to be important. In the end, though, the logical analysis turned up nothing of significance. In this case, the hard drive seemed to not have sustained any logical damage after all. There was no sign of an operating system having been reinstalled, or of the drive having been reformatted, and none of the client’s vital data appeared to have been accidentally deleted. The vast majority of the client’s important files were successfully recovered.
Data recovery requires our engineers to have very keen eyes and pay close attention to every case they see. In many cases, unless there is an obvious “tell” (such as the aptly-named “Click of Death”), an owner of a failed hard drive might not always be able to actually tell with much certainty or accuracy why their computer won’t boot or their external drive can’t be read. Our engineers must always be alert, because even a simple-looking case can be more than meets the eye.
On the other hand, as Sigmund Freud once remarked, sometimes a cigar is just a cigar. | <urn:uuid:712cb8b4-0b4f-4481-97e3-1e81913b04a2> | CC-MAIN-2017-04 | https://www.gillware.com/blog/data-recovery-case/data-recovery-case-study-western-digital-wd5000lpvx-75v0tt0-laptop-hard-drive-failed-to-boot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00436-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959546 | 829 | 2.546875 | 3 |
Qiaoli Z.,Dongguan Municipal CDC |
Jianfeng H.,Guangdong Provincial CDC |
De W.,Guangdong Provincial CDC |
Zijun W.,China CDC |
And 12 more authors.
PLoS ONE | Year: 2012
Background: This study was conducted to identify epidemiological characteristics of the first documented CHIK fever outbreak in China and evaluate the effect of the preventive measures taken. Methodology/Principal Findings: From September 1 to October 29, 2010, China's first documented outbreak of CHIK fever occurred in the Xincun community of Wanjiang District of Dongguan city, Guangdong province; 253 case-patients were recorded, of which 129 were laboratory confirmed, with an attack rate of 1%. Before September 18th the number of CHIK fever cases remained relatively low in the Xincun community; from September 19th onwards, the number of cases increased drastically, with an outbreak peak on October 4th. Cases were distributed across nine small village groups in the Xincun community, with an attack rate of 0-12% at the village level. The household attack rates ranged between 20% and 100%. No significant difference was found in the attack rate between males and females. There was a significant difference in the attack rate in different age groups (chi-square = 18.35, p = 0.005); highest in patients aged 60 years or older and the lowest in patients aged under 10. The major clinical characteristics of patients are fever (100%), joint pain (79%) and rash (54%). Phylogenetic analysis of the E1 gene on the five earliest confirmed cases showed that the strains of CHIKV isolated from their sera were highly homologous (up to 99%) with isogeneic strains isolated in Thailand in 2009. After control measures were taken, including killing adult mosquitoes and cleaning breeding habitats of Aedes mosquitoes, the Breteau index and Mosq-ovitrap index decreased rapidly, and the outbreak ended on October 29. Conclusion/Significance: The infection source of the outbreak was imported. Cases showed obvious temporal, spatial, and population aggregation during the outbreak. Comprehensive control measures based on reducing the density of Aedes mosquitoes were effective in controlling the epidemic. © 2012 Qiaoli et al. Source | <urn:uuid:eacd25ad-18b7-4f08-ba75-c9294283bf52> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/dongguan-municipal-cdc-473217/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00436-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964143 | 475 | 2.875 | 3 |
Hotel guests have certain demands: cable television, a pool, or perhaps 4-star room service. Yet more likely, their primary demand is increasingly to have consistent, robust wireless internet access. Now, what if they want wireless by the pool, or in that outlying cabin surrounded by forest? What if the hotel property is in a location or country without a reliable electrical infrastructure?
Operators have two choices: 1) spend a few thousand dollars and a few weeks with electricians
digging trenches and laying new wires to place an outlet on a rooftop to plug in a WiFi radio within range; or b) install a solar-powered long-range WiFi radio.
When planning a WiFi deployment in a hotel of any size, one must consider and accommodate a whole host of variables: the construction materials (masonry and steel may suppress the signal significantly more than woodframe construction); terrain (are there buildings amongst trees and hills that may impede the signal?), and distance between access points.
Although it is rarely anticipated in the way the presence of cable or DSL infrastructure is in pre-deployment planning, access to electrical power can be a roadblock of epic and expensive proportions. Beachfront, poolside, or remote buildings may not have electrical wiring at all. Extending electrical up onto a roof can cost thousands of dollars before hotels even have installed a WiFi radio. A solar solution could provide a cost-effective alternative.
Solar WiFi hardware works much the same as a plugged in or power over Ethernet enabled unit. When considering the options, hoteliers will want to decide how much they want to do themselves and how much they can afford to pay an installer and systems integrator. The hardware available often requires considerable technical know-how to cobble together and bring online with an existing network.
When considering a solar wireless network, operators need to evaluate different systems' battery technology, the power sub-system, and intelligent power management software. These considerations pay off in the end, because hoteliers will need assurance that the electricity generated is efficiently stored for use at night and on cloudy days.
Those who like to get their hands dirty might choose a more traditional solar-powered wireless system and host it locally. Generally, these require a complex systems integration effort, where systems integrators build the network from the ground up using multiple components: charge controllers, batteries, panels, software, and wireless radios.
For those without a technical staff, an integrated solution may be more desirable. For example, The Sandman
is not a big chain hotel, and there are no IT professionals on-site. Management needed a fast, inexpensive solution that wouldn't require an engineering degree to deploy. Going off of a recommendation by a local tech contractor, The Sandman chose Meraki
. Meraki offers an integrated solution and its solar units are essentially plug-and-play. It integrated into our existing mesh network automatically and was instantly online. All the installer had to do was climb onto the roof once, secure the unit in an optimal position to maximize sun exposure.
The savings of installing a solar WiFi solution are vast thanks to the affordable hardware and ease of installation. It operates completely off the grid, so it also reduces electrical bills.
Not only does going solar with your WiFi solve many practical issues for a difficult installation, it provides a very marketable hotel amenity. It doesn't hurt a hotel to be seen as "green" and "state-of-the-art" in a tight, competitive market. Many properties are "going green" because guests are demanding it.
Jason McCarthy has been general manager at The Sandman Inn since 2000, and has been with the hotel's parent company for 20 years. He is the past president of the Greater Santa Barbara Lodging & Restaurant Association (GSBLRA), a member of the CH&LA's Education Committee, and serves on the Advisory Board for the Santa Barbara City College School of Culinary Arts and Hospitality Management. | <urn:uuid:177c150d-d23e-44c8-9e1b-963b87d498ea> | CC-MAIN-2017-04 | http://hospitalitytechnology.edgl.com/news/Hotels-Cut-Costs,-Go-Green-with-Solar-Powered-WiFi55042 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00163-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939447 | 804 | 2.609375 | 3 |
Before one dives into the details of various routing protocols such as OSPF or EIGRP, you need to learn the basics of routing. What is meant by the term “Routing”? Why is this technology needed in modern networks? What exactly happens to an IP Packet and an Ethernet Frame when it is routed? Where, in the architecture of a router, are “routes” stored? How does a router select which route to use when it knows of multiple routes to the same destination? This and much more are covered in this course. This course is intended for those studying for their ICND1 CCNA Exam. | <urn:uuid:81c57c42-8dc7-4d12-8a58-bda85d39294a> | CC-MAIN-2017-04 | https://streaming.ine.com/c/ine-ip-routing-basics | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00189-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972459 | 133 | 4.0625 | 4 |
An important goal of SOA design is the identification of services and their specifications. In other words: Which functions and data should I expose as a service and how do I define and model those identified services? The IBM methodology for defining the SOA analysis and design process is the Service Oriented Modeling and Architecture (SOMA) (see Resources).
SOMA (and many other SOA methodologies) relies heavily on business process analysis and use case design to resolve service interface design at the appropriate level of granularity, establish reuse, and so on. Often, the information perspective of SOA is limited to implementing a small number of services as database queries exposed as Web services. This narrow view completely misses the value that established information architecture concepts and patterns can bring to the SOA solution. To fully support scalable, consistent and reusable access to information, the SOA solution needs to include a broader set of design concerns, reflecting information architecture best practices.
Information as a Service applies a set of structured techniques to address the information aspects of SOA design. The goal is that by understanding what business information exists in the solution, informed decisions can be made to ensure that information is leveraged in ways that best support the technical and business objectives of the SOA solution:
- That services are reusable across the entire enterprise.
- That the business data exposed to consumers is accurate, complete and timely.
- That data shared across business domains and technology layers has a commonly understood structure and meaning for all parties.
- That the core data entities linking together the business domains of an enterprise are consistent and trusted across all lines of business.
- That an enterprise gains maximum business value from its data and data systems.
These objectives are valid for all parts of an SOA solution regardless of technology and implementation choices. Exposing an existing application programming interface (API) as a service, for example, requires an understanding of the data being exposed: Is it reliable and accurate? How does it relate to other data in the enterprise? Is it being presented in an understandable format for the consumers? Applying a structured approach to data analysis, modeling and design in an SOA project leads to a solution implementation that is better at meeting existing business requirements as well as being better prepared to adapt to new ones.
Most of the patterns discussed in the information perspective of SOA design apply to any service. They are independent of how the service is realized and are not limited to information services. These patterns are described in a later section.
However, information architecture concepts -- and in particular IBM's Information on Demand approach to information architecture -- can also provide the best implementation choice for some SOA components. For example, the Data Federation pattern is often the best option to implement an SOA component that aggregates data from disparate systems in real time and then exposes it through a common service interface (see Resources). This article includes considerations related to the realization of information services.
General information-related SOA design patterns
Figure 1 shows the three pillars that the information perspective to SOA design is based on. These pillars are to:
- Define the data semantics through a business glossary
- Define the structure of the data through canonical modeling
- Analyze the data quality
Figure 1. Overview
In subsequent articles in this series, learn about the role and value of the pattern for each pillar. Then, get an introduction to the corresponding IBM technology to this pattern.
The Business Glossary
A foundation for any successful SOA is the establishment of a common, easily accessible business glossary that defines the terms related to processes, services, and data. Often, practitioners discover inconsistencies in terminology while trying to learn the accepted business language and abbreviations within an organization. Without an agreement on the definition of key terms such as customer, channel, revenue and so on, it becomes impossible to implement services related to those terms. If stakeholders differ in their interpretation of the meaning of the parameters of a service, or indeed the data set it retrieves, it is unlikely that a service implementation can be successful.
It is critical that business analysts and the technical community have a common understanding of the terminology used across all aspects of the SOA domain, including processes, services and data. The business glossary eliminates ambiguity of language around core business concepts that could otherwise lead to misunderstandings of data requirements.
A business glossary eliminates misinterpretations by establishing a common vocabulary which controls the definition of terms. Each term is defined with a description and other metadata and is positioned in a taxonomy. Stewards are responsible for their assigned terms: they help to define and to support the governance of those terms. Details for the business glossary pattern are discussed in a future article in this series.
A key success factor of a business glossary is to make it easily accessible, to link it to other important modeling artifacts, and also to demand that it is actively used in the design phase of the project. This pattern is supported by InfoSphere ® Business Glossary, which is part of IBM Information Server. This product is described in more detail in a future article in this series.
As well as a tool to manage and share a glossary, IBM also delivers industry-specific intellectual property, in the form of models. These models contain thousands of business terms, clearly defined, to enable data requirements and analysis discussions with stakeholders.
The canonical data model
Consistent terminology is a good starting point when designing services, but this in itself is not sufficient. You must also have a clear understanding of the way business information is structured. The input and output parameters of services, that is, the messages, are often far more complex than single data types. They represent complex definitions of entities and the relationships between them. The development time and quality of SOA projects can be greatly improved if SOA architects leverage a canonical model when designing the exposed data formats of service models. The resulting alignment of process, service/message, and data models accelerates the design, leverages normative guidance for data modeling and avoids unnecessary transformations. Equally important is surfacing the detailed service data model to stakeholders early in the SOA lifecycle. This facilitates identification of the most reusable data sets across multiple business domains, resulting in service definitions that meet the needs of a wide range of service consumers, thus reducing service duplication.
The key problem addressed in this and subsequent articles is how to best ensure a consistent format for information horizontally across the services and vertically between the process, the service, and the data layers in the SOA context. A canonical data model provides a consistent definition of key entities, their attributes and relationships across the various systems that hold relevant data for the SOA project. The canonical data model establishes this common format on the data layer while the canonical message model defines this uniform format on the services layer. The pattern of a canonical data and message model is presented in a future article in this series.
Industry Models provide an integrated set of process, service and data models that can be used to drive analysis and design of service architectures, ensuring a tight alignment of data definitions across modeling domains. They define best practices for modeling a particular industry domain and provide and an extensible framework so that you don't have to constantly redesign your SOA as you add more and more services.
A future article discusses the related data modeling tool Rational Data Architect, and relevant structures from models in greater detail.
Data quality analysis
Practitioners who have considered the concepts described above can deliver service designs with a high degree of consistency across models and metadata artifacts. However, this is no guarantee that the quality of the data that is being returned by services is acceptable. Data which meets the rules and constraints of its original repository and application may not satisfy requirements on an enterprise level. For example, an identifier might be unique within a single system but is it really unique across the enterprise? Quality issues which are insignificant within the original single application may cause significant problems when exposed more broadly through an SOA on an enterprise level. For example, missing values, redundant entries, and inconsistent data formats are sometimes hidden within the original scope of the application and become problematic when exposed to new consumers in an SOA.
The problems therefore are whether the quality of the data to be exposed meets the requirements of the SOA project and how to effectively make that determination. The proposed solution is to conduct a data quality assessment during service analysis and design. After you catalog the source systems that support a service, you can start to investigate them for data quality issues. For example, you should verify that data conforms to the integrity rules that define it. You should verify if data duplication exists and how this can be resolved during data matching and aggregation. On the basis of these types of analysis, you can take appropriate actions to ensure that service implementation choices meet the demanded levels of data accuracy and meaning within the context of the potential service consumers. A future article in this series describes this pattern.
The effectiveness of the data quality assessment can be greatly enhanced with the right tooling decision. InfoSphere Information Analyzer, which is part of IBM Information Server, supports the data quality analysis pattern and is described in a separate article in this series.
The issues and concepts described so far apply to any service in an SOA. Canonical modeling and data quality analysis can provide value to the consistency of services and to its output data regardless of the type of service.
Information services specific patterns
Information services are services whose realization depends on information architecture, or Information on Demand, where a separation of information from applications and processes provides benefits.
Most SOA projects do not start on a green field but are based on an existing IT environment. Some of the challenges are unique to SOA, but, more often than not, well-known problems in traditional information architecture fall within the scope of SOA as well. A typical organization's information environment is often not in an ideal state to enable an effective SOA transformation. From an enterprise perspective, there's often a lack of authoritative data sources offering a complete and accurate view of the organization's core information. Instead, there is a wide variety and technologies used for storing and processing data differently across lines of business, channels or product types. Many large organizations have their core enterprise information spread out and replicated across multiple vertical systems, each maintaining information within its specific context rather than the context of the enterprise. These further drive inconsistencies within the business processes -- which themselves are usually dramatically different within different parts of the enterprise. Information On Demand -- in particular data, content, information integration, master data, and analytic services -- can be leveraged to realize information services that provide accurate, consistent, integrated information in the right context.
Consider the lack of an authoritative, trusted source or single system of record as an illustrative example. Suppose that in an organization's supply chain system's portfolio, there are five systems that hold supplier information internally. Each of these can be considered a legitimate source of supplier data within the owning department. When building a service to share supplier data, what should be the source of supplier data?
- Is it one of the five current systems that have their own copy of the supplier data? If so, which one?
- Is it a new database that's created for this specific purpose? How does this data source relate to the existing sources?
- Does data have to come concurrently from all of the five systems? If so, is it the responsibility of the data architect, the service designer, the business process designer, or the business analyst to understand the rules for combining and transforming the data to a format required by the consumer?
Often an understanding of these disparate data definitions can only be obtained by mapping back to a reference model (often a logical data model), allowing overlaps, gaps and inconsistencies in data definitions to be identified. Reusable, strategic enterprise information should be viewed as sets of business entities, standardized for re-use across the entire organization and made compliant with industry standard structures, semantics and service contracts. The goal is to create a set of information services that becomes the authoritative, unique, and consistent way to access the enterprise information. Allowing access to any information only through an application limits the scope of the information to the context of the application rather than that of the enterprise as required in an SOA. In this target service-oriented environment, an organization's business functionality and data can be leveraged as enterprise assets that are reusable across multiple departments and lines of business. This enables the following principles of information services:
- Single, logical sources from which to get a consistent and complete view of information through service interfaces. This is often referred to as delivering trusted information.
- The underlying heterogeneity that may exist underneath this information service layer and its related complexity is hidden when required (for example, during runtime). However, the lineage of the information -- the mapping of logical business entities to actual data stores -- is available when appropriate (for example, for data stewards to support data governance, impact analysis, etc.).
- The authoritative data sources of the information service are clearly identified and are effectively used throughout the enterprise.
- Valuable metadata about the information service is available:
- The quality of the information exposed through the service is known and meets the expectations of the business. The information services are compliant with data standards that have been defined.
- The currency of the information (how "old" the data is) is known. Effective mechanisms are available to deliver the information with the required latency.
- The structure and the semantics of the information are known and commonly represented on different architecture layers (data persistence layer, application layer, service/message layer, and process layer)
- The information service may be governed based on appropriate
processes, policies and organizational structures:
- The security of the information is guaranteed and incorporated into the solution rather than implemented as an afterthought and follows security and privacy policies.
- The change of the service may be audited.
- The information service is easily discoverable by potential consumers across the organization.
- A holistic governance approach is in place that addresses both the service and the information layer.
Information as a Service is about leveraging information architecture concepts and capabilities -- as defined through Information On Demand -- in the context of SOA. There are important capabilities and concepts in SOA that are not included in Information On Demand and vice versa. But there is also a substantial overlap between them -- such as leveraging content, information integration, and master data services -- which significantly improve the delivery of an SOA project. The following diagram illustrates the alignment between the SOA reference architecture shown on the left (see also Resources) and the Information On Demand reference architecture on the right.
Figure 2. Information services in SOA
As part of the SOA design phase, architects may need to make architecture decisions regarding which patterns to use based on the requirements in the project. Table 1 describes some of the key, but high-level, patterns that may apply.
Table 1.High-level categorization of information service patterns
|Data services||How do I expose structured data as a service?||Implement a query to gather the relevant data in the desired format and then expose it as a service.|
|Content services||How do I best manage (possibly distributed and heterogeneous) unstructured information so that a service consumer can access the content effectively?||Provide a consistent service interface to content no matter where it resides, maintaining the relationship between content and master data.|
|Information integration services||How do I provide a service consumer access to consistent and integrated data that resides in heterogeneous sources?||Understand your legacy data and its quality, cleanse it, transform it, and deliver it as a service.|
|Master data services||How can consumers access consistent, complete, contextual and accurate master data even though the data resides in heterogeneous inconsistent systems?||Establish and maintain an authoritative source of master data as a system of record for enterprise master data.|
|Analytic services||How do I access analytic data out of raw heterogeneous structured and unstructured data?||Consolidate, aggregate and summarize structured and unstructured data and calculate analytic insight such as scores, trends, and predictions.|
IBM Information Server plays an important role in the SOA design phase by providing a unified metadata management platform. This platform consists of a repository and a framework that allows various design tools to access, maintain, and share their artifacts with other IBM Information Server components and third party tools. The value of this shared metadata platform is that metadata artifacts can be easily shared between the tools and is kept consistent.
The purpose of this article is to give you an introduction to the information perspective of SOA design and some of the key patterns -- the business glossary, canonical models, data quality analysis, and information services. You should see the role of leveraging industry models in those design activities. If any of these topics has sparked your interest, be sure to read the coming articles in this series.
- Check out the rest of this series to learn more about topics that were introduced in this article.
- The "Information service patterns" series discusses the information service patterns addressed in this article. (developerWorks. 2006-2007).
- Read "Design an SOA solution using a reference architecture" to get more information on the SOA reference architecture.
Get products and technologies
- Create, manage & share an enterprise vocabulary and classification system with IBM Information Server and in particular InfoSphere Business Glossary.
- Simplify data modeling and integration design with Rational Data Architect.
- Accelerate projects and reduce risk with IBM Industry Models.
- Understand the structure, content and quality of your data sources with InfoSphere Information Analyzer.
- Participate in developerWorks blogs and get involved in the developerWorks community. | <urn:uuid:4217f342-0e92-4d19-b52b-44ec81082c56> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/data/library/techarticle/dm-0801sauter/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00007-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917287 | 3,623 | 2.671875 | 3 |
HIGH-VALUES (the highest ASCII value the character can hold).
In contrast to Low-Value, High-Value is the
highest value in the computer's collating sequence. It
is valid only with alphanumeric fields. When compared to
any other field, High-Value is always greater. The
internal representation of High-Value in most computers
is that of all bits in a byte set to one.
High-Value is not equal to either the letter Z or the number 9 unless
those characters are the highest characters in the
computer's collating sequence. | <urn:uuid:d23a5898-d410-4fdb-93a1-504418b3653b> | CC-MAIN-2017-04 | http://ibmmainframes.com/about2418.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00061-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.889291 | 122 | 2.921875 | 3 |
Whether citizens like it or not, their governments are anxious to know everything about them.
There are plenty of technologies they can harness to this purpose. But the trick is to find a politically and culturally acceptable way to apply them.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
The UK's controversial identity card and, more importantly, its associated National Identity Register, is a case in point. The government would like to have a single database that contains 49 items of information about each citizen.
The register would include biometric data such as fingerprints, facial images and until recently, iris scans, as well as biographic data such as name, date and place of birth, address, sex, nationality, entitlement to remain in the UK, as well as the particulars of everyone who supplies information to corroborate a person's identity.
Crucially, each person would have a unique identity number or token. Different database owners could use it to associate other data items with that identity. For example, in theory, if Land Registry and the Driver, Vehicles Licensing Authority both used your ID number to identify their records with you, HM Revenue & Customs could use it to see how many cars or properties you own and see if it was a reasonable reflection of your declared income.
Government agencies and private firms could then identify a person uniquely, monitor that person, and intervene when they wanted. For example, it could allow the Department of Work & Pensions to identify the 36% of pensioners the National Audit Office estimates are entitled to but not claiming social benefits and to ensure that they receive what they are due.
A number of polls, most recently one by Mori, have shown that eight out of 10 Britons have no objection in principle to ID cards. Indeed, 80% already have a passport.
Nevertheless, the government is walking through a political minefield over ID cards. Restricted documents on NIS roll-out plans have leaked from inside the Identity & Passport Service the prime minister has made equivocal statements over its future the opposition has threatened to scrap ID cards if it comes to power some MPs have said they would go to jail rather than accept it, polls show public support for the card is slipping, and the press is increasingly sceptical.
How might the government turn it around, and are there any examples it might learn from?
Europe, which has a long-standing tradition of identity documents, offers many examples. Estonia and Finland are both thought particularly successful, but each has special circumstances that do not apply to Britain. Instead, Hong Kong may provide some more pertinent examples.
When Britain handed Hong Kong back to China in 1998, the new government kept on the colonial identity cards that all Hong Kong residents carried by law. Crucially, it also kept the island's status as a free port. This made the right of abode highly desirable. The Hong Kong residents' card thus became a passport to privileges rather than a resented burden and intrusion on their privacy.
In developing a new smart card-based biometric identity card, the Chinese authorities made it easy for citizens to decide to enrol. Entering Hong Kong from the mainland remains a bureaucratic time-consuming nightmare for travellers. However, within 10 seconds readers at the border can scan a traveller's thumb prints, compare them with the images stored on the ID card and open the gates to the promised islands. Moreover, the cards are free and voluntary.
Small traders who cross the border often were the first to take it up. This spread to family and business partners. Soon enough cards were in circulation to make it attractive for the Hong Kong Post Office to add a free digital certificate application to the card. Card holders can now use this to authenticate online transactions such as gambling bets, open bank accounts, hire cars, rent flats and so on.
With the card even new immigrants can acquire a bank account, tax number and accommodation inside a day. They soon sign up.
The contrast with the mooted UK card is stark. Hong Kong offered residents a single obvious personal benefit - time saved at immigration control. The UK card has no such clarity.
According to a widely leaked document, the fundamental objectives of the National Identity Scheme are to improve the efficiency and effectiveness of border, immigration and labour controls, to cut serious crime committed using faked identities, and reduce the risk of terorist incidents. None represents a direct benefit to individuals.
In addition, the Hong Kong card simply verifies the unique identity of the card holder extra functions stem from market opportunities it creates. The UK card already appears blurred by bureaucratic function-creep. Its original purpose was to identify people (citizens) entitled to receive state benefits. This morphed into a national address register, and then into a crime-fighting tool.
Above all, the Hong Kong authorities have kept their citizens' goodwill by making enrolment voluntary. The residents and immigrants consent to give their details. The UK government has hinted it might force people to use its card. This has fuelled suspicion and resentment.
Is it too late for Gordon Brown to pull this rabbit from the hat? To do so he must make it possible for people to use the card, and only the card, as a sufficient proof of who they are. If one needs the card plus other evidence, why does one need the card at all? | <urn:uuid:919f430f-bcf4-44cb-9205-7b3234032dd1> | CC-MAIN-2017-04 | http://www.computerweekly.com/opinion/UK-has-lessons-to-learn-from-Hong-Kong-on-ID-cards | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00089-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956431 | 1,096 | 2.59375 | 3 |
8 Ways to Protect yourself from Forged/Fake Email
The Internet is rife with fake and forged email. Typically these are email messages that appear to be from a friend, relative, business acquaintance, or vendor that ask you to do something. If you trust that the message is really from this person, you are much more likely to take whatever action is requested — often to your detriment.
These are forms of social engineering — the “bad guys” trying to establish a trusted context so that you will give them information or perform actions that you otherwise would not or should not do.
Here we address some of the actions you can take to protect yourself from these attacks as best as possible. We’ll present these in the order of increasing complexity / technical difficulty.
1. Viewing the actual “From” email address of email messages sent to you.
Email messages generally specify from whom they are sent. This From must include the email address of the sender and can also include any textual “name” to go with it. Many email programs by default show only the textual name and hide the actual address … for simplicity. E.g. it is perhaps easier to read “John Smith” in your message list than his actual address “firstname.lastname@example.org”.
However, anyone technically can set these addresses and names to anything they want. So, someone can send you an email that is from any email address of their choosing and have the “name” show up as “John Smith,” or your mother’s name, or your spouse’s name, etc.
So, it is a good practice to have your email or WebMail program show the actual email address of the sender so that you can check that. This could be done in the list of messages or when you actually view the message. Don’t trust that just because a name is presented as the “From”, that the message is actually from that named person — there is no real guarantee of that (unless you take further actions we’ll discuss below).
In LuxSci WebMail, the full from name and address is displayed in the message view pane if you have it expanded (e.g. click the “+” on the left if the From, Date and Subject are displayed on a single concise line).
Additionally, you can change your preferences to “always display the email addresses of message senders and recipients instead of their names” in the WebMail message list.
2. Only view plain text previews of messages until your trust them
When you open an email message “fully”, you render its HTML content, download images, and other things. This action can inform the sender that you have read their message (see how). Once the sender knows that, they know that your email address is “good” and that you will open their messages … and so they are likely to send you more.
Additionally, it is possible that malicious messages that made it by your email filtering system may infect your computer if opened fully (e.g. due to old versions of software on your computer, newly discovered problems with your software or with images). It is best to open the message in a “plain text preview” mode first so you can evaluate if it is legitimate before opening it fully. In a plain text mode, you will not notify the senders of your actions via web bugs, and you are not opening yourself up to attacks.
3. Get a good spam and virus filter
Most good spam and virus filtering systems will detect common social engineering / forged emails that are being sent in bulk to many people on the Internet. E.g. they can stop such things as forged email from your bank asking you to login to verify your account.
These will help prevent many generic attacks, but not necessarily ones targeted directly at you or your domain / organization.
4. Get email link filtering
The most advanced email filtering systems can re-write the Internet links in your email so that when you click on them, the filtering system will scan the target page to see if it is legitimate or contains malicious content. This gives you a good measure of real-time protection against malicious links. See: prevent email phishing attacks with real-time link click protection.
If you do follow links in emails, be aware of the red flags that can tell you if a link is malicious. See the “Phishing” section of “What is Social Engineering?”
5. Use DKIM to identify forged email
DKIM is a system that cryptographically signs email messages sent and allows recipients to ensure that such messages were sent from the email servers owned by the purported sender’s organization. E.g. DKIM prevents a hacker from sending forged email … because the recipient can then easily identify it as such. Your Spam and virus filtering should use DKIM to eliminate forged email.
6. Use DKIM for your own domains
If your email service provider supports it, you should setup DKIM for your own email domains. This allows folks on the Internet to determine if email forged as coming from you is legitimate or not. It also helps you identify email that is coming in forged from yourself. See: Bounce back and backscatter spam — “who stole my email address?”
7. What about SPF?
SPF – Sender Policy Framework – is also a good mechanism to help identify if messages have originated from the trusted servers of the purported sender. It is not quite as good as DKIM, as it doesn’t prevent messages from being captured, altered, and re-sent later — those may also show up as valid.
However, as various spam filtering systems use DKIM and SPF to varying degrees, it is best to setup your domain with both SPF settings and DKIM support — and to enable both in your email filtering software.
8. Common Sense
Last but not least is use of common sense!
- Most forged email does not read exactly like messages that you normally receive from that sender … if they are your family or friend
- If there is a link to click on, hover over it first and see if the link looks funny — going to some domain that doesn’t look like it should. If so … be very wary.
- If the request is unusual or extraordinary, best to verify it with a phone call or text message or something. Many messages appear time sensitive emergencies that require you to do something to help someone, or yourself, fast. They try to use pressure and familiarity to make you skip any checking and just act out of fear or altruism. Unfortunately, you need to be skeptical until you can have some certainty that the request is legitimate….
- Even if DKIM and SPF are all OK — a messages could still be fake. E.g. a virus could have infected the sender’s computer or email account and be sending messages to his contact list through his regular servers. This will all look “Ok” on the surface and will all be valid and may even slip past your filtering software — only the content of the message itself will serve to tip you off as to its veracity – if you keep you eyes open.
If you are using LuxSci WebMail:
- We have a preference for always showing the sender email address
- We have preferences for previewing email messages (text only) and for not showing images in email right away
- We support DKIM and can assist you with setup of DKIM, SPF, and other suggestions listed herein. | <urn:uuid:5c30fb71-be42-4b38-a927-60f9d15f576f> | CC-MAIN-2017-04 | https://luxsci.com/blog/how-to-protect-yourself-from-forgedfake-email.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00575-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940116 | 1,584 | 2.53125 | 3 |
Making voice calls via cell phone aboard a plane doesn't hold much interest for U.S. airline passengers or airlines, but there isn't a technological reason to ban them, according to federal authorities.
The debate over making voice calls at 35,000 feet has become like so many debates with technology: Sure, we can do it, but do we want to?
In essence, whether voice calls are banned on planes comes down to a behavioral discussion and not one about technology.
"This is now a political and social question and not one of technology," said Jack Gold, an analyst at J. Gold Associates. "I personally would not want people talking loudly and incessantly during a six-plus hour trip, and I'm betting most airlines will ban in-flight calls in the U.S. because they are worried it will anger their passengers."
U.S. Transportation Secretary Anthony Foxx this week said that voice call concerns have been aired by airlines, travelers, flight attendants and even members of Congress. "I am concerned ... as well," he said in a statement. The Department of Transportation oversees the U.S. aviation industry.
On Thursday, the Federal Communications Commission voted 3 to 2 to start a long public comment period to consider removing a 22-year-old FCC prohibition on phone calls during flights over concerns they would interfere with cellular networks on the ground.
FCC Chairman Thomas Wheeler voted in the majority, saying there is new on-board technology that prevents ground interference and renders the FCC restriction unnecessary. The restriction would remain in place if any airplane didn't have the new equipment to manage cellular signals installed on its planes.
Wheeler conceded in a statement, "I don't want the person in the seat next to me yapping at 35,000 feet any more than anyone else." But he added that removing the prohibition would be a de-regulatory move that "gets the government out from between airlines and their passengers...the free market works best to determine the appropriate outcome."
Wheeler said the DOT would be the body to address "behavioral issues" related to phone calls on planes, not the FCC.
However, at the FCC hearing when the vote was taken, Commissioner Jessica Rosenworcel voted with the majority to let the public comment period start, but said she doesn't ultimately support removing the FCC prohibition on calls, and asserted that the FCC's role goes beyond acting only as technicians. She envisioned a future time when planes will have "quiet" sections that cost more than areas of the plane where calls are allowed, and said the FCC would be adding to that cost burden.
A poll released this week by the Associated Press and GfK found that 48% of Americans oppose allowing cell phones for voice calls during flights, while 19% support it. Thirty percent were undecided.
Southwest Airlines CEO Gary Kelly said Friday that 60% of Southwest's passengers in surveys oppose voice calls during flights. "The vast majority of our customers don't want cell phone calls in flight," he said during an interview on CBS This Morning. "If our customers don't want it, our employees won't want it either. It's an inconvenience to be in such close quarters and overhearing a loud conversation ... It's not a significant safety question at all."
Delta Air Lines has cited overwhelming customer opposition to voice calls on planes and has a ban against such calls. Other airlines have said they are studying the issue.
U.S. House and Senate lawmakers have introduced legislation to ban passengers from talking on cell phones during flights. One measure to limit device use, introduced by Sens. Lamar Alexander (R-Tenn.) and Dianne Feinstein (D-Calif.), is called the Commercial Flight Courtesy Act. | <urn:uuid:b3bdb667-23c2-4655-a1f0-da492a51016f> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2486847/wireless-networking/voice-calls-from-planes--a-social-debate--not-a-technology-dilemma.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00483-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.974131 | 756 | 2.5625 | 3 |
Viruses pose a danger to health care computer systems. Regardless of the nature of a company, a business is a business — and no business is safe from spyware, viruses and malware.
Even though the purpose of spyware varies — for example, email hacking, identity theft, information theft, etc. — online criminals are creating new ways to wreak havoc and invade the privacy of public domains, businesses and private internet lines.
No one is safe, and the latest victims on the hacker’s list are the data systems of private physicians and health care establishments. The virus responsible for the trouble is a new type of ransomware called SAMSAM.
What is Ransomeware?
Ransomware is a type of virus intentionally placed within a central IT system by hackers who use it to identify and infiltrate vital and confidential data important to the company or business. This type of malware is independent of social engineering and also doesn’t need to use emails to be transmitted.
Ransomware uses unpatched servers to infiltrate the entire cloud system, contaminating other machines. Hackers then use the ransomware to expose the main data systems and hold them ransom — hence the name. They encrypt the data so that the legitimate users cannot access it, and offer to sell the key needed to decode it for money, often in an untraceable, online currency like bitcoin.
How SAMSAM Works
Similar to Locky, SAMSAM — a strain of ransomware — was reportedly responsible for an attack on a hospital in Kentucky.
With SAMSAM, hackers implement an open source application server called JexBoss, as well as other Java-based application systems, to hack into the home servers of hospitals or any other business. They place SAMSAM inside the main Web application server, and the infected home server, which is connected to all the other servers, gives the virus access to connected systems, letting it make its way into the Windows network.
According to Cisco Talos, this malware allows hackers to communicate with the victims, stating that they will not decrypt the malware until their conditions are met. Attackers who are behind SAMSAM malware are able to locate, manually control and delete vital data, and even access network-based backups. They can lock and shut down entire systems, completely blocking out the victim’s access to their own records.
Rather than a virus that just works arbitrarily, the attackers have complete control over what they view and what they can destroy. They are able to find and encrypt the victim’s data so that the victim won’t be able to recognize their own information. Victims have the choice of either paying the ransom fee, or suffer the consequences of never being able to retrieve their data.
How to Prevent Ransomware Invasion
Reports show that SAMSAM ransomware has been raging against the health care industry. The FBI are commandeering IT experts to give emergency relief to victims of ransomeware. It is strongly recommended that physicians and health care establishment managers invest in a solid security system and hire professional IT technicians to install protective software on their data systems.
Professional security systems provide strong passwords and deter easy access of macro loading in Office programs. They also provide recurring patching schedules which prohibit spyware viruses and activity such as hacking and ransomeware. Even though there are always threats to security and data, these preventative measures are still an operable defense.
Why Health Care Companies Need IT Protection
A reliable IT security systems gives sophisticated server protection for virtual, cloud and physical servers. Company applications and information will be secured in spite of business disruptions without the aid of emergency patching. The IT platform is completely handled by the security system and keeps it running smoothly.
For backup protection, the basic 3-2-1 method is still a good option: Make a minimum of three backup copies, placed in two separate locations, one of which should be stored outside your system, for example, a flash drive, a computer not connected to the internet, or an external hard drive not left connected to a computer. | <urn:uuid:9ef72af8-a460-4624-84f5-2f44e3e89ade> | CC-MAIN-2017-04 | https://www.apex.com/what-every-physician-should-know-about-ransomware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00117-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93365 | 819 | 2.921875 | 3 |
Kaspersky Lab, a leading developer of Internet threat management solutions that protect against all forms of malicious software including viruses, spyware, hackers and spam, is supporting Safer Internet Day (9th February, 2010) by providing a dedicated website offering information and video tutorials on safer surfing, social networking and avoiding 'digital pollution' at www.kaspersky.eu/stopdigitalpollution.
For users of all ages, the Internet is part of everyday life. Children and young people use it to meet friends via social networks, parents do online shopping and banking, and 'silver surfers' make travel plans. However, digital pollution, which includes threats from cybercriminals and other unwanted intruders, is growing and poses serious risks for Internet users – risks that Safer Internet Day 2010 will bring to the public's attention.
Many Internet users are blasé about digital pollution, believing cybercriminals will not target them and that they don't have anything of value on their PC. However, statistics show that this attitude could be risky, with significant increases in credit card fraud through personal details that have often been stolen through malicious Internet spyware. According to Kaspersky Lab experts, digital pollution is growing rapidly. Up to 30,000 new Internet threats are currently seen every day, and this number is constantly increasing.
Garlik's annual UK Cybercrime report shows that the losses from plastic cards, which are still the predominant method of payment on the Internet, have rapidly increased.
- Losses from plastic card fraud rose by 14% from £535.2m in 2007 to £609.9m in 2008.
- Online banking fraud losses increased by 132% from £22.6m to £52.5m. This constitutes 8.6% of the total figure of loss for 2008
- Cardholder-not-present (CNP) fraud loss has increased by 13% from £290.5m to £328.4m and accounts for a significant 54% of all card fraud losses.
- The total value of online shopping in 2008 was £41.2 billion
Figures from the British Crime Survey support this. The 2008/09 BCS shows there has been an increase in plastic card fraud, with 6.4 per cent of plastic card users being aware that their cards had been fraudulently used in the previous 12 months, compared with 4.7 per cent of card users in the six months up to March 2008. This is also a rise from 3.7 per cent in 2005/06. In contrast to this, figures from the UK Payments Administration show that the success of chip & PIN for offline transactions has meant that over the past four years losses on the UK high street have actually reduced by 55% from £218.8m in 2004 to £98.5m last year. Therefore these figures suggest that criminals are now targeting online shoppers more than ever before.
"We expect crimeware that exploits social networking sites to be one of the most dramatically increasing threats of 2010," says David Emm, a member of the Global Research and Analysis Team at Kaspersky Lab. "We fully support the Safer Internet Day initiative, which helps to promote the safer and more responsible use of the Internet, especially amongst children and the elderly."
Think before you post!
Social networking websites are a modern cultural phenomenon. Facebook alone currently has 300 million active users, 150 million of which logon at least once a day. Each user has on average 130 'friends', with the fastest growing demographic being in those who are 35 years old and over. Many are either unaware or do not care that lax security settings can enable everyone to read their information. Anyone can, for example, easily use a search engine to collect personal data from online sources such as Facebook profiles or Amazon wish lists. They can then use this information to create a complete personal profile. It's also important to consider that it is often impossible to completely delete photos or comments posted on social networking sites. These, in turn, may affect the way in which potential employers assess users' job applications – even years down the line. "Think before you post", the motto of Safer Internet Day 2010, is thus intended as a clear guideline to be followed when using social networks.
Even people who don't spend all their free time on the Internet are now facing increasing dangers and more sophisticated attack scenarios. According to the Kaspersky Lab experts, this is also one of the major computer security trends for 2010. "We expect that this year will bring much more sophisticated malware, causing problems even for casual users," says David Emm. "If a computer is not protected by up-to-date Internet Security, phishing attacks (which use fake website pages to mine sensitive personal data) and drive-by downloads (where malware is installed on an unpatched computer when the victims simply browses a compromised web site) will have a field day. Just a single mouse click is enough to install malicious software in the background. From there it can spy out passwords, credit card information and security numbers."
To learn more about the broad range of activities centred around Safer Internet Day, please visit www.saferinternet.org. | <urn:uuid:5e25556e-7880-4ee8-955d-947b0cdedb7e> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/business/2010/Safer_Internet_Day_2010_ndash_Surf_Smarter_ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00419-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949013 | 1,062 | 2.71875 | 3 |
What is Process Management?
Process Management refers to aligning processes with an organization’s strategic goals, designing and implementing process architectures, establishing process measurement systems that align with organizational goals, and educating and organizing managers so that they will manage processes effectively. Business Process Management or BPM can also refer to various automation efforts, including workflow systems, XML Business Process languages and packaged ERP systems. In this case the management emphasizes the ability of workflow engines to control process flows, automatically measure processes, and educating and organizing managers so that they will manage processes effectively.
Process management can now be automated with Business Process Management (BPM) Software.
What is a Business Process?
At its most generic, any set of activities performed by a business that is initiated by an event, transforms information, materials or business commitments, and produces an output. Value chains and large-scale business processes produce outputs that are valued by customers. Other processes generate outputs that are valued by other processes. | <urn:uuid:d8679439-39af-4fc2-80a7-b229bb4230ee> | CC-MAIN-2017-04 | http://www.appian.com/about-bpm/process-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00447-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941328 | 197 | 2.78125 | 3 |
Cyber is new. Cyber is hip. Cyber is cool. All these are relative terms. Compared with the evolution of the automobile, cyber is evolving at a pace which approaches incomprehensibility.
The term computer was first used in 1613, according to that great and learned resource, Wikipedia, here. This term, however, referred to a person who performs computations, an all together different meaning from the modern term.
The first probable computer actually dates back to the second century BC, here. This was a mechanical device, in my humble opinion not at all what we associate with a modern electronic computer, but it did perform basic computations. In the 1930s and 1940s various attempts were made and resulted in some functional computers, some not programmable.
Collossus was the first computer, in my humble opinion, to do actual work, breaking German codes in World War II, starting in 1943. This was followed by ENIAC in 1946, again for a military purpose. These used mostly vacuum tubes, but then semiconductors and micro-processors were introduced and computers shrunk in cost and size by the 1970s.
I wrote my first computer program in 1974, in FORTRAN, the same year I bought my first hand-held calculator for a mere $400 from Sears, complimenting my slide-rule. Ah, those were the days. Through the years I always delighted in peeking behind the curtain, at the operating system and figuring out how they worked, why and how I could get the computer to do things it was not designed to do.
In 1989 I bought my first personal computer, even though they were available for over a decade, but they finally fell in price and were almost mandatory for research and professional grade writing. That was the same year I started up a Bulletin Board system, or BBS, in Sierra Vista, Arizona, called The Exchange BBS. It only ran at night, I had to use the phone during the day. I had a whopping 20 MB hard drive and was part of FIDONET and I thought the technology was grand. I was amazed that an ‘email’ sent through FIDONET actually crossed the continent in one day!
I quickly ran out of room on the hard drive and my wife ran out of patience at sharing the phone. I later experimented with CompuServe while stationed in Korea to send and receive correspondence with my wife, but that didn’t last long. I was teaching computer courses while stationed in Korea in 1990/1, the subject was the word processing program called Wordstar. It’s funny how most of the basic keyboard sequences I taught then are still being used in Word, today, unbeknownst to most computer users.
When I returned from Korea I moved to North Carolina. And then came the internet, for me. Using a dial-up modem, it was slow, tedious and frustrating. At work, however, the military was using UNIX computers, KG-84 encryption systems, had dedicated T-1 standalone networks, sometimes over a satellite connection and everything almost worked seamlessly. In my final job at Ft. Bragg I had to design and coordinate feeds from a very large variety of intelligence platforms, all seemingly using different formats and standards.
In 1996 I participated in a groundbreaking study of a Blue Force (meaning friendly forces) tracking system named Grenadier Bratt, part of my job was to arrange the national level participation. I went to Washington DC and participated in a SORS meeting, basically begging for satellite time. I was frustrated when a sarcastic and arrogant Air Force Captain sneered his doubt that I would get even three minutes of time per day.
Little did I know my boss, then Colonel Keith B. Alexander, was pulling in favors behind the scenes. When I returned I discovered I had dedicated 24 hour a day coverage for an entire week! This exercise was a proof of concept for today’s Blue Force tracking system.
This demonstrated a need for operational satellites, not relying on intelligence collection systems. This also demonstrated that computers could track enemy forces and also show friendly forces, all properly separated by classification. I also discovered the precious bandwidth constraints on data feeds, we did not have enough bandwidth and could not exchange data and information at desired speeds.
In 1996, when I moved to Washington DC and got a laptop computer as part of my graduate program, I was firmly embedded into the internet, albeit at dial-up speeds. My job at this time included chasing state-sponsored hacker groups and attempting to bust them. Wow.
The discussions we had back in the mid-90s are still ongoing, we still don’t have a proper information sharing cybersecurity bill and people still don’t trust the government to maintain their privacy. I also discovered the intelligence community was using multiple OC-48 networks to pass data around the Washington DC area, an amazing leap in data rates.
By the time DSL and cable connections came around, I was running a home network and getting my cyber door knocked on by foreign connections almost every second. Me, Joe Citizen, sitting on my home computer. But we finally have a dedicated effort by the Department of Homeland Security, DHS, to help secure our nation’s computers. We have the US Cyber Command, lead by my former mentor, now General Keith B. Alexander, who is unique in his vision and his capabilities – God help his successor, he’ll need it!
We have a pending cybersecurity bill in Congress, which is currently tabled. We have the threat of a cybersecurity Executive Order by the President of the United States, which is most likely a political ploy but it might force Congress to actually do their job when it comes to cybersecurity. We have the release of CCDOE’s Talinn Manual, available in .pdf format here, so the legal community is finally doing their work on something other than a Wang word processor (sorry, I couldn’t resist the dig).
The challenge is to bring the laws up to date and update them at the speed of the 21st century (you’re slowing us down, guys). $13 billion is dedicated to US cybersecurity and reports have indicated that an increase of 1,800 percent (not a typo) is needed to properly secure our networks.
Please don’t forget, also, about Moore’s Law. Our processing power is still increasing, the technology is improving at an ever increasing rate and we are now processing and passing information at incredible speeds. I am now routinely seeing 5 MB per second downloads at home and my system says I am actually capable of a 42 MB per second download rate. I would have filled my first hard drive in less than half a second. *poof* Done.
In relative terms the rate of increase in technology for both computing and communications is increasing at unbelievable speeds. Recently Washington DC and surrounding states were hit by a storm called a derecho. It was fast, it was powerful and it hit the area like a sledgehammer.
I spent the evening in Pennsylvania, with my family and the storm damage was incredible, then I returned to Washington. The power was out for millions in the DC area, businesses were ruined and it took weeks for some lives to get back to a semblance of normalcy.
Cyber is the same way. Please, members of congress and business leaders, please recognize the world of cyber for what it is, a potential derecho. Please work on cybersecurity as if our lives depend on it, because we do.
About the last line. What do you mean “we”, white man, says Tonto to the Lone Ranger when they’re surrounded by warring Indians. If you don’t get this joke you’re not old enough.
We, as in the cyber community. Part of the Information Operations community. We, meaning everybody who works on the internet, makes their living with the aid of the internet, or has their virtual life on the internet. You, me, and almost everybody we know.
And no, not literally our lives, but our business life and much of our personal life depend on the internet, the web.
I’m so deeply entrenched in the cyber world as well as the ‘influence and information’ worlds that I tend to think in terms of we. I constantly remind my wife (no, not the same one mentioned above) that marriage is about we, us and our, not I or mine. It’s a mindset and a figure of speech.
Cross-posted from To Inform is to Influence | <urn:uuid:3f976bd9-d63d-4e1c-97dc-df4effae406b> | CC-MAIN-2017-04 | http://infosecisland.com/blogview/22394-The-Derecho-Named-Cyber.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00383-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973694 | 1,767 | 2.640625 | 3 |
Each spring and fall, hundreds of species of birds migrate up and down throughout the Americas, seeking their preferred climate, food, and nesting locations. To prepare for their long journeys, they must have the right types of food sources available to them at the right times. Mismatches between when birds need food and when food is available can create major problems in ensuring birds have sufficient energy to fly to their destinations.
We know these relationships are changing, but don’t know how—we can’t "see" these changes. The changes within a given year are often subtle and hard to detect. But over time, these small changes add up and can dramatically influence how species interact with each other. We need your help to gather enough data, over enough time, to be able to see these shifts in interactions! | <urn:uuid:a52a6ff0-a06f-4632-ad0f-6031f2ce82d9> | CC-MAIN-2017-04 | https://www.emc.com/microsites/whenology/index.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00199-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936233 | 166 | 3.625 | 4 |
The technique enables logging in only after authenticating particular users’ identity.
Researchers at the University of Oxford have explored the possibility of using individuals’ physical activities to securely login to PCs and smartphones.
Called electronically Defined Natural Attributes (eDNA), the string of physical behaviour may enable detection of when an individual has consumed drugs, had sex, or if they might be at risk of a heart attack within the three months’ duration.
Oxford BioChronometrics chief executive, Adrian Neal, was quoted by the Guardian as saying: "Electronic DNA allows us to see vastly more information about you.
"Like DNA it is almost impossible to fake, as it is very hard to go online and not be yourself.
"It is as huge a jump in the amount of information that could be gathered about an individual as the jump from fingerprints to DNA. It is that order of magnitude."
The eDNA would ultimately be used to enable an individual to login on any PC or mobile device, by authenticating their identity.
Oxford BioChronometrics president David Scheckel said that eDNA would be able to spot whether a click on an advert or a site is from an automated programme, or so-called bot, or a real human.
"We can hold companies like Google and Facebook to account ,and they know this technology is coming," Scheckel added. | <urn:uuid:323e041c-2ba6-4c86-85bb-b2bb6071cc89> | CC-MAIN-2017-04 | http://www.cbronline.com/news/cybersecurity/will-electronic-dna-become-the-safest-way-to-login-4322436 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00015-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937731 | 287 | 2.828125 | 3 |
Defining the Issue
Defining the Issue
The age of digital media has dramatically changed intellectual property rights (IPR). The proliferation of technologies that enable mass-market digital copying and analog/digital conversion, combined with file-sharing software and peer-to-peer networks that are easily accessible via high-speed Internet connections, have led to increased concerns about distribution of unauthorized copies of copyrighted media. In particular, the movie and music industries continue to search for technical and regulatory solutions to combat digital piracy.
The music and movie industries have long been troubled by the growing trade in pirated media. High-speed digital copying hardware and broadband Internet connections enable mass production of pirated content with no degradation in quality from the original. A significant amount of trafficking of unauthorized copyrighted material occurs via peer-to-peer networks that enable large-scale file sharing among multiple users.
A variety of technical options for protecting digital content (known as digital rights managements systems, or DRMs) are available, including copy control; file access control (limiting the number or length of views); restrictions on altering, sharing, saving, or printing; encrypting files for use only by authorized users; and electronic watermarking, flagging, or tagging to signal to a device that the media is copy-protected. These technologies may be imbedded within the media itself or contained in the operating system, program software, or hardware of a device. In addition, the music and movie industries are promoting the use of special “piracy filters" by Internet service providers (ISPs) that enable them to screen their broadband traffic and identify and contain pirated digital content.
Technology companies are concerned that legislation requiring the inclusion of specific intellectual property protection technologies poses serious threats to privacy, technical innovation, open source software development, and the fair use of copyrighted content. ISPs feel the costs of filtering network traffic will be a burden to them and their customers, and consumers and advocacy groups aren't expected to embrace the restrictions. In addition, past efforts to limit copyright piracy using mandatory technology have often been ultimately unsuccessful as technical progress made the solutions obsolete.
- Protection of intellectual property is critical to Cisco's success and is fundamental to the development of democratic societies and economic systems. Cisco believes that copyright laws promote innovation, should be enforced, and should protect all forms of digital content.
- We respect the needs of digital content industries to protect their product, including using DRMs and other technological protection measures. However, governments should not impose mandatory technical standards or legislation as a solution for achieving this successfully.
- If technology standards are mandated to prevent media piracy, the government will essentially be selecting technology winners and losers. History has shown that the best technology is technology that is developed and tested by the marketplace.
- Mandatory technical standards could result in user-unfriendly products and services and unfairly impact ISPs. Development of such products and services is a costly long-term process that negatively impacts content providers, consumers, the technology industry, and ISPs.
- The private sector should work on its own to solve the issue of protecting digital intellectual property. Many companies are developing competing products and services that prevent the transfer of pirated media without making it difficult to use the Internet for legitimate activities.
- Any solution to this issue must carefully balance the needs of the content industry with the preservation of technological innovation, consumer choice, and the preservation of e-commerce. | <urn:uuid:337a8c4e-7179-4bb5-96f9-96f501102048> | CC-MAIN-2017-04 | http://www.cisco.com/c/en/us/about/government-affairs/government-policy-issues/ip-rights.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00015-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915294 | 688 | 3.203125 | 3 |
Cybercrime is everywhere. It can involve stealing a person's personal information, stealing trade secrets or taking down a city's infrastructure.
With the goal of preventing new cyberattacks in mind, the House of Representatives in April passed the Cyber Intelligence Sharing and Protection Act (CISPA), and parts of the controversial PRISM program, the National Security Agency's online spying tool, have the same goal.
Though cybercrime is a terrifying thought, its pervasiveness means that "there is strong demand for computer experts who can keep cybercriminals at bay," according to government officials. And as OnlineSchools.com has shared in the following infographic, these trends in hacking are helping to usher in a new breed of soldier – the cyberwarrior. | <urn:uuid:1cc77d83-64d6-402b-abcc-62306fbadd62> | CC-MAIN-2017-04 | http://www.govtech.com/security/How-Cyberthreats-Are-Creating-New-Job-Opportunities.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00501-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934866 | 152 | 2.59375 | 3 |
Hello everyone, in this post we are going to use DNS for data ex-filtration to fasten (time based) blind sql injection attacks or make exploitation possible even on random delayed networks/applications. So let us start with basics of DNS.
In internal penetration tests, we simulate attacks that can be performed against on misconfigured services and protocols on network-level.These attacks are mostly caused by the fact that mechanisms such as Address Resolution Protocol (ARP), Dynamic Host Configuration Protocol (DHCP), and Domain Name System (DNS) are not configured properly.One of the most important attacks that can be encountered is undoubtedly Man-in-the-Middle. It allows access to sensitive information by listening to network traffic or manipulating the target to be accessed. Security measures against this attack can be taken on network equipment such as routers and switches. However, due to the inherent weaknesses of some protocols, we can perform the same attack with different methods. For this reason, the main theme of this article will be Man-in-the-Middle attacks against LLMNR, NetBIOS and WPAD mechanisms. Before begin, I would like to explain how the computers have Windows operating system communicate with each other in the same network and perform name resolution. (more…)
A denial of service (DoS) attack is an attempt to make a service unavailable. Unlike other kinds of attacks, which establishes foothold or hijacks data, DoS attacks do not threat sensitive information. It is just an attempt to make a service unavailable to legitimate users. However, sometimes DoS might also be used for creating another attack floor for other malicious activities. (e.g. taking down web application firewalls) (more…)
Data exfiltration, also called data extrusion, is the unauthorized transfer of data from a computer. These type of attacks against corporate network may be manual and carried out by someone with USB or it may be automated and carried out over a network. In this article, we will focus on a network based data exfiltration techniques that must be covered during penetration test. (more…) | <urn:uuid:d1bdfb22-5e0a-43b0-b3c9-882e7aa2b9dd> | CC-MAIN-2017-04 | https://pentest.blog/tag/dns/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00227-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945586 | 426 | 2.84375 | 3 |
Panfil M.,Katedra Meteorologii i Klimatologii |
Jassal R.,Biometeorology and Soil Physics Group |
Ketler R.,Biometeorology and Soil Physics Group |
Nesic Z.,Biometeorology and Soil Physics Group |
And 6 more authors.
Przeglad Geofizyczny | Year: 2012
The issue of fast-growing crops originally goes back to early 1970th. Fast-growing crops initially dominated as a way of obtaining the raw material for producing cellulose. In subsequent years, with the development of technology, the range of benefits of this type crops began to increase - for example crops found use in the purification of industrial pressure land (reclamation of degraded areas e.g. coal mining) or were used in the production of biofuels (bioethanol, biogas, biomass). With the increasing interest in fast-growing crops, scientific research has been initiated to assess the impact of such crops on the natural environment. One way to analyze the impact of crops on the environment is to measure the emitted or absorbed carbon dioxide and water vapor. Measuring systems consisting of advanced wind sensors, CO 2 and H2O gas analyzers, temperature sensors (air, soil), radiation sensors (shortwave and longwave), rain gauges, snow sensors and soil heat sensors are used for this purpose. All these devices are installed in systems to determine the mass and energy change in the studied area. The eddy covariance method (EC) technique involving the high frequency measurement the vertical velocity component and the scalar (temperature, water vapor and CO 2 mixing ratios) as used to obtain the fluxes half hourly. Source | <urn:uuid:968795fb-5e93-44ff-b3e7-80987b2b6afa> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/biometeorology-and-soil-physics-group-2500356/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00282-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.890972 | 358 | 3.21875 | 3 |
Fiber optic cable is unlike most types of cables; it draw on light instead of electricty to transmit signals. As you have already known, the light is the fasterst way to transfer information, and fiber optic cable has additional advantages are immune to electrical interference. So, you can run it anywhere and at any time. Because light meets litter or no resistance, you can run the fiber optic cable in a long distance, literally countries apart, without increasing or clean signal. Imagination process thousands of miles away. It will be impossible.
Optical fiber velocity also has its own advantages. It has a cleaner signal than conventional copper wire and transmit signals over 10 gb/s. Put it into perspective, fiber optic wiring is digital information as an electrical wiring is analog information. They are completely different.
At present, the fiber optic cable used to connect to the network, basically make the short run, the connection layer, construction and connection electric copper cable, fiber optic cable through the Ethernet converters. Despite the fiber optic cable can be very expensive, but because it is becoming more and more popular, it will be, the price of fiber optic cable (and related equipment including Ethernet converter and fiber optic transceiver) should be reduced.
Knowing what’s inside this very functional invention is good to know. A fiber optic cable including the core, cladding, strength member, buffer and jacket as its components. Let’s get to know them more!
Core cable to the path of the transmitted light can flow, by one or more of the glass or plastic fiber. The cladding which provides a refractor light beam reflected back to the core, to continue its journey is usually made of plastic. The buffer consists of one or more layers of plastic and strengthens the cable and prevents damage to the core. As the same implies, the strength members very hard materials, such as glass fiber, steel or kevlar, and provides additional strength for the cable. Finally, the jacket which can either be plenum or non plenum is the outer convering or shield of the cable.
Fiber optic cable comes in two forms: single mode and multi mode. Because single mode cable is so narrow, light can only travel through it in a single path. This cable is very expensive and is hard to work with. Multi mode cable, on the other hand, there are a wide range of core diameter of the optical flow of the freedom to travel several paths. Unfortunately, the multipath configuration multimode optical fiber allows the possibility of signal distortion at the receiving end.
Sometime in your connection, you will come across connecting either a single mode or multi mode fiber optic cable to conventional copper cable. This can be a problem which can cut the communication you have already established. But you don’t have to worry as there are Ethernet converters and transceiver modules that serve to router, boost, and deliver the signals across these two opposite cables. On top of these, there are other related devices such as gigabit converters and SFP mini GBICs readily available on the market that you might find useful in your network. | <urn:uuid:d36466c1-2dd2-4c70-8921-6fd05cf8a699> | CC-MAIN-2017-04 | http://www.fs.com/blog/what-is-inside-of-a-fiber-optic-cable.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00100-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936409 | 637 | 2.78125 | 3 |
The National Archives blog recently featured a pretty cool clip showing one of the first “futuristic” video phones – from 1955, manual rotary dial and all.
According to the blog:
“Demonstrated for the first time, the videophone, with two-way picture screens enabling the parties to see, as well as speak to, each other. As simple to operate as today’s dial tone. The videophone included a small screen so that women could ‘primp’ before placing their calls. A mirror would have been less costly and more effective.”
+More on Network World: The IRS uses computers?! The horror!+
The blog notes the mention of “dial tone” – which replaced the need for an operator – which was still a novelty in the early 1950s but by the 196s it had become ubiquitous.
While the video phone might have had an audience if it had been priced reasonably – “According to the Universal news story, the videophone cost $5000, or about $43,000.00 in today’s dollars,” the blog states.
Check out these other hot stories: | <urn:uuid:1c780c6d-3ca0-4246-8b04-39bc3250377e> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2600838/videoconferencing/witness-the-future-the-1955-video-phone.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00310-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973718 | 244 | 2.90625 | 3 |
After finishing the Knight Lightning game in Scratch, I was feeling better about my Raspberry Pi efforts. After a little over a month I had become familiar with using the Pi, failed at an IP Camera project, and finally actually built something with my Scratch game. It felt like the right time to actually use a “real” programing language, and conveniently the next chapter in my user’s manual was on Python!
I have some experience with Ruby thanks to a course with Teamtreehouse.com, so I thought python would not be too much of a stretch. This turned out to be true, so I just needed to learn more of the nomenclature of Python and not relearn everything that I learned with Ruby. Python is easy to read and emphasises using plain English keywords. This means it is code that sounds more like describing what you want to do and less like techno-babble. Good for Newbies, but still powerful.
The good news with picking Python for your Raspberry Pi project is that Python comes with the starter kit, so I did not have to download anything extra. Let’s look at some simple things you can do with Python, and then I will show you how to save code and run it.
How about I show you by writing some code! The examples below should be enough to get you up and coding in Python.
I will start with
- Data Types
Now let’s look at
Now let’s check out
And everyone loves loops! Some would say they never end….
Last one, we can do this.
Wow, thanks for reading all of that! The good news is that you know most of what I know about the basics of using Python. I also want you to know that Classes and Objects are great things to go learn about, but I will not get into those here!
So, you are familiar with some principles and want to start coding. Where should you do that? If you want to jump right into it you can open up a Text Editor doc. Here is a screenshot of what that looks like on my pi.
I put together some code to take 3 variables and add them to a list one by one. You can see that I used the try: I talked about and I also add the except Error as e: to go with it. As homework you can look up more about how those work. A big thing to remember when you write code in a Text Editor or anywhere really is to save the file as .py. When you run the code this will be very important. Now how do I run this code I saved?
In the top left of your pi menu you will see a what looks like a computer monitor called the LXTerminal. This is the command line terminal that you can run files from. I saved my file on the desktop so I will need to navigate there by typing
Note that the case matter so, cd desktop will not work. Now that the command line is looking at the desktop you can enter
I saved my file as listing.py, so I used
Take a look at the results.
Try it out yourself.
If you do not have a Raspberry Pi you can still use Python! Just download Python to you personal computer and get started. You can find instruction and details at python.org
If you are interested in learning more about using Python with your Raspberry Pi there are some great resources at raspberrypi.org.
Learn more about the Python Idle IDE at raspberrypi.org.
Once I had spent some time practicing with python basics and running my python files through the command line and IDE I decided that I was ready for the big time. I was going to tackle a real project using Python and my Raspberry Pi. More on that next week. Have you ever heard of Twitter? | <urn:uuid:af8e8849-b397-457e-9490-2decd8a4574d> | CC-MAIN-2017-04 | https://blog.bandwidth.com/actually-using-your-raspberry-pi-part-3-teaching-everyone-the-basics-of-python/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00522-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955437 | 788 | 2.84375 | 3 |
Getting More Out OfBy John McCormick | Posted 2002-11-01 Email Print
Project managers at Bristol-Myers Squibb think they can aggregate the processing power of thousands of under-utilized PCs—and save the drugmaker millions in the process.
Getting More Out Of Computers
Computing tasks can be performed by any number of processing schemes, some of which fall under the loose heading of grid computing.
Symmetric Multiprocessing (SMP)
A server computing system where two or more processors are managed by a single operating system. The processors share the same memory and input/output mechanisms.
Massively Parallel Processing (MPP)
A server computing system where a hundred or more processors work on different parts of a task. Each processor has its own operating system and memory.
A scheme to take advantage of underused processors on corporate desktop and laptop computers. Tasks are broken down and assigned to individual processors by a master scheduler, much like MPP.
Also known as a hosted grid. A company activates and deactivates resources as needed from a large information-systems facility. The customer is charged for the services it uses. It's a modern version of computer time-sharing.
Sources: Giga Information Group, Aberdeen Group | <urn:uuid:b6c0cdbf-756d-4202-880c-fd7a3c8e0b93> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Business-Intelligence/BristolMyers-Squibb-Taps-Grid-Computing/3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00154-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912659 | 264 | 2.703125 | 3 |
Definition: A path with alternating free and matched edges.
See also augmenting path.
Note: From Algorithms and Theory of Computation Handbook, page 7-3, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 December 2004.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "alternating path", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/alterntngpth.html | <urn:uuid:f0c00288-4344-4dd1-9bec-04745e73b2ca> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/alterntngpth.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00118-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.758249 | 194 | 2.546875 | 3 |
It was 1958, the height of the Cold War. The Soviet Union had just launched the first satellite into Earth's orbit, taking the lead in the nascent space race and freaking out U.S. politicians and military leaders. Surely something had to be done. And some geniuses in the U.S. Air Force thought they had a brilliant plan: Secretly launch a nuclear device into space to be blown up on the moon! It was called "A Study of Lunar Research Flights," or more cryptically, "Project A-119." Physicist Leonard Reiffel, the project leader, talked with CNN about the genesis of the plan after the network came across the declassified documents:
"People were worried very much by (first human in space Soviet cosmonaut Yuri) Gagarin and Sputnik and the very great accomplishments of the Soviet Union in those days, and in comparison, the United States was feared to be looking puny."
Puniness is not the American way, mister! What the Air Force thought was needed, Reiffel told CNN, was a "concept to sort of reassure people that the United States could maintain a mutually assured deterrence, and therefore avoid any huge conflagration on the Earth." And what could be more reassuring than demonstrating that it's possible to launch a nuclear warhead 240,000 miles to the moon, where its detonation would make for a spectacular and sobering display for slack-jawed viewers watching from, I don't know, the Kremlin or someplace like that. Or maybe not.
Contrary to some reports, Reiffel told CNN, the device would not have "blown up" the moon. "Absolutely not. It would have been microscopic, so to speak. It would have been, I think, essentially invisible from the Earth, even with a good telescope."
So much for shock and awe. Interestingly, one of Reiffel's team members was a young graduate student named Carl Sagan, who went on to become one of the world's few celebrity astronomers. Fortunately, by 1959 saner heads in the Air Force prevailed as serious questions were raised concerning radioactivity, the reliability of the nukes and public backlash in the U.S. Project A-119 was quietly forgotten in Pentagon files, as were even crazier ideas such as building nuclear launch sites on the moon from which to attack the Soviet Union. "These are horrendous concepts," Reiffel, now 85 years old and living in Chicago, told CNN. "And they are hopefully going to remain in the realm of science fiction for the rest of eternity." Now read this: | <urn:uuid:8d75d7c1-520c-470f-ab1d-ea7b1849b7e9> | CC-MAIN-2017-04 | http://www.itworld.com/article/2716145/hardware/u-s--air-force-once-wanted-to--nuke--the-moon.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00054-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.976893 | 534 | 3.109375 | 3 |
Digital telephone systems are the most popular telephone systems in
service today. While obsolete analog phone systems converted voice
conversations (sound waves) into electrical waves, digital phone systems
take the same voice conversations and convert them into a binary format
where the data is compressed into "1"s and "0"s.
More specifically digital phone systems sample your voice using a
method called Time Division Multiplexing or TDM. When you speak into a
digital telephone, your voice is digitally sampled into time slots so
that a conversation doesn’t have to use the entire bandwidth of a
circuit. The system then uses a clock to synchronize the digital
samples and turn them back in to voice. Whereas analog telephone
stations can only handle one conversation at a time, digital phone
stations can compress more than one conversation as well as a variety of
phone system features onto a single pair of wire.
The advantages of a digital telephone station over an analog telephone station include:
Clarity- While analog telephone stations offer richer
sound quality over digital, the binary code used by digital telephone
stations to transmit data keeps it in tact so that the end transmission
is distortion free. This results in clearer phone calls.
Increased Capacity- Digital telephone stations can fit
more conversations on a single pair of wires than an analog station.
This allows for less wiring and more efficient communication than an
More Features- Due to increased capacity, a digital
telephone station can also fit more features on a single pair of wires.
Mute, redial, speed-dial, function keys, call transfers, voicemail,
conferencing, and other features are all available through digital
systems more resourcefully than with analog systems.
Longer Cordless Range - If you need a cordless telephone
at your office, cordless phones with digital technology can apply more
power to the signal and increase the range of the phone.
Each manufacturer produces digital telephone stations that use
slightly differing TDM protocols, so be sure to speak with your phone
installer before purchasing any equipment to make certain they are
compatible with your current system. | <urn:uuid:83d329cd-7c69-4e62-bb42-182d10a9bb02> | CC-MAIN-2017-04 | http://www.metrolinedirect.com/digital-telephone-stations-and-features.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00356-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.893322 | 448 | 3.046875 | 3 |
Amazon S3 is a cloud-based storage service, which make available to store, edit and retrieve Windows Server data in low-cost and save environment. The data is replicated between three different facilities to ensure its safety from disasters and contained in buckets. high level of availability is achieved by using AWS global network and geo-dispersed data centres. There are four major classes within Amazon S3: Simple Storage Service (S3) for "hot" frequently accessed data, Standard I/A — an example of "cool" storage class, Reduced Redundancy Storage (RRS) and Amazon Glacier archival platform. Read more about Amazon S3 in our blog. | <urn:uuid:623a5de9-22b6-4fce-93a3-8b163b64ac55> | CC-MAIN-2017-04 | https://www.cloudberrylab.com/amazon-cloud-server-drive.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00292-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.907606 | 135 | 2.671875 | 3 |
Raspberry Pi is a tiny and affordable computer consisting of a single, credit card-sized circuit board. You can use it to do almost anything that is doable with a regular desktop computer, such as running desktop applications and playing music files. Raspberry Pi also has great potential for developing automation systems using sensors, relays, lights, and motors due to its small size, low power requirements, and small price tag. Raspberry Pi was conceived and developed by the Raspberry Pi Foundation with the goal to promote teaching of basic computer science in schools. But today it has probably become more popular with enthusiasts. Since the initial launch of Raspberry Pi in February 2012, more than one million units have been sold.
This article will briefly walk you through the process of setting up your Raspberry Pi for the first time. It then presents an interesting application, using the Pi as TFTP server for downloading Cisco IOS software images to routers. This is just one of the many wonderful things that can be done with this miniature computer.
CCNA Training – Resources (Intense)
How to Get Your Pi
The Raspberry Pi comes in two models: the lower-priced model A and the higher-cost model B. We used the Raspberry Pi Model B Revision 2.0 with 512 MB memory to create the examples in this article. This model is available at Amazon.com for around $40. That amount gets you the board alone and does not include am SD card, power adapter, cables, or case. A better option is to purchase a starter kit that comes complete with case, power supply, HDMI cable, and a preloaded SD card. The kit would include everything other than the display, keyboard, and mouse you would need to have a working Raspberry Pi system. Such a kit is manufactured by CanaKit and is available at Amazon.com for around $70 at the time of this writing.
Raspberry Pi Hardware
Raspberry Pi is a computer and, just like every other computer, it has a processor at its heart—a Broadcom BCM2835 system-on-chip (SoC) multimedia processor in this case. The multiprocessor has the majority of system components built into it, including the central and graphics processing units, along with the audio and communications hardware. There is also a 512 MB memory chip at the center of the Raspberry Pi (Model B Revision 2.0) board.
The BCM2835 processor uses an instruction set architecture (ISA) known as ARM. The ARM architecture, though not common in the desktop computer world, is ideally suited for low-power applications. The smartphone you are carrying in your pocket right now quite likely has an ARM-based processor. The BCM2835 processor enables Raspberry Pi to operate on just the power supplied by an onboard micro-USB port and survive without any heat sinks on the device.
The Raspberry Pi is a bare-bones computer that comes without any peripherals. You need to connect a display, keyboard, and mouse before you can do anything useful. The Pi has an HDMI port that you can use to connect to an existing computer monitor with an HDMI port. Even if your monitor only has a DVI port, you can purchase an HDMI-to-DVI cable to connect it to the Pi. You also need to connect a keyboard and optionally a mouse for input devices.
The Raspberry Pi doesn’t have a traditional hard drive like PCs have. It uses a secure digital (SD) memory card to store the entire operating system as well as other software and data. SD cards with the operating system preloaded are available for use with the Pi. You can also install an operating system yourself onto a blank SD card and use it with your Pi.
The Raspberry Pi is powered by a micro-USB connecter that is the same one found on most smartphones. The Pi requires up to 700 mA in order to operate. You must make sure that the charger you are using can supply that much juice or it may cause problems in the operation of Pi. There is no power button on the device, so it will fire up the instant power is connected.
Raspbian Operating System
Linux is an open-source operating system that consists of a kernel at its heart. As a matter of fact, the kernel is Linux but it is often bundled with a collection of different open source software to form different flavors of Linux, known as distributions. Linux is traditionally a command-line interface (CLI) based operating system, though all modern Linux distributions now come with a desktop environment as well.
Debian is one of the numerous Linux distributions and a great choice for Raspberry Pi due to its lightweight nature. The Raspberry Pi runs on Raspbian, which is a modified form of Debian optimized for Raspberry Pi hardware. Raspbian does not include all the software found on desktop versions of Debian. That’s a choice made to keep the image size to the minimum. However, additional software can be easily installed using the advanced packaging tool (APT) that is part of the distribution.
The Debian build for Raspberry Pi includes a desktop environment known as the Lightweight X11 Desktop Environment (LXDE). LXDE uses the X Window System, also known as X11, to offer a familiar point-and-click graphical user interface (GUI) similar to Windows and OS X. The GUI may not load by default in your Raspberry Pi distribution. You may need to log in and then enter startx to leave the text-based console behind and load the GUI.
The configuration examples that follow were developed using the topology shown in Figure 1. It enables our Raspberry Pi system to access the Internet through the gateway as well as offer Cisco IOS images to the router over LAN.
Figure 1. Network Diagram
You can fire up the Raspberry Pi by simply connecting the micro-USB connector. The Pi takes less than a minute to boot, during which the usual Linux boot messages scroll across the screen. You are then presented with the login prompt:
Debian GNU/Linux 7.0 raspberrypi tty1 raspberrypi login:
You can enter pi as login and raspberry as password to get into the system.
pi@raspberrypi ~ $
If you are familiar with the Linux command-line interface (CLI), you can do a lot of things right from the CLI, which, in fact, can be quite powerful for the experienced. But, let’s face the fact that most of us live in a world dominated by Windows and we are not comfortable with the Linux CLI. However, Raspbian comes with a GUI environment just like most other modern Linux distributions. You can launch the GUI environment simply by entering startx at the CLI.
pi@raspberrypi ~ $startx
This starts the GUI system of Linux and presents you with a desktop environment as shown in Figure 2. You can see shortcuts to a few applications right on the desktop including, the Midori Web browser and the LXTerminal terminal application.
Figure 2. Raspbian Desktop
You may recall that we logged in as pi. In order to perform IP configuration, you need to switch to the root user by using the su root command. By default, the root password is not set so you can just hit enter when prompted for root password. You will then be able to configure a root password of your choice.
pi@raspberrypi ~ $su root Password: root@raspberrypi:/home/pi#
The Pi can automatically receive its IP configuration through the dynamic host configuration protocol (DHCP) when connected to a LAN. You can also configure the IP address and other related details manually. We will resort to the manual configuration option.
The list of network interfaces is stored with other useful information in a file called interfaces, located in the folder /etc/network. This file can be edited only by the root user because removing a network interface from this file will cause it to stop working. That is also one of the reasons we switched to the root user using su root command. You have to first launch a terminal application, such as LXTerminal, which is available right on the desktop of user pi. You can then edit this file using a text editor like nano. Open the file for editing using the following command:
You need to edit the line that starts with iface eth0 inet. Just delete dhcp at the end of this line and replace it with static. Then press Enter to go to a new line, and fill in the remaining details in the following format with a tab at the start of each line:
address 192.168.1.2 netmask 255.255.255.0 gateway 192.168.1.1
Figure 3 The nano editor
When you’re done editing, press Ctrl+O to save the changes and then press Ctrl+X to exit nano and return to the terminal. You need to shutdown the Ethernet interface and bring it up again in order to bring the IP configuration into effect.
root@raspberrypi:/home/pi#ifdown eth0 root@raspberrypi:/home/pi#ifup eth0
The IP configuration done so far isn’t enough to get your Pi connected to the outside world. You must tell the Pi which DNS servers to use. The list of DNS servers, known as nameservers in the Linux world, is kept in the file /etc/resolv.conf. We will use nano again and edit that file to configure a couple of nameserver entries like this:
nameserver 18.104.22.168 nameserver 22.214.171.124
The IP addresses 126.96.36.199 and 188.8.131.52 correspond to two DNS servers Google is offering for public use. You should be able to ping the gateway as well as any host on the Internet at this point.
root@raspberrypi:/home/pi# ping -c 4 192.168.1.1 PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. 64 bytes from 192.168.1.1: icmp_req=1 ttl=128 time=0.934 ms 64 bytes from 192.168.1.1: icmp_req=2 ttl=128 time=0.816 ms 64 bytes from 192.168.1.1: icmp_req=3 ttl=128 time=0.856 ms 64 bytes from 192.168.1.1: icmp_req=4 ttl=128 time=0.779 ms --- 192.168.1.1 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 0.779/0.846/0.934/0.061 ms
Our IP configuration is complete at this point and we can now turn our attention to TFTP. There are multiple TFTP client and server implementations for Debian. We choose to use the package known as atftpd, which is an advanced TFTP server implementing all options specified in various TFTP RFCs. The first step is to install the package, as it does not come bundled with the default Raspberry Pi distribution. Your Raspberry Pi has to be connected to the Internet before you can download packages. Use the command apt-get install aftpd to download and install the aftpd package using the advanced package tool (APT):
root@raspberrypi:/home/pi# apt-get install atftpd Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: rlinetd Recommended packages: inet-superserver The following NEW packages will be installed: atftpd rlinetd 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 166 kB of archives. After this operation, 473 kB of additional disk space will be used. Do you want to continue [Y/n]? Y Get:1 http://mirrordirector.raspbian.org/raspbian/ wheezy/main atftpd armhf 0.7.dfsg-11 [63.3 kB] Get:2 http://mirrordirector.raspbian.org/raspbian/ wheezy/main rlinetd armhf 0.8.2-2 [102 kB] Fetched 166 kB in 6s (26.1 kB/s) Preconfiguring packages ... Selecting previously unselected package atftpd. (Reading database ... 59229 files and directories currently installed.) Unpacking atftpd (from .../atftpd_0.7.dfsg-11_armhf.deb) ... Selecting previously unselected package rlinetd. Unpacking rlinetd (from .../rlinetd_0.8.2-2_armhf.deb) ... Processing triggers for man-db ... Setting up atftpd (0.7.dfsg-11) ... *** WARNING: ucf was run from a maintainer script that uses debconf, but the script did not pass --debconf-ok to ucf. The maintainer script should be fixed to not stop debconf before calling ucf, and pass it this parameter. For now, ucf will revert to using old-style, non-debconf prompting. Ugh! Please inform the package maintainer about this problem. Creating config file /etc/rlinetd.d/tftp_udp with new version rlinetd: no process found Setting up rlinetd (0.8.2-2) ... [ ok ] Starting internet superserver: rlinetd. root@raspberrypi:/home/pi#
You may have noticed from the above output that the atftpd package depends on the rlinetd package, also known as the Internet superserver. The latter also gets installed automatically.
The TFTP server uses /srv/tftp as its home directory by default. You need to put your IOS image files in this directory before your TFTP server is able to serve them to a router. You may use the Midori Web browser to download images from the Internet or company Intranet depending on your situation.
Let’s go to our router now and perform basic IP configuration first. The interface FastEthernet0/0 of the router is assigned the IP address 192.168.1.3 and subnet mask 255.255.255.0.
Router>enable Router#configure terminal Enter configuration commands, one per line. End with CNTL/Z. Router(config)#interface FastEthernet0/0 Router(config-if)#ip address 192.168.1.3 255.255.255.0 Router(config-if)#no shutdown Router(config-if)#end Router#
You can use the copy tftp flash command to download an IOS image stored on the SD card of your Raspberry Pi.
Router#copy tftp flash Address or name of remote host ? 192.168.1.2 Source filename ? c1841-adventerprisek9-mz.124-6.t7 Destination filename [c1841-adventerprisek9-mz.124-6.t7]? Accessing tftp://192.168.1.2/c1841-adventerprisek9-mz.124-6.t7... Loading c1841-adventerprisek9-mz.124-6.t7 from 192.168.1.2 (via FastEthernet0/0): !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! [OK - 25577616 bytes] 25577616 bytes copied in 98.200 secs (260465 bytes/sec) Router#
The low cost Raspberry Pi system can thus be turned into an inexpensive repository of Cisco IOS image files for personal or commercial use. | <urn:uuid:a8318ccc-1fa4-4017-ad38-e0fa023cf019> | CC-MAIN-2017-04 | http://resources.intenseschool.com/introduction-to-raspberry-pi/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00228-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.884346 | 3,356 | 2.84375 | 3 |
The following article is meant to serve as an overview of the operating temperatures of reverse transfer printers and lamination stations. Not all printers are equal, and all printers come with their own default settings for the temperature at which they run. If you are using a reverse transfer printer or laminating your cardstock you should always be certain to use a composite cardstock of no less than 60% PVC and 40% polyester. This blend is extremely durable and will not be affected by the high temperatures of reverse transfer printing or lamination.
The temperature on printer drivers and LCD displays are in Celsius and thus all discussions regarding printer temperatures are in Celsius (and not Fahrenheit).
REVERSE TRANSFER PRINTERS
Reverse transfer printers print the card image on a transfer film and in a second step the film is fused to the plastic card (this is a different process than the more traditional dye-sublimation, direct-to-card (DTC) printing process).
Many ID card printers have the option of adding an extra protective layer (also called overlaminate) to the plastic card. This lamination step takes place after the card is printed. Lamination can take place on both a DTC type printer and a reverse transfer type printer. The laminate that is applied to the card comes in a separate roll.
ADHERING THE FILM TO THE CARD
The process for applying the transfer film to the card is similar to the process for applying a laminate to a card. The following describes this process to join the film to the card using pressure and heat:
Reverse transfer type printers use a very high temperature for merging the transfer film to the card. The temperatures can be between 150 C and 200 C.
To add lamination to a card, the laminate is pressed to the card with rollers for a certain amount of time and heat is applied. This temperature can vary from one printer to another, for example:
For connecting both transfer film and lamination to a card, the card moves down the card path and moves through heads and rollers that "squeeze" and heat up the film to apply onto the card. The faster the card moves down the card path in the printer, the higher the temperature needs to be to join the film or laminate to the card. In other words a certain energy is needed to make the film or laminate stick to the card and this energy can be delivered with a lower temperature with longer dwell time (slower moving card) or it can be delivered with a higher temperature with a faster moving card (shorter dwell time). The definition of Dwell is "To linger over" or "The time during a process which an item is in the vicinity or motionless". This dwell time can vary from printer to printer:
If you'd like to learn more about this article feel free to contact ColorID today and we'll analyze your existing ID system.
About ColorID, LLC
Every year, ColorID assists more than 1000 colleges and universities and their project managers personally oversee 700 custom projects each year, including many small and large recarding projects. ColorID offers best-in-class products and solutions, including: contactless, smart and financial cards from every major manufacturer, multiple ID printer platforms; transaction and point-of-sale software and hardware, a variety of handheld devices for identification and tracking applications and biometrics solutions, including fingerprint and iris readers. The company’s manufacturing partners include: Iris ID, HID, Fargo, Datacard, CardSmith, Gemalto, Zebra, NiSCA, Evolis, Allegion, Aptiq, Magicard, Brady People ID, Integrated Biometrics, Oberthur, NBS, Vision Database Systems and many others.
Contact ColorID at 704-987-2238 or toll free in Canada and the US at 888-682-6567. Visit ColorID on the web at: www.colorid.com or email ColorID at email@example.com.
20480-F Chartwell Center Dr.
Cornelius, NC 28031
ColorID provides the highest quality products with superb service at an exceptional value. We want your experience with ColorID to be a positive one - from the ease of ordering products - to the quality of our products - to our follow up and our attention to detail.
CONVENIENT PAYMENT OPTIONS | <urn:uuid:c9331730-ab87-4753-b7d5-7d3397dd2de2> | CC-MAIN-2017-04 | https://www.colorid.com/learning-center/operating-temperatures-of-your-id-printer | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00044-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906094 | 906 | 2.71875 | 3 |
Coming in at number one in the OWASP Top Ten Most Critical Web Application Vulnerabilities are injection attacks, and SQL Injection vulnerabilities are the most common and most dangerous in this category. SQL injection is a technique that exploits vulnerable web sites by inserting malicious code into the database that runs it.
What makes the threat of SQL injection attacks so dangerous is the ease in which they can be launched and how many web sites are vulnerable to them.
Attackers often use large botnets to systematically seek out vulnerable web sites to attack with little work being done on their part. Pair this with the fact that the number of sites vulnerable to this type of attack grows each year and it is clear to see why it remains at the top of the most critical vulnerabilities.
Even with the ease that an automated SQL injection attack can be carried out, if the attackers stood to gain nothing this threat would soon disappear. Unfortunately, those who successfully compromise vulnerable web sites can find that this vulnerability can be quite profitable as they give the attacker access to the database so information can be sold or data can be deleted. More advanced techniques can also be used to give the attacker unrestricted access to the system through a backdoor. SQL injection can also be used in tandem with other exploits, such as cross-site scripting, to manipulate how data is displayed to a web site’s visitors.
Not preventing SQL Injection attacks leaves your business at great risk of:
With dotDefender web application firewall you can avoid SQL injection attacks because dotDefender inspects your HTTP traffic and determines if your web site suffers from SQL Injection or other attacks stopping identity theft and preventing data leaks from web applications.
Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against SQL Injection attacks, cross-site scripting, website defacement and many other web attack techniques.
The reasons dotDefender offers such a comprehensive solution to your web application security needs are:
Before a web site can be compromised, an attacker needs to find applications that are vulnerable to SQL injection using queries to learn the SQL application methods and its response mechanisms.
The attacker has two ways to identify SQL injection vulnerabilities:
When the attacker knows how each database is reacting he or she can identify the database type and the server that is running it.
There are several techniques the attacker uses to identify database objects in a SQL statement.
Once the attacker has all information he can build the exploit code.
Some techniques used to execute SQL Injection attacks are:
For example, the attacker decides to go with a basic attack using:
1 = 1--
What happens when this is entered into an input box is that the server recognizes 1 = 1 as a true statement. Since -- is used for commenting, everything after that is ignored making it possible for the attacker to gain access to the database. You can see precisely how this attack works on our SQL injection example page.
SQL injection techniques have been around for over 10 years now, but recent years have seen a dramatic increase in both number of attacks and the extent of damage caused by them. In fact, a sweep of attacks in the second quarter of 2008 alone resulted in over 500,000 exploited web pages that were compromised to deliver password-stealing malware to users' computers. In more recent studies, security firms report attempted attacks reaching totals of 450,000 per day.
The tragedy is that these threats can be mitigated, or even prevented, with the proper tools and knowledge.
The attacker identifies vulnerabilities and obtains database access SQL (Structured Query Language) provides an interface to facilitate access to and interaction with a database. A database usually stores data in tables and procedures.
SQL Injection is a security exploit method in which the attacker aims at penetrating a back-end database to manipulate, steal or modify information in the database. The SQL Injection attack method exploits the Web application by injecting malicious queries, causing the manipulation of data. Almost all SQL databases and programming languages are potentially vulnerable and over 60% of websites turn out to be vulnerable to SQL Injection.
The threat posed by SQL injection attacks are not solitary. Combined with other vulnerabilities like cross-site scripting, path traversal, denial of service attacks, and buffer overflows the need for web site owners and administrators to be vigilant is not only important but overwhelming.
dotDefender's unique security approach eliminates the need to learn the specific threats that exist on each web application. The software that runs dotDefender focuses on analyzing the request and the impact it has on the application. Effective web application security is based on three powerful web application security engines: Pattern Recognition, Session Protection and Signature Knowledgebase.
The Pattern Recognition web application security engine employed by dotDefender effectively protects against malicious behavior such as SQL Injection and Cross Site Scripting. The patterns are regular expression-based and designed to efficiently and accurately identify a wide array of application-level attack methods. As a result, dotDefender is characterized by an extremely low false positive rate.
dotDefender blocks against various SQL Injection techniques including, but not limited to:
What sets dotDefender apart is that it offers comprehensive protection against SQL injection and other attacks while being one of the easiest solutions to use.
In just 10 clicks, a web administrator with no security training can have dotDefender up and running. Its predefined rule set offers out-of-the box protection that can be easily managed through a browser-based interface with virtually no impact on your server or web site’s performance. | <urn:uuid:f1de82de-3b73-4517-bf96-4eb5dc0ac06a> | CC-MAIN-2017-04 | http://www.applicure.com/solutions/prevent-sql-injection-attacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00101-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920922 | 1,134 | 2.90625 | 3 |
DELL EMC Glossary
Flash Technology encompasses all types of non-volatile memory including NAND and NOR as well as combined volatile and non-volatile implementations such as NVDIMM. Flash Technology is pervasive, existing everywhere from consumer devices like your car (NOR), mobile phone, and wearables to traditional computing devices like your laptop all the way large to hyper-scale data centers which employ flash storage (NAND) throughout.
Why should I consider Flash Technology?
Flash Technology uses less energy, is more compact, lighter, and faster than most traditional data storage methods. Removing wait times from your daily life is critical both as an individual and in a business setting. Waiting for data is akin to waiting in line at the Department of Motor Vehicles – nobody wants to do it if they don’t have to. Expectations are changing and the commoditizing of technology is driving flash from consumer devices and into businesses and enterprises everywhere. Do you have a tablet or ultralight laptop? There’s Flash Technology inside enabling the ultra-fast storage and operation of those units. Have you made a reservation on an airline’s website in the last year? There’s Flash Technology driving the results that are delivered to you in near-real-time. Flash Technology is used by almost everyone in their daily lives, if they take the time to pause and look for it.
How does Flash Technology work?
Flash Technology works as an improved response method of storage over other existing technology. Flash Technology consists of microscopic switches (transistors) on silicone that can be programmed as either a 1 or 0 – standard for most block-level binary devices. Flash Memory such as NAND is non-volatile meaning when the power is removed from the unit; the data persists until the power is restored. This is contrary to volatile memory such as DIMM and SIMM which lose their data when they lose power. Flash Technology’s speed of response to requests for writes and reads of data is what differentiates it from other storage methods such as HDD’s, tape, or other magnetic / spinning forms of media. Flash response times are typically measured in the sub millisecond or even nanosecond range; whereas traditional spinning media or magnetic tape and floppy disk must deal with elongated seek times which are typically measured from 1 millisecond up to 30+ milliseconds, depending on current workload and data locality. While there are typically trade-offs with every technology benefit, the exponential response time of requests for data far outweighs the limited duty life cycle of flash memory cells.
What are the benefits of Flash Technology?
Flash Technology enables faster application results and more rapid access to static content. Flash Technology is used almost everywhere in your daily life whether it be your personal or work life. As a consumer, you use flash memory in numerous devices including but not limited to: mobile phones, digital cameras, vehicles, tablets, laptops, children’s toys, and much of the online sites you use on a daily basis are enabled by flash storage in some manner. At work, whether you work in Information Technology or consume your company’s IT resources, Flash Technology likely exists in both your company’s data center and/or a service provider’s data center that your company is partnering with. Whether it be in the form of Flash Memory cards, SSD’s, or NVMe, the uses for Flash are growing rapidly and are being driven heavily by the consumption of Flash Memory in consumer devices which make them more portable and rugged. This demand is pushing the Flash Technology makers to deliver more capacity on their chips while simultaneously driving down prices. | <urn:uuid:4d37448d-317b-455e-950d-9b9ad3c8e84a> | CC-MAIN-2017-04 | https://www.emc.com/corporate/glossary/flash-technology.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00101-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934393 | 746 | 3.28125 | 3 |
Definition: (1) A spatial access method that splits space with hierarchically nested, and possibly overlapping, boxes. The tree is height-balanced. (2) A recursion tree.
See also B-tree, P-tree (2), P-tree (3), R+-tree, R*-tree.
(1) A. Guttman, R-trees: A Dynamic Index Structure for Spatial Searching, Proc ACM SIGMOD International Conference on Management of Data, 47-57, June 1984.
(2) Used in [GCG92]. Suggested by Rama Maiti, 19 August 2001.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 26 May 2011.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black, "R-tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 26 May 2011. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/rtree.html | <urn:uuid:1f20385f-1b6f-4e20-adeb-64d390bec448> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/rtree.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00403-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.853828 | 250 | 3.046875 | 3 |
APIs have been making the news lately, especially with Apple’s release of Metal API for iOS. But APIs themselves have been around for a long time. They’re a necessity in today’s ever-changing tech landscape, which is why understanding them is key to providing users with the interoperability they now require.
What is an API?
An API (short for “Application Programming Interface”) is a set of requirements that determine how one application can talk to another.
Why do APIs matter?
The short answer: APIs help make your life easier by making things more efficient. Say you have an application that checks the weather forecast for rain and, if there’s a chance of rain, it’ll display an umbrella icon on your home screen. The app does this by pulling the day’s forecast from weather.com.
Here’s the difference an API would make in our example app:
Without an API:
The app checks the current weekly forecast by opening http://www.weather.com/ and reading the webpage much like a human user would, interpreting the content as it goes. The app knows to look for the weekly forecast in one specific area of the site. However, if the site changes its layout, the app won’t work anymore.
With an API:
The app will call the message listed in weather.com’s API that returns the weekly forecast. Regardless of what the website looks like, the app will get the data it needs and will function as it should.
As you can see, APIs not only make interoperability possible, but offer a host of other benefits like improving functionality and streamlining processes.
How APIs work
APIs themselves are a series of different XML messages, each XML message corresponding to a different function. For example, a cloud hosting API may have an XML message that corresponds with creating a cloud server and one that will reboot a cloud server.
To tap into this functionality, a developer will write code that generates the right XML messages to either create or reboot a server and voila! The servers will be created or rebooted in real time, all without needing to log into a portal.
Advantages of using an API
Part of the reason why releasing APIs has become so popular is because of how much they benefit users. Some of the main benefits are:
- Easy integration. With an API available, developers can easily integrate other services into their existing software.
- Processes are streamlined. Back to our cloud hosting example, developers could integrate cloud hosting functionality into existing applications so companies wouldn’t need to train IT staff and employees on how to administer and use new software.
- It empowers users. APIs enable users to better access and customize a service in a way that suits their needs directly.
Thus, companies who release APIs allow their customers to access their services in newer, more efficient ways. If you’re interested in learning more about APIs and what they can do for you from a cloud perspective, check out our API here. We’d love to get your feedback on it, and know how you’re using it to power your business. | <urn:uuid:9b81a3b9-a42b-4772-9633-3fb62bb32d50> | CC-MAIN-2017-04 | http://www.codero.com/blog/apis-101-what-you-need-to-know-about-the-keys-to-the-kingdom/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00367-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920744 | 658 | 3.203125 | 3 |
The main thing keeping Linux desktops out of botnets is the sophistication of their users, but the people who built Psyb0t knew most people don't pay much attention to router security.
calling it the first botnet designed for broadband equipment and routers,
and that it is. But it's also the first of something else: Psyb0t is the first Linux
And even though it's running on hardware devices and even though it's
running on Linux, and an obscure distribution of Linux at that, the basic
mechanisms of it aren't that different from "conventional" botnets
that run on Windows PCs. There's a lesson here.
Linux seems to be a great platform for these little embedded devices. It's
small enough that it can fit in economical hardware, portable enough that you
can put it on almost any processor and platform, and it's got great networking
tools. This particular bot runs on Linux Mipsel devices ("Mipsel" refers to
little-endian implementations on MIPS processors, generally, but not
exclusively, on Linux
). But it's not hard to see the same thing happening
to any sufficiently large population of Internet-facing devices based on Linux
or any other platform. I'm especially curious about DVRs now.
We often speak about how malware writers write for Windows because that's
where the systems are and because that's where the development tools are, for
malware and more generally. The same could be said now of Linux: The fact that
a device runs Linux means it's easy to write binaries for it that do networking
tasks, including hardening the bot and distributed denials of service.
How does Psyb0t work? The main vulnerability it seems to exploit is simply
weak or nonexistent authentication. One involved device is the NetComm NB5 ADSL (asymmetric
earlier versions of which were administrable from the WAN side
by default. In fact, some were administrable without any log-in at all. Of course
updates were made, but when was the last time you applied an update to your
ADSL router? I've seen vaguer reports of other vulnerabilities used.
According to DroneBL, the DNS (Domain Name System) blacklist service that
found the botnet, Psyb0t appears to
have been shut down just recently.
The bot will not persist if the router is power-cycled, but who does that on
purpose? I also wouldn't discount the possibility that such a bot could be
built to flash itself into an EPROM (erasable programmable ROM) or some other
persistent memory, and then the device would probably be unsalvageable. Such an
attack would be highly model-specific. | <urn:uuid:d3f4eebc-9a0a-44e6-a8b6-4794a2c881f8> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Security/The-First-Linux-Botnet-626424 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00091-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954039 | 583 | 2.703125 | 3 |
An innovative project, called Autonomous Dynamic Analysis of Metaphor and Analogy, or ADAMA, aims to build a software system that can automatically analyze metaphorical speech in five different languages by analyzing huge quantities of online data got off the ground this week when the U.S. Army Research Laboratory awarded a $1.4 million contract to the team conducting the research.
The research is backed by the US Intelligence Advanced Research Projects Activity (IARPA), which develops high-risk, reward research projects for the government, and is intended to build a repository of speech metaphors from American/English Iranian Farsi, Mexican Spanish and Russian speakers. ADAMA could have immediate applications in forensics, intelligence analysis, business intelligence, sociological research and communication studies, researchers stated.
From IARPA: "Metaphors have been known since Aristotle as poetic or rhetorical devices that are unique, creative instances of language artistry (for example: The world is a stage; Time is money). Over the last 30 years, metaphors have been shown to be pervasive in everyday language and to reveal how people in a culture define and understand the world around them," IARPA says.
One of the key goals of the program is to get at the deeper meanings found in metaphoric and figurative language to better understand messages and intentions of people from communities all over the world, said Shlomo Argamon an associate professor of Computer Science with the Illinois Institute of Technology who is heading up the research team. That team includes researchers from Massachusetts Institute of Technology, Georgetown University; Ben-Gurion University of the Negev, Bar-Ilan University, the Center for Advanced Defense Studies.
Argamon says the team will develop software systems to identify, access and analyze large amounts of online documents - like the American National Corpus, which a huge electronic collection of American English -- in several languages. Psychological and cultural experts will also evaluate the results to improve the accuracy and richness of the resulting metaphor collection.
"On a very basic level we want to understand the different ways different word or phrases are interpreted in different language, for example, of I call someone a 'shark' in American/English that could mean they are powerful with great vision but in Iranian Farsi, that same term means smooth-skinned, effeminate, weak - dramatically different meaning," Argamon said. "We will develop technology that can identify such metaphorical speech to get a much better understanding of the way people think about things."
Such language systems are a hot research topic these days.
While not looking at metaphorical speech, IARPA's counterpart, the Defense Advanced Research Projects Agency (DARPA) will this month detail the union of advanced technologies from artificial intelligence, computational linguistics, machine learning, natural-language fields it hopes to bring together to build an automated system that will let analysts and others better grasp meanings from large volumes of text documents.
From DARPA: "Automated, deep natural-language understanding technology may hold a solution for more efficiently processing text information. When processed at its most basic level without ingrained cultural filters, language offers the key to understanding connections in text that might not be readily apparent to humans. Sophisticated artificial intelligence of this nature has the potential to enable defense analysts to efficiently investigate orders of magnitude more documents so they can discover implicitly expressed, actionable information contained within them."
In addition, last year IARPA awarded Raytheon BBN Technologies $3 million to explore new methods of modeling what it calls the brain's sensemaking ability. The research could have commercial and military benefits, such as helping the intelligence community analyze fast-moving battlefield video, audio, and text data quickly and accurately, IARPA stated.
According to IARPA, sensemaking refers to the process by which humans are able to generate explanations for data that are otherwise sparse, noisy, and uncertain. It is a core cognitive ability that is central to the work of intelligence analysts, IARPA says. Yet despite its importance, sensemaking remains a poorly understood phenomenon.
Layer 8 Extra
Check out these other hot stories: | <urn:uuid:3dc5e9fd-a0fa-4a92-9ec8-f5c2df7717fe> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2222351/applications/us-sets--1-4m-to-get-unique-metaphor-recognizing-software-system-humming.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00541-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922516 | 837 | 2.515625 | 3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.