Fort Worth, Texas, wants to put as much information at the fingertips of its citizens as it can. The city government is committed to making tax-supported information as accessible as possible. Unfortunately, the need for public access rubs up against limited resources, making it hard to honor that pledge.
"We have a lot of individuals who come in and ask for a zoning map or other types of geographic information," commented Tex Norwood, the city's GIS coordinator. "While their needs are important, we have to stop what we're doing and see what they want." The result for Norwood and the city? Lost productivity.
That's why Norwood is excited about using the World Wide Web as a tool for allowing the public to access the city's geographically-based information. He envisions allowing customers to browse the city's database of maps and geographic information from computers in his office or from the user's home. "With the Web," said Norwood, "information sharing is just going to be a lot easier."
Like a hurricane in June, the World Wide Web has, in the past 12 to 18 months, come out of nowhere and blown apart the computing landscape in the United States. While the average citizen might not notice much difference, business, education and government workers have seen radical changes in how information can
be presented and shared. Suddenly, any computer, irrespective of its operating system or microchip, can access a wealth of information in the form of text, graphics and even video and audio.
Besides reading useful and not-so-useful information on the millions of pages that make up the Web, people can also surf for real estate, new cars, hotels and a slew of other products and services. For governments, the Web has opened up a new way to publish and distribute information to the masses.
Thanks to the Web, public access will never be quite the same. Not surprisingly, maps also have begun to appear on the Web. The first examples have been static. But now it's possible to link the HTML (hypertext markup language) format with map features, allowing users to run simple queries and making maps on the Web much more interactive.
IS IT GIS OR JUST A MAP?
Since the beginning of the year, a number of desktop mapping and GIS vendors have introduced a variety of products that allow people with Web browsers to view maps at various
levels of detail or conduct a certain amount of spatial analysis -- a core GIS function.
"Users can do almost anything they can do with standard GIS," said Chris Wemmers, senior product manager for Strategic Mapping Inc.(SMI), in describing the capabilities of his company's new software for the Web. SMI, producers of desktop mapping software and geographic information, has introduced Map/SDK, a Web-enabling toolkit for generating maps over the Web. According to Wemmers, Map/SDK can create Web pages that allow users to zoom in on sections of maps and to run queries, such as finding the location of stores, offices or facilities.
Similarly, Genasys II Inc. unveiled what it touts as the industry's first full-function GIS interface to the Web. The Spatial Web Broker provides a gateway to Genasys' GIS products so that people with Web browsers, such as Netscape's Navigator, can perform queries based on selections made from various menus.
For example, users can select the type of map layer they wish to see -- hydrography, railroads, roads, landmarks, physical terrain -- and choose the zoom factor they want, then submit a query and view the results. The software can create maps that provide detailed directions from one destination to the next, or highlight a location when an address is entered as a query.
Both the Spatial Web Broker and SMI's Map/SDK use a process that converts a spatial query into an HTML command which then retrieves a preprinted map. The map appears as an image, known as a Graphics Interchange Format (GIF) file, which is retrieved from the Web server the same way a Web page is pulled up when a user clicks on a portion of text that has been formatted with HTML.
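In practice, such a request is little more than an ordinary hyperlink or form submission whose address encodes the map parameters. A purely hypothetical example (the server name, program name and parameters here are illustrative, not those of any actual product) might look like this:

http://maps.example.gov/cgi-bin/mapserver?layer=zoning&zoom=4&x=1250&y=830

The Web server hands the query string to the mapping program, which selects or generates the matching GIF image and returns it to the browser like any other page.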
According to Keith Duncan, a webmaster for Genasys II, the key is creating and displaying the GIS maps as quickly as possible, because most people don't like to wait more than a few seconds for a response. But is all this GIS? Some see it as nothing more than rudimentary mapping.
"There's no real GIS on the Web today," said Ian Nixon, senior marketing manager for Intergraph Corp. "You are just looking at preformatted maps that are being published on the Web." The problem, according to Nixon, is that GIS is a database technology while the Web just deals with documents and images.
Peter Moran, product marketing manager for ESRI, also calls the current crop of geographic applications on the Web just mapping, not GIS. "With the Web you just get one mouse click to make a query," he said. "You can't draw polygons or circles, which require two mouse clicks." The result, he added, is limited functionality.
Moran did point out that some Web sites use check boxes (or, in the case of Genasys' Spatial Web Broker, menu selections) to allow a user to submit a query with values. "But what you are getting is predefined answers," he said. "The current solutions lack high-level analytical or modeling capabilities, all of which make GIS so powerful."
MAPPING ON THE WEB
If your agency or department is used to working with high-end GIS, will you have any use for today's Web-enabled mapping tools? Not likely, according to Moran and Nixon. "Real GIS users need the ability to direct a query into a database, which can respond with intelligent graphics containing associated attribution," said Nixon.
One government official who agrees with that assessment is Raphael Sussman, GIS manager for Scarborough, Ont. "We get such detailed and varied requests from our departments that we would never be able to retrieve answers to queries in real time over the Internet. The graphics side of our GIS alone is 28 gigabytes," said Sussman.
But while Sussman's department might not be able to perform GIS over the Web, that doesn't mean the city won't take advantage of it. By September, Scarborough hopes to make certain maps available on its Web site for public consumption.
For example, the city's economic development office publishes a book of all the vacant industrial lots in the city. The book is essentially reproductions of city maps highlighting the locations of the lots. Sussman plans to develop a page on the city's Web site that developers can use to search for digital maps containing the same information. The city can update the maps on a weekly basis, ensuring that users are viewing current information. "It won't be anything high-level," said Sussman about the mapping service. People who have browsers will be able to ask to see all the vacant lots in a certain area of the city. Those areas will be preselected by the city and available as GIF images. "Since we've anticipated the user's query, we'll have an answer in the form of a map ready for viewing," he pointed out. The same technique will be used for people seeking information on the location of hotels, motels and recreation sites.
For some government officials, GIS is a tool for the masses that's been held back from its full potential because of the high cost of access and the complexity of the software. That's why mapping on the Web is so appealing to local governments. "It's exciting to us from the standpoint that it's not going to require an expensive software package to allow people to view our maps," said Norwood.
Public access to Fort Worth's maps via the Web is beneficial for three reasons, according to Norwood. First, it broadens access to government information. Second, it reduces staff time devoted to customer service. Third, it reduces the taxpayer's cost for using the information.
"We have private companies who come in and buy our maps and then turn around and resell them to the public for a profit," mentioned Norwood. "That's fine, if the companies want to do that, but I don't think it's good for citizens to pay for something they have already paid taxes for."
Government workers can also benefit from mapping on the Web. The fastest-growing aspect of the Net is intranets: Web pages that exist solely within an organization while software firewalls protect the information from outside use or tampering. Fort Worth is in the process of building its own intranet, and Norwood sees it as an opportunity to allow each department to have its own piece of GIS. "We have workers who only need to look at maps or run queries once a week, or even less," he said.
For those casual users who don't have time to learn how to run GIS software, the Web is a great solution. For example, the election season creates high demand for maps with street names, house numbers and other voting information, but their use occurs just once a year. Using a browser, politicians can easily get the information they need in minutes by themselves, versus the hours that professional GIS analysts must commit to perform the work for them.
Intranets can cut GIS computing costs in other ways as well. Workers will only need a browser to access the Web page where the city's maps can be queried. They won't need full-function GIS software on their computer and all the required horsepower, memory and storage to run the application.
The city's computer programmers and networking analysts won't have to invest time and energy trying to network and program computers for GIS. Thanks to the browser and the open protocols of the Internet, compatibility issues are no longer problems.
WHERE NEXT WITH THE FAST-MOVING WEB?
One of the first mapping applications on the World Wide Web began operation nearly two years ago at the Social Sciences Data Center and GIS Laboratory at the University of Virginia Library. Paul F. Bergen developed the application using 1992 TIGER/Line files from the U.S. Census Bureau, Arc/Info from ESRI and NCSA's Mosaic browser.
According to Bergen, who is coordinator of the Instructional Computing Group and Social Sciences at Harvard University, the purpose of the project was to help train students on how to create maps. The key to making the project work, he found, was to simplify everything as much as possible. "To work with mapping on the Internet," he said, "simplification is the rule."
Today, simplicity is still a key success factor, but already demand is increasing for some of the sophisticated capabilities that exist in mainstream GIS technology. One way to deliver that capability lies with Java, Sun Microsystem's software programming language for the Web.
Java is an application development tool that allows software developers to write small programs that can move from computer to computer over the Internet. When a person uses a browser to view a Web page, the Java application, known as an applet, is launched, allowing an interaction to occur. That interaction might be an animated sequence or the retrieval of some information from a database.
Some GIS vendors are looking at Java as a way to enhance how people use GIS on the Web. ESRI is working with Java so that when a user browses a Web page, Java presents an interface that replicates the one used with Arc/Info. "Java could give users greater usability and control of our software over the Web," said Moran.
Intergraph is also examining ways to improve the use of GIS over the Web, although the company declined to discuss its strategy. "We're taking the Web very seriously," said Nixon, who believes the greatest opportunities for GIS will be found with intranet applications. He mentioned advances in display techniques, compression technologies and certain formats that can wrap attribute information with vector data, as some of the signs that GIS could move closer to the Web in the near future.
Like so much else about the Web, these developments are coming soon. When asked if these technologies will be available in a few years, Nixon replied, "No, it's just months away."
For more information, contact Raphael Sussman, GIS manager, 416/396-4141.
*A number of sites on the World Wide Web provide examples of interactive mapping. Genasys maintains a site that uses its Web Broker tool at .
Strategic Mapping is working with Oracle to develop a site that runs Map/SDK. For more information, check Oracle's main site at .
Keep up with Intergraph's GIS and Web developments by checking their home page at: .
ESRI also has a home page at . In addition, ESRI provides links to a host of other GIS-related Web sites at: .
According to Peter Moran, some of the choice selections include:
* U.S. Census Bureau . Maintains maps from a special binary version of the TIGER line files.
* University of California, Davis, Information Center for the Environment . Users can query information regarding California's natural resources.
* Xerox PARC Map Viewer . A well-known example of map creation over the Web.
* University of Virginia . Paul Bergen's interactive mapping site of Virginia's counties. Maps can be downloaded in Arc/Info export format.
A virus pretends to be a cure against "Chernobyl"
23 Apr 2000
Cambridge, UK, April 24, 2000 - Kaspersky Lab Int., a fast-growing international anti-virus software development company, announces the discovery of a new computer virus Win32.Santana, which has been distributed via the Internet and e-mail under the name of NOCIH.EXE, pretending to be a universal cure against the "Chernobyl" virus.
Detection and disinfection routines for Win32.Santana virus have been included in the emergency update for AntiViral Toolkit Pro (AVP). The update is available on Kaspersky Lab's Web site on www.kasperskylabs.com.
"This virus poses no serious threat to computer users. Nevertheless we would like to warn users about the possible emergence of this dangerous and malicious software, that masquerades as a vaccine against the "Chernobyl" virus, which will be activated on April 26," said Eugene Kaspersky, Head of Anti-Virus Research at Kaspersky Lab. | <urn:uuid:99481806-74b6-41c0-a84d-c86dda9f152f> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2000/A_virus_pretends_to_be_a_cure_against_Chernobyl_ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00148-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.883748 | 230 | 2.515625 | 3 |
Anyone who is in the security arena should know about Windows Alternate Data Streams, otherwise known as ADS. Though not highly publicized, a lack of familiarity with this little-known attribute of the Windows NTFS file system may affect how you solve a problem in the future.
ADS were introduced into the Windows NTFS file system starting in Windows NT 3.1. This "feature" was implemented in order to allow compatibility with the Macintosh Hierarchical File System (HFS). In brief, the Macintosh file system stores its data in two parts, the resource fork and the data fork. The data fork is where the data is actually contained and the resource fork is used to tell the operating system how to use the data portion. Windows does a similar thing through extensions such as .bat, .exe, .txt, .html. These extensions tell the operating system how to use the particular data found in the files.
For Windows to be compatible with the Macintosh file system, Microsoft introduced alternate data streams. This hidden stream is used much like the resource fork: to tell the system how to use the data contained in the file.
Though ADS was created for compatibility with the Mac world, it is not solely used for that purpose. Many applications use ADS to store attributes of a file. For example, if you make a text document, right-click it and go into its properties, you will see a summary page. This summary information is attached to the file via ADS. I will show you more on that later, along with applications that can view this information.
In summary, think of ADS as hidden files that are attached to the visible ones. The main reason they are so dangerous is that they are not well known, are generally hidden to the user, and that there are few security programs that can recognize them.
Programs to view ADS
Before I continue, I want to mention some, though not all, of the programs that can be used to view ADS. This is so that, as you read this tutorial and follow the examples, you can actually see the ADS files that you are creating.
The programs are as follows:
Lads - http://www.heysoft.de/Frames/f_sw_la_en.htm
How to make an ADS
From a command prompt, the following is an example on how to make an ADS:
C:\test>echo "ADS" > test.txt:hidden.txt
A new ADS called hidden.txt has just been created and attached to the file test.txt. The ADS is the part shown after the colon (:), and the colon must be used when adding an ADS.
If you do a DIR in that directory all you see is the normal file.
C:\test>dir
 Volume in drive C has no label.
 Volume Serial Number is B889-75DB

 Directory of C:\test

10/22/2003  11:22 AM                 test.txt
On the other hand, if you run LADS, you can see the ADS, hidden.txt, attached to the test.txt file.
C:\test>lads

LADS - Freeware version 3.21
(C) Copyright 1998-2003 Frank Heyne Software (http://www.heysoft.de)
This program lists files with alternate data streams (ADS)
Use LADS on your own risk!

Scanning directory C:\test\

      size  ADS in file
----------  ---------------------------------
         8  C:\test\test.txt:hidden.txt

8 bytes in 1 ADS listed
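On newer versions of Windows (Vista and later), the /R switch of the DIR command will also list streams without any third-party tool:

C:\test>dir /r

This shows test.txt along with an extra entry in the form test.txt:hidden.txt:$DATA, where $DATA is simply the stream type.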
If you wanted to view the ADS hidden.txt, or add information to it, just run notepad to open the file.
C:\test> notepad test.txt:hidden.txt
This will open the file in notepad and allow you to edit it and save it.
You can also use Notepad to create an ADS file. Just type notepad followed by the file name, a colon, and the name of the stream you want to create, for example:

C:\test> notepad test.txt:ads.txt

Notepad will launch, say that the file does not exist, and ask whether you would like to create it. You would say yes, then enter the information and save it. This method has just created a new ADS called ads.txt.
ADS files do not have to be attached to a file, but can also be attached to a directory. This causes a problem when you create an ADS against the root of a hard drive, as it makes it impossible to remove the ADS unless you reformat. If someone knows of a program that can fix this, please let me know.
Here is an example on how to make an ADS against a directory:
C:\test> echo test> :hidden.txt
This command has now attached an ADS to the directory itself. Run LADS to see the ADS attached to the directory.
What is so harmful about this?
What if I told you that ADS can also be used with executable files? That's right, executable ADS files can be attached to any file just like you attached the .txt files, and just like the text files, they would be hidden from most software.
Here is an example:
C:\test> type c:\windows\notepad.exe > ads.txt:hidden.exe
You have now created an ADS file called hidden.exe and attached it to the text file ads.txt. Once again, if you Dir the directory you will just see ads.txt, and not hidden.exe. Run LADS, and you will see the ADS.
There is a caveat to launching executable files that are ADS files. You must always use the START command to launch the ADS executable and you must always use the full path of the file. Here are some examples of working commands and non-working commands.
I will first make my ADS executable:
C:\test> type c:\windows\notepad.exe > ads.txt:np.exe
Commands that try to launch the np.exe ADS executable without the START command, or without the full path, fail with errors such as:

The filename, directory name, or volume label syntax is incorrect.

Access is denied.
The command that will launch the executable:

C:\test>start c:\test\ads.txt:np.exe
As you can see, you must use the full path of the ADS executable file.
How to Delete ADS Files
ADS files are not particularly hard to delete, but they can cause problems. In order to delete an ADS attached to a file, just delete the file. Let's say, for example, that you have a file called number.txt with an ADS attached called hidden.txt. You want to get rid of the hidden.txt stream but keep the info in number.txt, so you just can't delete number.txt.
In order to do this you would do something like the following:
C:\test>ren number.txt temp.txt
C:\test>type temp.txt > number.txt

Because the type command copies only the file's default data stream, the new number.txt no longer carries the hidden.txt ADS. You can then delete temp.txt, which still has the stream attached:

C:\test>del temp.txt
In order to delete ADS files that are attached to a directory, you need to delete the directory. This can cause a major problem if the ADS is attached to the root of a hard drive. Since you cannot delete the ADS in this way unless you reformat the drive, you can do this to get rid of the unwanted information in the ADS file.
C:\test>echo empty > filler.txt
C:\test>type filler.txt > :badads.txt
Update - 11/11/04
Since I wrote this tutorial there have been a few malware programs that have been released to infect your machine using Alternate Data Stream files. Due to this there have been improvements in the software available to remove these types of programs from your computer. One program that will search for ADS files on your computer and then provide a list that you can remove is ADSSPY. You can find a link to that program below:
Other Uses for ADS
In the beginning I mentioned that there are other uses for ADS files. Certain files in Windows have a summary tab in their properties. One example of this is .txt documents. If you create a new .txt document, and right click on it, and select summary, you can fill in some information.
This information is saved as ADS files attached to the document. For example, we have a file called readme.txt. If I go into the summary section and enter my name into the Title field and press OK, that information will be saved as an ADS.
You can see this as follows:
LADS - Freeware version 3.21

Scanning directory C:\test\

      size  ADS in file
----------  ---------------------------------
(the three streams holding the summary information are listed here)

131 bytes in 3 ADS listed
As you can see ADS can definitely be used for much more than was bargained for when Microsoft introduced them. They have the legitimate uses, but can definitely be used for darker intentions.
In summary here are the reasons why ADS can be considered bad:
- There are few programs that detect ADS.
- Removing ADS can be difficult.
- Explorer and DIR do not include the space used by ADS when calculating file sizes and free space.
- You can hide an executable as an ADS.
Too many sources over the years, but the people at NTBugTraq, Heysoft, Security Focus, DiamondCS, Crucial, and the other writers of ADS tutorials deserve mention. There are some excellent articles about ADS, found via Google, that do a much better job than I do in explaining ADS. I would recommend you take a look.
CUPERTINO, CA--(Marketwired - February 11, 2014) - By high school, half of all students (51 percent) carry a smartphone to school with them every day* and classroom usage of tablets, netbooks and other portable devices is on the rise. To address this trend, Curriki, the leading K-12 global community for creating, sharing, and finding open learning resources, today unveiled the Curriki Geometry website where usability and page design for its innovative Project Based Learning (PBL) geometry curriculum is optimized for mobile devices.
Available for free, students and teachers now have access to a geometry curriculum that is designed to meet the needs of students born in a global, interactive, digitally-connected world through the use of real-world examples, engaging projects, interactive technologies, videos and directed student feedback.
Curriki Geometry is a set of six Common Core Aligned projects delivered in a mobile-optimized web environment with access points for students and teachers. Teachers are provided with pacing guides, formative assessments, rubrics, guidance on managing a PBL project, tools to help teachers guide students as they learn to collaborate with each other, and reflection tools for both students and teachers. The projects require the use of technology tools, which students use so much in their daily lives, to investigate, evaluate, and collaborate.
The Buck Institute for Education's research shows that PBL is effective in building deep content understanding, raising academic achievement, and encouraging student motivation. "In Curriki Geometry, the projects relate to the real world, so students never have to ask 'why am I learning this?'" commented Kim Jones, Curriki's CEO. "Students become active, not passive, learners. Through working together in a more 'real-world' manner, they take ownership of their learning."
The ConnectED initiative's planned E-Rate Reform will help connect more students and provide faster access to Internet in schools so that digital learning resources like Curriki Geometry can become more widespread.
"Curriki's project-based curriculum delivered through a mobilized platform engages students in learning geometry in a new and innovative way," said Nicole Anderson, Executive Director of Philanthropy, AT&T. "It's exciting to see how the technology kids interact with every day is being applied to teaching and learning STEM skills, which are so critical for the jobs of the future."
Curriki is grateful for the generous support of its sponsor, AT&T. AT&T's support includes a $250,000 contribution and annual in-kind hosting services. The work with Curriki is part of AT&T Aspire, the company's $350 million investment in education to help more students graduate from high school ready for college and careers.
For more information about Curriki Geometry Online, visit www.currikigeometry.org.
A non-profit organization, Curriki is the leading K-12 global community for teachers, students and parents to create, share, and find open learning resources that improve teacher effectiveness and student outcomes. A Computerworld Honors Laureate for 2012, Curriki was selected as the 21st Century Achievement Award winner for Digital Access. With more than 380,000 members and 54,000+ learning assets, Curriki reaches more than 9 million users worldwide. Join today www.curriki.org.
* Living and Learning with Mobile Devices study, Grunwald Associates LLC, 2013
When it comes to your computers, can you really ever be too secure? Seems there’s always something new to learn that can help you ensure your systems are not threatened. For example, did you know you can make Secure Shell (SSH) even more secure so it works even harder for you? SSH adds a layer of encryption to transmissions to make sure you can connect to your dedicated server without having your password intercepted.
Here are five security ideas you may not know about to secure your SSH server:
1.) Brute Force Detection software should be installed. Often attackers will try brute force methods to learn your password and then attack your server. Installing good brute force detection software helps neutralize these attempts on your server the minute they start.
2.) Root logins are unnecessary. Under normal circumstances, there is no reason to allow direct root logins to your server and take the risk of having your root account directly exposed to the Internet. By restricting root logins, you make it harder for outside attackers to gain access. The system administrator can become root once logged in using su or sudo (see the configuration sketch after this list).
3.) Use chroot to restrict users to their own home directories. Linux and Unix servers have permissions in place that prevent a normal user from damaging your files, but they do not stop that user from seeing the files. So if you use chroot, you can keep those users within their own directories.
4.) Demand secure passwords and periodic rotations. When you are the system admin, you can require computer users to adhere to your password strength requirements and also demand that users change their passwords now and again.
5.) Prevent staying logged in. Also as the system admin, you can set the timeout interval in the SSH configuration file so users do not stay logged in. This can keep people from sneaking into user accounts that are always logged in.
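To make tips 2, 3 and 5 concrete, here is a minimal sketch of what the relevant directives can look like in the OpenSSH configuration file, /etc/ssh/sshd_config. The group name sftponly is an illustrative assumption, distributions ship slightly different defaults, and ChrootDirectory has ownership requirements of its own, so treat this as a starting point rather than a drop-in configuration. Restart the SSH service after editing the file.

# Tip 2: no direct root logins; admins log in as themselves and use su or sudo
PermitRootLogin no

# Tip 5: drop clients that stop responding to keepalive probes
# (for a true idle logout, many setups also set the shell's TMOUT variable)
ClientAliveInterval 300
ClientAliveCountMax 2

# Tip 3: keep members of the "sftponly" group inside their home directories
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp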
Do you think you can use any of these ideas in the future? What tips do you have that you can share with other system admins? Please let us know!
Linus Torvalds began writing the Linux operating system in 1991 to teach himself how to use the Intel 80386 chip. As such, the program was initially designed to take advantage of all of the chip's idiosyncrasies. Since then, Linux has become significantly more portable, running on everything from PCs to mainframes to embedded devices.
Already, IBM sells about 50 percent of its solutions on Linux platforms, according to Tom Swett, global general manager, IBM Financial Markets. "We've demonstrated that the cost of computing with Linux is going to be more efficient in a number of ways," he says.
Attempting to regain the ground it ceded to Microsoft and Intel with the growth of Windows in the 1990s, IBM has released a low-end server designed expressly to run Linux. Returning the favor, Linux has evolved the capability to take advantage of the specific features of IBM's Power Architecture, an open chip design initially developed by IBM, Apple Computer (Cupertino, Calif.) and Motorola (Schaumburg, Ill.). "As Linux and its capabilities continue to evolve into the mainstream, we've got the platform that will allow that transition to happen quickly," says Dwight Tausz, business development executive for IBM's Linux on POWER initiative.
IBM Power Architecture can allow a single server to act as if it were several. "You can take a four-way [OpenPower] 720 and make it look like a virtual 40-way server," says Tausz. "Linux kernel 2.6 allows you to take advantage of that processor and that virtualization."
In practical terms, virtualization allows an enterprise to consolidate all of its various servers onto a single box, keeping the individual applications intact and their connections with the network unchanged. While this capability had been present in the Power Architecture chips, it has now been unlocked by the latest version of the Linux operating system.
Once an operating system and a computer chip have learned to work in harmony, the next step is finding applications.
Sybase (Dublin, Calif.) has teamed up with IBM to offer its enterprise database software on these IBM Power Architecture servers using Linux. "This new partnership will provide world-class, mission-critical 24/7 support worldwide," says David Jacobson, senior director of product marketing, Sybase. "It's backed by the largest number of Linux experts anywhere in the world, between Sybase and IBM."
Their first target market is financial services, where Sybase has its strength in capital markets and insurance. In banking, the company's biggest plays can be found overseas. "We just signed Bank of China -- among the three largest banks in China and one of the five largest in the world -- to run Sybase," says Jacobson. "They're looking for departmental solutions to run their bank branches, and they're looking for enterprise-wide systems to roll those bank branches together."
The January '05 issue of BS&T will include further coverage of banks' choices in the operating systems market. Watch for it!
IP is the Internet Protocol. It is a mechanism by which packets may be routed between computers on a network-of-networks. IP allows computers to be connected using various physical media, ranging from modems to Ethernet cabling, fiber-optic cables and even satellite and radio links.
IP is designed to be robust, and to gracefully handle the loss of some connections. Individual packets of data are routed by hosts with little knowledge of the overall network structure - just a few local routing rules.
As its name implies, the global Internet is constructed using the IP protocol.
NASA’s Mercury-exploring spacecraft MESSENGER has one more job – crash its 1,100lb body into the planet on April 30, ending one of the most successful scientific explorations in the space agency’s history.
MESSENGER is now out of fuel, and the Sun's gravity will draw the spacecraft into the planet on April 30 at about 8,750 miles per hour, creating a crater as wide as 52 feet, NASA says. On April 25, MESSENGER was in an orbit with a closest approach of 5.1 miles above the surface of Mercury, following a maneuver that changed its velocity by 3.43 miles per hour.
+More on Network World: 10 game-changing space galaxy discoveries+
"Navigating a spacecraft so close to a planet's surface had never been attempted before, but it was a risk worth taking given mission success had already been met, and the novel science observation opportunities available only at such very low altitudes," said Bobby Williams, who leads the KinetX Space Navigation and Flight Dynamics group in a statement.
Launched in 2004, NASA's $446 million MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) spacecraft was sent on a rendezvous with Mercury, where it beamed back never-before-available pictures and data on the planet.
The spacecraft discovered a number of important details about the planet. For example, NASA said observations by the MESSENGER spacecraft have provided compelling support for the 20-year old hypothesis that Mercury hosts abundant water ice and other frozen volatile materials like chlorine in its permanently shadowed polar craters.
This week the University of Michigan asked Jim Raines, University of Michigan research scientist and MESSENGER team member, to help quantify the crash and offer up some insight into the mission. He offered the first six facts, and we added a few more from NASA.
1. Meteors with the same mass as MESSENGER (513 kg) slam into Mercury about every month or two, and typically with 10 times the speed and 100 times the energy. The planet doesn’t have a thick atmosphere that would slow down objects headed for the surface.
2. The crater the craft will leave near Mercury’s north pole is predicted to be about 50 feet wide. That’s the width of an NBA basketball court.
3. The 1,131-pound spacecraft will hit with the energy of about a ton of TNT, or the force of a car traveling at about 2,000 mph.
4. At almost 9,000 mph, the craft will be traveling three times faster than a speeding bullet and nearly twelve times the speed of sound.
5. On MESSENGER’s last orbit, it will pass just 900 to 1,800 feet over the planet’s surface. We have buildings that tall on Earth.
6. Nearly 55 percent of MESSENGER’s weight at launch was fuel - which is about to run out.
7. Mercury has a diameter of 3,032 miles, about two-fifths of Earth's diameter. Mercury orbits the sun at an average distance of about 36 million miles (58 million kilometers), compared with about 93 million miles for Earth.
8. Because of Mercury's size and proximity to the brightly shining sun, the planet is often hard to see from the Earth without a telescope. At certain times of the year, Mercury can be seen low in the western sky just after sunset. At other times, it can be seen low in the eastern sky just before sunrise.
9. Mercury travels around the Sun in an oval-shaped orbit. The planet is about 28,580,000 miles from the sun at its closest point, and about 43,380,000 miles from the sun at its farthest point. Mercury is about 48,000,000 miles from Earth at its closest approach.
10. Mercury moves around the Sun faster than any other planet. The ancient Romans named it Mercury in honor of the swift messenger of their gods. Mercury travels about 30 miles per second, and goes around the sun once every 88 Earth days. The Earth goes around the sun once every 365 days, or one year.
11. As Mercury moves around the Sun, it rotates on its axis, an imaginary line that runs through its center. The planet rotates once about every 59 Earth days -- a rotation slower than that of any other planet except Venus. As a result of the planet's slow rotation on its axis and rapid movement around the Sun, a day on Mercury -- that is, the interval between one sunrise and the next -- lasts 176 Earth days.
12. MESSENGER is only the second spacecraft sent to Mercury. Mariner 10 flew past it three times in 1974 and 1975 and gathered detailed data on less than half the surface. MESSENGER took advantage of an ingenious trajectory design, lightweight materials, and miniaturization of electronics, all developed in the three decades since Mariner 10 flew past Mercury.
In 1863, Sacramento was the western terminus for the transcontinental railroad that stretched 1,907 miles and linked the nation in 1869. But these days, California's proud history of railroading isn't pulling much weight.
In 2008, California voters approved $9 billion in rail bonds for construction of the state's high-speed "bullet train" to connect Los Angeles and San Francisco. Since then, however, estimated costs have ballooned to nearly $70 billion, with one initial segment -- from Merced to the San Fernando Valley -- costing an estimated $30 billion. As many in the state anticipated, the going has been rough, with political opposition, environmental concerns, lawsuits and escalating costs.
Gov. Jerry Brown even pushed through legislation to speed up environmental reviews, angering some of his Democratic colleagues.
The latest blow came in November 2012, when Sacramento County Superior Court Judge Michael Kenny restricted use of the voter-approved funds, ruling that the state's 2011 funding plan was inadequate, and refused to validate the bond sale. State Treasurer Bill Lockyer refused to sell the bonds without validation.
While Kenny's ruling won't stop the project, experts say it certainly restricts funding needed to match Federal Railroad Administration grants, and the bond issue may go back to the voters for reconsideration.
But according to a Los Angeles Times article, the judge did not approve a restraining order on the project and did not invalidate the bonds, and so a major concern of proponents is that additional delays may erode public confidence.
Congress is weighing in now, according to media reports, with some Republicans saying they will stop further funding and investigate why $3 billion in federal funds was authorized even though the state matching funds were not forthcoming.
The following steps will help you emulate a DNS change through the Windows 'hosts' file on your PC.
The 'hosts' file is a text file that contains entries, each on its own line, consisting of an IP address, at least one space, and then a domain name. Each entry specifies to your PC what IP address a domain name should point to.
During the website migration process, while our Support team tests every migration before marking it as complete, you can also test whether the website is working exactly as expected. Our recommended way of testing your website before changing the DNS settings is through the use of a modified 'hosts' file.
Modifying your 'hosts' file allows you to override the DNS for a domain name on your PC. By manually setting the IP address for a particular domain name and telling your PC where to go instead, you can send only your PC to the new server without affecting the live site at all.
To perform a 'hosts' file modification, you will first need to run Notepad as an administrator. This is because the 'hosts' file is a system file and cannot be modified otherwise.
To perform a 'hosts' file modification in Windows 7, follow the steps as mentioned below.
Note: Modifying the 'hosts' file on your PC incorrectly can interfere with name resolution. Make sure to keep a backup copy of the 'hosts' file before modifying it.
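For example, assuming the new server's IP address is 203.0.113.10 and your domain is example.com (both placeholders), you would open the file C:\Windows\System32\drivers\etc\hosts in the Notepad window you started as administrator, add a line like this at the end, and save the file:

203.0.113.10 example.com www.example.com

Any request your PC makes for example.com will now be sent to 203.0.113.10, regardless of what the public DNS says.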
Once you have modified your 'hosts' file, it is recommended that you flush the existing DNS cache from your PC so that your 'hosts' file changes can take effect immediately.
Your DNS cache stores the locations (IP addresses) of web servers that contain web pages which you have recently viewed. If the location of the web server changes before the entry in your DNS cache updates, you cannot access the website. After you clear your DNS cache, your PC will query nameservers for the new DNS information.
To flush your DNS cache if you use Windows 7, perform the following steps:
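In short, this is done from a Command Prompt with the ipconfig utility; the exact wording of the confirmation message may vary slightly between Windows versions:

C:\> ipconfig /flushdns

Windows IP Configuration

Successfully flushed the DNS Resolver Cache.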
The easiest way to see if your 'hosts' file modification worked is to run a ping test. To test that the domain name is pointed to the correct IP in your 'hosts' file, please follow the steps as mentioned below:
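Using the placeholder entry from above, the test would look like the sketch below; the important part is that the reply comes from the IP address you put in the 'hosts' file (203.0.113.10 in this example) rather than the site's old address:

C:\> ping example.com

Pinging example.com [203.0.113.10] with 32 bytes of data: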
You are done.
Updated on Nov 11, 2015
Talmadge M.S.,National Renewable Energy Laboratory |
Baldwin R.M.,National Renewable Energy Laboratory |
Biddy M.J.,National Renewable Energy Laboratory |
McCormick R.L.,National Renewable Energy Laboratory |
And 9 more authors.
Green Chemistry | Year: 2014
Pyrolysis offers a rapid and efficient means to depolymerize lignocellulosic biomass, resulting in gas, liquid, and solid products with varying yields and compositions depending on the process conditions. With respect to manufacture of "drop-in" liquid transportation fuels from biomass, a potential benefit from pyrolysis arises from the production of a liquid or vapor that could possibly be integrated into existing refinery infrastructure, thus offsetting the capital-intensive investment needed for a smaller scale, standalone biofuels production facility. However, pyrolysis typically yields a significant amount of reactive, oxygenated species including organic acids, aldehydes, ketones, and oxygenated aromatics. These oxygenated species present significant challenges that will undoubtedly require pre-processing of a pyrolysis-derived stream before the pyrolysis oil can be integrated into the existing refinery infrastructure. Here we present a perspective of how the overall chemistry of pyrolysis products must be modified to ensure optimal integration in standard petroleum refineries, and we explore the various points of integration in the refinery infrastructure. In addition, we identify several research and development needs that will answer critical questions regarding the technical and economic feasibility of refinery integration of pyrolysis-derived products. © 2014 The Royal Society of Chemistry.
The creators of Kano, a kit to assemble your own mini-computer, asked for $100,000 on Kickstarter. One day short of the end of the company’s campaign, they’ve raised $1.35 million from nearly 12,000 backers including Apple co-founder Steve Wozniak and Kickstarter co-founder Yancey Strickler. Alex Klein, one of the three creators of Kano, says people from 44 countries have chipped in (though the majority—some 40%—are American).
Why do these people, all of whom presumably own readymade computers from which they clicked the “back this project” button on Kickstarter, want to build their own machines? It has a lot to do with Kano’s sleek look and its promise to teach them how computers actually work. To do that it uses something called the Raspberry Pi, a $35 British micro-computer that got a lot of attention earlier this year when it hit the twin milestones of shipping a million British-made units and 2 million overall.
The Raspberry Pi, like the Kano, promises a DIY experience to inexperienced amateurs. Yet it poses a challenge for the computer illiterate: On its own, the Pi does nothing. It is just a circuit board. Enthusiastic amateurs who buy a Pi discover that they then have to locate peripherals (like a power cord and a screen), download and install an operating system, learn how to interact with a command line and, when they finally have it set up, figure out what to do next. Despite being built to teach basic computing skills to young people, the Pi found its biggest success among the existing hacker community.
This week we're taking you behind the scenes of your cloud environment and looking at vSphere virtual networking, including the difference between virtual switches/networks and their physical counterparts, plus the primary configuration options for vSwitches and VLANs.
Similar to the general concept of virtual machines, virtual network switches aren’t really much different from their physical counterparts. Incoming traffic follows regular TCP/IP network protocol and traverses the same layers, so practical management remains the same. That means the servers running your virtual machines can use regular network adaptors from pretty much any vendor. Unlike physical network components, vSwitches allow on-the-fly configuration changes without ever touching your server hardware.
At the Layer 2 level (OSI Data-Link layer model), vSwitches only pass network traffic to MAC addresses that are assigned to connected devices. In other words, if a virtual machine or port device is not actively using a virtual port on the vSwitch, the switch does not know it exists. With a physical switch, a large table of MAC addresses is stored in memory, and unknown incoming traffic is sent to all of them in order to learn where to send that traffic in the future.
“But wait”, I hear you saying. “What if one of my VMs sends an unknown MAC address through my virtual switch? Won’t it just ignore that traffic?”
vSwitches only ignore incoming network frames when they are coming from an external source. Internal source traffic gets passed on to a physical uplink. External VLANs, like a separate virtual machine cluster or network segment, use Layer 3 switching. Therefore if the MAC address is assigned to another VLAN, the physical uplink is also engaged.
These physical uplinks, or NIC network adapters, pass data into and out of the virtual data center. Your virtual machine host servers will have NICs installed. If you aren’t using your virtual servers to communicate outside of themselves, whether with external networks or the larger internet, you don’t need a NIC. Otherwise, you can use a single NIC or even a couple dozen, allowing many Ethernet ports.
vSwitches can scale their ports to the currently provisions VMs and hosts, keeping memory consumption down and adding them when necessary. These elastic ports are automatically managed by vSphere.
Each virtual network adapter attached to a VM uses a virtual port. Virtual ports act as a bridge between the vSwitch and the NIC. There are also specific types of ports called VMkernel ports. These allow the actual host platform to connect to vCenter and other hosts on the network. Functions include vCenter and vSphere management, vMotion, fault tolerance, and storage traffic (iSCSI or NFS).
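As a quick illustration (assuming you have shell access to an ESXi host and a reasonably recent esxcli), you can list the VMkernel interfaces (vmk0, vmk1, and so on) defined on a host with:

esxcli network ip interface list

Each vmk interface returned corresponds to one of these VMkernel ports.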
Virtual LANs help control the network traffic being forwarded by switches. VLANs allow segments of the network to only send traffic to each other by default. Normally, a host would receive and respond to every incoming request, which can cause a cascading effect across the entire network. To send traffic between VLANs, as mentioned above, a Layer 3 device such as a router must be engaged, unless you have a network switch capable of routing.
In vSphere, there are three types of VLANs.
External Switch Tagging (EST) VLANs keep the physical uplink in charge of VLAN network tags. Network traffic destined for different VLANs are passed through an external physical switch rather than the vSwitch. The physical ports on the switch have Access mode activated under EST, which strips the VLAN tag before passing on the traffic. Because of this, EST is best suited to single VLAN configurations. All traffic on the vSwitch must use the same VLAN, which is configured to the access port on the physical switch.
Virtual Switch Tagging (VST) has the vSwitch inspect, add, or remove VLAN tags. The upstream port is set as a trunk port rather than an access port, allowing traffic for a predefined set of VLANs through to the vSwitch. The vSwitch checks the VLAN tag and the MAC address to determine the end destination. If it recognizes the VLAN tag, it removes the tag before passing on the traffic. If it doesn't know the VLAN tag, the traffic is dropped. Going the other direction, such as from a virtual NIC or VMkernel port, the vSwitch adds a VLAN tag before passing the traffic on to the physical uplink.
Virtual Guest Tagging (VGT) keeps the trunk configuration on the upstream physical switch, so VLAN tags make it through to the vSwitch. The vSwitch, however, does not strip the tag before passing it on. The virtual machine must be configured to read VLAN tags and add them to outgoing traffic as well. This can be manually set under Ethernet Adapter settings in the Operating System of a VM. Use cases for VGT include VMs that must monitor or route network traffic.
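As a minimal sketch of how these three modes are selected on a vSphere standard switch: the VLAN ID you assign to a port group determines the mode (0 for EST, meaning no tagging in the vSwitch; 1-4094 for VST; and 4095 for VGT, which passes all tags through to the guest). Assuming a port group named PG-Guest-Tagging and esxcli syntax from vSphere 5.x or later, enabling VGT from the command line might look like this:

esxcli network vswitch standard portgroup set -p PG-Guest-Tagging -v 4095

The same setting is exposed as the port group's VLAN ID field in the vSphere client.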
The age of big data has just given us an accidental insight in the wake of a recent significant earthquake in California. It turned out that data points from fitness trackers out in the public indicated how many people were awoken during the 3am Bay Area quake. That was a little bit of a peek into the future of the internet of things and an even bigger peek into how big data can open the door to unexplored insights.
Now, the finding that an earthquake jolted people from their sleep isn’t exactly surprising, but consider that scientists have been throwing computing at seismic research for a very long time. The big data use case is accelerating these efforts. In fact, one of the biggest benefits of this data analysis is evaluating risks. Everything from fault data, to soil types, to population are integrated to help plan emergency resources, building codes and crisis planning. The point is there are many flexible points of data that have yet to be integrated and analyzed into the picture, a classic big data case.
How big data works with Hybrid Hosting
By now, you may know that big data generally has three characteristics — volume, variety and velocity. If you look at the seismic research example, you can actually segment the basic logical computing structures by computing demands. As you’ll see, the big data computing model maps perfectly to the hybrid cloud computing model.
- Bare metal: The most obvious segment is the data backbone where all the fun stuff happens – crunching numbers, analyzing data and delivering the biggest value. This is a high-CPU, high-memory, high-capacity tier that is a natural fit for the bare metal server layer of a hybrid environment.
- Cloud: Collection points exist everywhere in the big data model. In the case of seismic research, that could mean sensors, geographical data, soil types and so on. These workhorses collect staggering amounts of minute data and still require performance, but they aren't processing the really big stuff. The need to collect more data, and newer data, means these elements have to be flexible, portable and quickly deployed. This is the cloud level of the hybrid model: powerful, cloud-based and available as you need it.
- Data: It's called big data for a reason. Lots and lots of data. Live data. Data from a week ago. Data from a year ago. Maybe even data kept forever. Every organization has different needs and requirements for data strategy and retention. Sometimes those are dictated by the value sought in what is being analyzed; sometimes there are regulatory demands. At the end of the day, the customer decides what to keep, and that requires flexibility. Hybrid cloud is the simplest model for keeping your data portable, scalable and accessible throughout your environment.
Benefits of bringing big data and hybrid together
Structuring your big data architecture along hybrid cloud computing lines reaps strategic benefits as well. One of the biggest concerns in the data age centers on security. Hybrid infrastructure allows for the isolation of information and control of data while simultaneously integrating cloud features.
Another benefit is that, by its very nature, big data grows, and hybrid scales to those needs in terms of platform, storage, and infrastructure. Hybrid computing is also elastic. Cloud bursting in hybrid environments means spinning up new workloads, so you can quickly add capacity to the big data equation or throw more powerful systems into the mix where needed. This allows an organization to avoid downtime or offload resource demands as needed. Hybrid infrastructure delivers an adaptable platform for big data with full environmental management, scaling to demand and delivering tunable performance.
As we look at the ‘accidental’ revelation from the fitness tracker data, what is most clear is that the big data picture is unpredictable. We don’t have a crystal ball, and it will grow in ways we can’t quite track. Hybrid cloud is the ultimate answer for this as it is completely portable to growth and change. Data analysis itself changes over time, and static environments are not well-suited to this dynamic. The flexible cloud components, powerful bare metal systems, and fluid unified storage and capacity of hybrid cloud is the best platform for your big data needs. | <urn:uuid:c305cfc5-141f-4d05-860b-d497326a7bf1> | CC-MAIN-2017-04 | http://www.codero.com/blog/hybrid-hosting-is-the-answer-to-big-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00397-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933855 | 864 | 2.5625 | 3 |
For its time, the Library of Alexandria was the greatest repository of knowledge in the Western world. Over 500,000 scrolls containing an unimaginable wealth of mathematics, astronomy, geometry and literature reportedly lay in the three buildings that composed the library, located in the city founded by Alexander the Great in the delta of the Nile. Ships entering the Alexandrian harbors were searched for books to be copied and added to the library's shelves, for the value of that knowledge to others was recognized far and wide.
Alexandria's library has long since been demolished, burned to ashes either during an attack on the city by Julius Caesar or by Theophilus in an attempt to leave the Bible as the only source of knowledge. The truth behind the destruction remains unclear, but not the lesson for all subsequent librarians and storage experts: Invest well in your storage systems, create efficient ways to search for items, and maintain copies of everything, for you never know when disaster will strike.
Today's modern Alexandria - the city in Louisiana located just a few hours from the Mississippi delta - may not be as grand as the one of old, but that doesn't change the need for it and countless other municipalities to store today's data safely, retrieve it with ease, and ensure room for future growth.
Over the past few years, the amount of data traveling over Alexandria's wide area network, which stretches across nine government campuses, had nearly doubled thanks to an increase in the number of Oracle users and regular growth in Internet and e-mail use. Everything from data files and server applications to Internet access and e-mail ran across the same system, often overwhelming the capacity of the city's storage devices. Because the city relied on disjointed storage devices - one for each of the nine campuses - each database had to be backed up individually, a process that required an information services employee to drive from campus to campus each day.
What's worse, the entire backup process took up to nine hours to complete. "Often the backup was still going the next day when we were trying to get applications running," said Jimmy Koonce, Alexandria's manager of information systems. "The applications were slow enough even without the backup bumping into it, so it was very painful."
Still, it was a process that had to be done. "The question we asked was, 'What is the true cost of data backup,'" said Koonce. "The answer is, 'How much does it cost if you don't [back up your data]?' Good backup costs far less than lost data."
Cutting Out Duplication
To ensure his information services department could continue to serve Alexandria, Koonce needed to eliminate the lengthy backup times and the labor required to keep each campus up to speed.
To start, the IS department brought in a new storage/backup tape drive platform from Exabyte that reduced backup time from an average of eight hours to an hour and a half. "That greatly reduced the stress we had," said Koonce.
The next purchase was an IBM RS/6000 workstation that ran applications faster and further reduced demands on available processing power.
Finally, Koonce got the municipality to purchase a new storage infrastructure that uses backup and restore software called NetVault. Taking full advantage of the speed of the IBM workstation, NetVault lets data move from disk drive to tape storage media without being processed by the backup servers at each campus. By centralizing data storage, backup can now be handled from one location, thereby eliminating the daily trip to each campus.
Koonce said the NetVault software is more user-friendly than his old storage system. "We couldn't put anything on hold," he said. "If we had something scheduled at 5:30, we either had to let it go at that time or kill it and completely re-input the parameters of the backup. We couldn't reschedule anything. Now we can hold and reschedule backups without having to re-enter anything. The operation is a whole lot smoother."
Koonce said he had little problem getting the project funded. "We've been fortunate in that our city council has been supportive of information technologies. I think everyone here understands the value of having backup."
A Dozen Problems, One Solution
California's Contra Costa County, located on the San Francisco Bay, was facing an equally discouraging situation with its three-year-old server-attached storage system. "We were looking at hitting the storage wall," said Jeana Pieraldi, a network technician for Contra Costa.
Contra Costa stores 100 gigabytes of data annually, and Pieraldi knows that figure will grow in the future as material from decades past is added to the database. "We had talked about changing to larger drives, which would have given us six months of space at a cost of about $45,000," she said. "We knew that wasn't going to work, so we looked at getting another server and replacing the system."
In their search for a new server and storage system, Pieraldi said Contra Costa had particular needs, such as operating across both Unix and Windows platforms, reducing a crippling backup time of three days, and cutting the number of hours Pieraldi spent maintaining the servers. But their main requirement was the ability to expand storage capacity in the future without burdening their current budget. "That was the big requirement from my boss, looking down the road five and 10 years and seeing our needs," she said.
In the end, after examining systems and watching demonstrations from numerous vendors, the county went with a network attached storage system from Auspex that was installed in a day. The Auspex server relies on three microprocessors that do discrete functions so the system is never overloaded by user requests, reports or administrative tasks. Pieraldi said the backup time has dropped from three days to three hours, and she now spends only a couple of hours each day on server maintenance, down from four to six hours.
The system came with 500 gigabytes of storage space and can be expanded to nine terabytes without ever taking the system offline. "That was another thing we looked for because we can't be down," said Pieraldi. "We don't have that option."
The Auspex system is a self-contained unit that simply plugs into an existing network, and its portability helps satisfy the county's disaster recovery needs. "If I have to move it to another building, I just take the system and move it," said Pieraldi.
In addition to running reports faster and retrieving images more quickly, Contra Costa's new system saved them $375,000 compared to the cost of more general-purpose servers, not to mention ongoing savings in terms of reduced labor costs.
Planning for future storage needs qualifies as a decidedly unsexy task, but that's often what public officials hope for. "We answer to the taxpayers," said Pieraldi. "If we can't function reliably for them, they're not very happy with us."
To avoid falling into that situation, Pieraldi said, "Plan down the road, look at all your options, have a clear, defined set of needs, and don't compromise on them."
Koonce seconds the need for a big picture approach. "Every time we try to solve a problem we make sure the backup fits into the backup scheme we have for the city," Koonce said. "If you don't have that kind of plan, then it can get away from you real quickly." | <urn:uuid:68533e7d-b553-4c54-9f11-a4e31b576cf6> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Storage-Envy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00057-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.975325 | 1,534 | 3.03125 | 3 |
NASA's Kepler spacecraft observatory has discovered 2,229 "high-quality (multiple transits), non-circumbinary transiting planet candidates" so far, which orbit 1,770 unique stars.
But what if they all orbited a single star? That's the idea behind this video, created by Alex Parker. The animation creates a very crowded star system, as you can imagine. It's also recommended to watch it in full-screen mode and in HD so you can see even the tiny planets.
There are some actual calculations going on in the video. As Parker explains:
"Using a transit lightcurve, a planet's distance from a star and its radius are both measured in terms of the host stars' radius, and those relationships are preserved here. This means that for two planets of equal size, if one orbits a larger star it will be drawn smaller here. Similarly, because the orbital distances scale with the host stars' sizes, some planets orbit faster than others at a given distance from the star in the animation (when in reality, planets on circular orbits around a given star always orbit at the same speed at a given distance). These faster-moving planets are orbiting denser stars."
So it's not just like he threw a bunch of random circles in the animation.
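To see that the motion in the animation really does encode stellar properties, here is a rough, illustrative Python calculation (my own sketch, not Parker's code). For a circular orbit, Kepler's third law gives P^2 = 4*pi^2*a^3/(G*M); writing the stellar mass in terms of mean density shows that, at the same scaled distance a/R*, a planet around a denser star orbits faster.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_SUN = 1408.0     # approximate mean solar density, kg/m^3

def period_days(a_over_rstar, stellar_density):
    # P = sqrt(3*pi / (G*rho)) * (a/R*)**1.5 for a circular orbit
    seconds = math.sqrt(3 * math.pi / (G * stellar_density)) * a_over_rstar ** 1.5
    return seconds / 86400.0

print(period_days(20, RHO_SUN))      # ~10.4 days at 20 stellar radii around a Sun-density star
print(period_days(20, 4 * RHO_SUN))  # ~5.2 days around a star four times denser: visibly faster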
| <urn:uuid:0bb6ad35-e916-4817-a60e-012402c3a976> | CC-MAIN-2017-04 | http://www.itworld.com/article/2724930/consumer-tech-science/what-if-all-the-known-planets-orbited-a-single-star-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00057-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909598 | 379 | 3.296875 | 3 |
A correlated subquery is a type of subquery in which the outer query fetches one row at a time; using that row's values, the inner query is executed against all rows. The outer query then fetches the second row, the inner query is executed against all rows again, and so on.
To understand this clearly, see the following example, which finds the nth highest salary:
-- N is the desired rank (1 = highest salary); assumes salaries are distinct
SELECT SAL FROM EMP A WHERE N = (SELECT COUNT(*) FROM EMP B WHERE A.SAL <= B.SAL);
Outer Query fetches 1 row and inner query compares it with all salaries. Then outer query fetches 2nd row and so on. | <urn:uuid:4b185ae3-521c-4f05-812a-ea27c56d8121> | CC-MAIN-2017-04 | http://ibmmainframes.com/about1736.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00571-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.885686 | 150 | 2.71875 | 3 |
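The evaluation order described in the post above is essentially a nested loop. Here is an illustrative Python equivalent (the salary list is made up) that mirrors how the correlated subquery finds the Nth-highest salary, assuming distinct salaries:

EMP = [3000, 1500, 4200, 2500]  # hypothetical SAL column
N = 2                           # looking for the 2nd-highest salary

result = []
for a_sal in EMP:                                        # outer query: fetch one row at a time
    count = sum(1 for b_sal in EMP if a_sal <= b_sal)    # inner query runs against all rows
    if count == N:                                       # keep the row whose count equals N
        result.append(a_sal)

print(result)  # [3000], the 2nd-highest salary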
Two vs. Three Data Centers
Having a third or even fourth data center that operates outside an enterprise’s immediate geographical area is extra insurance for regionwide disasters, but there are also several trade-offs in DR performance—and in the costs of operating additional data center facilities.
Two data center DR within a specific geographic or metropolitan region where data centers are relatively proximate comes with lower facility and staffing costs than a strategy that uses more than two data centers. It’s possible to replicate data between the two sites synchronously. This entirely avoids data loss because the replication occurs simultaneously to both sites and not in the asynchronous, periodic update mode of distant, third data center updates. In the intra-region, two data center model, the server writes to disk and the data on the primary disk subsystem is hardened; the subsystem sends the data to the secondary system disk subsystem, which replies to the primary subsystem; and the primary subsystem replies to the system server. Because there is continuous access to data with synchronous disk replication, both the RTO and RPO are zero, even if a disk system fails.
In cases where the two data centers are far away from each other, a data latency factor of approximately 1 millisecond for each 100 km of distance between sites (directly related to the speed of light as it passes through fiber) enters in. The geographical distance gives the enterprise protection against any event that impacts a specific data center, but the latency induced by distance slightly diminishes performance, which is why many enterprises have adopted an active-active configuration (i.e., where systems in two of their data centers are functioning actively and in parallel with each other) and typically have these two data centers separated by 20 km or less. This sharply contrasts with an RTO of roughly one hour for data centers that function in active-standby mode (when one system is active and the other is in a standby “wait” mode and is activated when DR/failover is needed). RPO remains unchanged in an active-standby configuration.
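A quick, illustrative Python sketch of that rule of thumb (the distances are arbitrary examples, and real latency also depends on routing and equipment, so treat 1 ms per 100 km as the approximation quoted above):

MS_PER_100_KM = 1.0  # the article's rule of thumb for distance-induced latency

def replication_latency_ms(distance_km):
    return distance_km / 100.0 * MS_PER_100_KM

for km in (20, 100, 500, 1000):
    print(f"{km:>5} km between sites -> ~{replication_latency_ms(km):.1f} ms added per synchronous write")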
In a three data center configuration, the immediate issues are distance, latency and the fact that the asynchronous data updates common over long distances will introduce data loss. To circumvent this in a metropolitan region where distances aren't great, GDPS Metro z/OS Global Mirror (MzGM) or GDPS Metro Global Mirror (MGM) can be used. Both deliver an RTO of minutes when sites are run in active-active mode and an RTO of less than one hour in active-standby mode. In all cases, RPO is zero.
However, because large spans of distance require asynchronous data replication, this entails some data loss. The good news for global enterprises is that IBM reports testing was successful for asynchronous update processing between two different sites at distances that were as far as 12,000 km apart, and commercially at data centers that were between 4,000 to 5,000 km apart. In other words, while some data loss must be managed, at least there are no distance constraints on asynchronous change updates. IBM says it actually has one global client that has a distance of nearly 9,000 km between its in-territory and outside data centers.
What About the Data Loss?
This is where sites must weigh the cost of losing business against the level of IT investment they want to make in DR and failover. If you’re a manufacturer and can afford to operate for up to several days in a manual mode without a system, instantaneous data recovery may be less of an issue. But if you’re an active online business with more than 2,000 transactions coming through your system every second, you will think about what the cost of losing 4,000 transactions is if your DR data loss exposure is 2 seconds—and if it makes sense to invest so you can cut that data exposure window by half.
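That trade-off is easy to put into numbers. A small Python sketch, using the figures from the example above, shows how the exposure scales with transaction rate and the size of the data-loss window:

def transactions_at_risk(tx_per_second, rpo_seconds):
    # Work that could be lost if replication lags by the recovery point window.
    return tx_per_second * rpo_seconds

print(transactions_at_risk(2000, 2))  # 4000 transactions, as in the example above
print(transactions_at_risk(2000, 1))  # halving the exposure window cuts the risk to 2000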
Key Decision Points
It isn’t always an easy decision to determine a long-term data center strategy where it concerns DR. Conventional practice has always centered around a two data center backup and recovery strategy. Only recently have enterprises begun to bring up multiple DR sites in different geographies so they can assure non-stop processing in a global economy.
These are the questions most sites want to ask:
• How do you best manage your risks? If your commitment to your stakeholders is for continuous uptime, the most popular data center management strategy is one that keeps the data centers within the metropolitan region in which your enterprise operates. Especially if you can maintain reasonable distances between data centers, the technology is there for you to operate in active-active mode so that systems failover is seamless and no one but IT knows the difference. But if you're a global enterprise and you determine the risks are too great to keep all data centers in a single metropolitan region, having a backup data center in an area remote from your headquarters can make good business sense. Many enterprises try to get the best of both worlds. They maintain zero RTO/RPO by keeping redundant data centers within a more proximate metropolitan region, and use a “Plan C” third (or even fourth) data center in a remote geographical area that's updated asynchronously.
• How long can you stay offline? A toothpaste manufacturer might be able to accept up to one week of downtime in its manufacturing facility, but a financial services company can’t. Best-of-class enterprises invest to assure their DR and failover solutions can meet the needs of customers, investors, auditors, regulators, managers and the board.
• Do you toggle? This is an emerging trend. Enterprises are toggling their production between two or more sites on a quarterly and semi-annual basis. A planned strategy for regular migration of production ensures that moving production in an unplanned situation will work.
There’s no question that more enterprises will strongly consider multiple (i.e., three or more) data center options as they continue to scale out IT to support global enterprise presence. Zero RPO and RTO times with systems running in parallel for seamless failovers will be the order of the day. However, if the region your parallel data centers are in gets hit with a major disaster, it will be equally reassuring to have a data center in a distant location that can be “up and running”—even if it can only be run with asynchronous updates that stretch out RTO and RPO.
Sites will also begin to take a look at the new automation built into DR and failover. Today, this automation notifies IT of impending failover events, and it also recommends the next set of actions to take. This automation is capable of completely failing over a system based on business rule sets and parameters without IT intervention. In the future, IT might take advantage of this opportunity for true “lights out” disaster recovery. But for now, there’s too much at stake for high-level business and IT managers to circumvent “pressing the button” themselves. | <urn:uuid:f2136687-64f3-400e-8c8a-5a4bc0d1fb44> | CC-MAIN-2017-04 | http://enterprisesystemsmedia.com/article/disaster-recovery-in-a-global-environment/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00479-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938426 | 1,444 | 2.84375 | 3 |
In the past decade our identity has undeniably evolved: we're preoccupied with identity theft and authentication issues, while governments work to adopt open identity technologies. David Mahdi, a Product Manager at Entrust, explains the critical issues in understanding the very nature of identity in a society actively building bridges between the real and digital world.
What are the critical issues in understanding the very nature of identity in a society actively building bridges between the real and digital world?
While one’s identity in a digital world is analogous to what it is in the traditional “real” world, the challenges and issues associated with trusting one’s digital identity, managing it, and securing it are very different between these two worlds.
The core value to one’s digital identity is Trust. In the real world an individual is able to easily confirm their identity by presenting documents, such as a passport or driver’s license, that have been issued by authorities, based on verifiable information provided by the individual. And because these authorities (such as governments) are trusted, the documents, or credentials they issue can be used by the individual to prove their identity with many different organizations that might be offering services.
In the digital world, however, trust is not as easy to determine. Like the real world, a digital identity must be issued by a trusted authority. The extent to which that digital identity can be used may well be a function of the trust that other organizations put in that Authority. In some cases a digital identity may be issued by a single Authority – a bank, a retailer or even a government agency – and that identity may only be used with that Authority. As a result, to take advantage of the digital world, individuals may have many digital identities. This, however, is not ideal. If the Authority that issues a digital identity is trusted by other organizations, in much the same way that a government issuing passports is trusted, then the digital identities they issue could also be trusted by other organizations, and be used more broadly. But establishing that trust is one of the key challenges of the digital world.
As a result, an individual’s digital identity may actually consist of many different identities, issued by many different organizations, and generally they’ll be used only and trusted by the organization that issued them. This creates a bit of a management nightmare for individuals in the online world as they’re faced with keeping track of which identity is used with which organization, where that identity is stored electronically and, most importantly, how to protect it.
How has individual identity evolved in the digital world?
One of the great opportunities in the digital world is the unparalleled growth of services that are available online – whether it’s for purchasing vacations, accessing health documents, balancing bank accounts and paying bills, or just interacting with friends and business colleagues in a social network or over email. But taking advantage of these services has resulted in an individual having many unique digital identities. Each of these organizations may recognize an individual very differently – and their entitlements with each of the organizations may differ dramatically. An individual’s overall identity, therefore, is a collection of digital identities, all of which must be managed and protected.
While services and networks have expanded, threats in the digital world have also increased – in particular threats related to stealing identities – identity theft. So as individuals take advantage of new services, the number of digital identities they have also expands – and in the absence of an effective way to manage all of these identities, or a consistent way of protecting them, their vulnerability to identity theft also increases.
Would you say fraud is the main catalyst behind authentication innovation?
While many people still lose money to traditional fraud scenarios, such as the massive Ponzi scheme perpetrated by Bernard Madoff, increasingly sophisticated online scenarios continue to emerge. Early online attacks, orchestrated largely by "script kiddies," have evolved into sophisticated malware attacks orchestrated by organized crime rings. For the first half of 2010 the Anti-Phishing Working Group (APWG) reported 48,244 phishing attacks occurring across 28,646 unique domain names. At the root of most of these attacks is the use of social engineering: criminals use very persuasive and often personalized tactics to entice users to take specific actions that give the attackers the ability to misdirect or take over a user's session—or their entire machine.
But fraud is a very broad term that is used to refer to anything from the theft of personal information to the interception of financial transactions. At the end of the day, people who are taking advantage of the Internet want to feel protected from all of these threats online – and a big part of that is having the confidence that their identity is protected. Authentication is an important means of ensuring that a person online is who they say they are – and the means to ensure this is to provide reliable, trusted strong authentication. But for users to adopt stronger authentication it needs to be easy to use so it does not interrupt the typical way in which they interact – it must be flexible, and it must be easily deployed.
Even within organizations the adoption of strong authentication is challenging – while a recent Forrester report indicated that 65% of firms in North America and Europe had adopted strong authentication, it had been rolled out to fewer than 10% to 20% of the employee base.
The desire to provide this broader protection against online threats is certainly an important motivator in the development of new authentication technologies. As an example, mobile devices are becoming ubiquitous among online users, and being able to leverage these devices would offer an easy-to-use, affordable method of authentication that could be rolled out to a broad population base. Similarly, authentication methods such as grid cards offer an affordable and easily adopted alternative to traditionally complicated methods such as one-time passwords – in turn making stronger authentication accessible to a broad base of users. And offering these approaches on a single platform provides organizations with flexibility, so they can apply the appropriate authentication method to the type of user, matching their online behavior. All of these innovations have been spurred on by the desire to extend greater protection to the online user.
Nowadays most users have a hard time managing their online identity across multiple websites and services. This comes mainly from a lack of understanding of security risks. Would an official unified identity document like a passport solve the problem or just bring more controversy to the issue?
One of the challenges in the digital world is that individuals receive identities from many different sites, so their digital identity is actually a collection of unique identities, all of which must be managed and protected individually. While a lack of knowledge about security risks certainly makes the user’s experience more difficult, the larger issue is the lack of trust among the issuing authorities – the fact that each agency or site is compelled to issue their own branded identity – and that there is little to no trust of identities issued by different organizations.
An identity that could be trusted by more than one organization would certainly make for an easier user experience, particularly if the identity could be managed and protected seamlessly and transparently to the user.
However, trust between organizations is difficult to establish because organizations often have very different, sometimes competing priorities. Even within government agencies, the jurisdictional concerns make such collaboration difficult – and that is compounded in a competitive environment. Leveraging identities across organizations in some type of federation requires common policies and common processes that are adopted and implemented consistently – and that there is a legal framework governing the Federation.
These are difficult issues to resolve, but the establishment of federations in which identities are trusted would be an important step forward in making it easier for individuals to understand and manage their digital identity. And in the absence of a federation that trusts identities issued by another authority, the number of identities that make up an individual’s overall digital identity, will continue to expand.
What are the key issues we have to deal with when implementing identity management? How can they be resolved?
There are a number of issues that need to be addressed when implementing an identity management solution – much of these can be grouped around administration and deployment, security and lifecycle management of the identities.
One of the first issues in the implementation of an identity management solution is the establishment of trust for the identities. The ability to properly vet the individual before issuing the identity creates a foundation for trust – and the potential extension of the trust framework. The development of a common acceptable framework to issue an identity is an important factor in establishing that underlying trust.
In terms of administration it’s important that an identity management solution can be centrally administered so that policies can be implemented consistently and efficiently throughout the organization. From a security perspective, if central policies cannot be implemented consistently or enforced then it undermines the overall system.
It’s also important that an identity management system provides flexibility to apply different types of identities to different types of users. This reflects the fact that not all users are equal – that different roles may perform different types of transactions, with different risk levels. An effective identity management system will support many different authentication types, which in turn can support different security levels – such as one-time passwords versus digital certificates.
Based on your experience, what’s the quality of the software used to work with open identity standards? What are the missing ingredients?
There’s a lot more acceptance today of the products that are using and leveraging open identity standards than was the case 3 to 5 years ago. However, to a large extent many of the projects that are being implemented are very slow to develop and are very basic applications. As an example, being able to leverage a Google ID across multiple sites is convenient to users and a significant step forward than what has been the case to date, however the applications supported are not high value. The standards that have been developed in this area allow for more robust or stepped up authentication, but to date there has not been a significant movement to leverage this.
What’s your take on government adoption of open identity technologies?
The government has provided a major impetus to the adoption of open identity technologies and to a large extent has led the way. They have been involved in standards-based federated models for many years, based largely on PKI using x.509 certificates.
In more recent years the government has played an important role in driving some of the requirements that need to be addressed for the back-end systems, such as stronger protection of the servers to address privacy concerns. These considerations need to be addressed before these technologies can be leveraged for mass consumption, or for higher value services. | <urn:uuid:24479516-6bd2-472d-9dfd-867f10b826d3> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2010/12/20/the-importance-of-identity-in-the-digital-age/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00047-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959108 | 2,166 | 2.59375 | 3 |
Internet of Things (IoT)
The Internet of things (IoT) is a concept that has emerged from a vision of an always connected world. Seamless connectivity between devices anywhere, anytime, in any condition is the goal behind the concept of IoT. Explore the impact of the Internet of Things in key sectors such as Energy, Healthcare, Transportation, Manufacturing, Aerospace & Defense and Microelectronics. The video also highlights the key technologies and their convergence prospect to envision the future connected world across sectors. | <urn:uuid:2cf01ee8-e1b7-403c-b2c6-7611c925f1cb> | CC-MAIN-2017-04 | http://now.frost.com/forms/IoT_InternetOfThings | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00259-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930306 | 105 | 2.78125 | 3 |
PandaLabs published its security tips for consumers to stay safe this summer and avoid falling victim to computer fraud. During the summer, people (especially children) have more spare time on their hands for using computers and connecting to the Internet more frequently, thus increasing the risk of falling victim to malicious code.
One of the newest scams that has surfaced recently involves sending fake flight confirmation emails. The potential victim receives a fake confirmation for ‘recently purchased tickets’ with instructions on how to open an attachment to view the ticket. The file, however, is a Trojan of the Sinowal family that is designed to steal users’ confidential information.
“In the summer, many people book flights online to get to their holiday destinations,” said Luis Corrons, technical director of PandaLabs. “Cyber-crooks are taking advantage of this situation to send a new wave of fake emails aimed at tricking users into opening the attachments and infecting their computers.”
PandaLabs is continually analyzing the latest Internet trends, and with this in mind, offers the following advice to help safeguard users’ security this season:
Use caution with social networking sites: People give out too much information about their holiday plans on social networking sites, even tipping criminals off about their empty homes. Check privacy settings and avoid sharing private information on social networks.
Install parental controls: Children spend more time in front of computers during summer vacation. Installing a good parental control program on the computer will help minimize children’s vulnerability on the Internet.
If you can avoid it, never use a shared computer: If using a shared or public computer on vacation is a must, prevent identity theft by making sure your account doesn’t automatically save your password and user ID. If you suspect the computer’s security has been compromised by a virus, leave it and use another. Take care when connecting an external device to the computer, as it may become infected without your knowledge.
Take care with email: Email is one of the main virus entry points, so pay special attention to it. Do not open messages from unknown senders or click on dubious links.
Beware of public Wi-Fi networks: You could be hooking up to a network set up by hackers to steal any information that you share across the Internet. When you connect to email, social networking sites or online stores, make sure you are using a secure connection (https), so that traffic is encrypted and no one else can access the information.
Keep your computer up-to-date: Malware seeks to exploit existing security holes in systems to infect them. Make sure all necessary security patches and updates are properly installed.
Protect your computer: Make sure you have reliable, up-to-date protection installed on your computer.
“By following PandaLabs’ tips for staying secure, users can enjoy their summer vacations with greater peace of mind. Just as people would lock all doors and windows before going on vacation, consumers should also take great care to protect their digital worlds,” added Corrons. | <urn:uuid:b135fdb4-0546-48e2-8d26-448455dcaa74> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/07/04/security-tips-to-stay-safe-this-summer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00377-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927798 | 633 | 2.71875 | 3 |
Last week the Obama Administration announced its Open Data Policy, which requires federal agencies and departments to make their data available in machine readable formats and is meant to make government more open and accessible through technology. A number of federal agencies have already made their data available in this way, and many others are in the process of doing so. It turns out that similar open data initiatives have also been put in place by local and foreign governments.
At the local level here in the U.S., 39 states and 35 cities and counties have begun to make government data available through open data web sites, which let us answer all sorts of questions about state and local government.
Outside of the U.S., 41 countries and 133 regional governments have created open data sites of their own, and there is plenty of interesting material to be found among their data.
Even some international organizations, such as the United Nations, the World Bank and the European Union, are making data available in this way; their data can answer broad demographic questions.
You get the idea. Thanks to these initiatives there is quite a bit of data now freely available and easily accessible for people to study, create apps with or simply peruse for pleasure on a rainy day. Have at it!
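Getting at these data sets usually takes only a few lines of code. The sketch below uses Python's standard library against a placeholder endpoint; the URL and field names are hypothetical, so substitute the JSON API of whichever open data portal you want to explore.

import json
import urllib.request

# Hypothetical endpoint; most open data portals expose a similar JSON API.
URL = "https://data.example.gov/resource/restaurant-inspections.json?limit=5"

with urllib.request.urlopen(URL) as response:
    records = json.load(response)

for record in records:
    # Field names vary by portal; these are placeholders.
    print(record.get("name"), "-", record.get("inspection_result"))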
| <urn:uuid:0b2d87b2-d89d-46d1-a60e-793fcc6cca92> | CC-MAIN-2017-04 | http://www.itworld.com/article/2710519/big-data/which-restaurants-violate-health-codes-and-other-insights-from-open-data.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00288-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938643 | 301 | 2.828125 | 3 |
University of Sussex student Simon Bell has reverse-engineered the Android Simplocker (Simplelocker) ransomware, and has created a Java program that can be converted into an Android app to decrypt the files encrypted by the malware.
Simplocker was first spotted and analyzed by ESET researchers earlier this month.
The malware scans the SD card for certain file types, encrypts them using AES, and demands a ransom in order to decrypt the files.
“Our analysis of the Android/Simplock.A sample revealed that we are most likely dealing with a proof-of-concept or a work in progress – for example, the implementation of the encryption doesn’t come close to ‘the infamous Cryptolocker’ on Windows,” the researchers pointed out at the time, and advised against paying the ransom, as there is no guarantee the crooks will keep their part of the bargain and decrypt the files upon receiving the money.
According to Bell, the creators of the malware didn’t use code obfuscation techniques, which allowed him to dissect it easily.
“The antidote for this ransomware was incredibly easy to create because the ransomware came with both the decryption method and the decryption password. Therefore producing an antidote was more of a copy-and-paste job than anything,” he noted.
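As a purely illustrative aside, the sketch below shows why a hard-coded password makes an antidote so simple. This is not Bell's Java decryptor, and the password, key derivation, AES mode, and file naming are placeholders rather than details of the actual sample; it also assumes the third-party PyCryptodome package is installed.

import hashlib
from pathlib import Path
from Crypto.Cipher import AES  # PyCryptodome

PASSWORD = "password-recovered-from-the-sample"   # placeholder, not the real value
KEY = hashlib.sha256(PASSWORD.encode()).digest()  # assume a simple SHA-256 key derivation

def decrypt_file(path):
    data = path.read_bytes()
    iv, ciphertext = data[:16], data[16:]          # assume the IV is stored with the file
    plaintext = AES.new(KEY, AES.MODE_CBC, iv).decrypt(ciphertext)
    plaintext = plaintext[:-plaintext[-1]]         # strip PKCS#7 padding
    path.with_suffix("").write_bytes(plaintext)    # e.g. photo.jpg.enc -> photo.jpg

for encrypted in Path("sdcard").glob("*.enc"):
    decrypt_file(encrypted)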
“It’s also worth noting that while this antidote doesn’t detect the decryption password automatically, it could be possible to do so. However, future versions of the ransomware will probably not reveal the decryption password so easily and will likely receive it from the C&C server,” he concluded, adding that future versions of this and other ransomware will likely prove significantly harder to reverse-engineer. | <urn:uuid:9b159c36-a528-41bd-b057-d271a9ffc3a5> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/06/17/java-program-to-reverse-android-ransomware-damage/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00288-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938406 | 360 | 2.5625 | 3 |
Providing greater insight and control over elements in our increasingly connected lives, the Internet of Things (IoT) emerges at a time when threats to our data and systems have never been greater. There is an average of thirteen enterprise security breaches every day, resulting in roughly 10 million records lost a day—or 420,000 every hour.
As new connected devices come to market, security researchers have taken up the cause to expose their vulnerabilities, and make the world aware of the potential harm of connecting devices without properly securing the Internet of Things.
Gartner's Market Guide for IoT Security provides recommendations to meet IoT security challenges.
Security experts Chris Valasek and Charlie Miller grabbed headlines with their research on the vulnerability of connected cars when they hacked into a Toyota Prius and a Ford Escape using a laptop plugged into the vehicle's diagnostic port. This allowed the team to manipulate the cars' headlights, steering, and braking.
In April 2014, Scott Erven and his team of security researchers released the results of a two-year study on the vulnerability of medical devices. The study revealed major security flaws that could pose serious threats to the health and safety of patients. They found that they could remotely manipulate devices, including those that controlled dosage levels for drug infusion pumps and connected defibrillators.
In 2012, the Department of Homeland Security discovered a flaw in hardened grid and router provider RuggedCom’s devices. By decrypting the traffic between an end user and the RuggedCom device, an attacker could launch attacks to compromise the energy grid.
We can sort potential attacks against the Internet of Things into three primary categories based on the target of the attack—attacks against a device, attacks against the communication between devices and masters, and attacks against the masters. To protect end users and their connected devices, we need to address all three of these IoT attacks.
To a potential attacker, a device presents an interesting target for several reasons. First, many of the devices will have an inherent value by the simple nature of their function. A connected security camera, for example, could provide valuable information about the security posture of a given location when compromised.
A common method of attack involves monitoring and altering messages as they are communicated. The volume and sensitivity of data traversing the IoT environment makes these types of attacks especially dangerous, as messages and data could be intercepted, captured, or manipulated while in transit. All of these threats jeopardize the trust in the information and data being transmitted, and the ultimate confidence in the overall infrastructure.
For every device or service in the Internet of Things, there must be a master. The master’s role is to issue and manage devices, as well as facilitate data analysis. Attacks against the masters – including manufacturers, cloud service providers, and IoT solution providers – have the potential to inflict the most amount of harm. These parties will be entrusted with large amounts of data, some of it highly sensitive in nature. This data also has value to the IoT providers because of the analytics, which represent a core, strategic business asset—and a significant competitive vulnerability if exposed.
Dive in to the details of securing the Internet of Things with our comprehensive guidebook, “Building a Trusted Foundation for the Internet of Things.”
In this on-demand webinar, we explore the challenges to securing the Internet of Things as well as how to mitigate these IoT threats. | <urn:uuid:f1683533-8bf3-4799-b01a-ac610854f1f2> | CC-MAIN-2017-04 | https://safenet.gemalto.com/data-protection/securing-internet-of-things-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942431 | 692 | 2.765625 | 3 |
If you ask the average consumer what their first order of business would be to learn about a new restaurant or research hotel reservations, they'd most certainly head to the Internet. Your company's online presence is exceptionally important, and plays a key role in your potential guests' purchasing decisions. Yet despite the importance of online image, many hotels and restaurants fall short in a key area of website design: general accessibility for all populations.
A website's 'accessibility' specifically refers to web content that is available to all individuals, regardless of any disabilities or environmental constraints. Users may be operating in situations under which they cannot see, hear, or move, or they may have difficulty processing some types of information, reading or understanding text, or may be unable to use a keyboard or a mouse.
According to the U. S. Census Bureau, there are about 51.2 million Americans with some level of disability and 32.5 million people with a severe disability. Furthermore, the proportion of people with disabilities grows as the baby boomer generation ages. People between the ages of 45 and 54 have an 11.5 percent chance of developing a disability, and those chances increase dramatically between the ages of 55-64. Almost 54.5 percent of the population over 65 years of age has a disability.
The Internet can offer stimulating opportunities to people with disabilities, while providing independence and freedom. But if a website offers low accessibility or provides vague information, then technology will be of little help in communicating with users with visual impairments.
Weak access discovered
To evaluate website accessibility, I worked with a graduate student from the University of Delaware, Lina Xiong, to conduct a study of 100 randomly selected hotel and restaurant websites. More than half of all those evaluated could not be viewed successfully by people with disabilities. Most of the hotel websites we analyzed failed the majority of our evaluation parameters; the single largest cause of failure was a lack of alternative text for non-text materials. This result is consistent with previous research, and such failure is relatively easy to rectify. Restaurant websites fared slightly better than hotel websites, though this could be largely attributable to their general simplicity by comparison.
Tips for improvement
Consider following these guidelines for web design to provide universal access to all guests:
- Tap best practices. Become familiar with Section 508 standards, an amendment to the Rehabilitation Act that works to eliminate barriers in information technology in Federal environments. Although Section 508 only applies to Federal agencies, it offers a comprehensive picture of how to improve information technology accessibility.
- Test current sites. Run an online testing tool such as Cynthia Says (www.cynthiasays.com) to determine what accessibility measures are missing, based on Section 508 standards or Web Content Accessibility Guidelines (WCAG 1.0), established by the World Wide Web Consortium.
- Look for easy areas to improve. In some cases, simply adding alternative text for non-text materials can greatly enhance accessibility; a simple automated check for missing alternative text is sketched after this list.
- Focus on content. Reconsider the balance between 'presentation' and 'usability' of a website; content is ultimately king. Decrease flash-based content or animation elements that may present difficulties in compatibility for assistive technologies or other users with lower versions of necessary viewing software.
- Create a second site. Redesigning large, complex websites to be more accessible can be costly and labor intensive. Consider instead creating a 'mirror website' that includes all of the necessary content from the original website, without any elements that may hinder accessibility. Offer a prominent link on the nav bar directing users to the mirror website.
- Understand user disabilities. Only through an outside-in approach can web designers develop an accessible website which can better reach business revenue potential and offer enhanced customer interaction. | <urn:uuid:43165d71-4942-4a9e-81a8-7ed2c2d0170e> | CC-MAIN-2017-04 | http://hospitalitytechnology.edgl.com/magazine/September-2008/Improving-Web-Accessibility55301 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00222-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911999 | 763 | 2.515625 | 3 |
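As referenced in the tips above, here is a small, illustrative Python script that flags images with no alternative text. It checks a single success criterion and is no substitute for a full Section 508 or WCAG review; the URL is a placeholder.

import urllib.request
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        # alt="" is legitimate for decorative images, so only flag a missing attribute.
        if "alt" not in attrs:
            self.missing.append(attrs.get("src", "(no src)"))

url = "https://www.example-hotel.com/"  # placeholder URL
html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

checker = MissingAltChecker()
checker.feed(html)
print(len(checker.missing), "image(s) missing alternative text")
for src in checker.missing:
    print(" -", src)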
Though recent technological advances in education have led to greater student engagement and a reduction in time spent on testing, K-12 still faces significant challenges over the next five years. The NMC/CoSN Horizon Report: 2016 K-12 Edition, published by the New Media Consortium and the Consortium for School Networking, highlights what politicians, education experts, school administrators, and teachers should focus on to best serve students.
Solvable Challenges: Those That We Understand and Know How to Solve
- Authentic Learning Experiences–Schools need to focus on contextualizing lesson plans in real-world scenarios. The report explains, the term authentic learning is seen as an umbrella for several important pedagogical strategies that have great potential to immerse learners in environments where they can gain lifelong learning skills; these approaches include vocational training, apprenticeships, and certain scientific inquiries. Schools can incorporate authentic learning experiences by growing community partnerships, promoting apprenticeships and engaging students in citizen science, which involves a partnership among volunteers, amateurs, and trained scientists. By having students take part in projects that actually affect their communities, as opposed to hypothetical or simulated exercises, students become more engaged and gain a deeper understanding of the material at hand.
- Rethinking the Roles of Teachers–In the old education model, teachers were the sole diffuser of knowledge. Through lectures and slideshows, teachers engaged in a one-way communication model with students. They shared the knowledge and students took notes and memorized. However, with the increasing adoption of technology into the classroom, this model no longer exists. Teachers now act as mentors and guides, according to the report, and help students gain knowledge from a variety of sources–online lectures, interactive lessons and collaborative exercises with fellow students, and conducting research. Because the role of the teacher is shifting, professional development and teacher training also needs to adapt. The report explains that these evolving expectations are changing the ways teachers engage in their own continuing professional development, much of which involves social media, collaboration with other educators both inside and outside their schools, and online tools and resources. Pre-service teacher training programs are also challenged to equip educators with digital competencies amid other professional requirements to ensure classroom readiness.
Difficult Challenges: Those That We Understand but for Which Solutions are Elusive
- Advancing Digital Equity–As technology continues to spread in classrooms, the digital divide between students becomes more obvious. Pew Research reports that 5 million households in the U.S. with school-aged children do not have high-speed broadband service. As school work shifts from paper worksheets to online portals and research moves from encyclopedias to Google Scholar, students without access to the Internet at home run the risk of falling behind. However, both the government and private business are trying to shrink the divide, the report explains. President Obama’s ConnectALL and ConnectED initiatives promises high-speed broadband and technology access for all Americans, at home and at school. Additionally, with its Google Fiber, the tech giant is enabling greater access in low-income areas by providing connectivity to entire cities.
- Scaling Teaching Innovations–In education, innovation will typically come from the classroom. However, that is at odds with the bureaucratic nature of most school districts where change must come from the top. Additionally, given the heavy emphasis on test results, teachers have little incentive to innovate with new technology and go off the lesson plan, in case the technology doesn’t improve test scores. The lack of incentives to innovate can have dire results. In addition to students not getting technology that could help them learn, the report explains that many educators become frustrated by the rigid confines of a school that is in desperate need of transformation. Scaling pedagogical innovation requires adequate funding, capable leadership, strong evaluation practices, and the removal of restrictive policies–a tall order for the majority of K-12 public schools, which are receiving fewer resources, the report acknowledges.
Wicked Challenges: Those That are Complex to Even Define, Much Less Address
- Achievement Gap–New education technology can be scary for teachers and administrators. While innovation only happens when someone is willing to take a risk and try something new, the bureaucratic nature of most U.S. schools means change typically comes from the top. Additionally, with so much riding on test scores, teachers have little incentive to try something new that may not improve scores. Scaling pedagogical innovation requires adequate funding, capable leadership, strong evaluation practices, and the removal of restrictive policies. The reality, according to the report, is that many teachers are not prepared to lead innovative, effective practices, and there are many systemic factors that must be addressed to resolve this complex issue.
- Personalizing Learning–With technology comes the ability to personalize lesson plans and education. Teachers can now use apps, games, and other technology to educate students at their own level. Previously, teachers were limited because there was only one of them and many students. Now, with different pieces of technology, the teacher can essentially be in 20 places at once, educating each student individually. However, there are barriers to adoption. One major barrier, the report explains, is a lack of infrastructure within school systems to support dissemination of personalized learning technologies at scale. Compounding the challenge, the report continues, is the notion that technology alone is not the whole solution; personalized learning efforts must incorporate effective pedagogy and include teachers in the development process.
The report also highlights key trends in education industry, as well as new technologies that will shake things up over the next five years. | <urn:uuid:169146d2-162a-4cae-8910-be2e07f7274f> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/challenges-on-the-horizon-for-k-12-education/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00342-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960594 | 1,135 | 3.265625 | 3 |
Hasler J.F., Bioniche Animal Health Inc.
Theriogenology | Year: 2014
After the first successful transfer of mammalian embryos in 1890, it was approximately 60 years before significant progress was reported in the basic technology of embryo transfer (ET) in cattle. Starting in the early 1970s, technology had progressed sufficiently to support the founding of commercial ET programs in several countries. Today, well-established and reliable techniques involving superovulation, embryo recovery and transfer, cryopreservation, and IVF are utilized worldwide in hundreds, if not thousands, of commercial businesses located in many countries. The mean number of embryos produced via superovulation has changed little in 40 years, but there have been improvements in synchrony and hormonal protocols. Cryopreservation of in vivo-derived embryos is a reliable procedure, but improvements are needed for biopsied and in vitro-derived embryos. High pregnancy rates are achieved when good quality embryos are transferred into suitable recipients, and low pregnancy rates are often owing to problems in recipient management and not technology per se. In the future, unanticipated disease outbreaks and the ever-changing economics of cattle and milk prices will continue to influence the ET industry. The issue of abnormal pregnancies involving in vitro embryos has not been satisfactorily resolved, and the involvement of abnormal epigenetics associated with this technology merits continued research. Last, genomic testing of bovine embryos is likely to be available in the foreseeable future. This may markedly decrease the number of embryos that are actually transferred and stimulate the evolution of more sophisticated ET businesses. © 2014 Elsevier Inc. Source
Kaimio I. (Faba Co.); Mikkola M. (Faba Co.; University of Helsinki); Lindeberg H. (University of Eastern Finland); and 3 more authors.
Theriogenology | Year: 2013
The aim of this study was to examine the effect of sex-sorted semen on the number and quality of embryos recovered from superovulated heifers and cows under commercial dairy farm conditions in Finland. The data consist of 1487 commercial embryo collections performed on 633 and 854 animals of the Holstein and Finnish Ayrshire breeds, respectively. Superovulation was induced by eight intramuscular injections of follicle-stimulating hormone, at 12-hour intervals over 4 days, involving declining doses beginning 9 to 12 days after the onset of standing estrus. The donors were inseminated at 9- to 15-hour intervals beginning 12 hours after the onset of estrus with 2 + 2 (+1) doses of sex-sorted frozen-thawed semen (N = 218) into the uterine horns or with 1 + 1 (+1) doses of conventional frozen-thawed semen (N = 1269) into the uterine corpus. Most conventional-semen straws (222 bulls) contained 15 million sperm (total number 30-45 million per donor). Sex-sorted semen straws (61 bulls) contained 2 million sperm (total number 8-14 million per donor). The mean number of transferable embryos in recoveries from cows bred with sex-sorted semen was 4.9, which is significantly lower than the 9.1 transferable embryos recovered when using conventional semen (P ≤ 0.001). In heifers, no significant difference was detected between the mean numbers of transferable embryos in recoveries using sex-sorted semen and conventional semen (6.1 and 7.2, respectively). The number of unfertilized ova was higher when using sex-sorted semen than when using conventional semen in heifers (P < 0.01) and in cows (P < 0.05), as was the number of degenerated embryos in cows (P < 0.01), but not in heifers. It was concluded that the insemination protocol used seemed to be adequate for heifers. In superovulated cows, an optimal protocol for using sex-sorted semen remains to be found. © 2013 Elsevier Inc.
VoIP Protocol Essentials: SIP
Learn the SIP protocol and important protocols related to SIP implementations.
In this course, you will learn about the Session Initiation Protocol (SIP) and important protocols related to SIP implementations through a process of lecture and hands-on training. You will learn what SIP is, how it works, and get a practical guide on how to use it. The lessons in this course are clear, very technical, and always practical, and since at least 60% of the course is hands-on, you can investigate and reinforce each lesson. In this course, you'll examine how SIP interoperates with the current telecommunications network, going beyond the basics of the protocol and getting a big-picture understanding of how it all fits together.
Over the last few years, a new global trend has emerged in the field of genomic studies. With the advent of a new generation of analytical instruments, the cost of determining the order of the nucleotides in a DNA molecule (DNA sequencing) has dramatically decreased, resulting in a significant acceleration in a number of related basic and applied biomedical areas.
While a typical sequencing project (de novo determination of an organism's genome, for example) used to take several years and cost millions of dollars in reagents and resources, nowadays even small laboratories are able to sequence the complete genomes of simple organisms in hours, for just a small fraction of the cost.
Big sequencing projects have shifted to the determination of the specific sequences of populations of individuals, which will give us the ability to associate the differences between them at the sequence level (variants) with specific individual traits (those causing diseases like cancer, for example). Consequently, the bottleneck in sequencing projects has shifted from obtaining DNA reads to the alignment and post-processing of the huge amount of read data now available.
To minimize both processing time and memory requirements, specialized algorithms and high-throughput analysis pipelines are being constantly developed.
The need to analyze increasingly large amounts of genomics and proteomics data has meant that research institutions such as the Spanish National Cancer Research Centre (CNIO) allocate an increasing part of their time and budget to provisioning, managing and maintaining their scientific computing infrastructure, areas that are not their core business.
The Server Labs, a European IT company focused on IT architectures, software engineering and cloud architecture and services, is working with the Bioinformatics Unit, Structural Biology and Biocomputing Programme at CNIO, to develop a cloud-based solution that would meet their genomic processing needs.
With its pay-per-use concept, the Cloud would save CNIO the time and money spent maintaining and upgrading its internal IT department. Fixed costs are translated into variable costs in terms of infrastructure, purchases and upgrades of computational resources and software licenses, as well as expert administrators and external resources.
As the number of sequencing experiments which the CNIO runs can also be variable, the cloud not only eliminates potential over-provisioning, but it also prevents the under-provisioning of resources at peak times, which would result in the inability to run scheduled experiments. CNIO is thus able to pass on the risks associated with the planning and allocation of resources to the cloud provider.
Without the need to provide and manage computational resources themselves, CNIO can focus on their core business: scientific research in genomics and proteomics applied to cancer. In addition to providing the elasticity to run experiments on demand, the cloud also reduces the time needed to supply the hardware infrastructure and configure it, based on automated installation and customization of the software running on top of the hardware. A controlled computational environment for the post-processing of experiments allows results to be more easily reproduced, a key objective for researchers across all disciplines.
Data management cloud services facilitate publishing of data over the Internet, enabling researchers to easily share results whilst controlling who can access them. Data storage in the Cloud was designed from the ground up with high availability and durability as key objectives.
By storing their experiment data in the cloud, researchers can ensure their data is safely replicated among data centres. These advantages free researchers from time-consuming operational concerns, such as in-house backups and the provisioning and management of servers from which to share their experiment results.
The vast potential benefits of the cloud will enable the Spanish National Cancer Research Centre to speed up its pace of innovation and bring them a faster ROI on their current research efforts.
An Environment for Genomic Processing in the Cloud
The first step towards carrying out genomic processing in the cloud is to identify the requirements that a suitable computational environment must fulfill. These include the hardware architecture, the operating system and the genomic processing tools. Together with CNIO we identified the following software packages employed in their typical genomic processing workflows:
- Burrows-Wheeler Alignment Tool: BWA aligns short DNA sequences (reads) to a reference sequence such as the whole human genome.
- Novoalign: Novoalign is a DNA short-read mapper implemented by Novocraft Technologies. The tool uses spaced-seed indexing to align either single- or paired-end reads by means of the Needleman-Wunsch algorithm. The source code is not available for download. However, anybody may download and use these programs free of charge for their research and any other non-profit activities as long as results are published in open journals.
- SAM tools: After read alignment, one might want to call variants or view the alignments against the reference genome. SAM tools is an open-source package of software applications which includes an alignment viewer and a consensus base caller tool to provide lists of variants (somatic mutations, SNPs and indels).
- BEDTools: This software facilitates common genomics tasks for the comparison, manipulation and annotation of genomic features in Browser Extensible Data (.BED) format. BEDTools supports the comparison of sequence alignments, allowing the user to compare next-generation sequencing data with both public and custom genome annotation tracks. The BEDTools source code is freely available.
Note that, except for Novoalign, all software packages listed above are open source and freely available.
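As a rough illustration of how these tools chain together, the sketch below drives a typical short-read workflow from Python. It is a minimal sketch only: the file names are placeholders, the exact command-line options vary between tool versions, and real pipelines add quality control, read groups and many more parameters.

import subprocess

REF, READS = "reference.fa", "sample_reads.fq"   # reference assumed already indexed with `bwa index`

def run(cmd, stdout_path=None):
    # Run one pipeline step, optionally capturing standard output to a file.
    out = open(stdout_path, "wb") if stdout_path else None
    try:
        subprocess.run(cmd, check=True, stdout=out)
    finally:
        if out:
            out.close()

run(["bwa", "aln", REF, READS], "sample.sai")                       # align the short reads
run(["bwa", "samse", REF, "sample.sai", READS], "sample.sam")       # produce SAM output
run(["samtools", "view", "-bS", "sample.sam"], "sample.bam")        # SAM -> BAM
run(["samtools", "sort", "-o", "sample.sorted.bam", "sample.bam"])  # sort by position
run(["samtools", "index", "sample.sorted.bam"])                     # index for viewing/calling
run(["bedtools", "intersect", "-a", "calls.bed", "-b", "annotation.bed"], "overlap.bed")  # compare with an annotation track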
For our initial proof of concept, we decided to run a configured image with Ubuntu 9.10 x64. This ensures that no additional setup tasks are required when launching new instances in the Cloud, and provides a controlled and reproducible environment for genomic processing. The Amazon EC2 instance type required was a large instance with 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each) and 850 GB of local instance storage.
With this minimal setup we executed some typical genomic workflows suggested to us by CNIO. We found that for their typical workflow, with a raw data input between 3 and 20 GB, the total processing time on the cloud would range between 1 and 4 hours, depending on the size of the raw data and whether the sequencing experiment was single- or paired-end. With EC2 instances priced at 38 cents per hour for large instances, and ignoring additional time required for customization of the workflow, the cost of pure processing tasks totalled less than $2 for a single experiment.
CNIO's genomic facilities are able to process up to 20-25 sequencing runs in an Illumina GAII sequencer. On average, they expect to analyse about 150 sequencing lanes per year, each generating 30 gigabytes of input data on average, and totalling up to 3-4.5 terabytes in storage and processing requirements per year.
We also found the processing times to be comparable to running the same workflow in-house on similar hardware. However, when processing in the cloud, we found that transferring the raw input data from the lab to the Amazon cloud could become a bottleneck, depending on the bandwidth available. We were able to work around this limitation by processing our data in Amazon's European data centre and avoiding peak hours for the data uploads. In the future, a high-speed file-transfer protocol such as Aspera's could be leveraged to optimize this step.
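To make these figures concrete, here is a quick back-of-the-envelope calculation in Python using the numbers quoted above. The figures are the article's 2011 estimates, not current prices, and the link speeds are arbitrary examples rather than measurements of the CNIO network.

hourly_rate, max_hours = 0.38, 4
print("Cost per experiment: <= $%.2f" % (hourly_rate * max_hours))              # about $1.52

lanes_per_year, gb_per_lane = 150, 30
print("Raw data per year: ~%.1f TB" % (lanes_per_year * gb_per_lane / 1000.0))  # about 4.5 TB

input_gb = 20                                  # a large single-experiment upload
for mbps in (10, 100, 1000):
    hours = input_gb * 8000.0 / mbps / 3600.0  # GB -> megabits, then seconds -> hours
    print("Upload at %4d Mbit/s: ~%.1f h" % (mbps, hours))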
Maximizing the Advantages of the Cloud
We demonstrated that genomic processing in the Cloud is feasible and cost-effective, while providing a performance on par with in-house hardware. The true benefits of the cloud will become apparent when processing tens or hundreds of experiment jobs in parallel. This would allow researchers, for instance, to run algorithms with slightly different parameters to analyse the impact on their experiment results. At the same time, the resulting framework should incorporate all of the strengths of the cloud, in particular data durability, publishing mechanisms and audit trails to make experiment results reproducible.
For more detailed information please have a look at The Server Labs’ technical blog.
Paul Parsons is CTO and chief architect at The Server Labs; Alfonso Olias, also from The Server Labs, serves as Senior Consultant.
Anyone who has ever created something new is granted the right to baptize it. However, given that they are born under the sign of destruction and disruption, viruses are an exception to this rule.
Normally, you would not expect anything in the "John Jr." vein. Any hint as to the identity of virus creators would probably get them into trouble. Plus, in order to avoid adding to the glory of malware authors, antimalware producers will probably rename the malware samples they discover. And the naming trouble does not stop here. A scenario where several antimalware labs simultaneously conduct research on the same new malware sample is not that uncommon. In this case, the first to publicly announce the discovery gets to give it a name.
Aside from creativity and authorship, virus naming also raises the issue of utility. Confronted with an overwhelming malware population, researchers and antimalware producers have understood how important it is to approach the naming process systematically. All in all, simple logic calls for malware names that contain information the industry can recognize: the affected platform, the virus family name and its spreading method.
This whitepaper aims to summarize the efforts that have been invested in creating a coherent, unanimously accepted and, most of all, efficient malware naming system, as well as to briefly dwell on how these regulatory attempts are reflected in practice.
Celebrating Our Military on Armed Forces Day 2014
May 13, 2014
May 17 is Armed Forces Day in the U.S. The holiday is celebrated by many world nations to honor their militaries at various dates throughout the year, but the U.S. holiday was created in 1949 following the consolidation of the U.S. Military through the Department of Defense.
This photo shows a Marine in the forest of Camp Geiger, N.C., during patrol week last year, a five-day training event that teaches infantry students basic offensive, defensive and patrolling techniques. This Marine is part of Delta Company, which was the first in the Marines to fully integrate females into an entire training cycle, and Delta Company's performance was used to determine the future use of women in combat-related military jobs.
For years, people have talked about the electricity consumption of data centers. Some people want to believe, somehow, that Googling is energy intensive. But it's not. Thanks to Koomey's Corollary to Moore's Law, computation has been getting more energy efficient: The number of computations per kilowatt-hour of electricity usage has been doubling every 1.5 years for decades. Relative to our society's other technological processes -- heating homes or growing corn or ground transportation -- computing's energy usage was and is a drop in the bucket. All of Google, all its servers and campuses and everything, require about 260 megawatts of electricity on a continuous basis, as of 2011. The US has about 1,000 gigawatts of capacity, or 1,000,000 megawatts. So, to put it mildly, I am sanguine about the electrical consumption of our computing infrastructure.
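A quick calculation (approximate figures, not from the article) makes the scale of that comparison explicit:

google_mw = 260.0              # Google's continuous electrical draw, 2011
us_capacity_mw = 1000000.0     # roughly 1,000 GW of US generating capacity
print("Google's share of US capacity: %.3f%%" % (100.0 * google_mw / us_capacity_mw))      # ~0.026%

years = 10
print("Koomey-style efficiency gain over %d years: ~%.0fx" % (years, 2 ** (years / 1.5)))  # ~100x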
But, according to a new report from the University of Melbourne's Centre for Energy Efficient Telecommunications, the wireless networks that let our devices tap into those data centers might turn out to be another story.
In a new whitepaper, the CEET estimates that when we use wireless devices to access cloud services, 90 percent of the electrical consumption of that system is eaten up by the network's infrastructure, not the servers or phones. The data centers themselves use one-tenth that amount of electricity. Worse, cloud services accessed wirelessly will continue to explode, leading to a ballooning electrical load as well.
In this lesson, you will learn about fuzzing basics and how to use the Fuzzer module.
Our example will be based on WebGoat, Session Management - Hijack a session. In this example, you will see how to exploit a session that is vulnerable because of weak encryption.
This technique isn't really fuzzing. We are going to force the generation of the same page repeatedly to get the value of the WEAKID cookie from each response, and try to find a pattern in it.
Point your browser to WebGoat at the address of the exercise (Session Management Flaws > Hijack a Session): http://localhost:8080/webgoat/attack?Screen=139&menu=1800, enter foo and bar respectively as login and password and submit the form.
Identify the chat that is responsible for the WEAKID cookie definition (#286 in the screenshot). Right click on it and select Send to > Fuzzer.
Define a tag that generates 50 session IDs
- In the right panel, double click on Tag and give it the name foo
- Double click on the newly created foo tag and select a generator from type Counter. Define it with these values: Start=1, Stop=50, Step=1.
- Uncheck the box "Update Session Information" to be sure that the ID will be different each time
- In the left panel, click on the Start button to start fuzzing
- In the right panel, select the Results tab and export them (click on "Save Matches") in a path of your choice (e.g. /data/tmp/fuzz.txt).
Analyze the results
- The exported results show the server's responses. We have a file that is grepable. Issue this command:
$ cd /data/tmp/
$ grep Set-Cookie fuzz.txt | sort
This command shows only the lines where Set-Cookie (the definition of our WEAKID) appears, and pipes the result to the Unix sort utility to order the results.
- We notice holes in the sequence. Here is an example:
Crack the session
Now we have the first part of our missing WEAKID, but we still need to know the second part. See how the cookie is generated:
[Table of observed WEAKID values, split into a first part and a second part]
Notice the fixed part, which is constant across IDs. The variable part increases from one ID to the next, so we deduce that the value we are looking for is between 562 and 568.
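As an aside, the same brute-force idea can be scripted outside Watobo. The sketch below assumes the third-party Python requests library and a local WebGoat instance; the fixed prefix and the login credentials are placeholders, so substitute the values observed in your own session.

import requests   # third-party HTTP client; any HTTP library would do

URL = "http://localhost:8080/webgoat/attack?Screen=139&menu=1800"
FIXED_PART = "<prefix observed in the sorted Set-Cookie lines>"   # substitute your own value

for candidate in range(562, 569):                  # the gap deduced above
    weakid = FIXED_PART + str(candidate)
    resp = requests.get(URL, cookies={"WEAKID": weakid},
                        auth=("guest", "guest"))   # adjust to your WebGoat credentials
    if "Congrat" in resp.text:                     # WebGoat reports success with "Congratulations"
        print("Hijacked session id:", weakid)
        break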
Now, we need to use the record where we send the WEAKID cookie with the credentials. Right click on it and send it to the fuzzer:
Do following actions in the fuzzer window:
- Modify the entire value of WEAKID with the prefix we have found
- Replace the last 3 characters with a variable %%crack%% that we will use as a tag
- In the right panel, create a tag named crack
- Right click on that tag and create a generator of type Counter with values between 562 and 568 with an increment of 1
- Uncheck both the "Update Content-Length" and "Update Session Information" checkboxes.
- Click on the "Start" button
- On the right panel, go to the results tab and click on the "Save Matches" button. Export the results in /data/tmp/fuzz2.txt.
Analyze the results
- Open a terminal window and change to /data/tmp or whichever directory you saved fuzz2.txt in
- We have completed the exercise:
$ grep -i congrat fuzz2.txt
<div id="message" class="info"><BR> * Congratulations. You have successfully completed this lesson.</div>
By further analyzing the file, we deduce that the solution is:
We could also use the Manual Request module to check it:
And by clicking on Browser-View:
Introduction to Computer Crime
by M. E. Kabay, PhD, CISSP-ISSMP
Program Director, MSIA
School of Graduate Studies
Much of the following material was originally published in the 1996 textbook, NCSA Guide to Enterprise Security (McGraw Hill), and was most recently updated with newer references for use in Norwich University programs in July 2006.
Introduction to Computer Crime
1 Sabotage: Albert the Saboteur
4 Equity Funding Fraud
4.1 What happened
6 Scavenging: Garbage Out, Data In
6.1 Legal status of garbage
6.2 RAM and Virtual Memory
6.3 Magnetic Spoor
6.4 Bye-Bye, Data
7 Trojan horses
7.1 Case studies
7.2 1993-1994: Internet monitoring attacks
7.3 Cases from the INFOSEC Year in Review Database
7.4 Hardware Trojans
7.5 Diagnosis and prevention
8 Back Doors: Secret Access
8.2 Examples of Back Doors
8.3 Easter Eggs and the Trusted Computing Base
8.4 Back Doors: RATs
8.5 Back Doors: Testing Source Code
8.6 Additional resources
8.7 Additional reports
9 Voice Mail Security
10 Salami Fraud
11 Logic bombs
11.1 Time bombs
11.2 Renewable software licenses
11.3 Circumventing logic bombs
12 Data leakage
12.1 Some cases of data leakage
12.2 USB Flash Drives
12.6 Plugging covert channels
13.1 More recent cases
14.1 Desktop forgery
14.2 Fake credit cards
One of the most interesting cases of computer sabotage occurred at the National Farmers Union Service Corporation of
The next morning, management confronted Albert with the film of his actions and asked for an explanation. Albert broke down in mingled shame and relief. He confessed to an overpowering urge to shut the computer down. Psychological investigation determined that Albert, who had been allowed to work night shifts for years without a change, had simply become lonely. He arrived just as everyone else was leaving; he left as everyone else was arriving. Hours and days would go by without the slightest human interaction. He never took courses, never participated in committees, never felt involved with others in his company. When the first head crashes occurred–spontaneously – he had been surprised and excited by the arrival of the repair crew. He had felt useful, bustling about, telling them what had happened. When the crashes had become less frequent, he had involuntarily, and almost unconsciously, re‑created the friendly atmosphere of a crisis team. He had destroyed disk drives because he needed company.
In this case, I blame not Albert but the managers who relegated an employee to a dead‑end job and failed to think about his career and his morale. Preventing internal sabotage depends on proper employee relations. If Albert the Saboteur had been offered a rotation in his night shift, his employer might have saved a great deal of money.
Managers should provide careful and sensitive supervision of employees’ state of mind. Be aware of unusual personal problems such as serious illness in the family; be concerned about evidence of financial strains. If an employee speaks bitterly about the computer system, his or her job conditions, or conflicts with other employees and with management, TALK to them. Try to solve the problems before they blow up into physical attack.
Another crucial element in preventing internal and external sabotage is thorough surveillance. Perhaps your installation should have CCTV cameras in the computer room; if properly monitored by round‑the‑clock security personnel or perhaps even an external agency, such devices can either deter the attack in the first place or allow the malefactors to be caught and successfully prosecuted.
One of my favourite BC cartoons (drawn by Johnny Hart) shows two cavemen talking about a third: “Peter has a mole on his back,” says one. The other admonishes, “Don’t make personal remarks.” The final frame shows Peter walking by–with a grinning furry critter riding piggyback.
For readers whose native language is not English, “piggybacking” (origins unknown, according to various dictionaries) is the act of being carried on someone’s back and shoulders. It’s also known as pick‑a‑back. Kids like it.
So do criminals.
Now, if you are imagining masked marauders riding around on innocent victims’ backs, you must learn that in the world of information security, piggybacking refers to unauthorized entry to a system (physically or logically) by using an authorized person’s access code.
In a sense, piggybacking is a special case of impersonation–pretending to be someone else, at least from the point of view of the access-control system and its log files.
To interfere with physical piggybacking, we have to avoid making security a nuisance that employees will come to ignore out of contempt for ham-handed restrictions. For example, it is wise to control access to the areas that should be secure but not to unimportant areas.
The other crucial dimension of piggybacking is employee training. Everyone has to understand the risks of allowing normal politeness (e.g., letting in a colleague) to overcome security rules. Letting even authorized people into a secured area without registering their security IDs with the access-control system damages the audit trail but it also puts their safety at risk: in an emergency, the logs will incorrectly fail to indicate their presence in the secured area.
Using someone’s logged-on workstation is a favorite method used by penetration testers or criminals who have gained physical access to devices connected to a network. Such people can wear appropriate clothing and assume a casual, relaxed air to convince passers-by that they are authorized to use someone else’s workstation. Sometimes they pose as technicians and display toolkits while they are busily stealing information or inserting back doors into a target system.
Unattended workstations that are logged on are the principal portal for logical piggybacking. Even a workstation that is not logged on can be a vulnerability, since uncontrolled access to the operating system may allow an intruder to install keystroke-capture software that will log user IDs and passwords for later use.
A simple but non‑automatic method is to lock the keyboard by physical removal of a key when one leaves one’s desk. Because this method requires a positive action by the user, it is not likely to be fool‑proof – not because people are fools, but because we are not machines and so sometimes we forget things. In addition, any behavior that has no reinforcement tends to be extinguished; in the absence of dramatic security incidents, the perceived value of security measures inevitably falls.
There are two software solutions currently in use to prevent unauthorized use of a logged‑on workstation or PC when the rightful session‑owner is away:
· Automatic logoff after a period of inactivity
· Branch to a security screen after a timeout
One approach to preventing access at unattended logged‑on workstations is at the operating system level. The operating system or a background logoff program can monitor activity and abort a session that is inactive. These programs usually allow different groups to have different definitions of “inactive” to adapt to different usage patterns. For example, users in the accounting group might be assigned a 10‑minute limit on inactivity whereas users in the engineering group might get 30 minutes.
When using such utilities, it is critically important to measure the right things when defining inactivity. For instance, if a monitor program were to use only elapsed time, it could abort someone in the middle of a long transaction that requires no user intervention. On the other hand, if the monitor were to use only CPU activity, it might abort a process which was impeded by a database lock through no fault of its own.
Currently, PCs can be protected with the timeout features of widely‑available and inexpensive screen‑saver programs. They allow users to set a count‑down timer that starts after keyboard‑input; the screen saver then requests a password before wiping out the images of flying toasters, swans and whatnot. The critical question to ask before relying on such screen savers is whether they can be bypassed; for example, early versions of several Windows 3.11 and Windows 95 screensavers failed to block access to the CTL-ALT-DEL key combination and therefore allowed intruders to access the Task Manager window where the screensaver process could easily be aborted. Today’s screensavers are largely free of this defect.
A few suggestions for secure screen savers, timeout and shutdown utilities (these references are not endorsements):
Such utilities are relatively crude; application‑level timeouts are preferable to the blunt-object approach of operating system‑level logoff utilities or generic screen-lock programs. Using application timeouts, a program can periodically branch to a security screen for re‑authentication. A security screen can ask for a password or for other authentication information such as questions from a personal profile. Best of all, such application-level functions are programmed in by the development team, which knows how the program will be used or is actually being used in practice. To identify inactivity, one uses a timed terminal read. A function can monitor the length of time since the last user interaction with the system and set a limit on this inactivity. At the end of the timed read, the program can branch to a special reauthentication screen. Filling in the right answer to a reauthentication question then allows the program to return to the original screen display. Since programmers can configure reauthentication to occur only after a reasonable period of inactivity, most people would not be inconvenienced.
A really smart program would actually measure response time for a particular entry screen for a particular user and would branch to the security screen only if the delay were much longer than usual; e.g., if 99% of all the cases where John accessed the customer-information screen were completed within 5 minutes, the program would branch to the security screen after 5 minutes of inactivity. In contrast, if Jane took at most 10 minutes to complete 99% of her accesses to the employee-information screen, the program would not demand reauthentication until more than 10 minutes had gone by.
In summary, an ideal timeout facility would be written into the application program to provide
· A configurable time‑out function with awareness of individual user usage patterns;
· Automatic branching to a security screen for sophisticated reauthentication;
· Integration with a security database, if available;
· Automatic return to the previous (interrupted) state to minimize disruption of work.
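As a rough illustration (not a real security API), the following Python sketch shows the shape of such an application-level timeout: a timed read, a per-group inactivity limit, and a branch to a reauthentication routine. The group limits and the callback functions are placeholders; a production version would also adapt the limit to each user's measured response times, as described above.

import time

GROUP_LIMITS = {"accounting": 600, "engineering": 1800}   # seconds of allowed inactivity

def interactive_screen(user_group, timed_read, reauthenticate, handle_input):
    limit = GROUP_LIMITS.get(user_group, 300)
    last_activity = time.time()
    while True:
        data = timed_read(timeout=5)          # timed read instead of a blocking read
        if data is not None:
            handle_input(data)                # normal processing of the user's input
            last_activity = time.time()
        elif time.time() - last_activity > limit:
            if not reauthenticate():          # branch to the security screen
                return "logged_off"           # failed reauthentication ends the session
            last_activity = time.time()       # success: return to the interrupted state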
Short of programming your own sophisticated user-monitoring system into home-grown programs, is there any hope of spotting the user who leaves a workstation logged on to the network?
In general, there are problems with any system that simply reads a single data entry from a token which can be removed or uses input that does not require repeated data transfer. If the authentication data don’t have to be supplied all the time, then the workstation and the program that is monitoring it cannot know that the user has left until a timeout occurs, just like any other software-based solution. For example, a single fingerprint entry, a single retinal scan, or a single swipe of a smart card are inadequate for detecting the departure of an authorized user because there is no change of state when the user leaves the area.
One approach to detecting the departure of an authorized user depends on access to a continuous stream of data or presence of a physical device; e.g., a system can be locked instantly when a user removes a smart card from a reader (or a USB token from the USB port) and then can be reactivated when the token is returned. Unfortunately, the presence of the physical device need not imply that the human being who uses it is still at the workstation. The problem might be reduced if the device were like an EZ-Pass proximity card that naturally got carried around by all users – perhaps as part of a general-purpose, required ID badge that could serve to open secured doors as well as grant access to workstations and specific programs.
Another approach to program‑based re‑authentication would prevent piggybacking by means of biometric devices such as facial- or iris-recognition systems and fingerprint recognition units. For example, a non-invasive facial- or iris-recognition system could be used programmatically to shut down access the moment the user leaves the workstation and reactivate access when the user returns. Similarly, a touchpad or mouse with a fingerprint-recognition device could continually reauthenticate a user silently and with no trouble at all whenever the user moves the cursor.
Another tool that might be used for programmatic verification of continuous presence at a keyboard is keyboard typing dynamics. Such systems learn how a user types a particular phrase as a method of authentication. However, with today’s increased processor speeds and sophisticated pattern-recognition algorithms, it ought to be possible to have a security module in a program learn how a user usually types – and then force reauthentication if the pattern doesn’t match the baseline. True, this system might produce false alarms after a three-martini lunch – but maybe that’s not such a bad idea after all.
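A toy sketch of that keystroke-dynamics idea follows; the single feature (mean inter-keystroke interval) and the threshold are purely illustrative, and real systems use far richer statistical models.

from statistics import mean, stdev

def looks_like_same_user(baseline_intervals, recent_intervals, z_threshold=3.0):
    # Compare recent inter-keystroke intervals (seconds) against a stored baseline
    # and flag large deviations for reauthentication.  Baseline needs >= 2 samples.
    mu, sigma = mean(baseline_intervals), stdev(baseline_intervals)
    if sigma == 0:
        return True                          # degenerate baseline; nothing to compare
    z = abs(mean(recent_intervals) - mu) / sigma
    return z <= z_threshold

# Example use: force reauthentication when the typing rhythm drifts too far.
# if not looks_like_same_user(stored_profile, current_sample):
#     branch_to_security_screen()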
Such sophisticated methods are still not readily available in the workplace despite steadily falling costs and steadily rising reliability. It will be interesting to see how the field evolves in coming years as
In 1970, Jerry Neal Schneider used "dumpster diving" to retrieve printouts from the Pacific Telephone and Telegraph (PT&T) company in
In discussions of impersonation in an online forum, one contributor noted that with overalls and a tool kit, you can get in almost anywhere. You just produce your piece of paper and say, “Sorry, it says here that the XYZ unit must be removed for repair.”
In one of my courses some years ago, a participant recounted the following astonishing story:
A well‑dressed business man appeared at the offices of a large firm one day and appropriated an unused cubicle. He seemed to know his way around and quickly obtained a terminal to the host, pencils, pads, and so on. Soon, he was being invited out to join the other employees for lunch; at one point he was invited to an office party. During all this time, he never wore an employee badge and never told anyone exactly what he was doing. “Special research project,” he would answer with a secretive air. Two months into his tenure, my course participant, a feisty information security officer, noticed this man as she was walking through his area of the office. She asked others who he was and learned that no one knew. She asked the man for his employee ID, but he excused himself and hurried off. At this point, the security officer decided to call for the physical security guards. She even prevented the mystery man’s precipitous departure by running to the only elevator on the floor and diving into it before he could use it to escape.
It turned out that the man was a fired employee who was under indictment for fraud. He had been allowed into the building every morning by a confederate, a manager who was also eventually indicted for fraud. The manager had intimidated the security guards into allowing the “consultant” into the building despite official rules requiring everyone to have and wear valid employee passes. The more amazing observation is that in two months of unauthorized computer and office use, this man was never once stopped or reported by the staff working in his area.
This case illustrates the crucial importance of a sound corporate culture in ensuring that security rules are enforced.
Because so many people are hesitant to get involved in enforcing security rules, I recommend that security training include practice simulations of how to deal with unidentified people; anyone spotting such a person should call facilities security at once. One can even run drills by letting people know that there will be deliberate violations of the badge rule and that the first person to report the unbadged "intruder" will win a prize. Naturally, one should not terminate such practice drills – just keep them going indefinitely. Sooner or later, someone will report a real intruder.
This method of spotting intruders will fail, however, if authorized employees consistently fail to wear visible identification at all times on the organization’s property. The most common reason for such delinquency is that upper managers take off their badges as an unfortunate sign of high social status; naturally, eventually all employees end up taking off their badges. And then, since all it takes to look like one of the gang is not wearing an ID, the street door may as well be kept unlocked with a large sign pointing into the building reading, “Come steal stuff here.”
One of the most common forms of computer crime is data diddling – illegal or unauthorized data alteration. These changes can occur before and during data input or before output. Data diddling cases have included banks, payrolls, inventory, credit records, school transcripts, and virtually all other forms of data processing known.
One of the classic data diddling frauds was the Equity Funding case, which began with computer problems at the Equity Funding Corporation of
The computer problems occurred just before the close of the financial year in 1964. An annual report was about to be printed, yet the final figures simply could not be extracted from the mainframe. In despair, the head of data processing told the president the bad news; the report would have to be delayed. Nonsense, said the president expansively (in the movie, anyway); simply make up the bottom line to show about $10,000,000.00 in profits and calculate the other figures so it would come out that way. With trepidation, the DP chief obliged. He seemed to rationalize it with the thought that it was just a temporary expedient, and could be put to rights later anyway in the real financial books.
The expected profit didn’t materialize, and some months later, it occurred to the executives at Equity that they could keep the stock price high by manufacturing false insurance policies which would make the company look good to investors. They therefore began inserting false information about nonexistent policy holders into the computerized records used to calculate the financial health of Equity.
In time, Equity's corporate staff got even greedier. Not content with jacking up the price of their stock, they decided to sell the policies to other insurance companies via the redistribution system known as re‑insurance. Re‑insurance companies pay money for policies they buy and spread the risk by selling parts of the liability to other insurance companies. At the end of the first year, the issuing insurance companies have to pay the re‑insurers part of the premiums paid in by the policy holders. So in the first year, selling imaginary policies to the re‑insurers brought in large amounts of real cash. However, when the premiums came due, the Equity crew "killed" imaginary policy holders with heart attacks, car accidents, and, in one memorable case, cancer of the uterus – in a male imaginary policy-holder.
By late 1972, the head of DP calculated that by the end of the decade, at this rate, Equity Funding would have insured the entire population of the world. Its assets would surpass the gross national product of the planet. The president merely insisted that this showed how well the company was doing.
The scheme fell apart when an angry operator who had to work overtime told the authorities about shenanigans at Equity. Rumors spread throughout Wall Street and the insurance industry. Within days, the Securities and Exchange Commission had informed the California Insurance Department that they’d received information about the ultimate form of data diddling: tapes were being erased. The officers of the company were arrested, tried, and condemned to prison terms.
What can we learn from the Equity Funding scandal? Here are some thoughts for discussion:
As managers, make it clear in writing and behaviour that no illegality will be tolerated in your organization. Provide employees with information on what to do if their complaints of malfeasance are not taken seriously by their superiors. You may demonstrate the seriousness of your commitment to honesty by including instructions on how to reach legal or regulatory authorities.
As employees, be suspicious of any demands that you break documented rules, unspoken norms of data processing, or the law. For example, if you are asked to fake a delay in running a program–for any ostensible reason whatsoever–write down the time and date of the request and who asked you to do it. I know that it’s easy to give advice when one doesn’t bear the consequences, but at least see if it’s possible to determine why you are being asked to dissimulate. If you’re braver than most people, you can try seeing what happens if you flatly refuse to lie. Who knows, you might be the pin that bursts whatever bubble your superiors are involved in.
If you notice an irregularity–e.g., a high‑placed official apparently doing extensive data entry–see if you can discreetly find out what’s happening. See what kind of response you get if you politely inquire about it. If a high‑placed employee tries to enter the computer room without authorization, refuse access until your own supervisor authorizes entry–preferably in writing.
If you do come to the conclusion that a crime is being committed, inform your supervisor–if (s)he seems to be honest. Otherwise, inform the appropriate civic or other authorities when you have evidence and your doubts are gone. At least you can escape being arrested yourself as a co‑conspirator.
“Superzap” was an IBM utility that bypassed normal operating system controls. The term eventually became a generic word; with such a program, a user with the appropriate access and privileges could read, modify, or destroy any data on the system, whether in memory or on disk. Such tools can sometimes allow the user to avoid leaving an audit trail. Worse, normal application controls may be ignored; e.g., requirements for referential integrity in databases, respect for business rules, and authorization restrictions limiting access to specific people or roles.
What kinds of utilities qualify as superzaps?
In my own experience, I was told by one customer, a service bureau, that one of its customers regularly used a superzap program to modify production data. Other than warning the managers that such a procedure is inherently risky, there was nothing the bureau could do about it.
When I was running operations at a service bureau in the 1980s, I discovered that a programmer made changes directly in spoolfiles (spooled print files) on a monthly basis to correct a persistent error that had never been fixed in the source code. If such shenanigans were going on in a mere report, what might be happening in, say, print runs of checks?
So why tolerate superzaps at all?
Superzap programs serve us well in emergencies. No matter how well planned and well documented, any system can fail. If a production system error has to be circumvented NOW, patching a program, fixing a database pointer, or repairing an incorrect check-run spoolfile may be the very best solution as long as the changes are authorized, documented, and correct. However, repeated use of such utilities to fix the same problems indicates a problem of priorities. Fix the problem now, yes; but find out what caused the problem and solve the root causes as well.
Powerful system utilities that bypass normal controls can be used to damage data and code. Network managers can control such “superzap” programs by limiting access to them; software designers can help network managers by enforcing capability checking at run-time.
Security systems using menus can restrict users to specific tasks; the usual security matrix can prevent unauthorized access to powerful utility programs. Some programs themselves can check to see that prospective users actually have appropriate capabilities (e.g., root access). Ad hoc query programs can sometimes be restricted to read-only in any given database.
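As a small illustration of such a run-time capability check (a Unix-flavoured sketch; the group name is an arbitrary example, not a standard), a maintenance tool could refuse to start unless the caller is root or belongs to a designated administrative group:

import grp, os, pwd, sys

ALLOWED_GROUP = "dbadmin"    # illustrative administrative group name

def caller_is_authorized():
    if os.geteuid() == 0:                          # effective user is root
        return True
    try:
        members = grp.getgrnam(ALLOWED_GROUP).gr_mem
    except KeyError:                               # group not defined on this host
        return False
    return pwd.getpwuid(os.getuid()).pw_name in members

if not caller_is_authorized():
    sys.exit("Refusing to run: insufficient privileges for this utility.")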
On some systems, access control lists (ACLs) permit explicit inclusion of user sets which may access a file (including superzap programs) for read and write operations.
Aside from using normal operating system security, one can also disable programs temporarily in ways which interfere with (they don’t preclude) unauthorized access; e.g., a system manager can reversibly remove the capabilities allowing interactive or batch execution from dangerous programs.
It may be desirable to eliminate certain tools altogether from general availability. For example, special diagnostic utilities which replace the operating system should routinely be inaccessible to unauthorized personnel. Such diagnostic tools could be kept in a safe, for example, with written authorization required for access. In an emergency, the combination to the safe might be obtained from a sealed, signed envelope which would betray its having been opened. I can even imagine a cartoon showing a sealed glass box containing such an envelope on the computer room wall with the words, “IN CASE OF EMERGENCY, BREAK GLASS” to be sure that the emergency crew could get the disk or cartridge if it had to.
When printing important files such as runs of checks, it may be wise to print "hot" instead of spooling the output. That is, have the program generating the check images control a secured printer directly rather than passing through the usual buffers. Make sure that the printer is in a locked room. Arrange to have at least two employees watching the print run. If a paper jam requires the run to be started again, arrange for appropriate parameters to be passed to prevent printing duplicates of checks already produced.
Regardless of all the access-control methods described above, if an authorized user wishes to misuse a superzap program, there is only one way to prevent it: teamwork. By insisting that all use of superzaps be done with at least two members of the staff present, one can reduce the likelihood of abuse. Reduce, not eliminate: there is always the possibility of collusion. Nonetheless, if only a few percent (say, two percent for the sake of the argument) of all employees are potential crooks, then the probability of getting two crooks on the same assignment by chance alone is about 0.04%. True, the crooks may cluster together preferentially, but in any case, having two people using privileged-mode DEBUG to fix data in a database seems better than having just one.
One method that will certainly NOT work is the ignorance-is-bliss approach. I have personally heard many network managers dismiss security concerns by saying, “Oh, no one here knows enough to do that.” This is a short-sighted attitude, since almost everything described above is fully documented in vendor and contributed software library publications. Recalling that managers are liable for failures to protect corporate assets, I urge all network managers to think seriously about these and other security issues rather than leaving them to chance and the supposed ignorance of a user and programmer population.
Sometimes it’s the little details that destroy the effectiveness of network security. Firewalls, intrusion-detection systems, token-based and biometric identification and authentication – all of these modern protective systems can be circumvented by criminals who take advantage of what few people ever think about: garbage.
Computer crime specialists have described unauthorized access to information left on discarded media as scavenging, browsing, and Dumpster‑diving (from the trademarked name of metal bins often used to collect garbage outside office buildings).
Discarded garbage is not considered private property under the law in the
“The Fourth Amendment does not prohibit the warrantless search and seizure of garbage left for collection outside the curtilage of a home.... Since respondents voluntarily left their trash for collection in an area particularly suited for public inspection, their claimed expectation of privacy in the inculpatory items they discarded was not objectively reasonable. It is common knowledge that plastic garbage bags left along a public street are readily accessible to animals, children, scavengers, snoops, and other members of the public. Moreover, respondents placed their refuse at the curb for the express purpose of conveying it to a third party, the trash collector, who might himself have sorted through it or permitted others, such as the police, to do so. The police cannot reasonably be expected to avert their eyes from evidence of criminal activity that could have been observed by any member of the public.....”
In other words, anything we throw out is fair game, at least in the
NewsScan authors John Gehl and Suzanne Douglas summarized the rest of the story as follows: In mid-2000, Microsoft . . . [complained] that various organizations allied to it have been victimized by industrial espionage agents who attempted to steal documents from trash bins. The organizations include the Association for Competitive Technology in
Saying he was exercising a "civic duty," Oracle chairman and founder Lawrence J. Ellison defended his company against suggestions that Oracle's behavior was "Nixonian" when it hired private detectives to scrutinize organizations that supported Microsoft's side in the antitrust suit brought against it by the government. The investigators went through trash from those organizations in attempts to find information that would show that the organizations were controlled by Microsoft. Ellison, who, like his nemesis Bill Gates at Microsoft, is a billionaire, said, "All we did was to try to take information that was hidden and bring it into the light," and added: "We will ship our garbage to [Microsoft], and they can go through it. We believe in full disclosure." "The only thing more disturbing than Oracle's behavior is their ongoing attempt to justify these actions," Microsoft said in a statement. "Mr. Ellison now appears to acknowledge that he was personally aware of and personally authorized the broad overall strategy of a covert operation against a variety of trade associations." (New York Times
Discarded information can reside on paper, magnetic disks and tapes, and even electronic media such as PC-card ram disks. All of them have special methods for obliterating the unwanted information. I don’t want to spend much time on paper, carbon papers, and printer ribbons; the obvious methods for disposing of these media are so simple they need little explanation. One should ensure that sensitive paper documents are shredded; the particular style of shredding depends on the degree of sensitivity and the volume of sensitive papers. Cross-cut shredders, locked recycling boxes and secure shredding services that reliably take care of such problems are well established in industry.
At this point, I suggest that readers take a look around their own operations and find out how discarded paper, electronic and magnetic media containing confidential information are currently handled. With this information in hand, you’ll be able to read the upcoming articles with your own situation well in mind.
The first area to look at is the least obvious: electronic storage. Data are stored in the main random-access memory (RAM, as in "This computer has 128 MB of RAM") in computers whenever the data are in use. Until the system is powered off, data can be captured through memory dumps and stored on non-volatile media such as CD-ROM. Forensic specialists use this approach as one of the most important steps in seizing evidence from systems under investigation. However, criminals with physical access to a PC or other computer may be able to do the same if there is inadequate logging enabled on the system. Furthermore, even if the system is powered off and rebooted, thus destroying the contents of main memory, most systems use virtual memory (VM) which extends main memory by swapping data to and from a reserved area of a hard disk. Examining the hard disk (usually with special forensic software) allows a specialist to locate a great deal of information from RAM such as keyboard, screen and file buffers and process stacks (containing the global variables used by a program plus the data in use by subroutines at the time the swap occurred). Although there is never a guarantee of what will be found in the swap file, rummaging around with text-search tools can reveal logon IDs, passwords, and fragments of recently active and possibly confidential documents. The most alarming aspect of swap files is that they may contain cleartext versions of encrypted files; any decryption algorithm necessarily has to put a decrypted version of the ciphertext somewhere in memory to make it accessible by the authorized user of the decryption key.
Physical protection of a workstation to preclude access to the hardware is the most cost-effective mechanism for preventing scavenging via the swap files as well as to reduce scavenging of disk-resident data. Tools such as secure cabinets, anti-theft cables, movement-sensitive alarms, locks for diskette drives, and special screws to make it more difficult to enter the processor card cage all make illicit or undetected access more difficult.
While we’re on the topic of RAM, most handheld computers use RAM for storage. What happens when you have to return such a system for repairs? Users can set passwords to hide information on some systems (e.g., Palm Pilots) but there are lots of programs for cracking the passwords of these devices. If it is possible to overwrite memory completely, I recommend that the user do so before having the device repaired or exchanged. If the system is nonfunctional, administrators should decide whether the relatively low cost of replacing the unit is justified to maintain security. Old handheld computers make excellent and original coasters for hot or cold drinks; they can also be used as very short-lived Frisbees.
One issue worth mentioning in connection with disks is that some documents may contain more information than the sender intends to release. MS-Office documents, for example, have a PROPERTIES sheet that some people never seem to check before sending their documents to others. I have noticed Properties sheets with detailed Comments or Keywords fields that reveal far too much about the motives underlying specific documents; others include detailed or out-dated information about reporting structures such as the name of the sender’s manager (a real treat for social engineering adepts). Users of MS-Word should turn off the FAST SAVE “feature” that was useful when saving to slow media such as floppy disks but that is now completely useless and even dangerous: FAST SAVE allows deleted materials to remain in the MS-Word document. Worse yet is the danger of turning on “TOOLS | TRACK CHANGES” but turning off the options to “Highlight changes on screen” and “Highlight changes in printed document.” In this configuration, Word maintains a meticulous record of exactly who made which changes – including deletions – in the document but does not display the audit trail. Someone receiving such a document can restore the display functions at the click of a mouse and read potentially damaging information about corporate intentions, background information and bargaining positions. All documents destined for export should be checked for properties and track changes. My own preference when exchanging documents is to create a PDF (Portable Document Format) file using Adobe Acrobat – and to check the output to see that it conforms to my expectations.
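One way to audit such metadata in bulk before documents leave the organization is to inspect the properties programmatically. The sketch below uses the third-party python-docx package, which reads the modern .docx format only; the older binary .doc files and tracked-changes content need other tools.

from docx import Document   # third-party python-docx package

def audit(path):
    # Report any document properties that might reveal more than intended.
    props = Document(path).core_properties
    findings = {}
    for field in ("author", "last_modified_by", "comments", "keywords", "category"):
        value = getattr(props, field)
        if value:
            findings[field] = value
    return findings

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        print(path, audit(path))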
What should network administrators do about sensitive information on hard disks that are being sent out to third parties as part of workstations that need repairs, in exchange programs or as charitable donations?
In general, the most important method for protecting sensitive data on disk is encryption. If you routinely encrypt all sensitive data, then only the swap file will be of concern (see the previous column in this series). However, many organizations do not require encryption on desktop systems even if laptop systems must use encrypting drivers. If you decide that the hard disk should be “wiped” before sending it out, be sure that you use adequate tools for such wiping.
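As a minimal illustration of encryption at rest, the following Python sketch uses the third-party “cryptography” package (my choice for the example; any reputable encryption product or encrypting file system serves the same purpose). The file name is a placeholder, and of course the key must be stored somewhere safer than the disk it protects.

    # Sketch only: encrypt a sensitive file with the third-party "cryptography" package.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice, protect and back up this key
    cipher = Fernet(key)

    with open("customer-list.xls", "rb") as f:        # illustrative file name
        ciphertext = cipher.encrypt(f.read())

    with open("customer-list.xls.enc", "wb") as f:
        f.write(ciphertext)

    # later, with the key, the plaintext can be recovered:
    # plaintext = Fernet(key).decrypt(ciphertext)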
As many readers know, deleting a file under most operating systems usually means removing the pointer to the first part (extent, cluster) of the file from the disk directory (file allocation table or FAT under the Windows operating systems). The first character of the file name may be obliterated, but otherwise the data remain unchanged in the now-discarded file. Unless the disk sectors are allocated to another file and overwritten by new data, the original data will remain accessible to utilities that can reconstruct the file by searching the unallocated clusters all over the disk and offering a menu of potentially recoverable data. With the size of today’s hard disks, free space can run to gigabytes, so the clusters containing discarded data may not be overwritten for a long time.
Quick formatting a disk drive reinitializes file system structures such as the file allocation table but leaves the raw file data untouched. Full formatting using the operating system is a high-level format that leaves data in a recoverable state. Low-level formatting is normally carried out at the factory and establishes sectors, cylinders and address information for accessing the drive. Low-level formatting may render all data previously stored on a disk inaccessible to the operating system but not necessarily to specialized recovery programs.
One inadequate method for obliterating data that I have heard people recommend is regular defragmentation. Moving existing files around on disk so that each file occupies contiguous blocks of disk space will likely overwrite some recently freed clusters. However, there is no guarantee that existing free space containing data residues will be overwritten.
It is best to obliterate sensitive hard disk data at the time you discard the files. File shredder programs (use any search engine with keywords “file shredder program review” for plenty of suggestions) can substitute for the normal delete function or wastebasket. These tools overwrite the contents of a file to be discarded before deleting it with the operating system. However, a single-pass shredder may allow data to be recovered using special equipment; for greater assurance, use military-grade obliteration that makes seven passes of random data.
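The following Python sketch shows the essential idea behind a simple shredder: overwrite the file in place, force the overwrite onto the device, and only then delete it. It is illustrative only; a single pass falls short of the multi-pass standards just mentioned, and journaling or copy-on-write file systems may keep older copies of the data elsewhere on disk.

    # Sketch only: what a simple single-pass file "shredder" does.
    import os

    def shred(path, passes=1, chunk=1 << 20):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                remaining = size
                while remaining > 0:
                    n = min(chunk, remaining)
                    f.write(os.urandom(n))    # overwrite with random bytes
                    remaining -= n
                f.flush()
                os.fsync(f.fileno())          # push the overwrite to the device
        os.remove(path)                       # only then delete the file

    shred("old-customer-list.xls")            # file name is illustrative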
Unfortunately, even shredder programs may not solve the problem for ultra-high-sensitivity data. Because file systems generally allocate space in whole numbers of clusters, an end-of-file (EOF) that falls anywhere short of the end of a cluster leaves slack space between the EOF and the end of the cluster. Slack space does not normally get overwritten by the file system, so it is extremely difficult to get rid of these fragments unless you use shredder programs that specifically take this problem into account.
One tool that is used by the US Department of Defense for wiping disks is WipeDrive < http://www.whitecanyon.com/wipedrive.php >. The documentation specifies that the product genuinely wipes all data from a hard drive, regardless of operating system and format. The tool can even be run from a boot disk. It is licensed to individual technicians rather than to specific PCs, thus making it ideal for corporate use. [I have no involvement with WipeDrive or its makers, and this reference does not constitute an endorsement.]
File shredder programs are a double-edged sword. They allow honest employees to obliterate company-confidential data from disks but they also allow dishonest employees to obliterate incriminating information from disks. One program review includes the words, “The program’s even got a trial copy you can download for free. So try it out and get those... ummm... errr... personal files off your work PC before the boss sends his computer gurus out to check your machine.” This advice is clearly not directed at system administrators or at honest employees.
Telling the difference between the good guys and the bad guys is a management issue and has been discussed in previous articles published in this newsletter. However, as a precaution, I recommend that corporate policies specifically forbid the installation of file-shredder programs on corporate systems without authorization.
One quick note about magnetic tapes: beware the scratch tape. In older environments where batch processing still uses tapes as intermediate storage space during jobs, it is customary to have a rack of “scratch” tapes that can be used on demand by any application or job. There have been documented cases in which data thieves regularly read scratch tapes to scavenge left-over data from competitors or for industrial espionage. Scratch tapes should be erased before being re-used.
As for broken or obsolete magnetic media such as worn-out diskettes, used-up magnetic tapes and dead disk drives, the worst thing to do is just to throw this stuff into the regular garbage. Security experts recommend physical destruction of such media using band saws, industrial incineration services capable of handling potentially toxic emissions and even sledge hammers.
In conclusion, all of us need to think about the data residues that are exposed to scavengers. Whether you work in a mainframe shop or a PC environment, whether your organization is a university or a vulture capitalist firm, it’s hard to, ah, carrion when data scavengers steal our secrets.
Some of my younger students have expressed bewilderment over the term Trojan “horse.” They associate “Trojan” with condoms and with evil programs. Here’s the original story:
[The Horse is then dragged into the walled city of Troy.] ...In the night, the armed men who were enclosed in the body of the horse...opened the gates of the city to their friends, who had returned under cover of the night. The city was set on fire; the people, overcome with feasting and sleep, put to the sword....
Bulfinch’s Mythology thus describes the original Trojan Horse. See < http://homepage.mac.com/cparada/GML/WOODENHORSE.html > for extensive information about the story. Today’s electronic Trojan is a program which conceals dangerous functions behind an outwardly innocuous form.
One of the nastiest tricks played on the shell‑shocked world of microcomputer users was the FLU‑SHOT‑4 incident of March 1988. With the publicity given to damage caused by destructive, self‑replicating virus programs distributed through electronic bulletin board systems (BBS), it seemed natural that public‑spirited programmers would rise to the challenge and provide protective screening.
He reported the incident immediately to his superior officers. Panic ensued until it was found that a program called JOKE.RUN had been assigned to the function key. The program merely listed file names with “DELETING...” in front of each. No files had actually been deleted. Investigation found the programmer responsible; the joke had originally been directed at a fellow programmer, but the redefinition of the function key had accidentally found its way onto the installation diskettes for a revision of the workstation software. It took additional hours to check every single workstation on the base looking for this joke. The programmer’s career was not enhanced by this incident.
Some of the first PC Trojans included
Trojan attacks on the Internet were discovered in late 1993. Full information about all such attacks is available on the World Wide Web site run by CIAC, the Computer Incident Advisory Capability of the U.S. Department of Energy.
CIAC and other response teams have observed many compromised systems surreptitiously monitoring network traffic, obtaining username, password, host‑name combinations (and potentially other sensitive information) as users connect to remote systems using telnet, rlogin, and ftp. This is for both local and wide area network connections. The intruders may (and presumably do) use this information to compromise new hosts and expand the scope of the attacks. Once system administrators discover a compromised host, they must presume monitoring of all network transactions from or to any host “visible” on the network for the duration of the compromise, and that intruders potentially possess any of the information so exposed. The attacks proceed as follows. The intruders gain unauthorized, privileged access to a host that supports a network interface capable of monitoring the network in “promiscuous mode,” reading every packet on the network whether addressed to the host or not. They accomplish this by exploiting unpatched vulnerabilities or learning a username, password, host‑name combination from the monitoring log of another compromised host. The intruders then install a network monitoring tool that captures and records the initial portion of all network traffic for ftp, telnet, and rlogin sessions. They typically also install “Trojan” programs for login, ps, and telnetd to support their unauthorized access and other clandestine activities.
System administrators must begin by determining whether intruders have compromised their systems. The CIAC works closely with CERT-CC, the Computer Emergency Response Team Coordination Center.
A few weeks later, CIAC issued Bulletin E-12, which warned ominously,
The number of Internet sites compromised by the ongoing series of network monitoring (sniffing) attacks continues to increase. The number of accounts compromised world‑wide is now estimated to exceed 100,000. This series of attacks represents the most serious Internet threat in its history.
IMPORTANT: THESE NETWORK MONITORS DO NOT SPECIFICALLY TARGET INFORMATION FROM UNIX SYSTEMS; ALL SYSTEMS SUPPORTING NETWORK LOGINS ARE POTENTIALLY VULNERABLE. IT IS IMPERATIVE THAT SITES ACT TO SECURE THEIR SYSTEMS.
The attacks are based on network monitoring software, known as a “sniffer”, installed surreptitiously by intruders. The sniffer records the initial 128 bytes of each login, telnet, and FTP session seen on the local network segment, compromising ALL traffic to or from any machine on the segment as well as traffic passing through the segment being monitored. The captured data includes the name of the destination host, the username, and the password used. This information is written to a file and is later used by the intruders to gain access to other machines.
Finally, another CIAC alert (E-20, May 6, 1994) warned of “A Trojan‑horse program, CD‑IT.ZIP, masquerading as an improved driver for Chinon CD‑ROM drives, [which] corrupts system files and the hard disk.” This program affects any MS-DOS system where it is executed.
1997-04-29 The Department of Energy’s Computer Incident Advisory Capability (CIAC) warned users not to fall prey to the AOL4FREE.COM Trojan, which tries to erase files on hard drives when it is run. A couple of months later, the NCSA worked with AOL technical staff to issue a press release listing the many names of additional Trojans; these run as TSRs (Terminate - Stay Resident programs) and capture user IDs and passwords, then send them by e-mail to Bad People. Reminder: do NOT open binary attachments at all from people you don’t know; scan all attachments from people you do know with anti-virus and anti-Trojan programs before opening. (EDUPAGE)
1997-11-06 Viewers of pornographic pictures on the sexygirls.com site were in for a surprise when they got their next phone bills.
1998-01-05 Jared Sandberg, writing in the Wall Street Journal, reported on widespread fraud directed against naïve AOL users by criminals using widely-distributed Trojan Horse programs (“proggies”) to steal passwords. Another favorite trick that fools gullible users is the old “We need your password” popup that claims to be from AOL administrators. AOL reminds everyone that no one from AOL will ever ask users for their passwords.
1999-01-29 Peter Neumann summarized a serious case of software contamination in RISKS 20.18: At least 52 computer systems downloaded a TCP wrapper program directly from a distribution site after the program had been contaminated with a Trojan horse early in the morning of
1999-05-28 Network Associates Inc. anti-virus labs warned of a new Trojan called BackDoor-G being sent around the Net as spam in May. Users were tricked into installing “screen savers” that were nothing of the sort. The Trojan resembled the previous year’s Back Orifice program in providing remote administration — and back doors for criminals to infiltrate a system. A variant called “Armageddon” appeared within days in
1999-06-11 The Worm.Explore.Zip (aka “Trojan Explore.Zip”) worm appeared in June as an attachment to e-mail masquerading as an innocuous compressed WinZIP file. The executable file used the icon from WinZIP to fool people into double-clicking it, at which time it began destroying files on disk. Within a week of its discovery in
1999-09-20 A couple of new Y2K-related virus/worms packaged as Trojan Horses were discovered in September. One e-mail Trojan called “Y2Kcount.exe” claimed that its attachment was a Y2K-countdown clock; actually it also sent user IDs and passwords out into the Net by e-mail. Microsoft reported finding eight different versions of the e-mail in circulation on the Net. The other, named “W32/Fix2001” came as an attachment ostensibly from the system administrator and urged the victims to install the “fix” to prevent Internet problems around the Y2K transition. Actually, the virus/worm would replicate through attachments to all outbound e-mail messages from the infected system. [These malicious programs are called “virus/worms” because they integrate into the operating system (i.e., they are virus-like) but also replicate through networks via e-mail (i.e., they are worm-like).]
2000-01-03 Finjan Software Blocks Win32.Crypto the First Time: Finjan Software, Inc. announced that its proactive first-strike security solution, SurfinShield Corporate, blocks the new Win32.Crypto malicious code attack. Win32.Crypto, a Trojan executable program released in the wild today, is unique in that infected computers become dependent on the Trojan as a “middle-man” in the operating system. Any attempt to disinfect it will result in the collapse of the operating system itself. It is a new kind of attack with particularly damaging consequences because attempting to remove the infection may render the computer useless and force a user to rebuild their system from scratch.
2000-08-29 Software companies . . . reported that the first . . . [malware] to target the Palm operating system has been discovered. The bug, which uses a “Trojan horse” strategy to infect its victims, comes disguised as pirated software purported to emulate a Nintendo Gameboy on Palm PDAs and then proceeds to delete applications on the device. The . . . [malware] does not pose a significant threat to most users, says Gene Hodges, president of Network Associates’ McAfee division, but signals a new era in technological vulnerability: “This is the beginning of yet another phase in the war against hackers and virus writers. In fact, the real significance of this latest Trojan discovery is the proof of concept that it represents.” (Agence France-Presse)
2000-10-27 Microsoft’s internal computer network was invaded by the QAZ “Trojan horse” software that caused company passwords to be sent to an e-mail address in
However, within a few days, Microsoft . . . [said] that network vandals were able to invade the company’s internal network for only 12 days (rather than 5 weeks, as it had originally reported), and that no major corporate secrets were stolen. Microsoft executive Rick Miller said: “We started seeing these new accounts being created, but that could be an anomaly of the system. After a day, we realized it was someone hacking into the system.” At that point Microsoft began monitoring the illegal break-in, and reported it to the FBI. Miller said that, because of the immense size of the source code files, it was unlikely that the invaders would have been able to copy them. (AP/Washington Post
2002-01-19 A patch for a vulnerability in the AOL Instant Messenger (AIM) program was converted into a Trojan horse that initiated unauthorized click-throughs on advertising icons, divulged system information to third parties and browsed to porn sites.
2002-03-11 The “Gibe” worm was circulated in March 2002 as a 160KB EXE file attached to a cover message pretending to be a Microsoft alert explaining that the file was a “cumulative patch” and pointing vaguely to a Microsoft security site. Going to the site showed no sign of any such patch, nor was there a digital signature for the file. However, naive recipients were susceptible to the trick. [MORAL: keep warning recipients not to open unsolicited attachments in e-mail.]
2002-04-03 Nicholas C. Weaver warned in RISKS that the company Brilliant Digital (BD) formally announced distribution of Trojan software via the Kazaa peer-to-peer network software. The BD software would create a P2P server network to be used for distributed storage, computation and communication -- all of which would pose serious security risks to everyone concerned. Weaver pointed out that today’s naïve users appear to be ready to agree to anything at all that is included in a license agreement, whether it is in their interests or not.
2003-02-14 E-mail purporting to offer revealing photos of Catherine Zeta-Jones, Britney Spears, and other celebrities is actually offering something quite different: the secret installation of Trojan horse software that can be used by intruders to take over your computer. Users of the Kazaa file-sharing service and IRC instant messaging are at risk. (Reuters/USA Today)
2003-05-22 Data security software developer Kaspersky Labs reports that a new Trojan program, StartPage, is exploiting an Internet Explorer vulnerability for which there is no patch. If a patch is not released soon, other viruses could exploit the vulnerability. StartPage is sent to victim addresses directly from the author and does not have an automatic send function. The program is a Zip-archive that contains an HTML file. Upon opening the HTML file, an embedded Java-script is launched that exploits the “Exploit.SelfExecHtml” vulnerability and clandestinely executes an embedded EXE file carrying the Trojan program.
2003-07-14 Close to 2,000 Windows-based PCs with high-speed Internet connections have been hijacked by a stealth program and are being used to send ads for pornography, computer security experts warned. It is unknown exactly how the Trojan (dubbed “Migmaf” for “migrant Mafia”) is spreading to victim computers around the world, whose owners most likely have no idea what is happening, said Richard M. Smith, a security consultant in
2004-01-08 BackDoor-AWQ.b is a remote access Trojan written in Borland Delphi, according to McAfee, which issued an alert Tuesday, January 6. An email message constructed to download and execute the Trojan is known to have been spammed to users. The spammed message is constructed in HTML format. It is likely to have a random subject line, and its body is likely to bear a head portrait of a lady (loaded from a remote server upon viewing the message). The body contains HTML tags to load a second file from a remote server. This file is MIME-encoded and contains the remote access Trojan (base64 encoded). Upon execution, the Trojan installs itself into the %SysDir% directory as GRAYPIGEON.EXE. A DLL file is extracted and also copied to this directory (where %SysDir% is the Windows System directory, for example C:\WINNT\SYSTEM32). The following Registry key is added to hook system startup: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce “ScanRegedit” = “%SysDir%\GRAYPIGEON.EXE”. The DLL file (which contains the backdoor functionality) is injected into the EXPLORER.EXE process on the victim machine. More information, including removal instructions, can be found at:
2004-01-09 A Trojan horse program that appears to be a Microsoft Corp. security update can download malicious code from a remote Web site and install a back door on the compromised computer, leaving it vulnerable to remote control. Idefense Inc., a Reston,
2004-03-17 The U.S. Department of Homeland Security has alerted computer security experts about the Phatbot Trojan, which snoops for passwords on infected computers and tries to disable firewall and antivirus software. Phatbot . . . has proved difficult for law enforcement authorities and antivirus companies to fight.... Mikko Hypponen, director of the antivirus software company F-Secure in Finland says, “With these P2P Trojan networks, even if you take down half of the affected machines, the rest of the network continues to work just fine”; security expert Russ Cooper of TruSecure warns, “If there are indeed hundreds of thousands of computers infected with Phatbot, U.S. e-commerce is in serious threat of being massively attacked by whoever owns these networks.”
2004-05-12 Intego has identified a Trojan horse -- AS.MW2004.Trojan -- that affects Mac OS X. This Trojan horse, when double-clicked, permanently deletes all the files in the current user’s home folder. Intego has notified Apple, Microsoft and the CERT, and has been working in close collaboration with these companies and organizations. The AS.MW2004.Trojan is a compiled AppleScript applet, a 108 KB self-contained application, with an icon resembling an installer for Microsoft Office 2004 for Mac OS X. This AppleScript runs a Unix command that removes files, using AppleScript’s ability to run such commands. The AppleScript displays no messages, dialogs or alerts. Once the user double-clicks this file, their home folder and all its contents are deleted permanently. All Macintosh users should only download and run applications from trusted sources.
2004-05-18 Security experts are tracking two new threats that have emerged in the past few days, including a worm that uses seven mechanisms to spread itself. The worm is known as Kibuv, and researchers first noticed its presence Friday, May 14. Kibuv affects all versions of Windows from 98 through Windows Server 2003 and attempts to spread through a variety of methods, including exploiting five Windows vulnerabilities and connecting to the FTP server installed by the Sasser worms. The worm has not spread too widely as of yet, but with its variety of infection methods, experts say the potential exists for it to infect a large number of machines. The second piece of malware that has surfaced is a Trojan that is capable of spreading semi-automatically. Known as Bobax, the Trojan can only infect machines running Windows XP and seems to exist solely for the purpose of sending out large amounts of spam. When ordered to scan for new machines to infect, Bobax spawns 128 threads and begins scanning for PCs with TCP port 5000 open. If the port is open, it exploits the Windows LSASS vulnerability. Bobax then loads a copy of itself onto the new PC, and the process repeats. Antivirus and antispam providers say they have seen just a few machines infected with Bobax as of Tuesday, May 18.
2004-05-20 A Trojan horse may be responsible for an online banking scam that has cost at least two
2004-08-10 Malicious code that dials premium rate numbers without a user’s consent has been found in a pirated version of Mosquitos 2.0, a popular game for Symbian Series 60 smartphones. The illicit copies of the game are circulating over P2P networks. News of the Symbian Trojan dialler comes days after the arrival of the first Trojan for handheld computers running the Windows Pocket PC operating system, Brador-A.
2004-10-25 An e-mail disguised as a Red Hat patch update is a fake designed to trick users into downloading malware designed to compromise the systems they run on, the Linux vendor warned in a message on its Website. While the malicious site was taken down over the weekend, the SANS Internet Storm Center posted a message on its Website saying the hoax “is a good reminder that even though most of these are aimed at Windows users, always be suspect when receiving an e-mail asking you to download something.”
2004-11-23 A new attack by Trojan Horse software known as “Skulls” targets Nokia 7610 cell phones, rendering infected handsets almost useless. The program appears to be a “theme manager” for the phone. It replaces most of an infected phone’s program icons with images of skulls and crossbones, and disables all of the default programs on the phone (calendar, phonebook, camera, Web browser, SMS applications, etc.) -- i.e., essentially everything except normal phone calls. Symbian, the maker of the Nokia 7610 operating system, says that users will only be affected if they knowingly and deliberately install the file and ignore the warnings that the phone displays at the conclusion of the installation process. Experts don’t consider the Skulls malware to be a major threat, but note that it’s the third mobile phone bug to appear this year -- and therefore probably means that this kind of problem is here for the foreseeable future. (ENN Electronic News.net)
2005-01-13 Users are being warned about the Cellery worm -- a Windows virus that piggybacks on the hugely popular Tetris game. Rather than spreading itself via e-mail, Cellery installs a playable version of Tetris on the user’s machine. When the game starts up, the worm seeks out other computers it can infect on the same network. The virus does no damage, but could result in clogged traffic on heavily infected networks. “If your company has a culture of allowing games to be played in the office, your staff may believe this is simply a new game that has been installed -- rather than something that should cause concern,” says a spokesman for computer security firm Sophos. (BBC News)
2005-01-24 Two new Trojan horse programs, Gavno.a and Gavno.b, masquerade as patch files designed to trick users into downloading them, says Aaron Davidson, chief executive officer of SimWorks International. Although almost identical with Gavno.a, Gavno.b contains the Cabir worm, which attempts to send a copy of the Trojan horse to other nearby Symbian-based phones via short-range wireless Bluetooth technology. The Gavno Trojans, according to Davidson, are the first to aim at disrupting a core function of mobile phones -- telephony -- in addition to other applications such as text messaging, e-mail, and address books. Gavno.a and Gavno.b are proof-of-concept Trojan horses that “are not yet in the wild,” Davidson says. Davidson believes the Trojan programs
2005-02-11 Microsoft Corp is investigating a malicious program that attempts to turn off the company’s newly released anti-spyware software for Windows computers. Stephen Toulouse, a Microsoft security program manager, said yesterday that the program, known as “Bankash-A Trojan,” could attempt to disable or delete the spyware removal tool and suppress warning messages. It also may try to steal online banking passwords or other personal information by tracking a user’s keystrokes. To be attacked,
The Sophos anti-malware company summarizes the Trojan’s functions as follows:
* Steals credit card details
* Turns off anti-virus applications
* Deletes files off the computer
* Steals information
* Drops more malware
* Downloads code from the internet
2005-04-08 On Thursday, April 7, the same day that Microsoft announced details of its next round of monthly patches, hackers sent out a wave of emails disguised as messages from the software company in a bid to take control of thousands of computers. The emails contain bogus news of a Microsoft update, advising people to open a link to a Web site and download a file that will secure and ‘patch’ their PCs. The fake Website, which is hosted in
I recently purchased an Apple Macintosh computer at a “computer superstore,” as separate components - the Apple CPU, an Apple monitor, and a third-party keyboard billed as coming from a company called Sicon.
This past weekend, while trying to get some text‑editing work done, I had to leave the computer alone for a while. Upon returning, I found to my horror that the text “welcome datacomp” had been *inserted into the text I was editing*. I was certain that I hadn’t typed it, and my wife verified that she hadn’t, either. A quick survey showed that the “clipboard” (the repository for information being manipulated via cut/paste operations) wasn’t the source of the offending text.
As usual, the initial reaction was to suspect a virus. Disinfectant, a leading anti‑viral application for Macintoshes, gave the system a clean bill of health; furthermore, its descriptions of the known viruses (as of Disinfectant version 3.5, the latest release) did not mention any symptoms similar to my experiences.
I restarted the system in a fully minimal configuration, launched an editor, and waited. Sure enough, after a (rather long) wait, the text “welcome datacomp” once again appeared, all at once, on its own.
Further investigation revealed that someone had put unauthorized code in the ROM chip used in several brands of keyboard. The only solution was to replace the keyboard. Readers will understand the possible consequences of a keyboard which inserts unauthorized text into, say, source code.
It is difficult to identify Trojans because, like the ancient Horse built by the Greeks, they don’t reveal their nature immediately. The first step in catching a Trojan is to run the program on an isolated system. That is, try the candidate either on a system whose hard disk drives have been disconnected or on one reserved exclusively for testing new programs.
While the program is executing, look for unexpected disk drive activity; if your drives have separate read/write indicators, check for write activity on drives.
Some Trojans running on micro‑computers use unusual methods of accessing disks; various products exist which trap such programmatic devices. Such products, aimed mostly at interfering with viruses, usually interrupt execution of unusual or suspect instructions and indicate what’s happening but prevent the damage from occurring. Several products can “learn” about legitimate events used by proven programs and thus adapt to your own particular environment.
If the Trojan is a replacement for specific components of the operating system, as in the network monitoring problem described by CIAC above, it is possible to compute checksums and compare them with published checksums for the authentic modules.
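A minimal sketch of that comparison appears below, written in Python with SHA-256 standing in for whatever checksum the vendor actually publishes; the path and the reference digest are placeholders, not real values for any product.

    # Sketch only: recompute a module's hash and compare it with a published value.
    import hashlib

    PUBLISHED = {
        # the path and digest are placeholders; substitute the modules you must verify
        "/usr/sbin/in.telnetd": "replace-with-the-vendor-published-digest",
    }

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    for path, published in PUBLISHED.items():
        status = "OK" if sha256_of(path) == published else "MISMATCH -- possibly Trojaned"
        print(path, status)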
The ideal situation for a microcomputer user or a system/network manager is to know, for every executable file (e.g., PROG, .COM, or .EXE) on the system
Take, for example, shareware programs. In general, each program should come not only with the name and address of the person submitting it for distribution but also with the source code. If the requisite compiler is available, one can even compare the object code available on the tape or diskette with the results of a fresh compilation and linkage to be sure there are no discrepancies. These measures make it easier to hope for Trojan‑free utilities.
It makes sense for system managers to forbid the introduction of foreign software into their systems and networks without adequate testing. Users wishing to install apparently useful utilities should contact their system support staff to arrange for acceptance tests. Installing software of unknown quality on a production system is irresponsible.
When organizations develop their own software, the best protection against Trojans is quality assurance and testing (QAT). QAT should be carried out by someone other than the programmer(s) who created the program being tested. QAT procedures often include structured walk‑throughs, in which designers are asked to explain every section of their proposed system. In later phases, programmers have to explain their code to the QAT team. During systems tests, QAT specialists have to ensure that every line of source code is actually executed at least once. Under these circumstances, it is difficult to conceal unauthorized functions in a Trojan.
In the 1983 movie, War Games, directed by John Badham, a young computer cracker (played by a very young Matthew Broderick) becomes interested in breaking through security on a computer system he’s located by automatic random dialing (“war dialing”) of telephone numbers. Thinking that he’s cracking into a video-game site, he eventually manages to break security by locating a secret password that gives him the power to bypass normal limitations. He goes on to play “Global Thermonuclear War”–which nearly results in the real thing.
The unauthorized, undocumented part of the source code which bestows special privileges is, in the language of computer security, a “back door,” sometimes called a “trap door.” A back door will not necessarily cause harm by itself; it merely allows access to program functions – including normal functions – by breaching normal access controls.
Why would anyone install a back door in a program?
In cases where the culprit means no harm, back doors are leftovers from the development and testing phases of software development. When functions are deep in nested series of commands or screens, programmers often insert a shortcut that lets them go directly to a specific function or screen so they can continue testing from that point rather than having to go through the entire sequence of data entry, menu-item selection, and so on. Such shortcuts can significantly shorten testing time for those people unfortunate enough still to be using manual quality assurance techniques (as opposed to automated testing).
The problem occurs when the programmers forget to remove the back doors. When this happens, a poorly-tested program can enter production (use for real business or distribution to real customers) with a dangerous, undocumented feature that can bypass normal restrictions such as edit checks during data entry. Back doors of this kind sometimes result in data corruption, as when a database program allows someone to short-circuit the usual validation of entered data and simply lets a user cut directly to an update function that happens to have bad data in the input buffers.
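The following fragment is a purely hypothetical illustration of how such a leftover shortcut behaves; none of the names refer to any real system. The “magic” value recognized by the program bypasses every edit check that ordinary users face.

    # Hypothetical sketch: a development shortcut left in production becomes a back door.
    RECORDS = {}                                   # stand-in for a real database

    def write_record(employee_id, salary):
        RECORDS[employee_id] = salary
        return salary

    def update_salary(employee_id, new_salary, entered_by):
        if entered_by == "QATEST":                 # testing shortcut never removed
            return write_record(employee_id, new_salary)   # skips every check below
        if not (0 < new_salary <= 500_000):
            raise ValueError("salary outside permitted range")
        return write_record(employee_id, new_salary)

    update_salary(42, -1, entered_by="QATEST")     # bad data accepted via the back door
    print(RECORDS)                                 # shows the invalid value stored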
Back doors are part of a program; they are distinguished from Trojan Horses, which are programs with a covert purpose. A Trojan Horse is a program which has undocumented or unauthorized functions that can cause harm during normal usage by innocent users as well as by criminals. Thus many Trojan Horse programs have back doors, but back doors may exist in programs that would not usually be described as Trojan Horses. A specific kind of Trojan Horse program is known as an Easter Egg; this is usually an undocumented game or display intended by its authors to be harmless. Unfortunately, due to poor programming or software incompatibilities that develop as operating systems change, Easter Eggs can also cause major problems such as system lockups or crashes. All Easter Eggs depend on back doors – usually undocumented keystroke sequences – to be invoked.
Back doors (or trap-doors as they are often known) have been known for decades. As Willis Ware pointed out in 1970, “Trap-door entry points often are created deliberately during the design and development stage to simplify the insertion of authorized program changes by legitimate system programmers, with the intent of closing the trap-door prior to operational use. Unauthorized entry points can be created by a system programmer who wishes to provide a means for bypassing internal security controls and thus subverting the system. There is also the risk of implicit trap-doors that may exist because of incomplete system design – i.e., loopholes in the protection mechanisms. For example, it might be possible to find an unusual combination of system control variables that will create an entry path around some or all of the safeguards.”
Early experiments in cracking the MULTICS operating system developed by Honeywell Inc. and the Massachusetts Institute of Technology located back doors in that environment in trials from 1972 to 1975, allowing the researchers to obtain maximum security capabilities on several MULTICS systems (see Karger & Schell for details).
In 1980, Philip Myers described the insertion and exploitation of back doors as “subversion” in his MSc thesis at the Naval Postgraduate School.
Donn B. Parker described interesting back-door cases in some papers (no longer available) from the 1980s. For example, a programmer discovered a back door left in a FORTRAN compiler by the writers of the compiler. This section of code allowed execution to jump from a regular program file to code stored in a data file. The criminal used the back door to steal computer processing time from a service bureau so he could execute his own code at other users’ expense. In another case, remote users from
More recently, devices using the Palm operating system (PalmOS) were discovered to have no effective security despite the password function. Apparently developer tools supplied by Palm allow a back-door conduit into the supposedly locked data.
Distributed denial-of-service (DDoS) zombie or slave programs are examples of a type of back door, although they don’t offer total control of the contaminated system. These tools allow the user of a master or controller program to issue (usually) encrypted messages that direct a stream of packets at a designated IP address at a specific time; with hundreds or thousands of such infected systems responding all at once, almost any target on the Internet can be swamped.
In March 2000, I spoke at NATO headquarters
The confluence of several security threats has destroyed the Trusted Computing Base (TCB) on which security has depended for the last two decades.
The TCB was the constellation of trustworthy hardware, operating system, and application software that allowed for predictable results from predictable inputs.
If you have DirectX drivers installed, a bizarre landscape appears and you can “fly” over (or under) the geometric forms by using the arrow keys on your keyboard. If you look carefully in the virtual distance, you can find a stone monitor planted in the ground. If you get close enough, you can see the names of the development team scrolling by.
How much space in the source and object code does this Easter Egg take? How much RAM and disk space are being wasted in total by all the people who have installed and are using this product? And much more seriously, what does this Easter Egg imply about the quality assurance at the manufacturer’s offices?
An Easter Egg is presumably undocumented code – or at least, it’s undocumented for the users. I do not know if it is documented in internal Microsoft documents. However, I think that the fact that this undocumented function got through Microsoft’s quality assurance process is terribly significant. I think that the failure implies that there is no test-coverage monitoring in that QA process.
When testing executables, one of the necessary (but not sufficient) tests is coverage: how much of the executable code has actually been executed at least once during the QA process. Without running all the code at least once, one can state with certainty that the test process is incomplete. Failing to execute all the code means that there may be hidden functionality in the program: anything from an Easter Egg to something worse. What if the undiscovered code were to be invoked in unusual circumstances and cause damage to a user’s spreadsheet or system? We would call such code a logic bomb.
That’s bad enough, but it gets worse. Consider the following observations:
Well then, here’s the scenario: Bad Guys infiltrate major software company and install undocumented code in widely-distributed spreadsheet software. Faulty quality assurance allows the logic bomb to go into production releases.
The logic bomb in the spreadsheet software receives payload instructions from an Internet connection.
At a specified time, the spreadsheet program alters data in millions of spreadsheets in, say, the
This situation leads to decreased efficiency
This scenario is an example of asymmetric information warfare – electronic sabotage on a grand scale but for low cost.
So the next time you play with an Easter Egg in commercial software, stop to think: shouldn’t you express your concerns to the manufacturer instead of just chuckling over a programmer’s joke?
Back doors may be installed by Trojan Horse programs. For example, in July 1998, The Cult of the Dead Cow (cDc) announced Back Orifice (BO), a tool for analyzing and compromising MS-Windows security (such as it be). The author, a hacker with the L0PHT group which later became part of security firm @Stake, described the software as follows (the brackets are in the original): “The main legitimate purposes for BO are remote tech support aid, employee monitoring and remote administering [of a Windows network].” However, added the cDc press release, “Wink. Not that Back Orifice won’t be used by overworked sysadmins, but hey, we’re all adults here. Back Orifice is going to be made available to anyone who takes the time to download it [read, a lot of bored teenagers].” Within weeks, 15,000 copies of Back Orifice were distributed to Internet Relay Chat users by a malefactor who touted a “useful” file (“nfo.zip”) that was actually a Trojan infected with Back Orifice.
BO and programs like it provide back doors for malefactors to invade a victim’s computer. Once the Bad Guy has seized control of the system, functions available include keystroke logging, real-time viewing of what’s on the monitor, screen capture, and full read/write access to all files and devices.
Today, such programs are known as RATs (Remote Administration Trojans). The PestPatrol Glossary provides this useful information [MK note: I have changed “trojan” to “Trojan” in what follows]:
RAT: A Remote Administration Tool, or RAT, is a Trojan that when run, provides an attacker with the capability of remotely controlling a machine via a “client” in the attacker’s machine, and a “server” in the victim’s machine. Examples include Back Orifice, NetBus, SubSeven, and Hack’a’tack. What happens when a server is installed in a victim’s machine depends on the capabilities of the Trojan, the interests of the attacker, and whether or not control of the server is ever gained by another attacker -- who might have entirely different interests.
Infections by remote administration Trojans on Windows machines are becoming more frequent. One common vector is through File and Print Sharing, when home users inadvertently open up their system to the rest of the world. If an attacker has access to the hard-drive, he/she can place the Trojan in the startup folder. This will run the Trojan the next time the user logs in. Another common vector is when the attacker simply e-mails the Trojan to the user along with a social engineering hack that convinces the user to run it against their better judgment.”
RATs are frequently distributed as part of “Trojanized” applications such as WinAMP as well as in data files for (especially) pornographic pictures and MP3 sound files. Once executed or loaded, such infected files quietly install the RAT and sometimes signal a base station to inform it of the IP address of yet another victim.
There are currently over 300 RATs listed and removed by PestPatrol. For a more extensive research paper on RATs, see the PestPatrol White Paper listed in the references at the end of this paper.
In this section, I summarize some basic approaches to preventing back doors in source code. Network managers may not be directly involved in software quality assurance, but it would be a Good Thing to make sure that the quality assurance folks in your shop are aware of and implementing these principles before you install their software on production systems and networks.
Documentation standards are not merely desirable; they can make back doors difficult to include in production code. Deviations from such standards may alert a supervisor or colleague that all is not as it seems in a program. Using team programming (more than one programmer responsible for any given section of code) and walkthroughs (following execution through the code in detail) will also make secret functions very difficult to hide.
During code walkthroughs and other quality-assurance procedures, the search for back doors should include the following:
Every line of code in a program must make sense for the ostensible application. All alphanumerics in source code have to make sense; a more difficult problem is dealing with numeric codes which may have a hidden meaning. Every entry point for a compiled program must make sense in the programming context.
Every line of code must be exercised during system testing. Test-coverage (sometimes called “code coverage analysis”) monitors show which lines of source code have been executed during system tests. Such programs identify the percentage of code that is executed by a test or series of tests of programs written in a wide range of programming languages; however, each programming language may require its own test-coverage tool. The monitors usually identify which lines of source code correspond to the object code executed during the tests and which were left unexecuted. They can also count the number of times that each line is executed. Finally, test-coverage monitors may provide a detailed program trace showing the path taken at each branch and conditional statement.
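To make the idea concrete, here is a toy Python illustration of what a test-coverage monitor records; real tools (coverage.py for Python, for instance) are far more capable, but the principle is the same: any line that never executes during testing is a line whose behavior the testers have never seen.

    # Sketch only: a toy coverage monitor that records which lines of this file run.
    import sys

    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_filename == __file__:
            executed.add(frame.f_lineno)
        return tracer

    def discount(price, loyal):
        if loyal:
            return price * 0.9        # never runs unless a test passes loyal=True
        return price

    sys.settrace(tracer)
    discount(100, loyal=False)        # a deliberately incomplete "test suite"
    sys.settrace(None)

    print("lines executed during the test:", sorted(executed))
    # the discount line never appears -- exactly the kind of unexercised code
    # in which an Easter Egg, back door, or logic bomb can hide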
It would be nice if the major software vendors who provide operating systems and utilities were also aware of these principles. Certainly some of the quality-assurance teams at Microsoft must not have been applying such tools diligently in recent years. For example, in addition to the Excel 97 flight simulator mentioned earlier (see < http://www.eeggs.com/items/29841.html >), you can activate a spy hunter game that uses DirectX for graphics in Excel 2000 (see < http://www.eeggs.com/items/8240.html >).
Diane Levine’s chapter on software development and quality assurance in the Computer Security Handbook, 4th edition, is an excellent primer on how quality assurance is fundamental to security and will be studied later in the MSIA program.
The IYIR has a section reserved for remote-control issues, including remote reprogramming as a design feature of safety-critical systems. Here’s a list of some of the items:
1997-08-21 MediVIEW and Medically Oriented Operating Network (MOON) from Sabratek Corp. allow intensive remote medical intervention such as alterations of automated flow control devices for drug administration. The initial press releases included no sign that anyone was concerned about security issues in this system. [The risks of system error and hacking now become life-threatening.]
1999-07-12 David Hellaby of the Canberra Times (
2000-05-31 The General Motors OnStar system will allow not only geographical positioning data, local information, and outbound signaling in case of accidents: it will also allow inbound remote control of features such as door locks, headlights, the horn and so on — all presumably useful in emergencies. However, Armando Fox commented in RISKS, >If I were a cell phone data services hacker, I’d know what my next project would be. I asked the OnStar speaker what security mechanisms were in place to prevent your car being hacked. He assured me that the mechanisms in place were “very secure”. I asked whether he could describe them, but he could not because they were also “very proprietary”. *Sigh*<
2000-08-17 Anatole Shaw reported in RISKS on a dreadful new development in mobile attack weapons: “The Thailand Research Fund has unveiled a new robot, resembling a giant ladybug with a couple of extra limbs. The unit is equipped with visible-spectrum and thermal vision, and a gun. According to Prof. Pitikhet Suraksa, its shooting habits can be automated, or controlled `from anywhere through the Internet’ with a password. The risks of both modes are obvious, but the latter is new to this arena. Police robots of this ilk have been around for a long time, but are generally radio-controlled. The apparent goal here is to make remote firepower available on-the-spot from around the Internet, which means insecure clients everywhere. How long will it take for one of these passwords to be leaked via a keyboard capture, or a browser bug? Slowly, we’re bringing the risks of online banking to projectile weaponry.”
2000-08-25 Several hundred users of new Japanese programmable wireless phones were harassed when someone remotely ordered their devices to dial the emergency services. Kevin Connolly commented in RISKS, “The risk is that people designing new mobile phone functions do not learn from the mistakes in the MS Word macro `virus enabling’ feature.”
2000-10-20 A gateway sold by National Instruments allows instruments equipped with the standard IEEE-488 bus to be connected to the Internet — completely without any security provisions — and thus controlled remotely by total strangers. The usual dangers to the electronic equipment are exacerbated, wrote Stephen D. Holland in RISKS, because laboratory equipment is often used to control mechanical devices.
2000-12-22 In the early 1990s, certain tape drives were criticized for allowing uncontrollable automatic firmware upgrades if a “firmware-configuration tape” was recognized. The problems occurred when the tape drive “recognized” a tape as such even if it wasn’t. A decade later, the same type of feature — and problem — has been noted in Dolby digital sound processors for the audio tracks of 35mm film: any time anything looking like a firmware-reconfiguration data stream is encountered, the device attempts to reconfigure itself, regardless of validity of the data stream or the wishes of the operator. A German contributor to a discussion group about movie projectors noted (translation by Marc Roessler), “The trailer of “Billy Elliott” has got some nasty bug: If the trailer is being cut right behind start mark three, the CP500 will do a software reset with data upload as the trailer runs through the machine. Either Dolby Digital crashes completely or the Cat 673 is set to factory default, which means setting the digital soundhead delay to 500 perforations, i.e. the digital sound lags 5.5 seconds behind the picture. . . .”
2000-12-27 Andrew Klossner noted in RISKS that home electronics such as DVDs are being reprogrammed using automatic firmware upgrades from media (e.g., DVDs). The correspondent writes, “When the authoritarian software forbids me to skip past a twenty-second copyright notice, it makes me nostalgic for the old 12-inch laser disks.” [MK notes: This poses additional sources of troublesome problems when the software doesn’t work right. Even if it isn’t broke, someone at a distance may try to fix it anyway.]
2001-01-12 Daniel P. B. Smith reported in RISKS that a new airborne laser is being designed to shoot down missiles. Smith quotes an article at < http://www.cnn.com/2001/US/01/12/airborne.laser/index.html> as follows: >No trigger man. No human finger will actually pull a trigger. Onboard computers will decide when to fire the beam. Machinery will be programmed to fire because human beings may not be fast enough to determine whether a situation warrants the laser’s use, said Col. Lynn Wills of U.S. Air Force Air Combat Command, who is to oversee the battle management suite. The nose-cone turret is still under construction. “This all has to happen much too fast,” Wills said. “We will give the computer its rules of engagement before the mission, and it will have orders to fire when the conditions call for it.” The laser has about only an 18-second “kill window” in which to lock on and destroy a rising missile, said Wills. “We not only have to be fast, we have to be very careful about where we shoot,” said Wills, who noted that the firing system will have a manual override. “The last thing we want to do is lase an F-22 (fighter jet).” [MK: Readers are invited to decide if, given the current state of software quality assurance worldwide, they would be willing to entrust the safety of their family to an automobile equipped with analogous control systems.]
2001-01-19 Steve Loughran noted in RISKS that the British government has sponsored tests of computer-controlled speed governors for automobiles; the system would rely on a GPS to locate the vehicle and an on-board database of speed limits. Loughran commented, “Just think how much fun you’ll be able to have by a
2001-01-26 Jeremy Epstein wrote an interesting report for RISKS on remote reprogramming: “DirecTV has the capability to remotely reprogram the smart cards used to access their service, and also to reprogram the settop box. To make a long story short, they were able to trick hackers into accepting updates to the smart cards a few bytes at a time. Once a complete update was installed on the smart cards, they sent out a command that caused all counterfeit cards to go into an infinite loop, thus rendering them useless.”
2001-03-30 Microsoft Networks (MSN) upgraded its dialup lists automatically for users in the Research Triangle, NC area -- and wiped out several local access node numbers. Outraged users found out (too late) that their modems had switched to dialing access nodes in areas reached through long distance calls. About a month later, MSN reimbursed its customers for the long-distance calls their modems had placed due to MSN’s errors.
2001-04-09 Appliance hacking has been a subject of speculation for years, but more and more manufacturers are interested in controlling their domestic appliances at a distance. According to a report in RISKS, “IBM and Carrier, an air-conditioning manufacturer, said they plan to offer Web-enabled air conditioners in
2001-04-10 IBM and the Carrier Corp., which makes heating and air conditioning systems, are planning a pilot program this summer in Britain, Greece and Italy to test an Internet-based system that would allow people to use a Web site, myappliance.com, to control their home air conditioners from work or elsewhere. The system will allow troubleshooting to be done remotely and will make it easier to conserve electricity during peak demand periods. (AP/New
2001-09-06 A new Web-based service called GoToMyPC enables users to control their desktop PCs in their homes or offices using any other Windows PC anywhere in the world that has Internet access. The service, a brainchild of Expertcity Inc., costs $10 a month. Instead of lugging a laptop along on a trip, a user could sit down at an Internet café PC and access all files, e-mail, etc. on his or her PC at home. Alternatively, if a worker found that the file he or she needed over the weekend was on the computer at work, it could be retrieved using the service. The company says the system is highly secure and requires two passwords -- one to log onto the service and another to gain access to each target PC. All of the data exchanged in each remote-control session is encrypted and Expertcity says the service will operate through many corporate firewalls. (Wall Street Journal)
2001-10-01 Steve Bellovin contributed an item to RISKS about remote control of airplanes: “The Associated Press reported on a test of a remotely-piloted 727. The utility of such a scheme is clear, in the wake of the recent attacks; to the reporter’s credit, the article spent most of its space discussing whether or not this would actually be an improvement. The major focus of the doubters was on security: But other experts suggested privately that they would be more concerned about terrorists’ ability to gain control of planes from the ground than to hijack them in the air. I’m sure RISKS readers can think of many other concerns, including the accuracy of the GPS system the tested scheme used for navigation (the vulnerabilities of GPS were discussed recently in RISKS), and the reliability of the computer programs that would manage such remote control.”
2001-12-20 In a discussion of “the telesurgery revolution” in The Futurist magazine, surgeon Jacques Marescau, a professor at the European Institute of Telesurgery, offers the following description of the success of the remotely performed surgical procedure as the beginning of a “third revolution” in surgery within the last decade: “The first was the arrival of minimally invasive surgery, enabling procedures to be performed with guidance by a camera, meaning that the abdomen and thorax do not have to be opened. The second was the introduction of computer-assisted surgery, where sophisticated software algorithms enhance the safety of the surgeon’s movements during a procedure, rendering them more accurate, while introducing the concept of distance between the surgeon and the patient. It was thus a natural extrapolation to imagine that this distance--currently several meters in the operating room--could potentially be up to several thousand kilometers.” A high-speed fiber optic connection between
2002-01-08 J. P. Gilliver noted an alarming development in remote reprogramming -- an easy way to modify firmware: “. . . For example, IRL (Internet Reconfigurable Logic) means that a new design can be sent to an FPGA in any system based on its IP address.” (From Robert Green, Strategic Solutions Marketing with Xilinx Ltd., in “Electronic Product Design” December 2001. Xilinx is a big manufacturer of FPGAs.) For those unfamiliar with the term, FPGA stands for field-programmable gate array: many modern designs are built using these devices, which replace tens or hundreds of thousands of gates of hard-wired logic. The RISKs involved are left as an exercise to the readers.
2002-01-16 Researchers at the
* If there is any security mechanism protecting anyone from sending such “special” messages.
* Which setting[s] on the mobile phone can be changed (or probably retrieved from the phone) without knowledge to the customer.
* If the network provider must implement such features, I do not understand why this must happen unperceived by the customer. Why not send a message telling people what will happen?”
2002-02-20 Scott Schram published a paper at < http://schram.net/articles/updaterisk.html > that pointed out the security risks of all auto-update programs (e.g., self-updating antivirus products, MS Internet Explorer, MS-Windows Update, and so on). Once the firewall has been set to trust their activity, there is no further control over what these programs do. If any of them were ever compromised, the results for trusting systems could be catastrophic.
2002-03-14 In March 2002, tests on unmanned remote-control aircraft studied the effectiveness of automated collision-avoidance systems. Look for exciting developments in security-engineering failures in years to come.
2002-04-22 John McPherson noted in RISKS (quoting *The New Zealand Herald* on an 802.11b deployment): “... The Matamata wireless link replaced an expensive frame relay service as well as providing a 1Mbs Internet service to several outlying sites including a library and remote management of water supplies. As the water facilities are computer controlled, they are able to manipulate them remotely rather than sending someone 20 miles down the road just to turn a valve.” He added: “Now I don’t know if this technology is mature enough to be trusted for this type of thing - I guess I’ll wait for the comments to come flooding in. I sincerely hope they’ve thought through the encryption and security issues here.”
2002-04-26 The widespread use of “adaptive cruise technologies” to prevent automobile collisions is still well in the future, but some luxury cars such as Infiniti, Lexus, and Mercedes-Benz are now being offered with expensive options designed to allow moving vehicles to communicate with each other, to detect sensors embedded in the pavement, and to detect the vehicle ahead by either radar or lidar (the laser-based equivalent of radar). Steven Schladover of the California
Partners for Advanced Transit and Highways says: “It feels like you’re in a
train -- a train of cars. You don’t see any separation between the vehicles,
and, after a minute of feeling strange, most people relax and say, ‘Oh, this
is pretty nice!’” A lidar package for the Infiniti Q45 will require purchase
of a $10,000 optional equipment package. (
2002-06-21 State police have confiscated desktop computers and
hard drives at
2003-03-10 A Windows root kit called “ierk8243.sys” was discovered
on the network of
2004-07-26 The use of wireless networks of sensors and machinery
has been expanding rapidly in such applications as the management of lighting
systems and the detection of construction defects. Recent examples include a
wireless communications system to tell precisely when to irrigate and harvest
grapes to produce premium wine and a system to monitor stresses on aging bridges
to help states decide maintenance priorities. Hans Mulder, associate director
for research at Intel, says that systems such as these “will be pervasive in
20 years.” Tom Reidel of Millennial Net comments: “The range of potential market
applications is a function of how many beers you’ve had,” but adds: “There’s
a whole ecosystem of hardware, software and service guys springing up.” (New
2005-01-20 Toshiba has developed software that will make it possible
for people to edit documents, send e-mail, and reboot their PCs remotely from
their cellphones, allowing them to work anywhere. Toshiba will begin offering
the service in Japan by the end of March through CDMA1X mobile phones offered
by KDDI Corp. Toshiba is initially targeting the corporate work force, but says
individuals can use it to record TV shows, work security cameras and control
air conditioners tied to home networks. (AP/
There have been many documented cases of voice-mail penetration. For example,
in the late 1980s, a
The bottom line: secure your PBX and voice-mail systems with the same attention that you apply to any other computer-based system you care about.
For additional reading on this topic, see
Another type of computer crime that gets mentioned in introductory courses or in conversations among security experts is the salami fraud. In the salami technique, criminals steal money or resources a bit at a time. Two different etymologies circulate for the term: one school of security specialists claims that it refers to slicing the data thin -- like a salami; others argue that it refers to building up a significant amount from tiny scraps -- the way a salami is built up from scraps of meat. Some examples:
Unfortunately, salami attacks are designed to be difficult to detect. The only
hope is that random audits, especially of financial data, will pick up a pattern
of discrepancies and lead to discovery. As any accountant will warn, even a
tiny error must be tracked down, since it may indicate a much larger problem.
For example, Cliff Stoll’s famous adventures tracking down spies in the Internet
began with an unexplained $0.75 discrepancy between two different resource accounting
systems on UNIX computers at the Lawrence Berkeley Laboratory.
Stoll’s determination to understand how the problem could have occurred revealed
an unknown user; investigation led to the discovery that resource-accounting
records were being modified to remove evidence of system use. The rest of the
story is told in Stoll’s book, The Cuckoo’s Egg (Pocket Books, 1989).
If more of us paid attention to anomalies, we’d be in better shape to fight the salami rogues. Computer systems are deterministic machines – at least where application programs are concerned. Any error has a cause. Looking for the causes of discrepancies will seriously hamper the perpetrators of salami attacks. From a systems development standpoint, such scams reinforce the critical importance of sound quality assurance throughout the software development life cycle.
Moral: don’t ignore what appear to be errors in computer-based financial or other accounting systems.
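To make the auditing idea concrete, here is a minimal sketch (in Python, with a hypothetical two-column account,amount file layout and invented file names) of the kind of automated reconciliation that can expose a salami scheme: it compares per-account totals from two independent accounting systems and reports any nonzero difference, no matter how tiny.

    # Minimal sketch (not production audit code): flag even sub-cent discrepancies
    # between two accounting extracts. The file layout is hypothetical; the point
    # is that *any* nonzero difference gets reported, not just "material" ones.
    import csv
    from decimal import Decimal

    def load_totals(path):
        """Read 'account,amount' rows into exact per-account totals."""
        totals = {}
        with open(path, newline="") as f:
            for account, amount in csv.reader(f):
                totals[account] = totals.get(account, Decimal("0")) + Decimal(amount)
        return totals

    def reconcile(path_a, path_b):
        a, b = load_totals(path_a), load_totals(path_b)
        for account in sorted(set(a) | set(b)):
            diff = a.get(account, Decimal("0")) - b.get(account, Decimal("0"))
            if diff != 0:
                # Even a $0.0001 difference is reported -- it may be the only
                # visible symptom of a salami-style manipulation.
                print(f"{account}: discrepancy of {diff}")

    if __name__ == "__main__":
        reconcile("system_a.csv", "system_b.csv")   # hypothetical file names

Exact decimal arithmetic matters here: binary floating point would itself introduce tiny rounding differences and bury exactly the signal an auditor is looking for.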
A logic bomb is a program which has deliberately been written or modified to produce results that are unexpected and unauthorized by legitimate users or owners of the software when certain conditions are met. Logic bombs may be embedded within standalone programs, or they may be part of worms (programs that hide their existence and spread copies of themselves within computer systems and through networks) or viruses (programs or code segments which hide within other programs and spread copies of themselves).
An example of a logic bomb is any program which mysteriously stops working three months after, say, its programmer’s name has disappeared from the corporate salary database. Examples of logic bombs:
Time bombs are a subclass of logic bombs which “explode” at a certain time. The infamous Friday the 13th virus was a time bomb. It duplicated itself every Friday and on the 13th of the month, causing system slowdown; however, on every Friday the 13th, it also corrupted all available disks. The Michelangelo virus tried to damage hard disk directories on the 6th of March. Another common PC virus, Cascade, made all the characters fall to the last row of the display during the last three months of every year.
The HP3000 ad hoc database inquiry facility,
QUERY.PUB.SYS, had a time‑bomb‑like bug which exploded after
Tony Xiaotong Yu, 36, of
In the movie Single White Female, the protagonist is a computer programmer who works in the fashion industry. She designs a new graphics program that helps designers visualize their new styles and sells it to a sleazy company owner who tries to seduce her. When she rejects his advances, he fires her without paying her final invoice. However, the programmer has left a time bomb which explodes shortly thereafter, wiping out all the owner’s data. This is represented in the movie as an admirable act.
In the CONSULT Forum of CompuServe in the early 1990s, several consultants brazenly admitted that they always leave secret time bombs in their software until they receive the final payment. They seemed to imply that this was a legitimate bargaining chip in their relationships with their customers.
In reality, such tricks can land software suppliers in court.
Gruenfeld (1990) reported on a logic bomb found
in 1988. A software firm contracted with an
· The bomb was a surprise‑‑there was no prior agreement by the client to such a device.
· The potential damage to the client was far greater than the damage to the vendor.
· The client would probably win its case denying that it owed the vendor any additional payments.
A legitimate use similar to time-bomb technology is the openly time‑limited program. One purchases a yearly license for use of a particular program; at the end of the year, if one has not made arrangements with the vendor, the program times out -- that is, it no longer functions. When the license is renewed, the vendor either sends a new copy of the program, sends instructions for patching the program (that is, performing the necessary modifications), or dials up the client’s system by modem and makes the patches directly.
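As an illustration only, here is a minimal sketch (in Python, with a hypothetical license-file name) of such an openly time-limited check. A real vendor would sign or encrypt the license rather than storing a bare date, and this naive reliance on the system clock is exactly the weakness discussed later in this section.

    # Minimal sketch of an *openly* time-limited license check. Assumes a plaintext
    # license file containing a single ISO date; a real product would sign or
    # encrypt the license so it cannot simply be edited.
    import sys
    from datetime import date

    LICENSE_FILE = "license.txt"   # hypothetical file name

    def license_valid(path=LICENSE_FILE):
        try:
            expiry = date.fromisoformat(open(path).read().strip())
        except (OSError, ValueError):
            return False          # missing or malformed license
        return date.today() <= expiry

    if __name__ == "__main__":
        if not license_valid():
            print("License has expired; please contact the vendor to renew.")
            sys.exit(1)
        print("License OK -- continuing.")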
Such a program is not technically a time bomb as long as the license contract clearly specifies that there is a time limit beyond which the program will not function properly. However, it is a poor idea for the user. In the opinion of Mr. Gruenfeld,
What if the customer is told about the bomb prior to entering into the deal? The threat of such a sword of Damocles amounts to extortion which strips the customer of any bargaining leverage and is therefore sufficient grounds to cause rejection of the entire deal. Furthermore, it is not a bad idea to include a stipulation in the contract that no such device exists.
In addition, a time‑limited program can cause major problems if the vendor refuses to update the program to run on newer versions of the operating system. Even worse, the vendor may go out of business altogether, leaving the customer in a bind.
My feeling is that if you are paying to have software developed, you should refuse all time‑outs. However, if you are simply renting off‑the‑shelf software such as utilities, accounting packages and so on, it may be acceptable to let the vendor insist on timeouts‑‑provided the terms are made explicit and you know what you’re getting into.
If you do agree to time limits on your purchase, you should require the source code to be left in escrow with a legal firm or bank. Don’t forget to include the requirement that the vendor indicate the precise compiler version required to produce functional object code identical to what you plan to use.
In summary, if a vendor’s program stops working with a message stating that it has timed out, check your contract: unless it stipulates that your license applies only to a certain period of use, the vendor is obligated to remove the time bomb and allow you to continue using your copy of the program.
The general class of logic bombs cannot reasonably be circumvented unless the victim can figure out exactly what conditions are causing the bomb. For example, at one time, the MPE‑V operating system failed if anyone on the HP3000 misspelled a device class name in a :FILE equation. It wasn’t a logic bomb, it was a bug; but the workaround was to be very careful when typing :FILE equations. I remember we put up a huge banner over the console reminding operators to double‑check the spelling following the ;DEV= parameter.
Time bombs may be easier to handle than other logic bombs, depending on how the trigger is implemented. There are several methods used by programmers to implement time bombs:
· One is a simple‑minded dependence on the system clock to decide if the current date is beyond the hard‑coded time limit in the program file; this bomb is easily defused by resetting the system clock while one tries to solve the problem with the originator.
· The second method is a more sophisticated check of the system directory to see if any files have creation or modification dates which exceed the hard coded limit.
· The third level is to hide the latest date recorded by the program in a data file and see if the apparent date is earlier than the recorded date (indicating that the clock has been turned back).
If the time limit has been hard coded without encryption, then a simple check of the program file may reveal either ASCII data or a binary representation of the date involved. If you know what the limiting date is, you can scan for the particular binary sequence and try changing it in the executable file. These processes are by no means easy or safe, so you may want to experiment after a full backup and when no one is on the system.
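As a sketch of that search, the following Python fragment scans a copy of a program file for plausible ASCII dates; the two date formats are assumptions, and a real hunt would also try binary encodings of the suspected cutoff date. As noted above, work only on a copy, after a full backup.

    # Look for plausible ASCII dates embedded in an executable. Run it against a
    # *copy* of the suspect file; editing anything it finds is at your own risk.
    import re
    import sys

    DATE_PATTERNS = [
        rb"\d{4}-\d{2}-\d{2}",      # e.g. 2002-12-31
        rb"\d{2}/\d{2}/\d{4}",      # e.g. 12/31/2002
    ]

    def find_dates(path):
        data = open(path, "rb").read()
        for pattern in DATE_PATTERNS:
            for match in re.finditer(pattern, data):
                print(f"offset {match.start():#x}: {match.group().decode()}")

    if __name__ == "__main__":
        find_dates(sys.argv[1])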
If the time limit is encrypted, or if it resides in a data file, or if it is encoded in some weird aspect of the data such as the byte count of various innocuous‑looking fields, the search will be impracticably tedious and uncertain.
Much better: solve your problems with the vendor before either of you declares war.
Information can be stolen without obvious loss; often data thefts are undiscovered until the information is used for extortion or fraud. The term data leakage is used to suggest the sometimes undetectable loss of control over confidential information.
The most obvious form of unauthorized disclosure of confidential or proprietary data is direct access and copying. For example, Thomas Whiteside writes that in the early 1970s, three computer operators stole copies of 3 million customer names from the Encyclopedia Britannica; estimated commercial value of the names was $1 million. Other cases of outright data theft include
· The Australian Taxation Commission, where a programmer sold documentation about tax audit procedures to help unscrupulous buyers reduce the risks of being audited
· The Massachusetts State Police, where an officer is alleged to have sold computerized criminal records
The theft of
· The sale of records about sick people from the Norwegian Health Service to a drug company
The misuse of voter registration
In June 1992, officers of the
Ordinary diskettes can hold more than a megabyte of data; optical disks and special forms of diskette can hold up to gigabytes. Ensure that everyone in your offices using PCs or workstations understands the importance of securing diskettes and hard drives to prevent unauthorized copying. The effort of locking a system and putting diskettes away in secure containers under lock and key is minor compared to the possible consequences of data leakage.
Electronic mail can also be a channel for data leakage. For example, in September 1992, Borland International accused an ex‑employee of passing trade secrets to its competitor‑‑and his new employer‑‑Symantec Corporation. The theft was discovered in records of MCI Mail electronic messages allegedly sent by the executive to Symantec.
In November 1992, NASA officials asked the FBI
to investigate security at the
A case of data leakage via Trojan occurred in
October 1994, when a ring of criminal hackers operating in the
1997-07-02 A report by Trudy Harris in _The Australian_ reviewed
risks of telemedicine, a technology of great value in
1997-07-10 Mark Abene, a security expert formerly known to the underground as Phiber Optik, launched a command to check a client’s password files — and ended up broadcasting the instruction to thousands of computers worldwide. Many of the computers obligingly sent him their password files. Abene explained that the command was sent out because of a misconfigured system and that he had no intention of generating a flood of password files into his mailbox. Jared Sandberg, staff reporter for The Wall Street Journal, wrote, “A less ethical hacker could have used the purloined passwords to tap into other people’s Internet accounts, possibly reading their e-mail or even impersonating them online.” Mr Abene was a member of the Masters of Deception gang and was sentenced to a year in federal prison for breaking into telephone company systems. The accident occurred while he was on parole.
1997-07-19 A firm of accountants received passwords and other confidential codes from British Inland Revenue. Government spokesmen claimed it was an isolated incident. [How exactly did they know that it was an isolated incident?]
1997-08-07 The ICSA’s David Kennedy reported on a problem in
1997-08-15 Experian Inc. (formerly TRW Information Systems & Services), a major credit information bureau, discontinued its online access to customers’ credit reports after a mere two days when at least four people received reports about other people.
1999-01-29 The Canadian consumer-tracking service Air Miles inadvertently left 50,000 records of applicants for its loyalty program publicly accessible on their Web site for an undetermined length of time. The Web site was offline as of 21 January until the problem was fixed.
1999-02-03 An error in the configuration or programming of the F. A. O. Schwarz Web site resulted paradoxically in weakening the security of transactions deliberately completed by FAX instead of through SSL. Customers who declined to send their credit-card numbers via SSL ended up having their personal details — address and so forth — stored in a Web page that could be accessed by anyone entering a URL with an appropriate (even if randomly chosen) numerical component.
2000-02-06 The former director of the CIA, John Deutch, kept thousands of highly classified documents on his unsecured home Macintosh computer. Critics pointed out that the system was also used for browsing the Web, opening the cache of documents up to unauthorized access of various kinds.
2000-02-06 An error at the Reserve Bank of
2000-02-20 H&R Block had to shut down its Web-based online tax-filing system after the financial records of at least 50 customers were divulged to other customers.
2000-04-28 Conrad Heiney noted in RISKS that network-accessible shared trashcans under Windows NT have no security controls. Anyone on the network can browse discarded files and retrieve confidential information. [Moral: electronically shred discarded files containing sensitive data.]
2000-06-18 A RISKS correspondent reported on a new service in some hotels: showing the name of the guest on an LCD-equipped house phone when someone calls a room. Considering the justified reluctance to reveal the room number of a guest or to give out the name of a room occupant if one asks at the front desk, this service seems likely to lead to considerable abuse, including fraudulent charges in the hotel restaurant.
2000-06-24 New York Times Web-site staff chose an inappropriate mechanism for obscuring information in an Adobe Acrobat PDF document that contained information about the 1953 CIA-sponsored coup d’état in Iran. The technicians thought that adding a layer on top of the text in the document would allow them to hide the names of CIA agents; however, incomplete downloading allowed the supposedly hidden information to be read. Moral: change the source, not the output, when obscuring information.
2000-07-07 One of Spain’s largest banks — and its most aggressive
in terms of moving operations onto the Internet — is suffering from an identity
crisis that has resulted in thousands of messages being routed to Bulletin Board
VA, run by a rural Virginia man who publishes a weekly shopper with a circulation
of 10,000. Banco Bilbao Vizcaya Argentaria, which goes by the acronym BBVA after
Banco Bilbao Vizcaya merged with Argentaria SA last fall, is the owner of the
“grupobbva.com” domain name, but many employees, customers and outside vendors
mistakenly send their sometimes-sensitive e-mail to “bbva.com,” a domain name
owned by Bulletin Board VA. “When all this e-mail started coming in, I didn’t
know who to contact. I didn’t know who to talk to,” says
2000-07-13 Microsoft . . . acknowledged that a flaw in its Hotmail
program . . . [was] inadvertently sending subscribers’ e-mail addresses to online
advertisers. The problem, which is described as a “data spill,” occurs when
people who subscribe to HTML newsletters open messages that contain banner ads.
“The source of the problem is that Hotmail includes your e-mail address in the
[Web address], and if you read an e-mail that has banner ads,” the Web address
will be sent to the third-party company delivering the banner, says Richard
Smith, a security expert who alerted Microsoft to the problem in mid-June. Data
spills are common on the Web, says Debra Pierce of the Electronic Frontier Foundation.
“This isn’t just local to Hotmail; we’ve seen hundreds of instances of data
spills over the course of this year.” Smith estimates that more than a million
addresses may have been transferred to ad firms, but most of the big agencies,
including Engage and DoubleClick, are discarding the information. (
2000-07-24 AT&T allowed extensive details of a phone account
to be revealed to anyone entering a phone number into their touch-tone interface
2000-08-01 Peter Morgan-Lucas reported to RISKS, “Barclays Bank yesterday had a problem with their online banking service - at least four customers found they could access details of other customers. Barclays are claiming this to be an unforeseen side-effect of a software upgrade over the weekend.”
2000-08-14 Kevin Poulson of SecurityFocus reported “Verizon’s twenty-eight
million residential and business telephone subscribers from
2001-02-16 Paul Henry noted that the well-known problem of hidden information in MS Word documents continues to be a source of breaches of confidentiality. Writing in RISKS, he explained, “I received an MS Word document from a software start-up regarding one of their clients. Throughout the document the client was referred to as ‘X’, so as not to disclose the name. However I do not own a copy of Word, and was reading it using Notepad of all things, and discovered at the end the name of the directory in which the document was stored -- and also the real name of the client! I checked on a number of other word documents I had for hidden info, especially ones from Agencies who are looking to fill positions -- and yes, again I was able to tell who the client was from the hidden information in the documents.” Mr Henry concluded, “Risks: What potentially damaging information is hidden in published documents in Word, PDF and other complex formats? Mitigation: Use RTF when you can -- no hidden info, no viruses.”
2001-06-22 The e-mail of Dennis Tito, the investment banker who
paid to become the first tourist in space, was insecure for more than a year
-- as were the communications of his entire company, Wilshire Associates. .
. . Although there is no evidence that anyone took advantage of the breaches,
they allowed access by outsiders to confidential company business, including
financial data, passwords, and the personal information of employees. However,
security experts say Wilshire’s problem is not an isolated one, and warn that
American companies are not taking computer security issues seriously. Peter
G. Neumann, principal scientist in the computer science lab at SRI International,
says that the security breach discovered at Wilshire is just “one of thousands
of vulnerabilities known forever to the world. Everybody out there is vulnerable.”
2001-07-05 The drug company Eli Lilly sent an e-mail reminding 600 clients to renew their Prozac prescriptions -- and used CC instead of BCC, thus revealing the entire list of names and e-mail addresses to all 600 recipients.
2001-11-26 Search engines increasingly are unearthing private information
such as passwords, credit card numbers, classified documents, and even computer
vulnerabilities that can be exploited by hackers. “The overall problem is worse
than it was in the early days, when you could do AltaVista searches on the word
‘password’ and up come hundreds of password files,” says Christopher Klaus,
founder and CTO of Internet Security Systems, who notes that a new tool built
into Google to find a variety of file types is exacerbating the problem. “What’s
happening with search engines like Google adding this functionality is that
there are a lot more targets to go after.” Google has been revamped to sniff
out a wider array of files, including Adobe PostScript, Lotus
2002-02-20 RISKS correspondent Diomidis Spinellis cogently summarized some of the problems caused by search engines on the Web: “The aggressive indexing of the Google search engine combined with the on-line caching of the pages in the form they had when they were indexed, is resulting in some perverse situations. A number of RISKS articles have already described how sensitive data or supposedly non-accessible pages leaked from an organization’s intranet or web-site to the world by getting indexed by Google or other search engines. Such problems can be avoided by not placing private information on a publicly accessible web site, or by employing metadata such as the robot exclusion standard to inform the various web-crawling spiders that specific contents are not to be indexed. Of course, adherence to the robot exclusion standard is left to the discretion of the individual spiders, so the second option should only be used for advisory purposes and not to protect sensitive data.”
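To illustrate just how advisory the robot-exclusion mechanism is, here is a small Python sketch (paths and the user-agent name are invented): a well-behaved crawler parses the site’s exclusion rules and skips disallowed paths, but nothing in the protocol forces it to, which is why such rules must never be the only protection for sensitive pages.

    # A *well-behaved* crawler consults robots.txt before fetching a URL;
    # compliance is entirely voluntary. Paths and user agents are hypothetical.
    from urllib import robotparser

    robots_lines = [
        "User-agent: *",
        "Disallow: /intranet/",
        "Disallow: /drafts/",
    ]

    rp = robotparser.RobotFileParser()
    rp.parse(robots_lines)

    for url in ("http://www.example.com/press/index.html",
                "http://www.example.com/intranet/budget.html"):
        print(url, "->", "allowed" if rp.can_fetch("ExampleBot", url) else "disallowed")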
2002-03-22 Paul van Keep reported in RISKS, >Christine Le Duc, a Dutch chain of s*xshops, and also a mail & Internet order company, suffered
a major embarrassment last weekend. A journalist who was searching for information
on the company found a link on Google that took him to a page on the Web site
with a past order for a CLD customer. He used the link in a story for online
newspaper nu.nl. The full order information including name and shipping address
was available for public viewing. To make things even worse it turned out that
the classic URL twiddling trick, a risk we’ve seen over and over again, allowed
access to ALL orders for all customers from 2001 and 2002. The company did the
only decent thing as soon as they were informed of the problem and took down
the whole site.<
[Note: * included to foil false positive exclusion by crude spam filters.]
2002-06-10 Monty Solomon wrote in RISKS, “A design flaw at a Fidelity
Investments online service accessible to 300,000 people allowed Canadian account
holders to view other customers’ account activity. The problem was discovered
over the weekend by Ian Allen, a computer studies professor at
2003-01-16 MIT graduate students Simson Garfinkel and Abhi Shelat
bought 158 hard drives at second hand computer stores and eBay over a two-year
period, and found that more than half of those that were functional contained
recoverable files, most of which contained “significant personal information.”
The data included medical correspondence, love letters, pornography and 5,000
credit card numbers. The investigation calls into question PC users’ assumptions
when they donate or junk old computers — 51 of the 129 working drives had been
reformatted, and 19 of those still contained recoverable data. The only surefire
way to erase a hard drive is to “squeeze” it — writing over the old information
with new data, preferably several times — but few people go to the trouble.
The findings of the study will be published in the IEEE Security & Privacy
journal Friday. (AP
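For readers wondering what “writing over the old information” looks like in practice, here is a minimal sketch (in Python, file name invented) that overwrites a single file several times with random data before deleting it. It is illustrative only: journaling file systems, wear-levelling flash memory and bad-block remapping mean that real sanitization of a whole drive needs purpose-built tools or physical destruction.

    import os

    def shred_file(path, passes=3):
        """Overwrite a file's contents with random data several times, then delete it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())   # push each pass out to the device
        os.remove(path)

    if __name__ == "__main__":
        shred_file("old_customer_list.csv")   # hypothetical file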
2003-02-10 A state auditor found that at least one computer used
by staffers counseling clients with AIDS or HIV was ready to be offered for
sale to the public even though it still contained files of thousands of people.
Auditor Ed Hatchett said: “This is significant data. It’s a lot of information -- lots of names and things like sexual partners of those who are diagnosed with
AIDS. It’s a terrible security breach.” Health Services Secretary Marcia Morgan,
who has ordered an internal investigation of that breach, says the files were
thought to have been deleted last year. (AP/USA Today
2003-04-17 A glitch on the CNN.com Web site accidentally made available
draft obituaries written in advance for Dick Cheney, Ronald Reagan, Fidel Castro,
Pope John Paul II and Nelson Mandela. “The design mockups were on a development
site intended for internal review only,” says a CNN spokeswoman. “The development
site was temporarily publicly available because of human error.” The pages were
yanked about 20 minutes after being exposed. (CNet News.com
2003-05-29 Hacker Adrian Lamo found a security hole in a website
run by lock\line LLC, which provides claim management services to Cingular customers.
Lamo discovered the problem last weekend through a random finding in a
2003-06-16 Confidential vulnerability information managed by the
2003-06-30 Pet supply retailer PetCo.com plugged a hole in its online storefront over the weekend that left as many as 500,000 credit card numbers open to anyone able to construct a specially-crafted URL. Twenty-year-old programmer Jeremiah Jacks discovered the hole. He used Google to find active server pages on PetCo.com that accepted customer input and then tried inputting SQL database queries into them. “It took me less than a minute to find a page that was vulnerable,” says Jacks. The company issued a statement Sunday saying it had hired a computer security consultant to assist in an audit of the site.
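The class of flaw Jacks exploited is now usually called SQL injection: user input is pasted directly into a query string, so carefully chosen quote characters become part of the SQL statement itself. The sketch below (Python with an in-memory SQLite database; table and column names invented) shows both the vulnerable pattern and the parameterized form that closes the hole.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, card TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, 'alice', '4111-xxxx'), (2, 'bob', '5500-xxxx')")

    user_input = "alice' OR '1'='1"   # attacker-controlled form field

    # VULNERABLE: the attacker's quotes become part of the SQL statement,
    # and the query returns every customer's record.
    query = f"SELECT * FROM orders WHERE customer = '{user_input}'"
    print("vulnerable:   ", conn.execute(query).fetchall())

    # SAFE: a parameterized query treats the input purely as data.
    print("parameterized:", conn.execute(
        "SELECT * FROM orders WHERE customer = ?", (user_input,)).fetchall())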
2003-09-15 Two Bank of Montreal computers containing hundreds,
potentially thousands, of sensitive customer files narrowly escaped being sold
on eBay.com late last week, calling into question the process by which financial
institutions dispose of old computer equipment. Information in one of the computers
included the names, addresses and phone numbers of several hundred bank clients,
along with their bank account information, including account type and number,
balances and, in some cases, balances on GICs, RRSPs, lines of credit, credit
cards and insurance. Many of the files were dated as recently as late 2002,
while some went back to 2000. The computers appeared to originate from the bank’s
head office on
2004-01-05 Contributor Theodor Norup reports that a press-release Word document from the Danish Prime Minister’s Office unintentionally revealed its real source and all its revisions. As a result of this incident, ministry spokesman Michael Kristiansen said the Prime Minister’s office would “distribute speeches as PDF files…” Norup believes the remaining risk is trusting that the “high echelons of governments” know even a little about information security.
2004-03-16 A portion of Windows source code was leaked last month, and researchers are saying that hackers have uncovered several previously unknown vulnerabilities in the code. Immediately following the code’s posting on the Internet, members of the security underground began poring over the code, searching for undocumented features and flaws that might give them a new way to break into Windows machines. The real danger isn’t the vulnerabilities that this crowd finds and then posts; it’s the ones that they keep to themselves for personal use that have researchers worried. Experts said there has been a lot of talk about such finds on hacker bulletin boards and Internet Relay Chat channels of late, indicating that some hackers are busily adding new weapons to their armories. Another concern for Microsoft and its customers is that even though the leaked code is more than 10 years old, it forms the base of the company’s current operating system offerings, Windows XP and Windows Server 2003. This means that any vulnerabilities found in Windows NT or Windows 2000 could exist in the newer versions as well.
2004-10-19 Google Desktop Search may prove a boon to disorganized PC users who need assistance in finding data on their computers, but it also has a downside for those who use public or workplace computers. Its indexing function may compromise the privacy of users who share computers for such tasks as processing e-mail, online shopping, medical research, banking or any activity that requires a password. “It’s clearly a very powerful tool for locating information on the computer,” says one privacy consultant. “On the flip side of things, it’s a perfect spy program.” The program, which is currently available only for Windows PCs, automatically records any e-mail read through Outlook, Outlook Express or the Internet Explorer browser, and also saves pages viewed through IE and conversations conducted via AOL Instant Messenger. In addition, it finds Word, Excel and PowerPoint files stored on the computer. And unlike the built-in cache of recent Web sites visited that’s included in most browser histories, Google’s index is permanent, although individuals can delete items individually. Acknowledging potential privacy concerns, a Google executive says managers of shared computers should think twice about installing the tool before advanced features like password protection and multi-user support are available.
2005-02-07 A leaked list containing the names of about 240,000
people who allegedly spied for
2005-02-18 ChoicePoint, a spinoff of credit reporting agency Equifax,
has come under fire for a major security breach that exposed the personal data
records of as many as 145,000 consumers to thieves posing as legitimate businesses.
The information revealed included names, addresses, Social Security numbers
and credit reports. “The irony appears to be that ChoicePoint has not done its
own due diligence in verifying the identities of those ‘businesses’ that apply
to be customers,” says Beth Givens, director of the Privacy Rights Clearinghouse.
“They’re not doing the very thing they claim their service enables their customers
to achieve.” In its defense, ChoicePoint claims it scrutinizes all account applications,
including business license verification and individuals’ background checks,
but in this case the fraudulent identities had not been reported stolen yet
and everything seemed in order. ChoicePoint marketing director James Lee says
they uncovered the deception by tracking the pattern of searches the suspects
were conducting. (
2005-04-07 A hard drive full of confidential police data was sold on eBay for only $25.
John Bumgarner (President of Cyber Watch, Inc.)
and I published the following summary of data leakage risks from USB flash drives
in Network World Fusion in 2003
< http://www.networkworld.com/newsletters/sec/2003/1027sec1.html > and
< http://www.networkworld.com/newsletters/sec/2003/1027sec2.html >:
In the movie “The Recruit,” (Touchstone Pictures, 2003) an agent for the Central Intelligence Agency (played by Bridget Moynahan) downloads sensitive information onto a tiny USB flash drive. She then smuggles the drive out in the false bottom of a travel mug. Could this security breach (technically described as “data leakage”) happen in your organization?
Yep, it probably could, because most organizations do not control such devices entering the building or how they are used within the network. These drives pose a serious threat to security. With capacities currently ranging up to 2 GB (and increasing steadily), these little devices can bypass all traditional security mechanisms such as firewalls and intrusion detection systems. Unless administrators and users have configured their antivirus applications to scan every file at the time of file-opening, it’s even easy to infect the network using such drives.
Disgruntled employees can move huge amounts of proprietary data to a flash drive in seconds before they are fired. Corporate spies can use these devices to steal competitive information such as entire customer lists, sets of blueprints, and development versions of new software. Attackers no longer have to lug laptops loaded with hacking tools into your buildings. USB drives can store password crackers, port scanners, keystroke loggers, and remote-access Trojans. An attacker can even use a USB drive to boot a system into Linux or another operating system and then crack the local administrator password by bypassing the usual operating system and accessing files directly.
On the positive side, USB flash drives are a welcome addition to a security tester’s tool kit. As a legitimate penetration tester, one of us (Bumgarner) carries a limited security tool set on one and still has room to upload testing data. For rigorous (and authorized) tests of perimeter security, he has even camouflaged the device to look like a car remote and has successfully gotten through several security checkpoints where the officers were looking for a computer. So far, he has never been asked what the device was by any physical security guard.
This threat is increasing in seriousness. USB flash drives are replacing traditional floppy drives. Many computer vendors now ship desktop computers without floppy drives, but provide users with a USB flash drive. Several vendors have enabled USB flash drive support on their motherboards, which allows booting to these devices. A quick check on the Internet shows prices dropping rapidly; Kabay was recently given a free 128 MB flash drive as a registration gift at a security conference. The 2 GB drive mentioned above can be bought for $849 as this article is being written; 1 GB for $239; 512 MB for $179; 256 MB for $79; and 128 MB for $39.
To counter the threats presented by USB flash drives, organizations need to act now by establishing a policy that outlines acceptable use of these devices within their enterprises.
· Organizations should provide awareness training to their employees to point out the security risk posed by these USB Flash drives.
· The policy should require prior approval for the right to use such a device on the corporate network.
· Encrypting sensitive data on these highly portable drives should be mandatory because they are so easy to lose.
· The policy should also require that the devices contain a plaintext file with a contact name, address, phone number, e-mail address and acquisition number to aid an honest person in returning a found device to its owner. On the other hand, such identification on unencrypted drives will give a dishonest person information that increases the value of the lost information – a bit like labeling a key ring with one’s name and address.
· Physical security personnel should be trained to identify these devices when conducting security inspections of inbound and outbound equipment and briefcases.
Unfortunately, the last measure is doomed to failure in the face of any concerted effort to deceive the guards because the devices can easily be secreted in purses or pockets, kept on a string around the neck, or otherwise concealed in places where security guards are unlikely to look (unless security is so high that strip-searches are allowed). That doesn’t mean that the guards shouldn’t be trained, just that one should be clear on the limitations of the mechanisms that ordinary organizations are likely to be able to put into place.
Administrators for high security systems may have to disable USB ports altogether. However, if such ports are necessary for normal functioning (as is increasingly true), perhaps administrators will have to put physical protection on those ports to prevent unauthorized disconnection of connected devices and unauthorized connection of flash drives.
Because without appropriate security, these days your control over stored data may be gone in a flash.
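The “disable USB ports altogether” option mentioned above can be approximated in software, at least for mass-storage devices on Windows: one widely used technique is to set the USBSTOR driver’s Start value to 4 (disabled). The sketch below assumes Windows, Python’s standard winreg module and administrative rights; an enterprise would normally push the same setting through Group Policy or endpoint-management tools rather than a script.

    # Disable (or re-enable) the Windows USB mass-storage driver by setting the
    # USBSTOR service's Start value. Requires administrative rights; Windows only.
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
    DISABLED, DEMAND_START = 4, 3

    def set_usb_storage(enabled):
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
            winreg.SetValueEx(k, "Start", 0, winreg.REG_DWORD,
                              DEMAND_START if enabled else DISABLED)

    if __name__ == "__main__":
        set_usb_storage(False)   # block newly attached USB mass-storage devices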
The problem is exacerbated by the increasing variety of form factors for USB flash drives. Not only are they available in inch-long versions that are easy to conceal in any pocket, purse or wallet, but there are forms that are not even recognizable as storage devices unless one knows what to look for.
Consider for example the “USB MP3 Player Watch” with 256 MB of storage (see < http://tinyurl.com/5xtxb > for details) that one of my readers pointed out to me recently (thanks, James!). This device looks like an analog watch but comes with cables for USB I/O (and earphones too). Any bets your security guards are going to be able to spot this as a mass-storage device equivalent to a stack of 177 3.5” floppy diskettes?
Then there is the newest gift for the geeks in your life, the SwissMemory USB Memory & Knife < http://tinyurl.com/4c5g8 >. You can buy this gadget, including a blade, scissors, file with screwdriver tip, pen and USB memory in 64, 128, 256, or 512 MB capacities. And here I thought that my Swiss Army knife with a set of screwdriver heads was the neatest geek tool I’d ever seen.
The USB Pen (not a “PenDrive”) is a pen that uses standard ink refills but also includes 128 MB of USB flash memory < http://tinyurl.com/6z6js >.
There are three distinct approaches I’ve seen to protecting data against unauthorized copying to USB devices (or to any other storage device):
The pointers below don’t claim to be exhaustive, and inclusion should not be interpreted as endorsement. I haven’t tried any of these products and I have no relationship with the vendors whatsoever.
On a slightly different note, it is not at all clear how any of these products can cope with the rather nasty characteristics of the KeyGhost USB Keylogger < http://www.keyghost.com/USB-Keylogger.htm >, which, as far as I can see from reading the Web pages, may be completely invisible to the operating system. This device can be stuck on to the end of the cable of any USB keyboard and will cheerfully record days of typing into its 128MB memory. Such keyloggers can provide a wealth of confidential data to an attacker, including userIDs and passwords as well as (no doubt tediously error-bespattered) text of original correspondence.
Anyone can use even an ordinary mobile phone as a microphone (or camera) by covertly dialing out; for example, one can call a recording device at a listening station and then simply place the phone in a pocket or briefcase before entering a conference room. However, my friend and colleague Chey Cobb, CISSP, recently pointed out a device from Nokia that is unabashedly being advertised as a “Spy Phone” because of additional features that threaten corporate security.
On < > we read about the $1800 device that works like a normal mobile phone but also allows the owner to program a special phone number that turns the device into a transmission device under remote control. In addition, the phone can be programmed for silent operation: “By a simple press of a button, a seemingly standard cell phone device switches into a mode in which it seems to be turned off. However, in this deceitful mode the phone will automatically answer incoming calls, without any visual or audio indications whatsoever. . . . A well placed bug phone can be activated on demand from any remote location (even out of another country). Such phones can also prove valuable in business negotiations. The spy phone owner leaves the meeting room, (claiming a restroom break, for instance), calls the spy phone and listens to the ongoing conversation. On return the owners negotiating positions may change dramatically.”
It makes more sense than ever to ban mobile phones from any meeting that requires high security.
David Bennahum wrote an interesting article in December 2003 about these questions and pointed out that businesses outside the USA are turning to cell-phone jamming devices (illegal in the USA) to block mobile phone communications in a secured area. Bennahum writes, “According to the FCC, cell-phone jammers should remain illegal. Since commercial enterprises have purchased the rights to the spectrum, the argument goes, jamming their signals is a kind of property theft.” Seems to me there would be obvious benefits in allowing movie houses, theaters, concert halls, museums, places of worship and secured meeting locations to suppress such traffic as long as the interference were clearly posted. No one would be forced to enter the location if they did not agree with the ban, and I’m sure there would be some institutions catering to those who actually _like_ sitting next to someone talking on a cell phone in the middle of a quiet passage at a concert.
Bennahum mentioned another option
– this one quite legal even in the
Finally, one can create a Faraday cage < > that blocks radio waves by lining the secured facility with appropriate materials such as copper mesh or, more recently, metal-impregnated wood.
Unfortunately, there are more subtle ways of stealing information. Security specialists have long pointed out that information can be carried in many ways, not just through obvious printed copies or outright copies of files. For example, a programmer may realize that (s)he will not have access to production data, but the programmer’s programs will. So (s)he can insert instructions which modify obscure portions of the program’s output to carry information. Insignificant decimal digits (e.g., the 4th decimal digit in a dollar amount) can be modified without exciting suspicion. Such methods of hiding information in innocuous files and documents are collectively known as “steganography.” The most popular form of steganography these days involves tweaking bits in graphics files so that images can carry hidden information.
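As a toy illustration of that bit-tweaking, the sketch below hides a short message in the least-significant bit of each pixel’s red value and reads it back. It assumes the third-party Pillow imaging package and invented file names, and it only works with a lossless output format such as PNG; it is meant to show the principle, not to resist real steganalysis.

    from PIL import Image   # third-party Pillow package

    def hide(cover_path, out_path, message):
        """Embed 'message' (bytes) in the low-order red bits; terminate with a NUL."""
        img = Image.open(cover_path).convert("RGB")
        bits = [(byte >> i) & 1 for byte in message + b"\x00" for i in range(8)]
        pixels = list(img.getdata())
        assert len(bits) <= len(pixels), "message too long for this image"
        stego = [((r & ~1) | bits[i], g, b) if i < len(bits) else (r, g, b)
                 for i, (r, g, b) in enumerate(pixels)]
        img.putdata(stego)
        img.save(out_path, "PNG")        # lossless, so the hidden bits survive

    def reveal(stego_path):
        """Recover the hidden bytes until the terminating NUL."""
        out, byte, n = bytearray(), 0, 0
        for r, _, _ in Image.open(stego_path).convert("RGB").getdata():
            byte |= (r & 1) << n
            n += 1
            if n == 8:
                if byte == 0:
                    break
                out.append(byte)
                byte, n = 0, 0
        return bytes(out)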
For more information about steganography, see
Charles Pfleeger points out that even small amounts of information can sometimes be valuable; e.g., the mere existence of a specific named file may tell someone what they need to know about a production process. Such small amounts of information can provide a covert channel, conveyed by any controllable multi-state phenomenon; i.e., anything that has at least two states can transmit the knowledge being stolen. For instance, one could transmit information via tape movements, printer movements, lighting up a signal light, and so on.
An alternative to encryption is encoding; i.e., agreements on the specific meaning of particular data. A code book can turn any letter, word or phrase into a meaningful message. Consider, for example, "One if by land, two if by sea." Unless the code book is captured, coded messages are difficult (but not always impossible) to detect and block. If there are large quantities of suspect messages in natural language, it _may_ be possible to spot something odd if the frequencies of unusual words or curious phrases are higher than expected. Even so, spotting such covert channels may still not reveal the actual messages being transmitted.
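A trivial illustration of a pre-arranged code book, with invented phrases and meanings: to anyone without the book, each message is an unremarkable office note, and no amount of analysis of the text alone will recover the hidden meaning.

    # Toy code book: innocuous phrases mapped to pre-arranged meanings.
    CODE_BOOK = {
        "the quarterly figures look fine": "the merger is approved",
        "let's revisit this after lunch":  "the merger is cancelled",
        "please update the style guide":   "destroy the documents",
    }

    def decode(message):
        return CODE_BOOK.get(message.lower().strip(" ."), "<not a coded phrase>")

    print(decode("The quarterly figures look fine."))   # -> the merger is approved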
Even without data processing equipment, one can ferry information out of a secured system using photography. A search for “spy cameras” on Google brings up many hits for tiny, concealable cameras -- and today we find cameras even in mobile phones.
Bluntly, the wide variety of covert channels of communication makes it impossible to stop data leakage entirely. The best one can do to reduce the likelihood of such data theft through code developed in-house is to enforce strong quality assurance procedures on all such code. For example, if there are test suites which are to produce known output, even fourth-decimal-point deviations can be spotted. This kind of precision, however, absolutely depends on automated quality assurance tools. Manual inspection is not reliable.
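Here is a minimal sketch of such an automated comparison (in Python, with invented file names and an assumed comma-separated layout): it checks a program’s output against a known-good reference line by line and field by field, so that even a fourth-decimal-place deviation is flagged.

    from decimal import Decimal

    def compare_outputs(reference_path, actual_path):
        """Report every field that differs from the reference, however slightly."""
        mismatches = 0
        with open(reference_path) as ref, open(actual_path) as act:
            for lineno, (r_line, a_line) in enumerate(zip(ref, act), start=1):
                for col, (r, a) in enumerate(
                        zip(r_line.split(","), a_line.split(",")), start=1):
                    r, a = r.strip(), a.strip()
                    try:
                        equal = Decimal(r) == Decimal(a)   # exact numeric comparison
                    except ArithmeticError:
                        equal = (r == a)                   # non-numeric field
                    if not equal:
                        mismatches += 1
                        print(f"line {lineno}, field {col}: expected {r!r}, got {a!r}")
        return mismatches   # note: extra or missing lines are not checked in this sketch

    if __name__ == "__main__":
        if compare_outputs("expected_output.csv", "test_run.csv"):
            raise SystemExit("Output deviates from the reference -- investigate before release.")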
The same preventive measures applied to detect Trojans and bombs can help stop data leakage. Having more than one programmer be responsible for each program can make criminality impossible without collusion -- always a risk for the criminal. Random audits increase the chance that improper subroutines will be spotted. Walkthroughs force each programmer to explain just what that funny series of instructions is doing and why.
As for other covert channels such as coded messages sent through e-mail, I'm sorry to say that there's not much we can do about this problem yet -- and little prospect of solving it soon.
Again, the best defense starts with the educated, security‑conscious employee.
Computer data can be held for ransom. For example, according to Whiteside,
1999-10-15 Jahair Joel Navarro, an 18-year-old from
2000-01-12 A 19-year-old Russian criminal hacker calling himself Maxus broke into the Web site of CD Universe and stole the credit-card information of 300,000 of the firm’s customers. According to New York Times reporter John Markoff, the criminal threatened CD Universe: “Pay me $100,000 and I’ll fix your bugs and forget about your shop forever....or I’ll sell your cards [customer credit data] and tell about this incident in news.” When the company refused, he posted 25,000 of the accounts on a Web site (Maxus Credit Card Pipeline) starting 1999-12-25 and hosted by the Lightrealm hosting service. That company took the site down on 2000-01-09 after being informed of the criminal activity. The criminal claimed that the site was so popular with credit-card thieves that he had to set up automatic limits of one stolen number per visitor per request. Investigation shows that the stolen card numbers were in fact being used fraudulently, and so 300,000 people had to be warned to change their card numbers.
2000-01-15 In September 1999, the Sunday Times reported in an article
by Jon Ungoed-Thomas and Maeve Sheehan that British banks were being attacked
by criminal hackers attempting to extort money from them. The extortion demands
were said to start in the millions and then run down into the hundreds of thousands
of pounds. Mark Rasch is a former attorney for computer crime at the United
States Department of Justice and later legal counsel for Global Integrity, the
computer security company that recently spun off from SAIC. He said, “There
have been a number of cases in the
2000-01-18 In January, information came to light that VISA International had been hacked by an extortionist who demanded $10M for the return of stolen information — information that VISA spokesperson Chris McLaughlin described as worthless and posing no threat to VISA or to its customers. The extortion was being investigated by police but no arrests had been made. However, other reports suggested that the criminal hackers stole source code and could have crashed the entire system. In a follow-up on RISKS, a correspondent asked, “. . . [What source code was *stolen*? It is extremely unlikely that it was *the source code for the Visa card system* as stated! There is no such thing. Like any system, it would consist of many source libraries, each relating to different modules of the overall system. So we should be asking what source was copied? (You can hardly say it was *stolen*, as that would imply that it was taken away, leaving the rightful owner without possession of the item of stolen property, and we all know that is not what happens in such cases. In a shop like Visa, the code promotion system maintains multiple copies in the migration libraries, so erasure of the sole copy is highly unlikely).”
2000-01-25 French programmer Serge Humpich spent four years on the cryptanalysis of the smart-card authentication process used by the Cartes Bancaires organization and patented his analysis. When he demonstrated his technique in September 1999 by stealing 10 Paris Metro tickets using a counterfeit card, he was arrested. The man had asked the credit-card consortium to pay him the equivalent of $1.5M for his work; instead, he faced a seven-year term in prison and a maximum fine of about $750,000 for fraud and counterfeiting (although prosecutors asked for a suspended sentence of two years’ probation and a fine of approximately U$10,000). He was also fired from his job because of the publicity over his case. In late February 2000, he was given a 10-month suspended sentence and fined 12,000 FF (~U$1,800).
2000-12-13 The FBI . . . [began] searching for a network vandal
who stole 55,000 credit card numbers from a private portion of the Creditcards.com
Web site and published them on the Internet after the company refused to pay
the intruder money in order to keep the information from being circulated. .
. ..” (New York Times
2001-03-02 The FBI says an organized ring of hackers based in
2001-03-09 A little-known company called TechSearch has found a
new gimmick for making money off the Net -- it’s using a 1993 patent that covers
a basic process for sending files between computers to demand license payments
from big-name companies, including The Gap, Walgreen, Nike, Sony, Playboy Enterprises
and Sunglass Hut. Other less-willing contributors include Audible, Encyclopaedia
Britannica and Spiegel, which were threatened with litigation when they refused
to pay up. “We chose to settle the lawsuit rather than move forward with potentially
costly litigation,” says a Britannica spokeswoman. Following complaints that
the patent is invalid, the U.S. Patent and Trademark Office reached an initial
decision late last month to void it, but TechSearch has amassed a collection
of 20-some other patents that it can use to extract payments. It’s filed several
lawsuits against major electronics firms based on a 1986 patent on “plug and
play” technology, and has initiated litigation with several distance learning
providers based on a 1989 patent that broadly covers computer-based educational
techniques. TechSearch founder Anthony Brown says his methods, although aggressive,
are perfectly legal, and the company’s law firm says it’s won $350 million in
settlements in a string of jury verdicts over the last six years. Critics have
labeled the company’s techniques “extortionate” and “patentmail.” (Wall Street
2002-06-18 The administrator of
2003-07-29 An unanticipated by-product of
2003-08-25 In June 2003, a high-tech extortionist in the
1. Campina had to open a bank account and get a credit card for it.
2. The victims deposited the payoff in the bank account.
3. They had to buy a credit card reader and scan the credit card to extract the data from the magnetic strip.
4. Using a steganography program and a picture of a red VW car sent by the criminal, the victims encoded the card data and its PIN into the picture using the steganographic key supplied with the software.
5. They then posted the modified picture in an advertisement on an automobile-exchange Web site.
6. The criminal used an anonymizing service called SURFOLA.COM to mask his identity and location while retrieving the steganographic picture from the Web site.
The victims worked with their local police, who in turn communicated with the FBI for help. The FBI was able to find the criminal’s authentic e-mail address along with sound financial information from his PAYPAL.COM account. Dutch police began surveillance and were able to arrest the 45-year-old microchip designer when he withdrew money from an ATM using the forged credit card.
2004-02-26 Tokyo Metropolitan Police arrested three men on suspicion of trying to extort up to 3 billion yen (U.S. $28 million) from Softbank. The suspects claimed that they had obtained DVD and CD disks filled with information on 4.6 million Yahoo BB customers. Two of the suspects run Yahoo BB agencies which sell DSL and IP telephone services…. According to Softbank, the stolen data includes name, address, telephone number, and e-mail. No billing or credit card information was leaked. However, there were indications that the suspects could be linked to organized crime (the Yakuza).
2004-03-23 Federal law enforcement officials in California have
arrested a 32-year-old man who demanded $100,000 from Google Inc. and threatened
to “destroy” the company by using a software program to fake traffic on Internet
ads. The man’s program automated phony traffic to cost-per-click ads Google
places on websites and caused Google to make payments to Web sites the man had
set up. Released on $50,000 bail, he faces up to 20 years in prison and a $250,000
fine. (Bloomberg News/
2004-05-26 Australians are being targeted by Eastern European organized
crime families using the internet to extort and steal far from home. Delegates
at the annual AusCERT Asia Pacific Internet Security Conference were warned
Wednesday, May 26, that mobsters were hiring computer programmers to take their
brand of criminal activity online. The deputy head of
2004-05-31 Police have arrested two additional people on suspicion
of trying to extort money from Softbank after obtaining personal data on as
many as 4 million subscribers to the Internet company’s broadband service. The
two -- Yutaka Tomiyasu, 24, and Takuya Mori, 35 -- are accused of obtaining
company passwords to hack into Softbank’s database from an Internet cafe in
Clearly, one of the best defenses against extortion based on theft of data is to have adequate backups. Another is to encrypt sensitive data so they cannot be misused even if they’re stolen.
A Public Broadcasting System (PBS) television show in early 1993 reported that there are rumors that unscrupulous auditors have occasionally blackmailed white collar criminals found during audits.
The best way to prevent embarrassment or blackmail during an audit is to run internal audits. Support your internal audit staff. Explain to them what you need to protect. Point out weak areas. Better to have an internal audit report that supports your recommendations for improved security than to have a breach of security cost your employer reputation and money.
Another form of extortion is used by dishonest employees who are found out by their employers. When confronted with their heinous deeds, they coolly demand a letter of reference to their next victim. Otherwise they will publicize their own crime to embarrass their employer. Many organizations are thought to have acceded to these outrageous demands. Some scoundrels have even asked for severance pay‑‑and, rumor has it, they have been paid.
Such narrow defensive strategies are harming society’s ability to stop computer crime.
Hiding a problem makes it worse. A patient who conceals a cancer from doctors will die sooner rather than later. Organizations that conceal system security breaches make it harder for all system managers to fight such attacks. Victims should report these crimes to legal authorities and should support prosecution.
Interestingly, there’s a different kind of extortion that involves vendors and vulnerabilities. In this scam, a criminal discovers a vulnerability in a product and threatens to reveal it unless they’re paid money to conceal it. The normal response of a company with any sense at all is “Publish and be damned.”
Criminals have produced fraudulent documents and financial instruments for millennia. Coins from ancient empires had elaborate dies to make it harder for low‑technology forgers to imitate them. Even thousands of years ago, merchants knew how to detect false gold by measuring the density of coins or by testing the hardness of the metal. Cowboys in Wild‑West movies occasionally bite coins, much to the mystification of younger viewers.
Whiteside provides two particularly interesting
cases of computer‑related forgery. The most ingenious involved a young
If a teller had observed that customers were writing in account numbers different from the magnetically‑imprinted codes at the bottom of each deposit slip, the fraud would have been impossible.
The other case cited by Whiteside concerned
checks which were fraudulently printed with the name and logo of a bank in
Once again, human awareness and attention could have foiled the fraud.
But things are getting worse. Forgers have gone high‑tech. It seems nothing is sacred any more, not even certificates and signatures.
A fascinating article in Forbes Magazine in 1989 showed how the writer was able to use desktop publishing (DTP) equipment even that long ago to create fraudulent checks. He used a high‑quality scanner, a PC with good DTP and image‑enhancement (touch‑up) programs and high‑resolution laser printers. Color copiers and printers have opened up an even wider field for forgery than the monochrome copiers and printers did. The total cost of a suitable forgery system at this writing (July 2004) is about $1,000 in all.
The Forbes article and other security
references list many examples of computer‑related forgeries. A
In December 1992, California State Police in
You should verify the authenticity of documents before acting on them. If a candidate gives you a letter of reference from a former employer, verify independently that the phone numbers match published information; call the person who ostensibly wrote the letter; and read them the important parts of their letter.
Financial institutions should be especially careful not to sign over money quickly merely because a paper document looks good. Thorough verification makes sense in these days of easy forgery.
Credit cards have become extensions of computer databases. In most shops where cards are accepted, sales clerks pass the information encoded in magnetic strips through modems linked to central databases. The amount of each purchase is immediately applied to the available balance and an authorization code is returned through the phone link.
The Internet RISKS bulletin distributed a note in December 1992 about credit card fraud. A correspondent reported on two bulletins he had noticed at a local bookstore. The first dealt with magnetically forged cards. The magnetic stripe on these fraudulent cards contains a valid account code that is different from the information embossed on the card itself. Since very few clerks compare what the automatic printers spew forth with the actual card, thieves successfully charge their purchases to somebody else’s account. The fraud is discovered only when the victim complains about erroneous charges on the monthly bill. Although the victim may not have to pay directly for the fraud (the signature on the charge slip won’t match the account owner’s), everyone bears the burden of the theft by paying higher credit card fees.
In one of my classes, a security officer from a large national bank explained that when interest rates on unpaid balances were at 18%, almost half of that rate (8%) was assigned to covering losses and frauds.
In January 1993, a report on the Reuter news
wire indicated that credit card forgery was rampant in southeast Asia. Total
losses worldwide reached $1 billion in 1991, twice the theft in 1990. In a single
Those of you whose businesses accept credit cards should cooperate closely with the issuers of the cards. Keep your employees up to date on the latest frauds and train them to compare the name on the card itself with the name that is printed out on the invoice slip. If there is the slightest doubt about the legitimacy of the card, the employee should ask for customer identification or consult a supervisor for help.
Ultimately, it may become cost-effective to insist on the same, rather modest, level of security for credit cards as for bank cards: at least a PIN (personal identification number) to be entered by the user at the time of payment. There are, however, difficulties in ensuring the confidentiality of such PINs during telephone ordering. A solution to this problem is variable PINs generated by a “smart card:” a micro-processor-equipped credit card which generates a new PIN every minute or so. The PIN is cryptographically related to the card serial number and to the precise date and time; even if a particular PIN is overheard or captured, it is useless a very short time after the transaction. Combined with a PIN to be remembered by the user, this system may greatly reduce credit-card fraud.
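A minimal sketch of how such a time-varying PIN could be derived is shown below. It assumes a secret shared between the card and the issuer and combines it with the card serial number and the current time window using an HMAC; this is purely illustrative and does not reproduce any particular vendor's scheme (standardized one-time-password designs such as RFC 6238 TOTP differ in detail).

import hashlib
import hmac
import time

def time_based_pin(card_serial: str, secret: bytes, interval: int = 60) -> str:
    # Derive a 6-digit PIN that changes every `interval` seconds.
    window = int(time.time() // interval)             # current time slot
    msg = f"{card_serial}:{window}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    code = int.from_bytes(digest[:4], "big") & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"                  # reduce to 6 decimal digits

print(time_based_pin("4929-1234-5678-9012", b"issuer-shared-secret"))

Because the verifier can repeat the same calculation for the current (and adjacent) time windows, an eavesdropped PIN becomes worthless almost immediately after the transaction.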
Using computers in carrying out crime is nothing new. Organized crime uses computers all the time, according to August Bequai. He catalogs applications of computers in gambling, prostitution, drugs, pornography, fencing, theft, money laundering and loan‑shark operations.
A specialized subset of computer‑aided crime is simulation, in which complex systems are emulated using a computer. For example, simulation was used by a former Marine who was convicted in May 1991 of plotting to murder his wife. Apparently he stored details of 26 steps in a “recipe” file called “murder.” The steps included everything from “How do I kill her?” through “Alibi” and “What to do with the body.”
If it is known that you will carry out periodic audits of files on your enterprise computer systems, there’s a better chance that you will prevent criminals from using your property in carrying out their crimes. On the other hand, such audits may force people into encrypting incriminating files. Audits may also cause morale problems, so it’s important to discuss the issue with your staff before imposing such routines.
Simulation was used in a bank fraud in
Bellefeuille, Yves (2001). Passwords don’t protect Palm data, security firm
warns. RISKS 21.26
< http://catless.ncl.ac.uk/Risks/21.26.html#subj7 >
Bequai, A. (1987). Technocrimes: The Computerization of Crime and Terrorism.
Bosworth, S. & M. E. Kabay (2002), eds. Computer Security Handbook,
4th Edition. Wiley (
Bulfinch, T. (1855). The Age of Fable.
Reprinted in Bulfinch’s Mythology in the Modern Library edition. Random
cDc (1998). Running a Microsoft operating system on a network? Our condolences.
before visiting criminal-hacker sites.]
< http://www.cultdeadcow.com/news/back_orifice.txt >
Kabay, M. E. (2001). Fighting DDoS, part 1 (2001-07-25)
Kabay, M. E. (2005). INFOSEC Year in Review. See < http://www.mekabay.com/iyir > for details and instructions on downloading this free database. PDF reports are also available for download.
Karger, Paul A., and Roger R. Schell (1974). MULTICS Security Evaluation:
Vulnerability Analysis, ESD-TR-74-193 Vol. II. (ESD/AFSC, Hanscom AFB,
Abstract < http://csrc.nist.gov/publications/history/#karg74 >;
full text < http://csrc.nist.gov/publications/history/karg74.pdf >.
Myers, Philip (1980). Subversion: The Neglected Aspect of Computer Security.
Master’s Thesis (
Abstract < http://csrc.nist.gov/publications/history/#myer80 >;
full text < http://csrc.nist.gov/publications/history/myer80.pdf >
Parker, D. B. (1998) Fighting Computer Crime: A New Framework for Protecting Information. John Wiley & Sons (NY) ISBN 0-471-16378-3. xv + 500 pp; index
PestPatrol Resources < >
PestPatrol White Papers < >
Rivest, Ron (1997). !!! FBI wants to ban the Bible and smiley faces !!! Risks
< http://catless.ncl.ac.uk/Risks/19.37.html#subj1 >
Schwartau, W. (1991). Terminal Compromise (novel). Inter.Pact Press (Seminole, FL). ISBN 0‑962‑87000‑5.
Stoll, C. (1989). The Cuckoo’s Egg: Tracking a Spy through the Maze
of Computer Espionage. Pocket Books (
Ware, Willis (1970). Security Controls for Computer Systems: Report of
Defense Science Board Task Force on Computer Security.
Abstract < http://csrc.nist.gov/publications/history/#ware70 >;
full text < http://csrc.nist.gov/publications/history/ware70.pdf >
Whiteside, T. (1978). Computer Capers: Tales of Electronic Thievery,
Embezzlement, and Fraud. New American Library (
Schwartau, W. (1994). Information Warfare: Chaos on the Electronic Superhighway.
Thunder’s Mouth Press,
For a discussion of proximity devices to prevent piggybacking,
see Kabay, M. E. (2004). The end of passwords: Ensure’s approach,
Part 1 < http://www.networkworld.com/newsletters/sec/2004/0607sec1.html > and
Part 2 < http://www.networkworld.com/newsletters/sec/2004/0607sec2.html >
Tate, C. (1994). Hardware-borne Trojan Horse programs. RISKS 16.55 < http://catless.newcastle.ac.uk/Risks/16.55.html#subj3 >
Associated Press (2000). Man indicted in computer case.
New York Times, | <urn:uuid:57dbad41-dcd2-4f1f-9ea3-127fcea739e4> | CC-MAIN-2017-04 | http://www.mekabay.com/overviews/crime.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00388-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940733 | 34,998 | 2.84375 | 3 |
CoPP – Control Plane Protection, or better, Control Plane Policing. It is the only option for providing some sort of flood protection or QoS for traffic going to the control plane.
In normal router operation, the most important traffic is control plane traffic. Control plane traffic is traffic originated on the router itself by protocol services running on it, destined to other router devices on the network. In order to run properly, routers need to speak with each other. They speak with each other according to rules defined in protocols, and those protocols run on the router in the shape of router services.
Examples of this kind of protocol are routing protocols like BGP, EIGRP and OSPF, or non-routing protocols like CDP.
When a router is making a BGP neighbour adjacency with a neighbouring router, it means that both routers are running the BGP protocol service. The BGP service generates control plane traffic, sends that traffic to the BGP neighbour, and receives control plane traffic back from the neighbour.
Usage of Control Plane Protection is important on routers receiving heavy traffic of which too many packets are forwarded to the control plane. In that case, we can filter traffic based on predefined priority classes that we are free to define based on our specific traffic pattern.
By using CoPP, we can prioritise part of the control plane traffic so that it can be processed efficiently by the control plane in a timely manner. Some other, less important control traffic will be dropped at the entrance to the control plane or slowed down by buffering. We can use QoS techniques at the entrance to the Route Processor (shown in the image above), enabling us to drop or, even better, throttle some less important control traffic flows. In this way all valid control plane traffic gets through, but some flows more slowly than others.
Route Processor Virtual Interfaces
Control Plane Protection extends the QoS feature to the control plane by treating the Route Processor as an additional virtual interface attached to the router. All traffic redirected to the Route Processor is classified into three categories corresponding to three sub-interfaces of the virtual interface:
1. Control-plane host sub-interface
This sub-interface receives all control plane traffic that is destined for one of the router interfaces. This is usually management traffic and routing protocol traffic. Most control plane protection features operate on this sub-interface, so it provides the most features, such as policing, port filtering, and per-protocol queue thresholds.
The class-map type port-filter allows packets destined for TCP/UDP ports not currently open on the router to be dropped automatically. The operating system automatically detects all open ports, and you can manually configure some exceptions. This can significantly reduce the load on the device CPU during flooding attacks.
If traffic destined to the Route Processor is not TCP/UDP, that kind of control traffic ends up on the CEF exception sub-interface.
Per-protocol queue thresholds set selective queue limits for packets of different protocols, such as ICMP, BGP, OSPF, etc. In our example below, the policy-map ICMP_RATE_LIMIT will catch all ICMP packets and apply rate policing to them.
2. Control-plane transit sub-interface
This sub-interface handles transit IP traffic that cannot be handled by the faster hardware CEF mechanism. This usually happens when a packet must be routed out of an Ethernet interface and there is no ARP mapping yet for the next hop. In this case the packet is switched in the processor, which makes an ARP lookup to find the next-hop MAC address.
3. Control-plane CEF exception sub-interface
Like the name says, a packet that causes an exception in CEF switching ends up here. Examples of this kind of traffic are non-IP traffic destined to the router itself, CDP, OSPF updates, and ARP packets.
How Control Plane Protection works and how is configured
There are two ways of doing this. We can apply a separate rate-limiting policy to any of the sub-interfaces, or apply one aggregate policy for all sub-interfaces, which is known as classic control plane policing. Using both the sub-interface and aggregate policies is possible but can be unstable on some IOS versions and thus is not recommended. In our configuration example below we will configure a separate rate-limiting policy for each of the sub-interfaces.
Before packets reach one of the specific control plane sub-interfaces, they are processed by several ingress features. Packets go through the input access-list, uRPF checks and the aggregate control-plane policy if one is configured. After this, packets are forwarded to the sub-interface-specific policy, queued onto the respective interface input queue and handled via the selective packet discard policy.
Here’s the example of Control Plane Protection. I’ll put the example here, with the explanation, bullet by bullet, below.
The class-map type port-filter is really cool. It allows you to match specific ports (like 2323 and 2424 in our example). The best part is that you are able to match all closed ports on the router dynamically and drop packets destined to non-listening ports before the router processes them and responds with an ICMP unreachable or TCP RST packet.
- In the first part of the example above, we are blocking all closed ports except TCP 2323 and 2424.
class-map type port-filter match-all CLOSED_PORTS
 match closed-ports
 match not port tcp 2323
 match not port tcp 2424
policy-map type port-filter HOST_PORT_FILTER
 class CLOSED_PORTS
  drop
- In the next part we are matching ICMP traffic and rate-limiting it on its way to the host sub-interface, which means to the Route Processor.
ip access-list extended ICMP
 permit icmp any any
class-map ICMP
 match access-group name ICMP
policy-map ICMP_RATE_LIMIT
 class ICMP
  police rate 10 pps burst 5 packets
- The next example checks fragmented transit traffic matched with an access-list. Fragmented transit traffic will be limited to a rate of 1,000,000 packets per second, with some burst allowance, on the transit sub-interface.
ip access-list extended FRAGMENTS
 permit ip any any fragment
class-map FRAGMENTS
 match access-group name FRAGMENTS
policy-map TRANSIT_RATE_LIMIT
 class FRAGMENTS
  police rate 1000000 pps burst 200000 packets
- At the end of the example all other packets resulting in CEF exceptions are limited to 400 packets per second.
policy-map CEF_EXCEPTION_RATE_LIMIT
 class class-default
  police rate 400 pps burst 20 packets
- In the last few lines we are applying service policies to all three sub-interfaces. With this step we are actually applying the Control Plane Protection.
control-plane host
 service-policy input ICMP_RATE_LIMIT
 service-policy type port-filter input HOST_PORT_FILTER
control-plane transit
 service-policy input TRANSIT_RATE_LIMIT
control-plane cef-exception
 service-policy input CEF_EXCEPTION_RATE_LIMIT
The Grain and Cereal Crop Protection Chemicals (Pesticides) Market has been estimated at USD 24,578.98 million in 2015 and is projected to reach XX million by 2020. Among all crop-based applications of pesticides, grains & cereals corner the largest market share, and the segment is estimated to grow at a CAGR of 5.97% during 2015-2020. It is forecasted that this segment will remain the largest by 2020.
The increasing global population is putting more pressure on enhancing cereal production worldwide, leading to greater use of pesticides in this segment. Grains and cereals account for the largest product category among crop-based as well as non-crop-based pesticides. Among grains and cereals, herbicides form the largest and fastest growing category.
Production and yield of cereals & grains present vastly contradictory figures when the worldwide data is compared to data from developed and developing regions. For instance, between 1993 and 2020, global land area used for producing cereals & grains is expected to witness a growth of 0.25%, while yield is projected to increase by 1.3%. In developed countries, the situation varies considerably over the same period, with producible land area growing by a mere 0.1% and cereal & grain production likely to grow by 0.9%. Land available for planting grains & cereals in developing regions is quite substantial, which is expected to grow at the fastest rate of 0.35% during the aforementioned period, with yield also projected to maintain the fastest growth of 1.6% in this period.
Key Deliverables in the Study | <urn:uuid:1dcf47d6-17e4-4117-9bb5-bca5623480c1> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/global-grain-and-cereal-crop-protection-market-growth-trends-and-forecasts-2014-2019-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943366 | 325 | 2.578125 | 3 |
Describe the operation of data networks
These questions are based on 640-822 – Interconnecting Cisco Networking Devices Part 1.
Objective: Describe the operation of data networks
Sub-Objective: Describe common networking applications, including Web applications
Single Answer, Multiple Choice
Which of these applications uses the IMAP protocol to transfer information between a server and a host?
- Web browser
E-mail applications use Internet Message Access Protocol (IMAP) to retrieve messages from the mail servers. IMAP differs from Post Office Protocol (POP) in that IMAP does not download e-mail messages to client machines. By default, IMAP uses Transmission Control Protocol (TCP) to connect to the client. E-mail allows network users to communicate information and exchange data in an efficient and timely manner, and can also be used to communicate with users outside the network. It is known as the store and forward method of sending and receiving messages with the help of an electronic communication system. Some common e-mail applications include Microsoft Outlook and Outlook Express.
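As a concrete illustration, the short Python sketch below lists unread messages in an IMAP mailbox using the standard imaplib module. The host name and credentials are placeholders; a real client would also parse the messages and handle errors.

import imaplib

HOST, USER, PASSWORD = "imap.example.com", "student", "app-password"  # placeholders

with imaplib.IMAP4_SSL(HOST) as conn:            # IMAP over SSL (TCP port 993)
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)          # messages stay on the server
    status, data = conn.search(None, "UNSEEN")   # numbers of unread messages
    for num in data[0].split():
        status, msg = conn.fetch(num, "(RFC822)")
        print(f"Fetched message {num.decode()}: {len(msg[0][1])} bytes")

The read-only select underlines the point made above: unlike POP, IMAP can leave the messages on the server rather than downloading them to the client machine.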
File Transfer Protocol (FTP) uses TCP, not IMAP, to transfer bulky data files from an FTP server to a client computer over the Internet or intranet. By default, FTP uses TCP port 21 to connect to the client system.
A Web browser uses the Hypertext Transfer Protocol (HTTP) to exchange information over the Internet. A Web browser provides access to the Internet through which a user can access text, images and other information on a Web site. By default, HTTP uses TCP port 80 to connect to the client computer.
Telnet uses TCP port 23 to connect to the client computer. You can use Telnet to remotely access a computer. | <urn:uuid:560fa34f-00f1-4f81-8351-32a8df085383> | CC-MAIN-2017-04 | http://certmag.com/describe-the-operation-of-data-networks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00105-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.829175 | 356 | 3.09375 | 3 |
SSD (Solid State Drive) is becoming increasingly popular and may soon become an integral part of future PCs, laptops, Ultrabooks and maybe other mobile devices as well. SSDs are storage devices which are faster compared to traditional hard disks. The logic is to save operating system files, applications, games and other frequently used files on the SSD rather than on a hard disk drive (HDD). The remaining files can be saved on the hard disk. In the case of Windows 8, some users were able to reach the desktop in just 6 seconds using an SSD. The real advantage is for users who use high-end graphics software such as Photoshop. A program which usually takes almost 30 seconds to load completely opened fully in 9 seconds. The real beneficiaries are the hardcore gamers who are addicted to some of the modern high-end games. SSDs are a must for achieving the level of speed they always crave.
What makes SSD drives faster than hard disk drives (HDD)?
Hard disks store data on one or more rapidly rotating magnetic discs called platters. The data stored on these discs is read by magnetic heads which physically move to and fro on top of the discs, separated by a very thin layer of space.
SSD stores data inside flash memory which stores data electronically using transistors. This is the same flash memory which is seen on USB storage devices. There are no physical moving parts involved. All the data storage is done electronically. Hence the data storage and transfer is incredibly fast using SSD. They are also impressively shock and vibration resistant as there are no physical moving parts inside SSD.
Limitation of using SSD drive
- SSD drives are costly compared to hard disks, at least at the time of this writing.
- The maximum available drive space for SSD drives is 500 GB at the time of writing.
- Multi-level cells (MLC), which are used for storing data in SSDs, can only be written to about 10,000 times. This is why a hard disk is compulsory: data which is periodically changed and rewritten is stored on the hard disk, while data which doesn’t undergo many changes, such as operating system files, is stored on the SSD.
What are the basic requirements for using SSD?
- Your computer should support a SATA connection. If your computer is around 4 years old, you most probably have a computer which supports SATA drives. However, it’s better to confirm with the vendor or from the product details website whether your motherboard supports SATA.
- This is just for your information. The latest motherboards come with SATA 3 connections, which give the maximum performance that you can get from an SSD. Upgrading to SATA 3 is not easy, as it requires you to change the motherboard and most probably the processor as well.
- SSD works well in XP, but it works better in a newer OS such as Windows 7 or Windows 8 because of their support for the TRIM command. TRIM helps maintain optimum write performance, prevents fragmentation by organizing data, and erases junk data left over from removed files. (A quick way to check whether TRIM is enabled on Windows is shown after this list.)
- Make sure Advanced Host Controller Interface (AHCI) is turned on in the BIOS. Enabling this will ensure that all features needed for the SSD are enabled.
- Under the boot priority, the SSD should be set as the primary boot device. All the OS files are stored on the SSD to get the maximum performance benefit, so naturally the SSD needs to be the primary boot device.
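As mentioned in the TRIM item above, it is worth confirming that TRIM is active once Windows is running from the SSD. The Python sketch below simply wraps the built-in Windows fsutil command; the exact output text varies a little between Windows versions, so treat the parsing as illustrative.

import subprocess

def trim_enabled() -> bool:
    # fsutil reports "DisableDeleteNotify = 0" when TRIM is enabled.
    # Windows only; querying normally needs no administrator rights.
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "= 0" in out

if __name__ == "__main__":
    print("TRIM enabled" if trim_enabled() else "TRIM disabled")

Running the same fsutil command with "set DisableDeleteNotify 0" instead of "query" turns TRIM on if it was disabled.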
How to Install Windows in SSD?
- Assume that you are using an SSD on a computer which already has an OS and applications installed. In this case, you can use the drive’s cloning software to clone and transfer all the OS and application files to the new SSD. There is other third-party software that can do the job for you. The major disadvantage of this method is that cloning might transfer junk files as well, which quickly fill up the SSD. (Remember, drive space on an SSD may not be as big as on your hard disk.)
- Install Windows from scratch. This might be time-consuming, as you have to install everything from scratch, but it’s worth it, considering this method creates fewer problems and results in a more stable PC.
- Once the Windows OS is up and running, you need to redirect the folders inside Windows libraries, such as Documents, Pictures, Music and Videos, to your secondary drive, which is the HDD. The files in these folders change constantly, so it is not advisable to save them on the SSD. This tutorial explains how to change the default file saving location for Windows libraries.
Never defragment an SSD drive
An SSD does not require defragmentation, unlike an HDD. Defragmenting an SSD reduces its life span (remember multi-level cells). By default, Windows disables its defragmentation tool for SSD drives. If you are using any third-party tools for disk optimization you need to be careful, as they might accidentally try to defragment the SSD.
Hopefully this article is helpful in understanding more about why and how to install SSD drive. There are many different brands available now, but my top recommendations will be to go for Corsair or Crucial. | <urn:uuid:38046179-7669-49dd-a6c5-977882d43e58> | CC-MAIN-2017-04 | http://atechjourney.com/why-and-how-should-you-install-ssd-drive.html/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00407-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937201 | 1,045 | 3.265625 | 3 |
In the seemingly unending search for computer security solutions that are both reactive and proactive, researchers have occasionally turned to other sciences for ideas.
In creating an algorithm that searches for and implements more secure computer configurations, computer science associate professor Errin Fulp and graduate student Michael Crouse from the Wake Forest University in North Carolina have been inspired by genetics.
“A lot of security instances that we read about ultimately are the result of a poor configuration in some form or fashion,” said Fulp, explaining that modern cyber attacks are usually performed in two waves – first reconnaissance, then action based on the discovered information.
“Just as one might try to prevent a home robbery, our goal is to create a ‘moving target defense’ that detects cyber threats when they first case the house. If we can automatically change the landscape by adding the technological equivalent of security cameras or additional lighting, the resulting uncertainty will lower the risk of attack,” he pointed out.
Their goal is to make the system able to learn from experience, adapt and – above all – run automatically, so that already often overwhelmed administrators aren’t saddled with additional work. To do that, they took cues from nature and the evolution process.
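The published details of the Wake Forest algorithm are not reproduced here, but the toy Python sketch below illustrates the evolutionary idea: treat each machine's configuration as an individual, score how exposed it is, keep the fittest configurations and mutate them to produce the next generation. The option names and the scoring are invented purely for illustration.

import random

OPTIONS = {"telnet": ("on", "off"), "autorun": ("on", "off"),
           "firewall": ("strict", "lax"), "updates": ("auto", "manual")}
RISKY = {"telnet": "on", "autorun": "on", "firewall": "lax", "updates": "manual"}

def exposure(cfg):
    # Fitness: the fewer risky settings, the better the configuration.
    return sum(cfg[k] == RISKY[k] for k in cfg)

def mutate(cfg):
    key = random.choice(list(cfg))
    return {**cfg, key: random.choice(OPTIONS[key])}

def evolve(pop_size=20, generations=30):
    population = [{k: random.choice(v) for k, v in OPTIONS.items()}
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=exposure)             # most secure first
        survivors = population[: pop_size // 2]   # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return population[0]

print(evolve())   # a low-exposure configuration found automatically

Run independently on many machines, such a process also yields slightly different "fit" configurations on each one, which is the kind of diversity behind the moving target defense described above.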
“Typically, administrators configure hundreds and sometimes thousands of machines the same way, meaning a virus that infects one could affect any computer on the same network,” Crouse added. “If successful, automating the ability to ward off attacks could play a crucial role in protecting highly sensitive data within large organizations.”
The two researchers started their work in March 2011 and, according to the Winston-Salem Journal, they already have a prototype that shows a lot of promise.
Unfortunately, the grant money they received for the project from the Pacific Northwest National Laboratory is running out. They estimate that three more years and $500,000 would allow them to conclude it successfully, and they are currently looking for individuals or organizations that would sponsor their research.
This is not the first time that Fulp collaborates with Pacific Northwest, nor the first time he was inspired by nature in his quest for innovation.
In 2009, he started a project aimed at creating “digital ants” that wander through computer networks looking for threats. Once found, the ants would swarm the location and draw the attention of human operators to it so that they can step in to investigate.
The value of the project was proven when Fulp introduced a worm into the network and the digital ants successfully found it. As work on it is still ongoing three years later, it is likely that we might soon find the technology implemented in security solutions. | <urn:uuid:665c0ad4-06e6-4896-a492-4fc38b2ed87a> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/02/16/thwarting-attacks-with-genetically-inspired-computer-configuration-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00407-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965295 | 542 | 3.4375 | 3 |
Create a Cluster to Balance the Network Load
Windows Server 2003 supports two types of clustering: server clustering and network load balancing. Server clustering is a relatively complex topic, and one that generally involves the purchase of at least some additional hardware. Although it is unarguably the more powerful and versatile of the two clustering methods, this need for additional hardware is often enough to discourage most people.
Network load balancing, however, is much less complex than server clustering, and offers similar benefits: increased fault tolerance and improved performance for server-based applications. Fault tolerance comes from the fact that more than one server is hosting an application. If a server in the cluster fails, users can continue to access the application via one of the other servers in the cluster. Increased performance is provided by virtue of more than one physical computer system answering requests from clients.
In an NLB cluster, multiple systems appear to the network as a single entity. On Windows Server 2003, up to 32 systems can be included in a single NLB cluster, although the cluster can be created with just two machines and then be scaled as needed. From an end-user perspective, the operation of a cluster is invisible. They will have no way of knowing that the application they are accessing is being hosted in a cluster, but they will appreciate the additional performance.
How NLB works
In its most basic form, NLB is a mechanism for distributing incoming requests among multiple network interfaces. When a request is received via the IP address assigned to the cluster, one of the servers in the cluster takes the request and processes it. The other servers in the cluster ignore the request. The next request that comes in goes to one of the other servers, and so on until the original server answers another request.
Although the principle is straightforward, the mechanics behind such a system are quite complex. For example, TCP/IP communication relies on the fact that an IP address can be resolved to a Media Access Control (MAC) address, which is encoded on a network interface. In a regular system this equates to one IP address per MAC address and vice versa.
In the case of NLB, a single IP address must equate to more than one MAC address the MAC address of each system in the NLB cluster. This is achieved by the use of a software based virtual network adapter that sits between the network adapter and its regular network card driver. The virtual network adapter has an IP address and MAC address associated with it that is used as the external presence of the cluster. Because the virtual network adapter is aware of the real IP and MAC addresses of the systems behind it in the cluster, it is able to channel requests through to systems using this information.
Each system in an NLB cluster receives every request, but elects to process only certain requests based on a mathematical algorithm. It's easy to gain a simple understanding of the process through the following example. Let's imagine that there are four technical support operators manning a help desk. The four agree that each will answer one in every four calls, in order. So, operator 1 will answer calls 1, 5, 9, 13 and so on. Operator 3, on the other hand, will answer 3, 7, 11, 15 and so on. Once the initial agreement is made, the only communication required between the operators is to make sure that the other operators are still answering calls. If one of the operators were to stop answering calls for some reason, the other operators must detect this and shift their call-answering ratio to 1:3. Otherwise, every fourth call would go unanswered.
To detect the presence of the other servers in the cluster, NLB uses heartbeat messages that are sent among the servers. If a server doesn't receive a heartbeat message from one of the other servers in the cluster, that server recalculates the clustering algorithm to accommodate the change. When the missing server comes back online, the other servers in the cluster detect the presence of the server and once again recalculate the algorithms. All of this calculating and recalculating is done automatically, and, in general, the end users using the applications hosted on the servers in the NLB cluster will be oblivious to the change.
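The Python sketch below mirrors the help-desk analogy: every node applies the same deterministic rule to each incoming request and only the chosen node answers, and when heartbeats show that a member has disappeared the survivors simply recompute with the smaller membership. The hashing rule is invented for illustration; Windows NLB uses its own algorithm and parameters.

import hashlib

def owner(client_ip, nodes):
    # All nodes evaluate this identically and only the winner responds.
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

nodes = ["web1", "web2", "web3", "web4"]
print(owner("192.0.2.17", nodes))    # exactly one node claims this client

nodes.remove("web3")                 # heartbeats report web3 as down
print(owner("192.0.2.17", nodes))    # remaining nodes re-divide the load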
What Can You Use NLB for?
NLB is not suited to every application or environment. NLB is a form of clustering, but unlike server clustering it does not necessarily employ a shared data store. In other words, each server in an NLB cluster can be a fully self-contained server in its own right. This self-contained approach brings with it one major issue that dictates where NLB can be implemented: the "state" of the applications hosted on the cluster.
The optimal application for an NLB cluster in which there is no shared data storage is one that is almost totally read-only, and whose data does not change on a frequent basis. Such an application is said to be stateless, as opposed to stateful. A good example of a stateless application would be a Web server that provides static information and has very little dynamic content.
A corporate Intranet with perhaps a searchable database of product information is one application that comes to mind. A relational database like SQL Server, on the other hand, is a good example of a stateful application because the data on the server is likely to be changing on an ongoing basis.
NLB is not suited to stateful applications because, as was mentioned earlier, the data store is not necessarily shared among the servers. As a result, if NLB is used on servers that are each hosting a separate copy of a stateful applications, it would likely not be long before the data stores on each server became out of sync with each other. This is because, although servers in an NLB cluster communicate heartbeat information with each other, they do not communicate data.
Running a stateful application on an NLB cluster would likely result in a query run against one server yielding a different result than one run against another server in the cluster at the same time. The issue of whether or not an application is stateless is perhaps the biggest factor in deciding how you can use NLB on your network.
What Do You Need to Make NLB Work?
NLB is included with all versions of Windows Server 2003 including the Web Edition. In all cases, NLB clusters of up to 32 nodes can be created. From a hardware perspective, the only addition you will need to consider is an extra network card for each system in the cluster. The extra NIC allows servers to communicate normal network traffic with each other without impacting the performance of the network links in the cluster. If an additional network card is not available, you can still create a cluster with only one network card per system, but you will need to make sure that your network cards support multicast mode (which most do). You will also need to ensure that any routers you have on the network support multicast MAC addresses (which not all do).
While the additional hardware requirements for NLB may be minimal, it is also worth mentioning some of the other, non-Windows, considerations that you should think about before implementing NLB. Specifically, this refers to the network infrastructure surrounding the servers in the cluster. For example, there is little point in creating an NLB cluster of three servers with a view to providing fault tolerance, if those three servers are all connected to the same network switch. Doing so would create a single point of failure at the switch, and although switches are generally resilient, the ideal of eliminating single points of failure should still be a priority.
In part two of this article, we will look at the process of implementing NLB on a Windows Server 2003 system and at some of the tools used to create and monitor a cluster. | <urn:uuid:a28b950f-d39b-418d-8c88-be741ecfe32c> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netos/article.php/3346081/Create-a-Cluster-to-Balance-the-Network-Load.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00223-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94593 | 1,586 | 3.28125 | 3 |
Definition: A collection of items accessible one after another beginning at the head and ending at the tail.
Generalization (I am a kind of ...)
Specialization (... is a kind of me.)
linked list, doubly linked list, circular list, self-organizing list, ordered linked list.
Aggregate parent (I am a part of or used in ...)
Note: A list may have additional access methods. A list may be ordered. A list can be seen as an ordered bag. A list may be kept as the leading items of an array, but inserting items other than at the end takes time.
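A minimal singly linked list in Python, showing head-to-tail traversal and insertion at the tail, might look like the following sketch (illustrative only; see the specializations above for richer variants).

class Node:
    def __init__(self, item, next=None):
        self.item, self.next = item, next

class LinkedList:
    # Items are reachable one after another from the head to the tail.
    def __init__(self):
        self.head = self.tail = None

    def append(self, item):             # insert at the tail
        node = Node(item)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def __iter__(self):                 # walk from the head
        node = self.head
        while node is not None:
            yield node.item
            node = node.next

lst = LinkedList()
for x in ("a", "b", "c"):
    lst.append(x)
print(list(lst))                        # ['a', 'b', 'c']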
Entry modified 22 January 2009.
Cite this as:
Paul E. Black, "list", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 22 January 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/list.html | <urn:uuid:91d1a27d-b6fc-4c24-a899-beb8d3ec2e30> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/list.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00435-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.882554 | 242 | 2.96875 | 3 |
A group of researchers at the University of South Wales’ Genomics and Computational Biology lab are using supercomputing to help solve a critical public health issue. With antibiotic resistance a growing concern and drug resistant infections, such as MRSA and tuberculosis, on the rise, scientists are keen to understand how bacteria evolve into deadly strains.
The University of South Wales project, led by PhD student Farzana Rahman, is focused on predicting drug resistance so that patients receive the drug that is most likely to be effective. The effort is part of the new frontier of personalized medicine, which holds enormous promise for treating and curing many kinds of diseases, including bacterial infections.
Supercomputer modeling illustrates how fairly benign bacterial strains can evolve into toxic ones, such as E. coli O157. The work is supported by High Performance Computing (HPC) Wales, which provides the advanced technology that facilitates science that is both compute- and data-intensive.
In the video case study below, Rahman describes her experience growing up in Bangladesh, where diseases like typhoid, cholera, and gangrene plagued the population. Rahman was deeply troubled by the suffering and was moved to make a difference to bring positive change to society and human life.
“While scientists are accessing many avenues to solve this issue, my research takes a different approach,” says Rahman. “I’m developing an open method to predict the risk of a bacterial strain becoming toxic. This project involves data mining and biostatistical modeling using high-performance computing.”
Fujitsu is funding the transformational research, which the company believes will have a huge impact both socially and commercially. Rahman’s vision for her work is to create a mechanism for more affordable, relevant treatments that can be used by public health organizations, disaster relief efforts and emergency services. Currently, identifying the best antibiotic for a given strain can take days to weeks, but with this new approach, the matching process can be done in a matter of hours. When you’re fighting an infectious disease, getting an effective treatment days sooner can literally be the difference between life and death. | <urn:uuid:674942a4-9d70-481b-b9c8-b2aa08911934> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/05/13/hpc-aids-fight-toxic-bacteria/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00463-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945152 | 433 | 3.359375 | 3 |
TABLEA has the following primary key definition:
PRIMARY KEY ( COL1 )
TABLEB has the following foreign key definition:
FOREIGN KEY FK1 ( COLY ) REFERENCES TABLEA ON DELETE CASCADE
What is the result when the SQL DELETE statement below is executed?
DELETE FROM TABLEA WHERE COL1 = HV1
A. The delete will fail because a primary key value cannot be deleted.
B. The row will be deleted in all cases.
C. The row will be deleted from TABLEA and all rows from TABLEB where COLY = HV1 will also be deleted.
D. The row will be deleted only if no rows exist in TABLEB where COLY = HV1.
E. The row will be deleted only if a separate delete statement is executed before the next COMMIT, deleting all rows from TABLEB where COLY = HV1 if any such rows exist.
ON DELETE CASCADE means that whenever a row in the parent table (TABLEA) is deleted, the corresponding rows in all child tables dependent on the parent table (TABLEA) are also deleted. This cascade applies to all lower-level child tables that are in turn dependent on TABLEB via the ON DELETE CASCADE option.
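To see the cascading behaviour concretely, the Python sketch below rebuilds the same schema in SQLite, used here only because it ships with Python; unlike DB2, SQLite needs foreign-key enforcement switched on explicitly.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")           # SQLite-specific switch
db.execute("CREATE TABLE tablea (col1 INTEGER PRIMARY KEY)")
db.execute("""CREATE TABLE tableb (
                 coly INTEGER REFERENCES tablea(col1) ON DELETE CASCADE)""")
db.execute("INSERT INTO tablea VALUES (1)")
db.executemany("INSERT INTO tableb VALUES (?)", [(1,), (1,), (1,)])

db.execute("DELETE FROM tablea WHERE col1 = 1")  # delete the parent row
print(db.execute("SELECT COUNT(*) FROM tableb").fetchone()[0])   # prints 0

Deleting the parent row removes the dependent child rows as well, which is the behaviour described in answer C above.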
When I was in school, computer labs were a luxury. Many schools were just receiving their first few computers, so for my class to have access, once a week, to a room of 12 computers was like getting the chance to drive a Ferrari on your 16th birthday. I vividly remember turning on a beige colored terminal and monitor, inserting a floppy disk that was about the size of an iPad mini, and getting straight to work—on the Oregon Trail. Technology in the classroom today looks much different, but it continues to be vital part of education.
The rapid evolution of technology is changing the way we learn, work and educate. Students want the freedom to learn and study using the latest software or applications on any device, in the location where they feel most productive and inspired.
As we head back to campus this fall, you might notice that several schools have invested in new technology in an effort to improve the learning experience for students. Many schools are turning to virtualization technology to provide anywhere, any device access to make learning more convenient, accessible, and affordable. Typically, applications and software needed for class are only available in the classroom or computer lab, but by virtualizing desktops, applications, and data, you can deliver them to the student, on his or her personal laptop or tablet at home, in the library—anywhere.
A public school district brings a new meaning to “backpacks”
Atlanta Public School (APS) district students will be sure to notice a few changes when classes are in session. MyBackPack, which officially launches at the start of the 2014-2015 school year, is an innovative initiative that will leverage virtualization technology to enhance and personalize learning for every student in the district. Students receive a secure user login and password to access MyBackPack, where they will have access to assignments, a safe Internet browser, and a full menu of applications needed to learn and study.
MyBackPack—using virtual applications and desktops—will also allow APS to offer a Bring-Your-Own-Device (BYOD) program for students, faculty, and staff. Anytime, any device learning opens the door to new learning opportunities for students at all levels. It doesn’t matter if you or your school can afford the latest, shiny new laptop or computer. Students have the same, high quality learning environment regardless of device.
America’s second largest public university empowers student mobility
Students at the University of Central Florida (UCF) will also enjoy the benefits of virtualization technology as they arrive for the fall semester. UCF Apps will enable software mobility by providing curriculum-essential apps, such as Word, Excel, Powerpoint, SPSS, SAS, ArcGIS, and many more to students anywhere, on any device. No longer will students need to wait in line at the computer lab or even need to come on campus to study or complete work. They can now work, learn, and study from their dorm room, outside on campus, or in their favorite coffee shop.
Getting started is easy. Students simply sign up at apps.ucf.edu using their student login and password. The focus for this project is students. However, UCF Apps lays the foundation for a long term virtualization strategy for UCF. They recognize many use cases for virtualization, including software mobility for faculty and staff, virtual labs and classrooms, task specific kiosks/signage, development sandboxes—the possibilities are endless.
Mobile students need mobile workspaces with on-demand, secure access to the apps, data and services they require, expanding beyond traditional methods to promote independent and exploratory learning – without compromising security or compliance. Embrace mobility and promote a lifetime of learning.
These are just a two examples of how mobile workspaces are empowering students to learn, work, and study anywhere, on any device. For more information on how schools are leveraging Citrix virtualization to power student mobile workspaces, please visit citrix.com/education.
Related education content:
About the author
Nicole Nesrsta – Manager, Vertical Solutions Marketing & Strategy, Education
Nicole Nesrsta is a leader in solutions and vertical marketing and strategy in the high-tech industry. At Citrix, Nicole is responsible for designing and executing global, company-wide go-to-market strategies for the education market. With a passion for solving business problems with technology solutions, Nicole has enjoyed 5 years in various solutions sales and marketing roles—targeting audiences at all levels: executive, line of business, and IT. Nicole holds a bachelors and masters degree from the University of Florida. You are welcome to connect with her via twitter and linkedin. | <urn:uuid:b27dd002-5d14-4b2d-9b1e-8dacbe8dcc38> | CC-MAIN-2017-04 | https://www.citrix.com/blogs/2014/08/27/back-to-school-with-new-technology-schools-empowering-students-with-citrix-mobile-workspaces/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00059-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947721 | 963 | 2.671875 | 3 |
Whether or not you print so much to rival a federal or state government agency, I think we can all agree that saving money on printer ink is always a good thing. A 14-year-old student has discovered one simple trick to reducing ink costs: switch to a more efficient font for printing.
Survir Mirchandandi started researching this after getting inundated with printouts upon graduating to middle school. He compared the most commonly used characters in words (e, t, a, o, and r) and measured how much ink was used for each letter across four fonts: Garamond, Times New Roman, Century Gothic, and, yes, Comic Sans. He repeated his tests with document samples from the Government Printing Office.
The result: switch from Times New Roman to Garamond, and the federal government could save almost 30% on ink costs--$136 million per year. If state governments follow suit, an additional $234 million would be saved.
That's a whole lot of money wasted (or saved) just from a typeface choice. Here's a comparison of the two fonts, both with 11-point type.
Other things you can do, of course, include printing in draft mode and using an even more efficient printer font, such as Ecofont. You won't save hundreds of millions of dollars, but you won't have to buy ink as often. Mirchandandi tells CNN that "ink is two times more expensive than French perfume by volume" (I'd rather buy perfume!).
Read more of Melanie Pinola’s Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:7eed7471-d9d9-4128-87f0-267caeeb458f> | CC-MAIN-2017-04 | http://www.itworld.com/article/2697534/consumerization/change-your-font--save-a-ton-of-money-on-printer-ink.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00361-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946151 | 369 | 2.71875 | 3 |
Marsalek B., Academy of Sciences of the Czech Republic |
Jancula D., Academy of Sciences of the Czech Republic |
Marsalkova E., Academy of Sciences of the Czech Republic |
Mashlan M., Center for Nanomaterial Research |
And 6 more authors.
Environmental Science and Technology | Year: 2012
Cyanobacteria pose a serious threat to water resources around the world. This is compounded by the fact that they are extremely resilient, having evolved numerous protective mechanisms to ensure their dominant position in their ecosystem. We show that treatment with nanoparticles of zerovalent iron (nZVI) is an effective and environmentally benign method for destroying and preventing the formation of cyanobacterial water blooms. The nanoparticles have multiple modes of action, including the removal of bioavailable phosphorus, the destruction of cyanobacterial cells, and the immobilization of microcystins, preventing their release into the water column. Ecotoxicological experiments showed that nZVI is a highly selective agent, having an EC 50 of 50 mg/L against cyanobacteria; this is 20-100 times lower than its EC 50 for algae, daphnids, water plants, and fishes. The primary product of nZVI treatment is nontoxic and highly aggregated Fe(OH) 3, which promotes flocculation and gradual settling of the decomposed cyanobacterial biomass. © 2012 American Chemical Society. Source | <urn:uuid:461584c8-8ddd-4e12-8954-f7da9248f5a0> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/center-for-nanomaterial-research-2179644/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00269-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910244 | 299 | 3 | 3 |
Basic hypervisors, the software that allows one server to be partitioned into multiple virtual machines, are free, but that does not mean virtualization is free. Microsoft Windows 2008, for example, still needs to be licensed. Of the various license classes for Windows Server, Datacenter Edition is the most flexible and cost effective.
Windows 2008 Virtual Licensing
Microsoft built its software empire upon licensing operating system software for each PC and server on which they run. Virtualization disrupts this one to one relationship by making it possible to run multiple virtual machines per physical machine; each virtual machine (VM) has its own operating system. | <urn:uuid:1e5d44c6-3a82-47bf-9d7d-f81fac1af134> | CC-MAIN-2017-04 | https://www.infotech.com/research/windows-server-2008-virtual-licensing-explained | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00481-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.894472 | 127 | 2.59375 | 3 |
Head-of-line blocking (HOL blocking) in networking is a performance issue that occurs when a bunch of packets is blocked by the first packet in line. It can happen especially in input-buffered network switches, where out-of-order delivery of packets can occur. A switch can be composed of input-buffered ports, output-buffered ports and a switch fabric.
When first-in first-out input buffers are used, only the first received packet is prepared to be forwarded. Packets received afterwards are not forwarded if the first one cannot be forwarded. That is basically what HOL blocking is. Without HOL blocking, newly arrived packets have the chance to be forwarded around the stuck packet. All non-blocking switches should be smart enough to cope with this kind of network phenomenon.
Why does it happen?
HOL blocking happens when a packet's destination output port is busy. The output can be occupied because of output contention or output congestion.
- Contention is the situation where two or more input flows want to be forwarded to a single output at the same time.
- Output congestion happens when the output buffer is full; the buffer fills up when the packet arrival rate exceeds the output rate.
HOL blocking can significantly increase packet reordering.
Overcoming HOL blocking
One way to overcome it is to use virtual output queues (VOQs), where each input port keeps a separate queue for every output port; a minimal simulation of the difference is sketched below. Note that only switches with input buffering suffer from HOL blocking. A non-blocking switch has enough internal bandwidth to make input buffering unnecessary: all buffering is done at the outputs, and HOL blocking disappears. This no-input-buffering architecture is common in small to medium-sized Ethernet switches. | <urn:uuid:ddd0f8b5-2b5d-4888-87b7-7ec338982a73> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2015/hol-head-of-line-blocking | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00389-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923192 | 354 | 2.828125 | 3 |
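The sketch below is a toy discrete-time simulation of the idea, with a made-up traffic pattern and service rates: a single input port feeds two output ports, one of which is only periodically available. With a single FIFO input buffer, a stuck head-of-line packet delays everything behind it; with per-output virtual output queues, traffic to the free port keeps moving.

    from collections import deque
    import random

    random.seed(1)
    # Each packet is represented only by its destination output port (0 or 1).
    packets = [random.choice([0, 1]) for _ in range(20)]
    BUSY_OUTPUT = 0          # output 0 is "busy": it accepts a packet only every 3rd tick
    TICKS = 20

    def can_send(output, tick):
        return output != BUSY_OUTPUT or tick % 3 == 0

    def run_fifo():
        # Single FIFO input buffer: only the head-of-line packet may be forwarded.
        q = deque(packets)
        delivered = 0
        for t in range(TICKS):
            if q and can_send(q[0], t):
                q.popleft()
                delivered += 1
            # otherwise the head is stuck and everything behind it waits (HOL blocking)
        return delivered

    def run_voq():
        # Virtual output queues: one queue per output, so packets bound for the free
        # output are never stuck behind a packet waiting for the busy output.
        voqs = {0: deque(), 1: deque()}
        for p in packets:
            voqs[p].append(p)
        delivered = 0
        for t in range(TICKS):
            for out, q in voqs.items():
                if q and can_send(out, t):
                    q.popleft()
                    delivered += 1
        return delivered

    print("delivered with FIFO input buffer:", run_fifo())
    print("delivered with virtual output queues:", run_voq())

Running it shows the VOQ arrangement delivering noticeably more packets in the same number of ticks, which is exactly the head-of-line effect described above.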
In a recent decision, the D.C. Court of Appeals vacated key elements of the Federal Communication Commission’s (FCC’s) 2010 Open Internet order. In doing so, it has revived discourse around the open Internet (commonly referred to as net neutrality), what it really means for consumers and the future of the Internet.
We welcome the discussion, but one thing is for sure — the open Internet experience that consumers have enjoyed for years will continue in the future. Nothing in the court’s decision will change the basic incentives of service providers to offer consumers capabilities that meet all of their ever-increasing needs.
But apart from the practical impact, what will happen now regarding the legal proceedings is a matter of some debate. Officially, the case has been remanded back to the FCC for further consideration, and parties to the case are weighing whether or not to appeal the D.C. Circuit’s decision.
Some advocates are proposing that the FCC break with over 15 years of bipartisan restraint and treat Internet Service Providers (ISPs) as “common carriers.” To understand why such a shift would be harmful to innovation and the ongoing evolution of Internet technologies, it’s worth explaining exactly what the term “common carrier” means, why your ISP isn’t a common carrier, and why the court’s decision is a good thing for both broadband customers and for American innovation.
What are Common Carriers?
Put simply, common carriers are private companies that sell their services to everyone on the same terms, rather than companies that make more individualized decisions about who to serve and what to charge. The term originally applied to companies that carried goods or passengers (like railroads or shipping companies), but after the invention of the telephone, it also included phone companies. Congress created laws to make sure that phone companies provided basic phone service to all customers on a non-discriminatory basis and at reasonable prices, and created the FCC to regulate them. For phone companies, common carrier regulations included strict pricing rules that determined how much they can charge, while also ensuring that the companies made enough money to stay in business.
These common carrier principles are also typically applied to utilities, such as electric and water companies that provide a basic service. But common carrier regulation discourages infrastructure investment and network enhancements. When a company’s return on investment is dictated by the government, there’s little incentive to re-invent or improve the system, which is why copper phone lines are still prevalent, water main breaks are an all-too-common occurrence, and the electric grid is in need of serious repair. Recognizing this problem, the FCC has over the years relaxed many of the requirements on traditional telephone companies, although they still remain subject to significant regulation.
Why Aren’t Broadband Providers Considered Common Carriers?
In the 1996 Telecom Act, Congress made a distinction between two types of services: “telecommunications services” and “information services.” “Telecommunications services” transmit a user’s information from one designated point to another without changing the form or content of that information. For example, a phone call transmits the user’s voice from one point to another without changing the content of the voice message, similar to the way a shipping company would deliver a package that you hand to it. “Information services,” on the other hand, offer a user the capability to create, store, or process information. Once that information is created, it might be transmitted via telecommunications, but the creation of the message would be done via information service. Telecommunications services, such as traditional phone service, were subject to common carrier rules. Information services were not.
Based on the definitions in the 1996 Telecom Act, the FCC classified cable broadband as an “information service” and as a result it is not treated as a common carrier service and is largely exempt from regulation. This was to encourage innovation and investment in private infrastructure and preclude unnecessary government intervention. In hindsight, this was a wise decision. Since 1996, cable broadband companies have invested $210 billion in growing and improving their networks, leading to faster speeds and 93 percent cable broadband penetration in the U.S. This massive investment by cable spurred substantial broadband investment by our competitors, the traditional telephone companies, particularly after the FCC freed their Digital Subscriber Lines (DSL) broadband service from common carrier regulation.
Why Is It Good That ISPs Aren’t Classified As Common Carriers?
Common carrier laws were established nearly a century ago when the pace of innovation was measured in decades. In such a static environment, a regulatory regime in which the government grants a monopoly and micromanages the operations of a service provider was feasible and rational. We now live in a vastly different world and broadband is a very different service than any traditional utility service. The flexibility required by an ISP to effectively deliver increasingly fast broadband to more people requires a constant state of infrastructure updates fueled by capital investment. Classifying ISPs as common carriers would invariably stifle these investments by inserting the federal government into the operation of broadband networks and the provision of broadband services.
Congress recognized this in the 1996 Act, where it stated: “It is the policy of the United States . . . to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”
Part of what we need to do as a nation is to encourage innovation and vibrant marketplaces. Classifying the most technologically advanced communications network in human history as a common carrier is a terrible mistake. Time and time again both Democrats and Republicans have said this type of regulation delays innovation, creates uncertainty, and inhibits a lively marketplace. | <urn:uuid:1696a9b0-3a48-4206-b685-2a99ce5f704a> | CC-MAIN-2017-04 | https://www.ncta.com/platform/public-policy/why-its-a-good-thing-that-broadband-isnt-a-common-carrier/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00535-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952595 | 1,182 | 2.875 | 3 |
A fisherman prepares to cast his line standing in the surf as a full moon rises at Mollymook Beach, located south of Sydney, Australia, February 23, 2016. REUTERS/David Gray More TORONTO (Thomson Reuters Foundation) - Climate change is pushing fish toward the planet's North and South poles, robbing traditionally poorer countries closer to the Equator of crucial natural resources, U.S. biologists said in a study published on Wednesday. Key species of fish are migrating away from temperate zones and toward the poles as global temperatures rise, according to a research team from Rutgers University, Princeton University, Yale University and Arizona State University. The migration patterns of fish, a critical food source for millions of people, are likely to exacerbate inequality between the world's poor and rich, they said. The world's wealthier areas tend to be in cooler regions closer to the poles. "Natural resources like fish are being pushed around by climate change, and that changes who gets access to them," said Malin Pinsky, one of the study's authors and a marine biologist, in a statement. The study, published on Wednesday in the journal "Nature Climate Change," used data on fish migration patterns along with a mathematical formula that tracked the movement of natural resources and shifts in wealth.
Smoke billows from a chimney in the early morning hours during a smoggy day near Ramsgate, April 10, 2015. Output of the heat-trapping gases in Europe's second-largest emitter behind Germany fell to 497.2 million tonnes of carbon dioxide equivalent (CO2e), from 514.4 million tonnes in 2014, the Department of Energy and Climate Change said. Emissions of carbon dioxide (CO2), the main greenhouse gas blamed for climate change, dropped 4 percent to 405 million tonnes. The fall stemmed largely from a drop in energy-sector emissions. Those fell 13 percent to 136 million tonnes of CO2e as low-carbon electricity production from renewable and nuclear power plants rose and carbon-intensive coal generation fell. Data released by the government last month showed coal-fired generation fell 24 percent last year while nuclear generation rose by 10 percent and wind generation by 24 percent. Thursday’s data shows Britain’s GHG emissions have fallen 38 percent since 1990, and dropped for a third consecutive year. Britain has a legally binding target to cut its GHG emissions by 2050 to 80 percent below 1990 levels and has set out five yearly carbon budgets towards meeting this goal. The country is on track to achieve the cuts needed to meet the second and third carbon budgets to 2022 but the government has said it risks missing the fourth, 2023-27 budget, which needs a reduction of 50 percent by 2025. Last November the government announced plans to close polluting coal-fired power plants and replace them with gas plants by 2025, but industry experts have warned the new plants are not being built quickly enough. They also warned that a decision last year to cancel a 1 billion pound ($1.44 billion) project to help fund technology to capture CO2 emissions and store them underground would make meeting the climate target more difficult. The bulk of Britain's emissions, some 27 percent, came from energy supply, followed by transport at 23 percent, business at 14 percent and residential at 13 percent. The rest came from sectors including agriculture and waste management.
News Article | May 5, 2016
A majority of Donald Trump supporters believe that man-made climate change is real and happening — something in stark contrast to the U.S. presidential candidate’s history of denying global warming, according to a poll released on May 4. Fifty-six percent of those supporting Trump in the presidential race think global warming is happening. Trump supporters, too, were more likely to back a candidate who strongly backed global warming measures. The survey, conducted by the Yale Program on Climate Change Communication along with the George Mason University Center for Climate Change Communication, probed how supporters of current candidates viewed climate change, how those views will shape their vote, and how they think Americans should address the matter. Among those voting for John Kasich, the percentage climbed to 71, while a mere 38 percent of Ted Cruz voters believe in man-made climate change. Both of these Republican candidates have since left the race after Indiana’s primary. “I am not a believer [in man-made climate change], and I will, unless somebody can prove something to me, I believe there’s weather,” Huffington Post quoted Trump's radio interview last fall. It goes up and down and again, with changes depending on years and centuries, Trump added, emphasizing “much bigger problems” that the country is facing. On May 2, during a meeting with the editorial board of Washington Post, Trump stuck to his conviction, instead putting the spotlight on nuclear weaponry. "I think our biggest form of climate change we should worry about is nuclear weapons. The biggest risk to the world, to me – I know President Obama thought it was climate change,” read part of Trump’s statements in the climate change exchange. “We don’t know who has them. We don’t know who’s trying to get them.” Even though most Trump supporters acknowledge man-made climate change, only 35 percent expressed worry over it – compare that with the 83 percent of Hillary Clinton voters and 80 percent of Bernie Sanders voters who were worried by ongoing warming phenomena. This poll was conducted in March and covered 1,004 registered U.S. voters, with +/- 3 percentage points as a margin of error. © 2016 Tech Times, All rights reserved. Do not reproduce without permission.
When it comes to hurricanes in the U.S., large-scale trends are not in our favor. In fact, unless action is taken to curb both global warming and coastal development, the American economy may be set to take a perilous bashing from stronger storms, rising seas and too much high value, high-risk property lying in harms' way. By the end of the century, a hurricane that strikes the eastern United States could cause up to three times more economic damage than a hurricane that strikes today, climate researchers warn in a new study. SEE ALSO: Why the extreme Louisiana floods are worrying but not surprising If the world doesn’t drastically reduce its greenhouse gas emissions and Americans don’t move to safer ground, the U.S. could suffer an eight-fold jump in average annual financial losses from hurricanes by 2100, the study found. In the study, published Tuesday in the journal Environmental Research Letters, the scientists from Germany's Potsdam Institute for Climate Impact Research show that future hurricane-related losses for American families, companies and communities could grow faster than the overall U.S. economy — meaning the country won’t be able to counteract the damages from extreme weather events by creating more jobs and wealth. “We find that hurricane losses have risen and will rise faster than the economy,” Tobias Geiger, the paper’s lead author and a climate scientist at the Potsdam Institute, told Mashable. “The impacts of climate change cannot be simply economically outgrown," he said. In the U.S. alone, hurricanes caused $400 billion in estimated losses between 1980 and 2014, accounting for more than half of all weather-related economic losses, the German reinsurance giant Munich Re last year. Damages from other extreme weather events — including floods, wildfires, tornadoes and droughts — are also on the rise due to both human-caused global warming and unchecked development into floodplains, fire-prone forests and waterfronts. In 2015, the U.S. experienced 10 weather and climate disaster events that each caused $1 billion in total damages and costs, according to the National Oceanic and Atmospheric Administration (NOAA). That’s about twice the annual average of 5.2 billion dollar events experienced from 1980 to 2015. And this year is already on track to outpace 2015 in the number of these expensive disasters. As of July, the country has suffered eight such events, including two flooding and six severe storm events — not including the devastating floods in southern Louisiana that have killed at least 13 people and led to 40,000 rescues. Geiger said that few other studies have projected future U.S. damages from hurricanes in a comparable way to the Potsdam Institute’s research. A in Climate Change Economics and a by the World Bank both found that hurricane-related losses would roughly double by the end of the century due to climate change alone. But Geiger said his team made a novel finding: That hurricane-related losses in the U.S. will grow faster than per-capita income growth, so that even if the United States grows wealthier as a nation, it will be no better protected from the wrath of warming-fueled hurricanes. “Some people hope that a growing economy will be able to compensate for the damages caused by climate change — that we can outgrow climate change economically instead of mitigating it,” Anders Levermann, one of the study’s authors, said in a press release. “But what if damages grow faster than our economy? 
What if climate impacts hit faster than we are able to adapt?” A house destroyed by 2012's Hurricane Sandy in New York City, Oct. 8, 2013. Kerry Emanuel, a meteorologist at MIT and a prominent researcher examining global warming-related trends in hurricanes, said the study is “important” because it shows that the U.S. can’t compensate for increasing hurricane damages solely by growing the economy. Emanuel contributed research on the synthetic storm events used in the study and reviewed an earlier draft. “This adds urgency to the need to revise existing policies that inadvertently promote migration to and building within hurricane-prone coastal regions,” he told Mashable in an email. The Potdsam Institute researchers developed statistical damage models that linked a hurricane’s wind speed, the size of the exposed population and per-capita gross domestic product (GDP) to reported storm losses. They also studied information on historical hurricane tracks for the eastern U.S. to determine the connections between storm damages and the other three indicators. The team used those findings to analyze thousands of potential hurricane tracks that could affect the Gulf and Atlantic Coast regions through 2100, using different degrees of global warming. When it comes to the rise in annual financial losses from hurricanes, about one-third of the losses could be the result of global warming, while the remaining two-thirds could come from “increased vulnerability on the socio-economic side,” Geiger said in an email interview. Geiger said the institute’s research shows that adaptation measures — such as storm surge barriers, wind-resistant housing and floodgates — won’t be enough to keep the U.S. or other hurricane-prone nations safe in the warming future. To fully hold costs down, we would have to limit the magnitude and pace of global warming, he said. Reducing emissions of harmful greenhouse gases that cause global warming is perhaps even more essential to limiting the future damage of hurricanes, Geiger said. “Although improving adaptation efforts can reduce further harm, it is important to increase climate mitigation in order to prevent or damp still-avoidable consequences,” Geiger said. Editor's Note: This story has been edited to remove the quotes from a researcher, Roger Pielke Jr. of the University of Colorado at Boulder, who felt he had been misquoted as disagreeing with the new study's findings. Pielke's research, in fact, is consistent with the study's findings about future storm losses, he said on Thursday.
News Article | April 19, 2016
Exxon (now ExxonMobil) is showing signs that it’s gearing up for history’s largest ever battle over the future of fossil fuels and climate change. The oil and gas titan has been sowing the seeds of climate change denial since the 1980s, when it and other energy giants created the Global Climate Coalition to aggressively lobby Congress and lawmakers to their side, and away from environmentalists concerned over early evidence of global warming. As the seeds began to grow, doubt and a fierce anti-science mentality flowered among public opinion. In 2002, the coalition disbanded, explaining that it had “served its purpose by contributing to a new national approach to global warming.” Since then, Exxon has funneled hundreds of millions of dollars to powerful climate change denial organizations both through traceable funds and secret “dark money” transactions filtered through veiled third-parties. But all the while, the corporation has been sitting on a hidden cache of scientific evidence that starkly contradicts the very idea it has fought so hard to snuff out. Evidence, that if brought into the light, could allow Exxon to be tried for criminal violations. As InsideClimate News revealed last year after an eight month investigation into company documents, Exxon itself had quietly funded some of the first cutting-edge climate research ever conducted, as early as the 1970s. According to an email from Exxon’s former scientific advisor, the company was aware of climate change and CO2’s adverse effects by 1981, which was seven years before the issue came into public focus. But when NASA scientist James Hansen brought climate change to the fore in 1988, Exxon was already starting to reverse its position on the validity of the science it helped to put forth. Now, multiple probes into the energy giant’s decades of alleged deceit have blossomed out of the ICN investigation. Most recently, the attorney general of the US Virgin Islands issued a subpoena for nearly 40 years of climate change documents from Exxon through a Washington, DC law firm. Attorney General Claude Walker’s push for additional information makes the Virgin Islands the fourth party to launch a legal investigation into the corporation’s knowledge of fossil fuel’s role in global warming, joining US states New York, California, and Massachusetts. Exxon attempted to block the subpoena last week by suing over allegations that Walker’s request violates the corporation’s First and Fifth Amendment constitutional rights, and compels it to present documents that extend beyond the five-year statute of limitations. While some of the technicalities around Exxon’s stand-off are easily lost in the weeds, here’s what you need to know about why the corporation’s legal flexing is so significant. Who is the group leading the investigation into Exxon’s alleged criminal activity? Earlier this year, a coalition of 17 state attorneys general formed an alliance to press Exxon on its knowledge of climate science and the negative impacts of fossil fuel burning. The group includes the top legal authorities from California, Connecticut, Washington, DC, Illinois, Iowa, Maine, Maryland, Massachusetts, Minnesota, New Mexico, Oregon, Rhode Island, Virginia, Vermont, Washington, and the US Virgin Islands. All of the attorneys general are Democrats. Every member has vowed to open an investigation into whether Exxon knowingly deceived shareholders and the public for decades about the catastrophic consequences of anthropogenic climate change. 
What the multistate effort hopes to find is evidence of fraud on the basis that Exxon allowed the production and consumption of its product with full knowledge of its dangerous effects, and also played a key role in the manufacturing of climate change denial. In other words, the states' attorneys general believe that Exxon put the environment at risk while knowingly deceiving people by manufacturing climate science doubt, and profiting off that manufactured disbelief. "Every attorney general does work on fraud cases, and we are pursuing this as we would any other fraud matter. You have to tell the truth, you can't make misrepresentations of the kinds we've seen here," said New York Attorney General Eric Schneiderman. "The scope of the problem we are facing, the size of the corporate entities and their alliances, the trade associations and other groups, is massive and it requires a multistate effort." The coalition has been supported by former vice president Al Gore, who remarked that Exxon and other climate change denial lobbyists “are likely now, finally, at long last, to be held to account.” Each state’s laws and investigative authorities are different, but through their joint probes, the attorneys general hope to create a nexus between Exxon’s alleged deception and the damages that have been inflicted upon consumers. What kinds of documents would Exxon have to hand over according to the subpoena? According to Attorney General Walker’s subpoena, Exxon is required to submit documents sent from or received by Exxon regarding climate change and its impacts; communications regarding the likelihood that Exxon products played a role in climate change; studies, research, and reviews published by Exxon or any of the third-parties acting on its behalf; information regarding Exxon sales and revenue that would have been impacted by acting on climate change; and meetings with or funding of third-parties. If compelled to hand over 100 percent of the requested material, Exxon would be sharing nearly 40 years of memos, emails, studies, speeches, publications, and strategy reports, dating back to January 1, 1977. Attorney General Walker has also included in his subpoena all communications regarding last year’s investigation into Exxon’s climate science funding conducted by ICN and the Los Angeles Times. In his subpoena, what is Attorney General Walker accusing Exxon of having done? Attorney General Walker has accused Exxon of violating two state laws under the Virgin Islands’ anti-racketeering legislation called the “Baby RICO” statute. As presented in the subpoena, Exxon is suspected to have engaged in, or be engaging in, “conduct misrepresenting its knowledge of the likelihood that its products and activities have contributed and are continuing to contribute to Climate Change” in order to defraud the government and consumers by obtaining money under false pretenses and conspiring to do so. The “Baby RICO” statute that Walker has invoked was adopted by the Virgin Islands and other states and territories after Congress enacted the Racketeer Influenced and Corrupt Organizations Act (RICO) in 1970, which it created to try Mafia enterprises as a whole, rather than singular cases of lesser criminal associates. What the federal law allows prosecutors to do is build cases from lots of broader evidence. The Virgin Islands’ RICO statute, while not identical to the federal law that preceded it, can potentially be even more broad sweeping. 
In order for federal prosecutors to use the RICO Act, they must first prove reasonable suspicion that patterns of racketeering activity exist, and that the defendant obtained money through false pretenses. Last year, New York Attorney General Eric Schneiderman also issued a subpoena for similar documents—with which Exxon has complied—but did so using the state’s powerful securities fraud law called the Martin Act. Some legal experts believe that Attorney General Schneiderman is in a much better position to subpoena Exxon for evidence than Walker and other attorneys general in the coalition. “I don’t think the Virgin Island statutes are nearly as comprehensive. They don’t speak directly to the subpoena power given to the attorney general of New York. I have some doubts about what the Virgin Islands attorney general is doing,” Pat Parenteau, a professor of environmental law at the Vermont Law School, told me. According to ICN, this marks the first time “a prosecutor has cited racketeering law to probe Exxon over its longtime denial of climate change and its products' role in it.” Several influential prosecutors, and even Al Gore, have encouraged the Department of Justice (DOJ) to launch an investigation into Exxon over potential violations of the RICO Act. How is Exxon using existing laws to push back against Attorney General Walker? Exxon is not about to hand over thousands of potentially confidential documents without a legal fight. In their lawsuit against Walker and the Washington, DC firm that issued the subpoena, Exxon alleges the request for information violates its constitutional rights. The corporation’s complaint, which was filed in its homebase of Texas, accuses Walker of infringing on its First Amendment right to freedom of speech by asking it to participate in the national discussion around its alleged fraud by deception. Exxon is allowed to make this claim because of the Supreme Court’s 2010 ruling in favor of corporate personhood. Invoking the First Amendment may sit well in Texas, and an early ruling that Walker’s subpoena is baseless due to an infringement of corporate free speech would be an undue win for Exxon, according to Parenteau. Exxon has also cited the Fifth Amendment in its suit, which exists to protect people from being forced to incriminate themselves but does not generally apply to corporations, and would likely not be a persuasive defense against Walker’s subpoena. Additionally, the corporation claims the probe requests documents that extend past a five-year statute of limitations. While the statute of limitations in territories such as the Virgin Islands is more nuanced than it is in US states, if upheld, it could drastically curtail the volume of potential evidence that Exxon is required to submit. Walker’s subpoena, the lawsuit also alleges, "constitutes an abuse of process, in violation of common law,” because the documents requested, Exxon's lawyers claim, appear to target individuals who hold policy viewpoints that stand in direct opposition to those supported by the attorney general. "The First Amendment does not shield any company from being investigated for fraud," said Walker in a statement. What this complaint suggests is that Exxon is posturing itself for future inquiries into its alleged knowledge of anthropogenic climate change and fraudulent behavior. While the corporation may be forced to comply with Walker’s subpoena, it could still be years before any battle with Exxon escalates to a lawsuit. 
The risk in pushing too hard, said Parenteau, would be courts ruling in Exxon’s favor over misuse of criminal law or political retaliation. Will the documents subpoenaed by state attorneys general become public? When, and whether or not, the Exxon documents acquired through current and future subpoenas become available to the public depends entirely on how the offices of the attorneys general negotiate their exchange. Some pieces of evidence may very well be confidential under the Privacy Act or heavily redacted, but Exxon can be expected to hand over material with the stipulation that it never becomes public. Has this type of large-scale, corporate investigation ever happened before? The main reason why legislators and environmentalists have encouraged the DOJ to use the RICO Act against Exxon is because the powerful law was previously invoked to bring down the tobacco industry in the 1990s. In the United States v. Philip Morris, the DOJ successfully sued the country’s most influential tobacco companies over fraudulent and unlawful conduct. The companies were held liable for violating RICO by deliberately concealing the health risks of smoking and marketing their products to the public. "In the tobacco case, people were harmed by false beliefs propagated by the companies and were tricked into dangerous behavior," former state attorney general Sen. Sheldon Whitehouse told ICN. "In the case of climate change, there is the general harm, the damage that carbon is wreaking, and the cost to the government of flooding, wildfires and other disasters." But if the case against Exxon is anything like the one involving the tobacco industry, it would also require strong bipartisan support among a class of politicians who have strong ties to the oil industry. Ultimately, there are various legal options that prosecutors could pursue to venture deeper into Exxon’s decades of darkness. Investigators have only just begun to dig, but look at how much they’ve been able to uncover so far. | <urn:uuid:e528ee82-a7a2-4483-bd33-347e468eb937> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/climate-change-1362301/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00169-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956948 | 4,810 | 2.984375 | 3 |
Big data in healthcare is overwhelming not only because of its volume but also because of the diversity of data types and the speed at which it must be managed. The totality of data related to patient healthcare and well-being makes up the "big data" in the healthcare industry. By discovering associations and understanding patterns and trends within this data, big data analytics can help improve care, save lives, and lower costs.
Clinical data analytics helps physicians make decisions about the care of their patients and aids in better understanding the health of their covered populations. The technology varies according to the data involved, the users of the information, and the actions taken at the discretion of decision-makers, who can be any one or more of the following: nurses, doctors, public health officials and senior management.
The North American region has a share of more than 55% of the global clinical data analytics market and is estimated to cross the USD 3 billion mark by the end of 2016. Accounting for 40% of clinical trials worldwide, the U.S. remains the major market for these solutions in North America. The North American healthcare sector varies from country to country. In the USA, the healthcare sector is mostly owned and operated by the private sector: more than 60% of hospitals are non-profit, only around 20% are government-owned, and the rest are for-profit. The efficiency and reach of healthcare in the USA vary from region to region. In Canada, healthcare is publicly funded, which gives the majority of the population access to basic and advanced healthcare. Clinical data analytics is now widely used in the region, with many companies and hospitals relying on it to provide appropriate assistance and treatment to patients.
New electronic health record systems and health information exchanges are transforming the way patient data is exchanged. This data is ultimately analyzed and used by healthcare providers for the covered population as a whole. These solutions are also helping to cut the costs of clinical trials and make healthcare processes more efficient.
The clinical data analytics market is estimated to grow from the current USD 2.25 billion to USD 16.98 billion by 2021, at a CAGR of 41.50%. North America will be the biggest market for these solutions with a share of 55%, followed by Europe, Asia-Pacific and the rest of the world.
IBM has the largest market share among the software vendors catering to the clinical data analytics market, followed by athenahealth, InterSystems Corporation and Cerner Corporation.
The report focuses on different industry policies and factors which are driving the market growth. The report also provides key insights into strategies, market shares and solutions of key vendors like IBM, Caradigm, CareEvolution, Cerner, Explorys, InterSystems, McKesson, Wellcentive, Athenahealth and Truven Health Analytics.
Some of the major vendors who are providing stiff competition to existing players mentioned in the report are ActiveHealth Management, The Advisory Board Company, Humedica, Inc., Comprehend Systems and Forte Research Systems.
The factors responsible for driving the demand for clinical data analytics market are, an increasing focus of the population towards health management, supportive government policies, cost benefits, and the opportunity to provide better quality services to the patients.
Fragmented end-user market and software related privacy and security issues are some of the challenges of the clinical data analytics market.
WHAT THE REPORT OFFERS | <urn:uuid:e5e4f0b4-c804-4fd3-b2d0-67afc5690157> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/north-america-clinical-data-analytics-in-healthcare-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00077-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944228 | 697 | 2.546875 | 3 |
Google is using machine learning technology to forecast -- with an astounding 99.6% accuracy -- the energy usage in its data centers, and to automatically shift power to certain sites when needed.
Using a machine learning system developed by its self-proclaimed "Boy Genius" Jim Gao, Google says it calculates power usage effectiveness, or PUE, a measure of energy efficiency, every 30 seconds. Google, in a blog post by Joe Kava, vice president of the company's data centers, says it constantly tracks things like total IT load (the amount of energy servers and networking equipment are using at any time), outside air temperature (which affects how cooling towers work) and the levels at which Google sets its mechanical and cooling equipment.
"After some trial and error, Jim's models are now 99.6 percent accurate in predicting PUE. This means he can use the models to come up with new ways to squeeze more efficiency out of our operations. For example, a couple months ago we had to take some servers offline for a few days-which would normally make that data center less energy efficient. But we were able to use Jim's models to change our cooling setup temporarily-reducing the impact of the change on our PUE for that time period. Small tweaks like this, on an ongoing basis, add up to significant savings in both energy and money," Kava wrote.
"...a comprehensive DC efficiency model enables operators to simulate the DC operating configurations without making physical changes. Currently, it's very difficult for an operator to predict the effect of a plant configuration change on PUE prior to enacting the changes. This is due to the complexity of modern DCs, and the interactions between multiple control systems. A machine learning approach leverages the plethora of existing sensor data to develop a mathematical model that understands the relationships between operational parameters and the holistic energy efficiency. This type of simulation allows operators to virtualize the data center for the purpose of identifying optimal plant configurations while reducing the uncertainty surrounding plant changes," Gao wrote in a Google white paper outlining the details of his work.
A typical large-scale data center generates millions of data points across thousands of sensors every day, yet this data is rarely used for applications other than monitoring purposes. Advances in processing power and monitoring capabilities create a large opportunity for machine learning and data-driven approaches to guide best practice and improve data center efficiency, Gao said.
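Google has not published the model's code here, and Gao's actual model is a neural network trained on many more inputs, so the snippet below is only a toy illustration of the general idea: fit a predictive model from operational sensor readings to PUE, then use it to ask "what if" questions before touching the plant. The feature names and all of the data are synthetic, and a plain linear regression stands in for the real model.

    # Toy PUE-prediction sketch with synthetic data; not Google's model.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 1000
    it_load_mw = rng.uniform(5, 20, n)            # total IT load
    outside_temp_c = rng.uniform(-5, 35, n)       # outside air temperature
    cooling_setpoint_c = rng.uniform(18, 27, n)   # cooling equipment setpoint

    # Synthetic "ground truth" PUE with a little measurement noise.
    pue = (1.08 + 0.004 * outside_temp_c - 0.002 * (cooling_setpoint_c - 22)
           + 0.15 / it_load_mw + rng.normal(0, 0.005, n))

    X = np.column_stack([it_load_mw, outside_temp_c, cooling_setpoint_c])
    model = LinearRegression().fit(X[:800], pue[:800])

    # Held-out accuracy of the toy model.
    pred = model.predict(X[800:])
    print("mean absolute PUE error:", np.abs(pred - pue[800:]).mean())

    # A trained model lets an operator predict the PUE impact of a proposed change
    # (for example a different cooling setpoint) before actually making it.
    print(model.predict([[12.0, 25.0, 20.0]]))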
"Machine learning applications are limited by the quality and quantity of the data inputs. As such, it is important to have a full spectrum of DC operational conditions to accurately train the mathematical model. The model accuracy may decrease for conditions where there is less data. As with all empirical curvefitting, the same predictive accuracy may be achieved for multiple model parameter. It is up to the analyst and DC operator to apply reasonable discretion when evaluating model predictions," Gao wrote.
Check out these other hot stories: | <urn:uuid:2a0ba19f-c509-45a3-9c05-97a95c0da813> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2343255/virtualization/google-taps-machine-learning-technology-to-zap-data-center-electricity-costs.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00471-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92894 | 581 | 3.0625 | 3 |
The Curiosity rover landed on Mars in August 2012 and has been hard at work learning much about the red planet.
NASA's Curiosity Mars rover is celebrating its one-year anniversary on the Martian surface as the space agency looks forward to more amazing discoveries as the rover begins its second full year of exploration.
So far, the rover and its two-year planned mission have brought back incredible finds to scientists back on Earth, including the discovery of solid evidence that ancient Mars could have supported life, according to NASA.
"Without a doubt, everyone here at the Jet Propulsion Laboratory (JPL), NASA and everyone else on the team is very excited about the mission to date," Rick Welch, the mission manager at the JPL, told eWEEK. "Looking back to a year ago, I don't think that anybody could have predicted how well the mission would go."
Other rovers have visited Mars in the past, but none before have had the capabilities to dig into the Martian soil and them analyze the soil and rock using an on-board laboratory. That changed with Curiosity, said Welch.
"To actually scoop up the soil and get samples, it has been an incredible invention and adventure," he said. To accomplish those feats, the latest rover is the size of an SUV back on Earth, so it had to be brought to the Martian surface gingerly so it wasn't destroyed on impact last August.
Once landed, the scientists initially took a bit of a detour with the rover because it landed in an area with amazing geology, said Welch. "When we saw where we actually landed, at this conjunction of three different terrain types just east of the planned landing site, it made sense to look at this area first. We spent the better part of the year exploring that area."
That resulted in a six- to seven-month detour to explore an area around an ancient river system, which is now dry, in a valley that's called Peace Vallis, according to an earlier eWEEK story. There, the rover found the remnants of a former river, which spread out across the crater floor like a fan, where conditions could have existed for life on Mars.
That discovery was huge, said Welch. "It really gives us good evidence of a habitable environment, and that's what this mission is all about," he said.
"So far, the rover is doing very well on the Martian surface," said Welch, who is a 20-year veteran of rover missions. "The systems have been working well as a whole, meeting expectations. It's been great."
The one-year landing anniversary for the rover occurs early in the morning on Aug. 6, which officially will mark the halfway point for the planned activities of the two-year mission.
The JPL is a division of the California Institute of Technology in Pasadena, Calif., and manages the Mars Science Laboratory Project for NASA's Science Mission Directorate in Washington, D.C., according to NASA. The JPL designed and built the project's Curiosity rover.
Since landing, Curiosity has so far sent more than 190 gigabits of data back to Earth, and has sent back more than 36,700 full images and 35,000 thumbnail images, according to NASA. The equipment on board the rover has also fired more than 75,000 laser shots to investigate the composition of targets, collected and analyzed sample material from two rocks, and driven more than 1 mile (1.6 kilometers), according to the space agency.
In July, the Curiosity rover began a long-awaited, 5-mile-long journey
across the terrain of the red planet to begin exploring a rocky area known as Mount Sharp, 11 months after the rover arrived on the planet's surface. | <urn:uuid:e79c1396-ff54-4dec-a2cd-2f17445ffb2d> | CC-MAIN-2017-04 | http://www.eweek.com/cloud/nasas-curiosity-mars-rover-marking-first-anniversary | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00344-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965037 | 764 | 3.28125 | 3 |
Historian Peter Turchin, who studies population dynamics at the University of Connecticut, has assumed the role of the world's biggest bummer with his recent prediction that widespread violence will erupt worldwide sometime around the year 2020, as profiled in this recent feature in Nature. What has many people worried is that he's backing up this premonition with a mathematical formula, known as cliodynamics.
Turchin is credited with coining the term cliodynamics, which is the study of historical mathematical data like population figures and global economic performance to identify patterns of similar behavior. Turchin's studies point to a cycle in which society at large becomes engulfed in widespread violence every 50 years.
The current pattern dates back at least to 1870, when economic disparity in the U.S. led to urban violence, and follows the 50-year cycle to the anti-Communist fervor and race riots around 1920, followed by the political assassinations, terrorist attacks and domestic violence in 1970, Turchin told Nature. By that logic, Turchin believes we should circle the year 2020 on our calendars as the year when we start locking our doors.
“I hope it won't be as bad as 1870,” he told Nature.
Turchin operates a website devoted to spreading awareness of cliodynamics, which can be found here. Separately, a page titled "Why do we need mathematical history?" makes a quite compelling appeal for the increased use of numerical patterns and formulas to test Turchin's hypothesis that history follows patterns:
Mathematics is not just about quantities (it includes such fields as mathematical logic, abstract algebra, and topology). However, if we are interested in understanding the dynamics of such historical processes as population change, territorial expansion/contraction, and the spread of religions, we must get involved with numbers and rates. Furthermore, a “naked” human mind, unaided by mathematical formalism and computers, is a poor tool for predicting dynamical processes characterized by nonlinear feedbacks, or grasping such complex behaviors as mathematical chaos.
Without mathematics (understood broadly) we are doomed to make vague statements and to arrive at wrong conclusions. How can we test theoretical predictions with data, if we are not even sure that the “prediction” in fact follows from the theory’s premises?
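As an entirely synthetic illustration of what that kind of formalism can look like in practice, the sketch below generates fake yearly counts of violent incidents with a built-in 50-year cycle, fits a fixed-period cycle by least squares, and reports how much of the variance the cycle explains. It uses none of Turchin's data or methods and is illustrative only.

    # Synthetic "mathematical history" sketch: fit a fixed 50-year cycle to fake data.
    import numpy as np

    rng = np.random.default_rng(42)
    years = np.arange(1780, 2013)
    cycle = 3 + 2 * np.cos(2 * np.pi * (years - 1870) / 50)   # peaks near 1870, 1920, 1970...
    counts = rng.poisson(cycle)                               # fake yearly incident counts

    # Least-squares fit of a + b*cos(wt) + c*sin(wt) with the period fixed at 50 years.
    w = 2 * np.pi / 50
    X = np.column_stack([np.ones_like(years), np.cos(w * years), np.sin(w * years)])
    coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
    fitted = X @ coef

    ss_res = ((counts - fitted) ** 2).sum()
    ss_tot = ((counts - counts.mean()) ** 2).sum()
    print("variance explained by a fixed 50-year cycle:", round(1 - ss_res / ss_tot, 2))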
This page also points to similar practices by other historians, namely "the work of the Nobel laureate Robert Fogel and colleagues on the economic feasibility of slavery in the Antebellum United States."
"We need more such studies, and not just in the field of economic history," the page adds (it shows no author but is linked from Turchin's homepage).
Of course, the theory has its opponents, as the Nature article points out. But if Turchin's numbers end up being right, is it too late to do anything about it? | <urn:uuid:9c64a74d-a2c5-4dbe-81e5-2041167ab3ff> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2222881/opensource-subnet/historian--mass-violence-to-erupt-in-2020--mathematical-pattern-suggests.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00426-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95121 | 607 | 2.8125 | 3 |
The National Science Foundation (NSF) announced funding for two cloud testbeds, named “Chameleon” and “CloudLab.” A total award of $20 million to be split evenly between the projects will enable the academic research community to create and experiment with novel cloud architectures and potentially transformative applications.
Along with developing next-generation cloud systems, the programs emphasize the importance of forward-looking applications related to medical devices, power grids, and transportation systems.
The new projects are part of the NSF CISE Research Infrastructure: Mid-Scale Infrastructure – NSFCloud program, a follow-on to NSFNet, which supported and facilitated cutting-edge networking infrastructure.
Both “Chameleon” and “CloudLab” are underway and will ramp up over the next two years.
“Chameleon” will provide a configurable large-scale environment for cloud research. The project will initially leverage the existing FutureGrid hardware at the University of Chicago and the Texas Advanced Computing Center before transitioning over to newer hardware, also co-located at the University of Chicago and The University of Texas at Austin. Other partners include Ohio State University, Northwestern University, and the University of Texas at San Antonio.
The testbed will consist of 650 multicore cloud nodes and 5 petabytes of storage, with a 100 Gbps connection between the sites. The architecture will combine homogeneous hardware to support large-scale experiments with heterogeneous systems that allow experimentation with high-memory, large-disk, low-power, GPU and coprocessor units. Researchers will be able to configure and test different cloud architectures on a range of problems, from machine learning and adaptive operating systems to climate simulations and flood prediction.
Chameleon was conceived as a creative endeavor. Researchers are encouraged to mix-and-match hardware, software and networking components and test their performance. Access to “bare-metal” hardware is key to enabling this flexibility. According to the University of Chicago’s Computation Institute, “this system will allow researchers to develop and test at scale new high-performance and low-noise virtualization solutions that might make possible high-performance computing in the cloud – creating virtual supercomputers on demand for research.”
“Like its namesake, the Chameleon testbed will be able to adapt itself to a wide range of experimental needs, from bare metal reconfiguration to support for ready made clouds,” said Kate Keahey, a scientist at the Computation Institute and principal investigator for Chameleon. “Furthermore, users will be able to run those experiments on a large scale, critical for big data and big compute research. But we also want to go beyond the facility and create a community where researchers will be able to discuss new ideas, share solutions that others can build on or contribute traces and workloads representative of real life cloud usage.”
The other project, CloudLab, is envisioned as a large-scale distributed infrastructure comprised of three clusters of 5,000 cores each, based at Clemson University, the University of Wisconsin and the University of Utah. Each site has a different technology focus to enable researchers to evaluate novel cloud technologies in a realistic environment.
The first cluster to be hosted at Clemson University in South Carolina should be up and running in the fall of 2014. Designed in partnership with Dell, the focus is on high-performance computing and high-memory configurations. The second cluster is scheduled to come online this winter at the University of Wisconsin in Madison. It’s being built in collaboration with Cisco, and will focus on SDN capabilities. The third cluster, expected to go live in the spring of 2015 at the University of Utah in Salt Lake City, will incorporate a hybrid approach with low-power ARM64 processors being used in tandem with x86 processors. Designed by HP, the emphasis of this site is energy efficient computing.
The three centers will be interconnected via 100 gigabit-per-second connections on Internet2’s advanced platform to form a unique scientific infrastructure. Researchers will have control and visibility all the way down to bare metal. They’ll have access to “slices” of the testbed, which they can use to build their own clouds. Other partners include UMass Amherst, Raytheon BBN Technologies and US Ignite.
“CloudLab will enable the next generation of innovations in the entire cloud ecosystem, ranging all the way from hardware elements, to software infrastructures, to the applications that run on the clouds,” observes UW-Madison computer science professor and CloudLab co-principal investigator Aditya Akella. “In turn, these innovations will lead to exciting new services that are simply impossible to realize on clouds today, benefitting our economy and society at large.”
Today’s NSF investment comes three years after the DOE-funded Magellan cloud testbed folded in late 2011. Although the $32 million project failed to show a significant ROI for “scientific cloud computing,” there are a raft of new technologies, configurations and applications that warrant exploration.
Public clouds, like those offered by Amazon, Google and Microsoft, run on privately-owned datacenters. They enable utility-style computing, but they don’t allow direct access to the infrastructure. By providing an all-access playground, Chameleon and CloudLab are filling an unmet need. In the words of University of Massachusetts Amherst researcher Michael Zink [speaking about CloudLab], “there’s nothing like this available today. This is something that will be very beneficial to many, many researchers.” | <urn:uuid:d3daefc4-6052-49e6-9d86-5e2d451366a2> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/08/21/nsf-rolls-innovative-cloud-testbeds/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00546-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918103 | 1,171 | 2.625 | 3 |
There is a serious move toward adding ever-more technology to cars in an effort to reduce accidents that take 32,000 human lives a year and cause some 2 million injuries.
The human toll is obvious, but can high-tech automotive communications and sensor technologies - known collectively as vehicle-to-vehicle (V2V) technologies - really change those statistics? Many experts say they can, but the technologies face a number of challenges, according to a Government Accountability Office report on the technology released this week. The US Department of Transportation says that if widely deployed, V2V technologies could provide warnings to drivers in as much as 76% of potential multi-vehicle collisions involving at least one light vehicle, such as a passenger car.
Crash avoidance technologies, which use sensors such as cameras and radar, can observe a vehicle's visible surroundings and issue warnings to the driver when certain types of collisions with other vehicles or obstacles appear to be imminent. These technologies also facilitate the sharing of data, such as vehicle speed and location, among vehicles to warn drivers of potential collisions, the GAO stated.
The GAO said efforts by the U.S. Department of Transportation (DOT) and the automobile industry have focused on developing: 1) in-vehicle components such as hardware to facilitate communications among vehicles, 2) safety software applications to analyze data and identify potential collisions, 3) vehicle features that warn drivers, and 4) a national communication security system to ensure trust in the data transmitted among vehicles.
The GAO report defined a number of challenges to these high-tech tools for cars. They include:
Security: A security system capable of detecting, reporting, and revoking the credentials of vehicles found to be sharing inaccurate information will be needed to ensure trust in the V2V data transmitted among vehicles. Final plans and policies for the V2V communication security system - including its technical framework and management structure - have not yet been developed and will need to be finalized prior to V2V technology deployment. The GAO said 12 of the 21 experts it interviewed described the technical development of a V2V communication security system as a great or very great challenge to the deployment of V2V technologies. One expert said that it is challenging to establish technical specifications for a system that attempts to maintain users' privacy while providing security for over-the-air transmission of data. Another expert told the GAO that a public key infrastructure system the size of the one needed to support the nationwide deployment of V2V technologies has never been developed before; the sheer magnitude of the system will pose challenges to its development. (A minimal sketch of the message signing and verification such a system rests on appears after this list.)
Spectrum arguments: In response to requirements in the Middle Class Tax Relief and Job Creation Act of 2012, FCC issued a Notice of Proposed Rulemaking in February 2013 that requested comments on allowing unlicensed devices to share the 5.9 GHz band of the radio-frequency spectrum that had been previously set aside for the use of applications such as V2V technologies. Although existing FCC regulations are designed to ensure that unlicensed devices do not cause interference, four automobile manufacturers and 16 experts we interviewed expressed concern or uncertainty about the potential effects of allowing unlicensed devices to share the 5.9 GHz band. One automobile industry group said that its members are not opposed to opening the 5.9 GHz band for sharing but emphasized the importance of understanding the implications of doing so to ensure that it will not hinder critical V2V safety applications.
Deployment Levels: According to DOT, the safety benefits of V2V technologies will be maximized with near full deployment across the U.S. vehicle fleet. However, even if NHTSA pursues a rule requiring installation of these technologies in new vehicles, it could take a number of years until benefits are fully realized due to the rate of turnover of the fleet. According to one automobile manufacturer the GAO interviewed, given the rate of new vehicle sales, it can take up to 20 years for the entire U.S. vehicle fleet to turn over. Also, aftermarket devices that allow existing vehicles to be equipped with V2V technology could help speed deployment. However, three experts the GAO interviewed expressed concern that drivers may not see value in purchasing aftermarket devices, which could limit their adoption.
Driver response: The benefits of V2V technologies will also depend on how well drivers respond to warning messages. If drivers do not take appropriate action in response to warnings, then the benefits of V2V technologies could be reduced. For example, if drivers do not respond to warnings quickly enough due to distraction, impairment, or other reasons they may not be able to avoid a collision. Furthermore, if safety applications offer too many false warnings when no imminent threat exists, drivers could begin to ignore valid warnings.
Deployment of other safety technologies: The potential benefits solely attributable to V2V technologies will also depend on the market penetration and effectiveness of sensor-based crash avoidance technologies. These existing technologies are able to address some of the same crash scenarios as V2V safety applications, and their market penetration is likely to increase in the future. While there are cases where V2V technologies can provide safety benefits that sensor-based crash avoidance technologies cannot - such as around a curve or when detecting an unseen stopped car - there are some V2V collision scenarios that sensor-based crash avoidance technologies can also address. For example, cameras and radar can be used to provide drivers with forward collision warnings or lane change warnings when another vehicle is in a blind spot.
No fault - Your fault: Six automobile manufacturers and 17 experts the GAO interviewed expressed concern about the challenge posed by uncertainties related to potential liability in the event of a collision involving vehicles equipped with V2V technologies. This challenge shows up in a number of potential liability issues and questions that are unanswered at this time. One automobile manufacturer said that because V2V technologies offer warnings that are based in part on data transmitted by other vehicles - as opposed to sensor-based systems that collect data solely from a vehicle's surroundings - it could be harder to determine whether fault for a collision between vehicles equipped with V2V technologies lies with one of the drivers, an automobile manufacturer, the manufacturer of a V2V device, or another party.
Costs: The costs associated with a V2V communication security system also remain unknown, as the specifics of the system's technical framework and management structure are not yet finalized. While the costs of in-vehicle V2V components may be modest relative to the price of a new vehicle, some experts noted that the potential costs associated with the operation of a V2V communication security system could be significant. Further, it is currently unclear who - consumers, automobile manufacturers, DOT, state and local governments, or others - would pay the costs associated with a V2V communication security system.
The C Programming Language, second edition by Kernighan and Ritchie is a well-known and rightly praised book. I read the book and worked all the exercises.
My answers in a gzipped tar archive. (Or in a ZIP archive if you prefer.)
Other people working through the book may be interested in the Notes to Accompany The C Programming Language by Steve Summit, maintainer of the comp.lang.c FAQ. Also be aware of the errata.
There is a book, The C Answer Book by Tondo and Gimpel, that has answers to these exercises. I didn't know about it until I was literally on the last (non-appendix) page of K&R; there is an advertisement for it on the back cover. I haven't read it.
I found another site (based on an older one) that has answers to most of the exercises. I didn't find out about it until after I had finished the book, when I was searching for information on Tondo and Gimpel.
By Stephen Goldsmith, Daniel Paul professor of government and faculty director, Innovations in American Government program at Harvard University, and William D. Eggers, adapted from their new book Governing by Network: The New Shape of Government (Brookings Press, 2004).
In the 20th century, hierarchical government bureaucracy was the predominant organizational model used to deliver public services and fulfill public-policy goals. Public managers won acclaim by ordering those under them to accomplish highly routine -- albeit professional -- tasks with uniformity but without discretion. Today, increasingly complex societies force public officials to develop new governance models.
In many ways, 21st-century challenges and the methods of addressing them are more numerous and complex than ever. Problems have become both more global and more local as power disperses and boundaries (when they exist at all) become more fluid. One-size-fits-all solutions have given way to customized approaches as the complicated problems of diverse and mobile populations increasingly defy simplistic solutions.
The traditional, hierarchical government model simply does not meet this complex, rapidly changing age's demands. Rigid bureaucratic systems with command-and-control procedures, narrow work restrictions, and inward-looking cultures and operational models are particularly ill-suited to addressing problems that often transcend organizational boundaries.
Consider homeland security. Acting alone, neither the FBI nor the CIA can effectively stop terrorists. These agencies need a law enforcement network that crosses agencies and levels of government. They need communications systems to capture, analyze, transform and act upon information across public and private organizations at a speed, cost and level that were previously impossible.
For this and countless other reasons, the hierarchical model of government is in decline. The shift is being pushed by governments' appetites to solve ever more complicated problems and pulled by new tools that allow innovators to fashion creative responses. This push and pull is gradually producing a new model of government, in which executives' core responsibilities no longer center on managing people and programs but on organizing resources -- often belonging to others -- to produce public value. We call this trend "governing by network."
Complex public-private, network-to-network collaboration models now operate, with varying degrees of success, in nearly every area of government. The building of NASA's James Webb Space Telescope, for example, involves multiple governments (Germany is supplying many instruments; France, the launch vehicle), multiple contractors (Northrop Grumman is the prime contractor), and several universities. NASA is also using in-house capabilities (the agency is doing the testing itself).
Medicaid is a federal-state program in which health-care services are delivered by private and nonprofit organizations, while a third party processes claims. Likewise, most job training programs, funded at least partially by federal and state governments, are administered by local work force boards and delivered by private and nonprofit provider networks.
At the state level, Wisconsin's welfare delivery model engages multiple levels of government, multiple state agencies, a handful of nonprofit and for-profit administrators, and dozens of community-based subcontractors.
As outsourcing, partnerships and network models multiply, scores of public agencies have become de facto contract management agencies. NASA and the U.S. Department of Energy spend more than 80 percent of their respective budgets on contracts. Contractors at the Department of Energy (DOE) outnumber employees by more than 130,000 people. For a growing number of agencies at all levels of government -- including NASA, the DOE and Wisconsin's Department of Workforce Development -- the skill with which agencies manage networks contributes as much to their successes and failures as the skill with which they manage their own employees.
Public CIOs are in the vortex of this change, both enabling it and affected by it. The trend toward governing by network changes the CIO's role in two fundamental ways. First, CIOs increasingly are expected to provide technical infrastructure and knowledge management capabilities needed to tie together disparate public and private organizations involved in these networks. Networks can still operate by fax, phone and meetings, but without the sophisticated electronic links to public and private partners that CIOs provide, government management of the network is unlikely to succeed.
Second, as more core IT services are outsourced and governments rely more on public-private partnerships to do everything from operating their portals to hosting e-government transactions, successful CIOs must become highly skilled at network management.
In short, talented CIOs have become critical agents for government reform and no longer merely support staff for the status quo.
The CIO as Network Integrator
Technology is the central nervous system of governing by network, connecting partners to each other and the public sector. Web-based technologies, for example, allow third-party providers to check client eligibility for job training, social services agencies to share real-time information with nonprofit partners about abandoned children, or contractors delivering motor vehicle services on behalf of states to verify instantly identities of driver's license renewal applicants. With technology a key enabler of networked government, CIOs more frequently are being asked to play a central role in facilitating partnerships and networks in five ways.
Networked approaches typically require a high level of coordination. Consider a network that provides services for single, low-income mothers. It might involve coordinating agencies and providers involved in everything from delivering food stamps and providing job training to arranging for daycare.
Portals, middleware and collaboration software have helped private-sector companies make huge strides in coordinating complex production tasks in their supply chains. Companies like Dell, Cisco Systems, General Motors, Ford Motor Co. and Herman Miller invest significant resources and executive-level attention in building electronic pipelines to suppliers, alliance partners, customers and employees.
Free market competition taught the private sector that time is indeed money. Using technology to speed and improve product or service delivery enables employees of each network organization to save money by doing more in less time.
NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif., for example, slashed the typical rocket- and shuttle-component design cycle to two to three weeks, from eight to 12 months, by using simulation-based design and acquisition tools to collaborate with contractors. Instead of issuing multiple RFPs and going back and forth with contractors for months on each piece of the design, JPL created an integrated mission design center packed with computers that allow NASA partners to present their initial designs and requirements to NASA engineers. The engineers can then test the design -- and any modifications -- in real time to find out how they would hold up against multiple scenarios. NASA partners, contractors and engineers are all linked to the sessions by video conference. Not only has this innovation radically accelerated the design process, but it also greatly enhanced the quality of initial contractor submissions. "If you don't have any clothes on, we'll notice real fast," explained NASA Deputy Chief Engineer Liam Sarsfield.
Unfortunately the sophisticated electronic collaboration tools NASA employs with its partners are exceedingly rare in government. Most agencies still interact with partners through manual processes, creating a host of inefficiencies, from slow responsiveness and poor reliability to uncoordinated service delivery. This must change.
Coordinating capabilities are critical for building effective public-private networks in a post-Sept. 11 world. Take a city facing a terrorist threat to its water system. Individuals charged with responding to such a threat might include representatives from the Federal Emergency Management Agency, state environment officials, local hospitals, environmental groups, public utility executives, local law enforcement and building inspectors. An electronic coordination mechanism that allows disparate groups to share information in real time and synchronize their responses would be a basic requirement for such a network.
A promising model is Pennsylvania's National Electronic Disease Surveillance System (PA-NEDSS). In February 2002, Pennsylvania became the first state to introduce a fully integrated disease surveillance system that lets participants share information quickly so they can identify, track, predict and contain the spread of disease. More than 130 hospitals, 120 labs, 450 public-health staff and 475 physicians are connected to PA-NEDSS. Public-health officials can distribute alerts and advisories immediately, and collect case data on an ongoing basis over a secure system. More than 100,000 cases representing dozens of diseases in 67 counties have been reported over PA-NEDSS.
Previously local health departments in Pennsylvania received reports from doctors, hospitals and community health officials by mail and fax, then forwarded them to the state the same way -- a cumbersome and slow process. It could take weeks to identify an outbreak. By then, dozens of people could have died. "The problem was the length of time it took to get something in the mail and get it to the right place for the right investigator to review and act on it," explained Joel Hersh, director of the state's Bureau of Epidemiology. "We'd get a report a week after the lab generated it. The report would then have to be sorted and sent to one of our investigators in the state, so it would have to be faxed or remailed. It could be a couple of weeks before investigators could start checking out a situation."
PA-NEDSS changed all that. Thanks to enhanced coordination and information-sharing capabilities, the reporting cycle for each case in Pennsylvania dropped from three weeks to fewer than 24 hours, enabling more rapid and effective response. The system's quick detection abilities, for example, helped the York City Bureau of Health contain an outbreak of shigellosis.
A Single Client View
Network partners also must share relevant customer information to coordinate activities. When a customer purchases a computer on Dell's Web site or changes the order, all information about his or her order is transmitted immediately throughout Dell's supply chain. Dell's just-in-time model would not succeed if each production partner possessed a different view of the customer. Production and delivery would take much longer.
Achieving a single view of the customer is no less important for networked government. Using Wisconsin's welfare eligibility IT system, Client Assistance for Re-employment and Economic Support, state employees and private contractors see the same information on their computers about each individual in the state's W-2 welfare-to-work program. Contractor employees also have instant access to all the government information they need, including educational assessment history, state wage records and Social Security records. "It allows us to work hand-in-glove with the state and county on individual cases," said Gerald Hanoski, executive director of Workforce Connections, a welfare-to-work contractor. "It's the backbone of the network. We couldn't do our work without it."
Hierarchical systems, driven by a variety of motives, treat tight control of data as part of the old style of governance. Professionalism often produces an arrogance that causes officials to limit what others can see without the official's help. In the late 1980s, for example, when pressure began building from employers and gun dealers to access criminal history records held by state police departments and the FBI, the agencies resisted, arguing that data quality issues made access untenable for the untrained eye.
Governing by network, however, cannot succeed without robust knowledge sharing. Cross-sector knowledge sharing can help develop new knowledge, flesh out solutions to daily problems, enhance learning across the network, and build trust and aid in learning from each other's successes -- and mistakes. These capabilities, in turn, can help government better integrate and align its own strategic objectives with those of its partners.
In today's digital age, sustained knowledge sharing across organizations requires a sophisticated technical infrastructure. Virtual communities -- using technologies such as extranets, Web-based seminars, electronic rooms and bulletin boards -- let people share information and knowledge across geographic and organizational boundaries.
The biggest benefits from such "collaborative knowledge networks" come from creating interactive media -- electronic spaces in which government agencies can communicate, collaborate and share knowledge with partners.
Few organizations have as many interdependent moving parts as the Federal Aviation Administration (FAA). Its employees, customers and contractors face extraordinary difficulties in coordinating projects, daily decisions and rule-making. The FAA created an electronic knowledge service network to capture knowledge across 20 business units or nodes, involving 100 work teams and 3,000 users. The IT platform, which supports document storage, virtual conferences, scheduling, threaded discussions and e-mail, encourages collaboration and reduces cycle times across the network. Problem-solving conferences are hosted online, and key decisions are posted on the knowledge network, replacing faxes and e-mails, and saving time and money.
Measuring and tracking performance within a complex network is a major challenge for public innovators. Here again, CIOs can exploit technological advances to overcome a perennial challenge to partnering. Until recently, government managers could not monitor real-time performance of complex, dispersed organizations. Thanks to distributed IT, networking and digital record-keeping, however, service providers can now see each other's information immediately, while the contract monitor can follow individual cases and aggregate data for online reporting. The end result: Governments have a clearer picture of how well the overall network and its individual partners perform at any one time. Wisconsin's Department of Workforce Development, for example, uses a data warehouse to collect performance information about its contractors. The warehouse provides the state with extensive information about every welfare-to-work client's progress, which the state uses to analyze whether contractors are exceeding or falling behind performance targets.
Arizona's Motor Vehicle Division also uses cutting-edge technology to better evaluate its partners' performance. The department's quality assurance group monitors the compliance of its 89 third-party motor vehicle service providers with their performance agreements. The assurance group's electronic accountability system centralizes all data from third-party companies, tracking everything from the number of transactions the third party does in a given time period, to customer complaint information, to the average amount of time it takes to complete transactions. The system makes it extremely difficult for providers to defraud the state; it automatically flags providers that engage in an unusually high number of transactions and generates an activity report to ensure work is legitimate.
The CIO as Relationship Manager
CIOs themselves increasingly use networks to meet several government IT needs. Many of the largest and most complex, multiprovider, government outsourcing projects fall wholly or partly under the auspices of public CIOs. The Navy Marine Corps Intranet and the National Security Agency's Groundbreaker -- two huge IT outsourcing projects -- are multibillion dollar deals. Nearly half the states outsource their Web portals -- as do dozens of cities and counties. At the Transportation Security Administration (TSA), Unisys manages a vendor network of at least 25 companies, including Dell, Cisco, Oracle and Motorola, on behalf of the agency. Together they provide all components of the agency's IT infrastructure, including computers, software, networks, data center and help-desk services.
As more CIOs forge varied partnerships with private corporations, their performance will increasingly depend on how well partnerships are managed. To achieve high performance in this environment, CIOs must develop core capabilities in a multitude of areas where today many have scant internal expertise. In addition to planning, budgeting, staffing and other traditional government duties, managing in a networked environment requires proficiency in activating, arranging, stabilizing, integrating and managing a network. To do this, CIOs and their staffs must possess some degree of aptitude in negotiation, mediation, risk analysis, trust building, collaboration and project management. They must have the ability and inclination to work across sector boundaries and the resourcefulness to overcome the prickly challenges of governing by network.
Case in point: Public CIOs must handle the difficult tasks of approving or rejecting the contract changes that inevitably arise in outsourcing deals. Some vendors will try to take advantage of government through lowball bids and the subsequent change-order process. Strong CIOs will not let that happen; they know how and when to stand up to private partners. "If a partner is not performing, you're going to have to deal with their management," explained a Defense Department official. "You better have a person who has the savvy to do this well."
The most successful CIOs in a networked government will be those best able to assess the public value, and if sensible, look "outside their own world" to identify other mechanisms or organizations they can involve to enhance public value. Successful CIOs will fulfill government's mission by keeping the agency's outcome-focused goals foremost -- the "product" rather than the "process." Successful CIOs will understand not only how to address the make-or-buy decision, but how to bring others with needed capabilities and resources into the supply chain.
Given the challenges, what kind of people will make effective CIOs in this environment? In our experience, they tend to be organized, have strong oral communication skills, think creatively -- rather than from the framework of "that's the way we've always done it" -- and are highly adept at resolving problems. They know how to create win-win situations.
Thriving in the Networked Age
As we have argued, CIOs have a central role in making the transition from hierarchical to networked government successful by deploying IT to help tear down walls between organizations, and giving governments and their private-sector partners tools to work effectively across organizational boundaries.
Ultimately the CIO's role must go beyond simply supplying the technical infrastructure for networked government. CIOs will need to effectively manage people and relationships. Technology, after all, is only an enabler. It cannot solve problems of building trust between organizations, or get organizations with different values and cultures to collaborate and share knowledge. These and many other challenges of integrating and managing public-private networks require creative and skilled public managers with the vision and know-how to bring together multiple organizations across sectors into a functioning whole. Public CIOs who excel at this task will thrive in this networked age.
Alaska’s harsh winter weather — an annual average of 40 feet of snow and winds exceeding 140 mph — makes it difficult to keep roads clear. But when the location is Valdez, home to a vital port and a terminal for the Trans-Alaska Pipeline, transportation officials were determined to keep the highway drivable.
“We needed something to try to keep that road open and available to truck traffic,” said Mike Coffey, chief of statewide maintenance and operations for the Alaska Department of Transportation and Public Facilities (ADOT&PF).
The state implemented a GPS- and radar-based system and outfitted snow-removal vehicles with displays similar to those found in fighter planes — an all-electronic view of the highway and approaching vehicles. Based on technology developed at the University of Minnesota through the U.S. Department of Transportation’s Intelligent Vehicle Initiative, the system originally was created to clear snow from Minnesota roadways.
“The problem was as soon as we put the equipment in snowplows here [in Minnesota], it quit snowing, which is slightly ironic,” said Craig Shankwitz, director of the university’s Intelligent Vehicles Laboratory.
Although the need didn’t prove as great in Minnesota, Alaska has been benefiting from the technology since its first use in the winter of 2002-2003.
The ADOT&PF originally outfitted one snowblower and one snowplow with the high-tech equipment, which works to keep the driver in the highway lane while avoiding other vehicles and obstacles like guardrails. The setup is composed of three technologies:
First is differential GPS, which uses information from ground stations and satellites to pinpoint the vehicle’s location with an accuracy of 3 to 5 centimeters, according to Ocie Adams, a project manager for the ADOT&PF.
Second is collision avoidance technology, which uses radar sensors and information from the vehicle positioning system to search for oncoming vehicles.
Last is the driver interface where all the information comes together; it enables drivers to plow snow from the highway in conditions as bad as zero visibility. Using a heads-up display that lets drivers keep their eyes on the road, the interface shows lines that represent the highway’s center line and fog line (the white line painted on the right side of the road), as well as intersections and guardrails. The center and fog lines change color if the driver passes over them, and the seat vibrates and an audio alert can be used.
The display also depicts oncoming vehicles and the image flashes as they get closer to the snowplow. “It allows us to be on the road and stay ahead of things rather than having the guys speed up when the storm is over,” Coffey said. “They’re still going fairly slowly and cautiously when they’re using this — it doesn’t allow them to go out there and drive 50 miles per hour, but it does allow them to be out on the road driving 20 miles per hour.”
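The article does not describe the system's actual guidance algorithms, so the following is only a simplified sketch of the underlying idea, written in Python with made-up lane geometry and thresholds: compare the vehicle's GPS-derived lateral position against the mapped center and fog lines and decide when to warn the operator.

# Simplified illustration only: the production system fuses differential GPS,
# digital maps of the center line and fog line, and radar. Here the road is
# modeled as a straight segment and only the lateral offset is checked.
LANE_HALF_WIDTH_M = 1.8          # assumed distance from lane center to each painted line

def lateral_offset(vehicle_y_m, lane_center_y_m=0.0):
    # Signed distance from the lane center; positive is toward the fog line.
    return vehicle_y_m - lane_center_y_m

def warning(vehicle_y_m):
    offset = lateral_offset(vehicle_y_m)
    if offset >= LANE_HALF_WIDTH_M:
        return "fog line crossed - steer left"
    if offset <= -LANE_HALF_WIDTH_M:
        return "center line crossed - steer right"
    return "in lane"

for y in (0.2, 1.9, -2.1):
    print(f"offset {y:+.1f} m: {warning(y)}")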
Before deploying the technology, operators would grind their snowplows against the guardrail and use it as a guide to follow the road. But that was costly because it eventually flattened the guardrails, which then needed to be replaced. The ADOT&PF also tested a system in which magnets were put into the road and drivers received a warning if they went over the magnet. But, Adams said, the lack of a visual reference made it difficult for drivers to stay in the lanes.
“They would overcorrect and go back over the other side,” he said. “So it was a continuous battle going up and down the mountain.”
Alaska isn’t the only place benefiting from the University of Minnesota’s work with intelligent vehicles. The Minnesota Valley Transit Authority — which serves the communities of Apple Valley, Burnsville, Eagan, Rosemount and Savage — outfitted 10 buses with differential GPS to permit drivers to use shoulders during low visibility conditions and congested traffic. According to the university, “Bus-only shoulders allow a bus to use typically unused road right-of-way to bypass congestion during morning and afternoon rush hours.”
The Bus 2.0 system is used to keep buses in the designated lane and avoid collisions. Craig Shankwitz, director of the university’s Intelligent Vehicles Laboratory, said the buses use the same basic technology as the snowplows in Alaska.
“Now, regardless of weather conditions or congestion, bus drivers can use shoulders to bypass traffic,” he said.
Photo courtesy of the University of Minnesota’s Intelligent Vehicles Laboratory.
Since the technology was first used in Alaska in 2002, it has been updated for increased accuracy and driver comfort, and more trucks were added to the fleet. The first vehicles that were outfitted allowed operators to clear snow from Thompson Pass, a 2,800-foot-high gap in the mountains northeast of Valdez, which is known as the snowiest place in the state. In 2011, three more vehicles were added to the operation. Now one truck focuses on clearing snow in Valdez and about halfway to Thompson Pass on Richardson Highway, and the other trucks cover the pass and remaining distance between it and Valdez.
The system’s location capabilities were improved in October 2011 with the addition of a differential base station that receives information from U.S. and Russian satellites. “The more satellites giving you the position, the higher the resolution that your fix is and the more constant fix you can get in bad weather,” Adams said. “And that’s when we need it the most, when it’s zero visibility and you can’t see beyond the hood of the vehicle.”
Another big upgrade last year included changes to the heads-up display in the vehicles. In the trucks that were outfitted in the first implementation, the display was 6 to 8 inches in front of the driver’s face and it was surrounded by a metal frame. If the driver hit a bump, he might hit his head on the display, and there were complaints from crew members that the setup made them feel claustrophobic. Adams said the new display is mounted on a swivel arm and increases the comfort for the drivers.
Coffey said other states have contacted him about the technology’s applicability to other types of weather — like fog. “If you have a need that justifies the expense, it’s a fabulous technology,” he said. The initial investment for Alaska was $136,000 for the rollout that included outfitting a snowblower and a snowplow, as well as the design, equipment, fabrication, training and installation. The 2011 upgrades — which included outfitting three snowplows, updating the two existing ones and adding another differential base station to increase accuracy — cost $553,000.
The state’s contract with the University of Minnesota expired Dec. 31, 2011 — but that doesn’t mean their partnership is over. Shankwitz said there has been talk about running a gas pipeline adjacent to the oil pipeline, and if that happens, they would like to incorporate GPS base stations along the length of the gas pipeline to support highway operations.
With the recent upgrades, the department of transportation crew is happy with the system and increased safety on the roads. But it may be Alaska truck drivers who benefit the most. “They’re all ecstatic that we’re able to get out there and keep the highway open,” Adams said. “It costs them thousands of dollars every hour that they’re unable to move loads in some cases.”
Microscopy Lab Report 7
Introduction to Focused Ion Beam Spectroscopy
By: Crystal Patteson
As part of a series of laboratory exercises designed to teach students advanced microscopy techniques, I was asked to learn and relate key physical concepts used in Focused Ion Beam (FIB) spectroscopy. A FIB is a scientific instrument that resembles a scanning electron microscope (SEM) but, unlike an SEM, uses a focused beam of ions (instead of electrons) for site-specific analysis, deposition, and ablation of materials. To familiarize myself with key concepts, I paid careful attention to features such as high-resolution imaging (focusing and astigmatism correction), sputtering of arbitrary patterns in a silicon wafer, deposition of tungsten patterns on a sample, and understanding the limitations of FIB patterning. For the purposes of this lab, a Hitachi FB-2100 and a silicon wafer were used.
The sample was already mounted and prepared for observation at the beginning of lab from the previous class. To begin, the instructor carefully explained the role of the main components of the FIB (Fig. 1-4). Figure 1 showcases an outer view of the FB-2100 while identifying many of the main components. For example, an ion beam source that comprises the top section of the column can be readily identified. This column supports the liquid-metal gallium ion source so that emitted ions can traverse the interior of the column. Along the way, ions encounter converging, focusing, scanning, blanking, and astigmatism-correction lenses and apertures that are used to control the dimensions and trajectory of the beam as it travels to the surface of a loaded sample. Figure 1 also shows the specimen stage and chamber, which make for easy loading and transfer of samples. For instance, to avoid losing a nicely prepared Transmission Electron Microscope (TEM) sample during manual transfer from the FIB sample holder to another SEM or scanning TEM (STEM), the Hitachi FB-2100 offers a compatible sample holder which can be inserted into both the FIB and the SEM/(S)TEM units. Thus, the FIB can be used to carefully mount the specimen on a compatible holder in-situ and then safely and quickly transfer the sample to another complementary microscope. An ion pump is used to maintain the vacuum in the chamber, and a beam-limiting aperture is available to limit the size of the ion beam.
Figure 2 shows a deposition, or “gas injection”, system which holds three different gases that share a single injection needle. This system provides the ability to deliver one of three selected deposition and etching gases through a single needle, one at a time. The gas injection needle is positioned near the sample surface where it can be easily employed. Intuitive software is used to control the needle position and allow the user to manipulate settings for custom applications of gas-induced deposition and etching. Figure 3 is similar to Figure 1, but includes a look at the console and control panel. Figure 4 shows two roughing pumps connected to a vibration isolator to limit vibrations coming into the machine.
To demonstrate the high-resolution capabilities of the FIB, images were taken at the following magnifications: 1000X, 5000X, 20,000X, and 100,000X (Fig. 5). Then, to exhibit the etching and milling capabilities of the FIB, I sketched a rough picture of a rabbit with the drawing tools and then bombarded the sketched-out image with the ion beam (Fig. 6). Lastly, to determine the narrowest line possible, I etched a single line and measured its width to be 51 nm (Fig. 7). Images were acquired by rastering the ion beam across the sample and then saving the resulting scan.
(Figure 1 labeled components: specimen chamber, specimen stage, FIB column, ion source, ion pump, beam limiting aperture.)
Figure 1. The FB-2100 System.
Figure 2. The OmniGIS Deposition System as installed on a Hitachi FB2100 FIB.
(Figure 3 labeled components: console, evacuation control panel.)
Figure 3. The FB-2100 System.
Figure 4. Two roughing pumps connect to a vibration isolator.
Figure 5. Top left) Sample imaged at 1000X. Top right) Sample imaged at 5000X. Bottom left) Sample imaged at 20,000X. Bottom right) Sample imaged at 100,000X.
Figure 6. Sputtering was used to etch a rabbit on the sample's surface.
Figure 7. A 51 nm wide line that has been etched into the surface of the sample.
What is astigmatism?
Crisp, clear, and accurate images are created when the electron beam is circular as it approaches the specimen. Sometimes, the probe cross section can become distorted and form an ellipse. This can be due to the machining accuracy, the material of the pole piece, and imperfections in the casting of the iron magnets and the copper winding. This elliptical distortion is called astigmatism and is due to perpendicular axes having different focal lengths. A stigmator is used to correct for astigmatism and make the electron beam circular. Electromagnetic coils are placed in quadrupole, sextupole or octupole orientations inside of the microscope.
What is the purpose of the extraction voltage? What is its effect on the beam current?
The gallium source of the FIB is heated until the melting point of the gallium is reached. The gallium drips down the tip of a tungsten needle, where the opposing forces of surface tension and the electric field form a Taylor cone. The electric field causes the ionization and field emission of the gallium ions. The extraction voltage is used to form this field. Therefore, a higher extraction voltage leads to a higher beam current, since more ions are able to break free and leave the source.
Is FIB always appropriate to characterize/pattern surface thin films? What are the limitations?
While a FIB has the ability to cross-section small targets, offer fast, high-resolution imaging and precise milling, and serve as a good SEM sample-preparation tool, a major drawback of FIB imaging and machining is the damage caused by the ion beam. Depending on the material of the sample and the temperature within the chamber, ion beam damage can take the form of sample surface amorphization, point defect creation, dislocation formation, phase formation, grain modification, and other unusual effects. The imaging process itself may spoil subsequent analyses through beam damage that lowers resolution, and the ion beam can implant residual gallium into a sample and thus contaminate it. The material properties of a thin film should therefore be examined first to determine whether FIB processing is appropriate.
How would you experimentally characterize the sputtering rate of a given material?
Sputtering refers to the process of ejecting particles from a solid through bombardment by energetic ions. Sputtering is used to etch samples because the incoming ions weaken the bonds in the sample: the electronic stopping power of the ions causes electronic excitations in the sample, which leads to the breaking of bonds between atoms. Depending on the material, the strength and longevity of these excitations may vary. For example, the electronic excitations in an insulator are not quenched nearly as fast as they would be in a conductor. To characterize the sputtering rate experimentally, a common approach is to mill a pattern of known area at a known beam current for a known time, measure the resulting depth, and express the removal rate as the volume of material removed per unit of delivered ion charge.
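As a rough worked example in Python (the numbers below are illustrative placeholders, not measurements from this lab), the removal rate can be computed like this:

# Illustrative values only.
area_um2 = 10.0 * 10.0       # milled box: 10 um x 10 um
depth_um = 2.0               # depth measured afterwards (e.g., by cross-sectioning or AFM)
current_nA = 5.0             # ion beam current
time_s = 120.0               # milling time

volume_um3 = area_um2 * depth_um        # 200 cubic micrometers of material removed
charge_nC = current_nA * time_s         # 600 nC of ions delivered
sputter_rate = volume_um3 / charge_nC   # volume removed per unit charge

print(f"sputter rate = {sputter_rate:.2f} um^3/nC")   # about 0.33 um^3/nC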
Why does it take up to 30 minutes to form the Ga tip?
You have to wait for the gallium to melt and flow down to cover the tungsten tip, as described in the answer to a previous question.
Why is it important to have several apertures when doing imaging/sputtering?
The more apertures you have, the smaller the beam size you can obtain. A smaller beam size yields greater resolution for imaging and produces a more refined beam for precision sputtering.
In this short video, discover how to keep students engaged while learning on their iPads by:
- Enabling teachers to manage iPads in the classroom
- Focusing students on a specific app or webpage
- Sending customized messages to student devices for streamlined classroom transitions
With the power of iPad and Casper Focus, teachers and IT can enhance the digital experience and ensure students get the most out of their classroom time.
As the apparel industry is aware, the length of the supply chain (from natural fiber, polymer resin or other material, all the way to finished clothing) is quite long. Because of the numerous processing steps involved in garment production, often conducted by different suppliers, the major environmental impacts of production usually occur before the Tier 1 (cut and sew) suppliers of brands and retailers.
Those in the industry working on environmental sustainability, such as the Sustainable Apparel Coalition (SAC), the Outdoor Industry Association (OIA) and The Sustainability Consortium (TSC) have concluded that additional focus needs to be placed on lower-tier suppliers to best improve the environmental performance of the textiles and apparel industry.
One important tool employed by members of these organizations is the assessment of supplier environmental performance through indices. Requesting environmental information from suppliers initiates the engagement process, raises awareness of important issues and signals that customers are concerned about the environmental impact of suppliers. The use of indices allows facilities to identify the areas for greatest improvement, and the scoring system provides a benchmark of sustainability performance to track progress.
Much of the initial supplier improvement work centers on individual facilities completing self-assessments such as the Facility Module of the Higg Index. The Higg Index is a larger sustainability assessment tool organized by the SAC. In addition to supplier facilities, it also evaluates brands and apparel products (footwear to come in Version 2). Currently, the results from the Facility Module assessments are used internally by suppliers and with direct customers, but are not yet intended for external communication.
The Facility Module was closely based on the criteria of the Global Social Compliance Program (GSCP), a program previously developed by leading retailers to improve environmental and social responsibility within their shared global supply chains. It was designed to assess and drive improvement in suppliers to many different industries, and the scope of the program covered 11 different environmental areas of focus.
The Facility Module tailored the questions and criteria of the GSCP to be more specific to the apparel industry, and it focuses on seven environmental areas, which are a subset of the eleven included in GSCP. The Facility Module environmental areas address: environmental management systems, energy use and greenhouse gas emissions, water use, wastewater, emissions to air, waste management, and pollution prevention/hazardous substances.
Some of the general questions textile suppliers will need to answer as part of the Facility Module include:
Do you measure your usage (or emissions) associated with each of the environmental areas?
Do you regularly set and review improvement targets in these areas?
Can you substantiate improvements in these areas?
By going through this process, suppliers and their customers can get a snapshot of where they stand on environmental performance. However, because these questions are being answered by the suppliers themselves, the SAC and others are looking into having the results verified in some way. Verification serves not only to encourage honest responses and identify false ones, but to ensure the accuracy of information and clarify instances where improper scores are simply a result of a misunderstanding by the supplier about the criteria.
As an organization that has conducted audits based on the GSCP program for textile and other facilities, SGS can confirm that major opportunities for improvement indeed exist within most facilities. When suppliers take the next step and initiate plans of improvement in areas where gaps were identified in the self assessment, then both environmental improvement and operational cost savings can be significant.
Indeed, to achieve the highest scores in the Facility Module suppliers need to go beyond minimum regulatory compliance, and actually plan for and work on reducing their impact. This calls for more specific steps and may require a detailed onsite assessment of what improvements should be made and how.
There are a number of approaches textile facilities can take to work on these issues. One way is to work internally, using index results to guide their efforts. The SAC has at least one supplier member that has communicated openly about its internal efforts and success, reporting hundreds of thousands of dollars of annual savings in electricity and water consumption.
Alternatively, companies may choose to seek external support, such as an energy, water and waste audit to identify specific problems, for example water/steam leaks, sub-optimal equipment settings, or improper storage of waste. A third option is one Nike has taken, and that is to work closely with an organization such as bluesign that specializes in textile chemistry and production processes. This approach focuses more on the selection/sourcing of the best chemicals, materials and processes. These efforts in combination with efficiency improvements also greatly reduce environmental impact and cost.
Regardless of the approach, it is in the economic interest of the suppliers, as well as the brands and retailers, to implement these improvements, as cost savings may be shared. Sharing of savings can be incentivized by programs where the brand or retailer provide some or all of the funds for the auditing and/or training that will guide the suppliers in being more cost effective.
Additionally, the environmental benefits from these efforts will reach even further, being felt directly in the countries where these facilities operate, and indirectly by the consumers around the world who are demanding clothing produced in a more sustainable fashion.
Michael Richardson, P.E., LCA & sustainable design sr. project manager, SGS.
Learn How to Learn
The human brain is an amazing thing. This three-pound organ—more than 75 percent of which is water—can process more information than even the most advanced computer. (At least for now: That could very easily change soon.) It continually modifies itself throughout a person’s life, as neural networks rearrange according to experiences with new external stimuli. And when it comes to these encounters, more is better. They drive the development of the brain: reorganization causes the number of brain cells to grow and enables them to communicate with one another with greater frequency and complexity through a chemical neurotransmitter called acetylcholine (the “car”) and nerve fibers known as dendrites (the “highways”). This phenomenon is referred to in neuroscience circles as plasticity, which relates directly to the twin pillars of learning—cognition and memory.
Everyone is born with the ability to learn, and the brain’s capacity for absorbing new skills and knowledge is at its height roughly between the ages of five to 12. They don’t call them the formative years for nothing: This is when people most easily learn about language, mathematics, the sciences and the performance of various kinds of physical tasks. Most will develop personal and professional interests during this period that will last their entire lives. Hopefully, they’ll also cultivate good learning habits in this time span, so they’ll continue to accumulate new knowledge and enhance their brainpower.
Some people, though, never really learn how to learn and therefore have difficulty taking on new tasks and skills throughout their lives. The human brain is such that it will always gather information—a great deal of it, in fact. But if it isn’t really learned, then random bits of data will simply drift in and out of the mind to little significant effect. To comprehend and retain knowledge that can be meaningfully applied time and again, individuals have to understand how to study something and then place it in a larger mental framework. Here are just a few ways to boost the brain’s learning faculties:
- Teach your brain new things as often as possible: As with the body, the mind needs exercise to develop. Going back to the plasticity concept, when you expose your senses to new situations and settings, your brain cells move around and form a new structure, which makes them grow in number and enhance your mental capacity. Although learning a new programming language or designing a new application would certainly qualify as ways to train the brain, it doesn’t have to be that complicated or involved. Simply using your left hand (or right hand, for you southpaws out there) to write or eat would be sufficient. Also, recreational experiences such as traveling to new places or meeting new people at parties can help you expand your mind—literally!
- Figure out how to handle stress: I was going to title this point “Avoid stress,” but that just doesn’t seem likely for professionals in this industry. Instead, you should work out ways to manage the strain that invariably comes with the territory. Why? Well, continual discharge of large amounts of a stress-related hormone called cortisol can prevent the brain from creating new memories and accessing old ones. Your adrenal glands release adrenaline during brief periods of severe stress, which can be great for dealing with these situations. However, if these conditions are sustained for very long, then cortisol is released into the brain, which damages the hippocampus, the section of the organ that creates new memories and more or less controls learning. In addition, too much cortisol can shut down the brain’s ability to retrieve long-term memories, which helps explain why some people “go blank” during a big test. If you want to learn and retain information, then steer clear of excess stress.
- Learn less and retain more: You want to learn as much as possible, right? Not exactly. Studying voluminous amounts of information can actually interfere with learning, as the brain can only handle so much new data in a given period of time. For example, a study of American and German high schoolers showed that the mathematics textbooks used by the former covered close to twice as many topics. Yet the German students surpassed their American counterparts on math exams. Because they were able to concentrate more of their mental energy on fewer topics, the German high schoolers were able to apply the knowledge much more effectively. Hence, when studying for an exam or trying to learn a new skill, focus first on the most essential topics. Then when you’ve got those down, let your knowledge branch out further into tangential spheres.
Cisco CCENT Introduction to TCP/IP Part I
This chapter will cover the basics of the TCP/IP model and an introduction to IP addressing and subnetting.
Cisco CCENT Introduction to TCP/IP
The Transmission Control Protocol/Internet Protocol (TCP/IP) suite was created by the Department of Defense (DoD) to ensure and preserve data integrity, as well as maintain communications in the event of catastrophic war.
So it follows that if designed and implemented correctly, a TCP/IP network can be a truly dependable and resilient one.
The Internet is built on a TCP/IP network.
Cisco CCENT User Datagram Protocol vs. Transmission Control Protocol
Reliable (Connection Oriented) – TCP is a reliable protocol that resides at the transport layer of the OSI reference model. It retransmits lost data, guaranteeing reliable delivery, and it sequences packets so that segments received out of order can be reassembled correctly. Examples of applications that utilize TCP as a transport are HTTP, e-mail and FTP, just to name a few.
Best Effort (Connectionless) – UDP is a best-effort protocol that resides at the transport layer of the OSI reference model. It has much less overhead than TCP. It does not retransmit packets lost in transit, nor does it provide sequencing to account for packets received out of order. A couple of examples of applications that utilize UDP are Voice over IP and video streaming.
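As a rough illustration of the difference, the short Python sketch below uses the standard socket module; the host names, addresses and ports are placeholders and are not part of the original material.

import socket

# TCP (connection-oriented): the three-way handshake runs inside connect(),
# and the operating system handles retransmission, ordering and flow control.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_sock.connect(("example.com", 80))                   # placeholder host, well-known port 80
tcp_sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
reply = tcp_sock.recv(4096)                             # bytes arrive in order, or not at all
tcp_sock.close()

# UDP (connectionless, best effort): no handshake, no retransmission, no
# sequencing -- each datagram stands alone and may be lost or reordered.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"probe", ("192.0.2.10", 5005))         # placeholder address and port
udp_sock.close()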
Cisco CCENT Transmission Control Protocol (TCP)
Since the upper layers just send a data stream to the protocols at the Transport layer, the Internet layer then routes the segments as packets through an internetwork.
The packets are handed to the receiving host’s Host-to-Host layer protocol, which rebuilds the data stream to hand to the upper-layer applications or protocols.
TCP creates a reliable session by setting up a virtual circuit (TCP connection), which includes acknowledgements, sequence numbers and windowing (flow control). TCP utilizes a three-way handshake to establish the TCP connection. The connection is uniquely identified by the combination of source IP address/port number and destination IP address/port number.
Cisco CCENT TCP Header
Source port – Identifies the TCP source port.
Destination port – Identifies the TCP destination port.
Sequence number – Usually specifies the number assigned to the first byte of data in the current message. On connection establishment, identifies the initial sequence number to be used in the connection.
Acknowledgement number – Contains sequence number of the next byte of data the sender of the packet expects to receive.
Data offset – Indicates the number of 32 bit words in the TCP header.
Reserved – For future use.
Flags – Control information.
Window – Specifies the size of the sender’s receive window or in other words buffer space.
Checksum – Indicates whether the header was damaged in transit.
Urgent pointer – Points to the first urgent data byte in the packet.
Options – Specifies various TCP options.
Data – Contains upper layer information.
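To make the field layout concrete, the Python sketch below unpacks the fixed 20-byte portion of a TCP header with the struct module. The sample values (source port 1025, destination port 80, SYN flag set) are fabricated for illustration and are not taken from a real capture.

import struct

def parse_tcp_header(segment):
    # Unpack the fixed 20-byte portion of a TCP header (RFC 793 layout).
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) & 0xF    # number of 32-bit words in the header
    flags = offset_flags & 0x01FF               # control bits (SYN, ACK, FIN, ...)
    return {
        "source_port": src_port, "destination_port": dst_port,
        "sequence": seq, "acknowledgement": ack,
        "data_offset_words": data_offset,
        "syn": bool(flags & 0x02), "ack_flag": bool(flags & 0x10), "fin": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_pointer": urgent,
    }

# A hand-built example segment: source port 1025, destination port 80 (HTTP),
# SYN flag set, 5-word (20-byte) header, and a 65,535-byte receive window.
example = struct.pack("!HHIIHHHH", 1025, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(example))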
Cisco CCENT TCP Port Numbers
Example port numbers:
HTTP (80), HTTPS (443) Telnet (23), FTP (21), SMTP (25): TCP
TFTP (69), SNMP(161): UDP
Originating-source port numbers are dynamically assigned by the source host and will equal some number starting at 1024. 1023 and below are defined in RFC 1700, which discusses what are called well-known port numbers.
Virtual circuits that don’t use an application with a well-known port number are assigned port numbers randomly from a specific range instead. These port numbers identify the source and destination host in the TCP segment.
The different port numbers that can be used are:
Numbers below 1024 are considered well-known port numbers and are defined in RFC 1700.
Numbers 1024 and above are used by the upper layers to set up sessions with other hosts, and by TCP to use as source and destination addresses in the TCP segment.
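A quick way to see this split in practice is the short Python sketch below: the server binds to an administrator-chosen listening port, while the client's source port is assigned dynamically by the operating system. Port 8080 is used here only to avoid needing root; a true well-known port such as 80 would require elevated privileges.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))      # fixed listening port chosen by the administrator
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))
print("dynamically assigned source port:", client.getsockname()[1])

client.close()
server.close()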
Cisco CCENT Setting Up A Reliable Session (Virtual Circuit)
In reliable transport operation, one device first establishes a connection-oriented session with its peer system. This is called a call setup, or a three-way handshake.
Data is then transferred, and when finished, a call termination takes place to tear down the virtual circuit.
TCP uses a three-way handshake to establish a connection. The TCP three-way handshake is described in detail on the following slide.
Cisco CCENT TCP Connection Establishment
Shown in the slide is the TCP three way handshake used in establishing all TCP connections. The important thing here is the bits that are set with each packet (i.e. SYN, SYN+ACK, ACK).
As depicted, to establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the three-way (or 3-step) handshake occurs:
1. The active open is performed by the client sending a SYN to the server. It sets the segment’s sequence number to a random value A.
2. In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number (A + 1), and the sequence number that the server chooses for the packet is another random number, B.
3. Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgement value i.e. A + 1, and the acknowledgement number is set to one more than the received sequence number i.e. B + 1.
At this point, both the client and server have received an acknowledgment of the connection.
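The sequence- and acknowledgement-number arithmetic in those three steps can be sketched in a few lines of Python; this is purely a simulation of the numbers exchanged (A, A + 1, B, B + 1), not real packets on the wire.

import random

client_isn = random.randrange(2**32)          # step 1: SYN, seq = A
syn = {"flags": "SYN", "seq": client_isn}

server_isn = random.randrange(2**32)          # step 2: SYN-ACK, seq = B, ack = A + 1
syn_ack = {"flags": "SYN+ACK", "seq": server_isn, "ack": syn["seq"] + 1}

ack = {"flags": "ACK",                        # step 3: ACK, seq = A + 1, ack = B + 1
       "seq": syn_ack["ack"],
       "ack": syn_ack["seq"] + 1}

print(syn, syn_ack, ack, sep="\n")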
Cisco CCENT TCP Connection Termination
Shown in the slide is the modified TCP three-way handshake used in gracefully terminating all TCP connections. The important thing here is the bits that are set with each packet (i.e. FIN, ACK, FIN+ACK).
Cisco CCENT TCP Flow Control
Data integrity is ensured at the Transport layer by maintaining flow control and by allowing users to request reliable data transport between systems.
Flow control prevents a sending host on one side of the connection from overflowing the buffers in the receiving host—an event that can result in lost data.
Random Early Detection (RED) is a congestion avoidance mechanism that takes advantage of TCP’s congestion control mechanism.
Positive Acknowledgment and Retransmission (PAR) protocol consists of a sender, a receiver, and two unreliable communication channels for messages and acknowledgements.
A new message is sent only when the preceding one has been acknowledged. The sender detects the loss of a message (or acknowledgement) by using a timeout.
Cisco CCENT Positive ACK with Retransmission
Each TCP packet sent is ultimately acknowledged. If it is not acknowledged, it is retransmitted after a specified period of time.
Cisco CCENT Positive ACK with Retransmission
The slide above shows a packet that is sent but for which the sender does not receive an acknowledgement before the timer expires, so the packet is retransmitted.
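A toy Python simulation of the positive-acknowledgement-with-retransmission idea is shown below; the loss probability and retry limit are made-up values for illustration only.

import random

def send_with_par(message, loss_probability=0.3, max_tries=5):
    # Stop-and-wait: send one segment, wait for the ACK, retransmit on timeout.
    for attempt in range(1, max_tries + 1):
        acked = random.random() > loss_probability   # pretend channel: ACK may be lost
        if acked:
            print(f"attempt {attempt}: ACK received for {message}")
            return True
        print(f"attempt {attempt}: timer expired, retransmitting {message}")
    return False

send_with_par("segment-1")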
Cisco CCENT TCP Sliding Window
Buffers are used at each end of the TCP connection to speed up data flow when the network is busy. Flow Control is managed using the concept of a Sliding Window. A Window is the maximum number of unacknowledged bytes that are allowed in any one transmission sequence, or to put it another way, it is the range of sequence numbers across the whole chunk of data that the receiver (the sender of the window size) is prepared to accept in its buffer. The receiver specifies the current Receive Window size in every packet sent to the sender. The sender can send up to this amount of data before it has to wait for an update on the Receive Window size from the receiver. The sender has to buffer all its own sent data until it receives ACKs for that data. The Send Window size is determined by whatever is the smallest between the Receive Window and the sender’s buffer. When TCP transmits a segment, it places a copy of the data in a retransmission queue and starts a timer. If an acknowledgment is not received for that segment (or a part of that segment) before the timer runs out, then the segment (or the part of the segment that was not acknowledged) is retransmitted.
Cisco CCENT TCP Sliding Window
TCP Sliding Window Operation:
1. The current sequence number of the TCP sender is y.
2. The TCP receiver specifies the current negotiated window size x in every packet. This is often specified by the operating system or the application; otherwise it starts at 536 bytes.
3. The TCP sender sends a datagram with the number of data bytes equal to the receiver’s window size x and waits for an ACK from the receiver. The window size can be many thousands of bytes!
4. The receiver sends an ACK with the value y + x i.e. acknowledging that the last x bytes have been received OK and the receiver is expecting another transmission of bytes starting at byte y + x.
5. After a successful receipt, the window size increases by an additional x; this is called the Slow Start for new connections.
6. The sender sends another datagram with 2x bytes, then 3x bytes and so on up to the MSS as indicated in the TCP Options.
7. If the receiver has a full buffer, the window size is reduced to zero. In this state, the window is said to be Frozen and the sender cannot send any more bytes until it receives a datagram from the receiver with a window size greater than 0.
8. If the data is not received, as determined by the timer that is set as soon as data is sent and runs until receipt of an ACK, then the window size is cut by half. Failure could be due to congestion or faults on the media.
9. On the next successful transmission, the slow ramp up starts again.
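The numbered behaviour above can be condensed into a small, idealised Python sketch of a sender constrained by the receiver's advertised window; the window size and data length are illustrative, and loss, slow start and window updates are deliberately omitted.

def sliding_window_send(data, recv_window):
    # Send at most recv_window unacknowledged bytes, then wait for the ACK
    # before sliding the window forward.
    next_byte = 0
    while next_byte < len(data):
        chunk = data[next_byte:next_byte + recv_window]
        print(f"send bytes {next_byte}..{next_byte + len(chunk) - 1}")
        ack = next_byte + len(chunk)      # receiver ACKs y + x: the next byte it expects
        print(f"  ACK received for byte {ack}")
        next_byte = ack                   # window slides forward

sliding_window_send(b"x" * 2000, recv_window=536)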
Cisco CCENT TCP Reliable Session
A reliable session is described as follows:
A Virtual Circuit is set up using port numbers
Sequencing numbers each segment
Flow control is used to stop the receiving host from overflowing its buffers
Acknowledgments are used | <urn:uuid:ae1da5a3-1200-4e71-ab89-ce26f57831c7> | CC-MAIN-2017-04 | https://www.certificationkits.com/cisco-certification/ccent-640-822-icnd1-exam-study-guide/cisco-ccent-icnd1-640-822-exam-certification-guide/cisco-ccent-icnd1-tcpip-part-i/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00455-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912407 | 2,195 | 3.421875 | 3 |
What is a CNAME?
A DNS CNAME record, sometimes referred to as an alias record, is used to map an alias to a hostname, or more specifically map an alias to a canonical name.
A typical use case for CNAME records is where multiple services, e.g. ftp.foo.com and www.foo.com, need to be mapped to one underlying system, e.g. service123.foo.com. In this situation CNAMEs could be employed as follows:
- DNS CNAME record maps ftp.foo.com to service123.foo.com
- DNS CNAME record maps www.foo.com to service123.foo.com
- DNS A record maps service123.foo.com to its IP address
In this example if the IP address of service123.foo.com changes, only the DNS A record for service123.foo.com needs to be updated.
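On most systems the effect of such a CNAME chain can be observed from code as well as from dig or nslookup; the Python sketch below uses the standard library resolver call with the example names above (which will not actually resolve outside this hypothetical environment).

import socket

# gethostbyname_ex() returns (canonical_name, alias_list, ip_addresses).
# For an alias such as www.foo.com the canonical name would come back as
# service123.foo.com if the CNAME record described above is in place.
canonical, aliases, addresses = socket.gethostbyname_ex("www.foo.com")
print("canonical name:", canonical)
print("aliases:", aliases)
print("addresses:", addresses)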
CNAMEs and XenDesktop
Prior to Citrix XenDesktop 7.0, a Delivery Controller (DDC) name could, by default, be specified via a CNAME record rather than a canonical name. From XenDesktop 7.0 onwards Citrix deprecated CNAME support in favour of the “Delivery Controller auto-update” feature that was introduced in that release (see Citrix article CTX137960).
CNAMEs and the Linux VDA
The ability to specify DDC names via CNAME record is still very popular with customers, so this feature has been added to version 1.1 of the Citrix Linux Virtual Desktop Agent (VDA).
Provided that CNAME support has been enabled, the Linux VDA will attempt to locate a DNS CNAME record for each configured DDC name. For each CNAME record that is found, the associated canonical name will be added to the list of DDCs to which the Linux VDA can register. If a CNAME record does not exist for a configured DDC name, then that name will be used as is in the registration process.
For example, consider the following environment:
- CNAME support is enabled in the Linux VDA
- Canonical name of the DDC is w2k12-xd76-123.central.mycorp.net
- A DNS CNAME record aliases ddc1.central.mycorp.net to w2k12-xd76-123.central.mycorp.net
In this environment if I configure the Linux VDA with a DDC name of ddc1.central.mycorp.net, the Linux VDA will implicitly register with w2k12-xd76-123.central.mycorp.net.
Regardless of whether or not CNAME support is enabled, I can also configure the Linux VDA with a DDC name of w2k12-xd76-123.central.mycorp.net.
Enabling CNAME Support
CNAME support is disabled by default. It can be enabled in one of two ways:
- During the post install configuration phase via the /usr/local/sbin/ctxsetup.sh configuration script.
- At any time after the post install configuration phase via the /usr/local/bin/ctxreg tool.
Enabling CNAME Support via the ctxsetup.sh Configuration Script
The configuration script may be run manually with prompting or automatically with pre-configured responses. To get help on the script run:
sudo /usr/local/sbin/ctxsetup.sh --help
When the configuration script is run manually, answering “Y” to the question “Allow DDC names to be specified via CNAMEs?” will enable CNAME support. Here is an example of the answers I would supply if I wanted to use a DNS alias of ddc1.central.mycorp.net in an environment where Active Directory integration is supplied via WinBind.
$ sudo /usr/local/sbin/ctxsetup.sh
Gathering information...
Checking CTX_XDL_SUPPORT_DDC_AS_CNAME.
CTX_XDL_SUPPORT_DDC_AS_CNAME is not set.
Allow DDC names to be specified via CNAMEs? (y/n) [n]: Y
Checking CTX_XDL_DDC_LIST.
CTX_XDL_DDC_LIST is not set.
Please provide the FQDN of at least one DDC: ddc1.central.mycorp.net
Checking CTX_XDL_VDA_PORT.
CTX_XDL_VDA_PORT is not set.
Enter the TCP/IP port the Virtual Delivery Agent service should use to register with the Delivery Controller :
Checking CTX_XDL_REGISTER_SERVICE.
CTX_XDL_REGISTER_SERVICE is not set.
Register service so that XDL starts on boot? (y/n) [y]:
Checking CTX_XDL_ADD_FIREWALL_RULES.
CTX_XDL_ADD_FIREWALL_RULES is not set.
Add firewall exceptions to allow incoming XDL connections? (y/n) [y]:
Checking CTX_XDL_AD_INTEGRATION.
CTX_XDL_AD_INTEGRATION is not set.
What AD integration tool does this system use?
1: Winbind
2: Quest
3: Centrify
Choose from the above options (1-3) :
Checking CTX_XDL_START_SERVICE.
CTX_XDL_START_SERVICE is not set.
Start XDL service once configuration is complete? (y/n) [y]:
To enable CNAME support using pre-configured script responses set the environment variable CTX_XDL_SUPPORT_DDC_AS_CNAME to “Y” prior to running the configuration script. Here’s the same example.
export CTX_XDL_SUPPORT_DDC_AS_CNAME=Y
export CTX_XDL_DDC_LIST=ddc1.central.mycorp.net
export CTX_XDL_REGISTER_SERVICE=Y
export CTX_XDL_ADD_FIREWALL_RULES=Y
export CTX_XDL_AD_INTEGRATION=1
export CTX_XDL_START_SERVICE=Y
sudo -E /usr/local/sbin/ctxsetup.sh
Enabling CNAME Support via the ctxreg Tool
To enable via the ctxreg tool, run:
sudo /usr/local/bin/ctxreg update \
  -k "HKLM/Software/Citrix/VirtualDesktopAgent" \
  -v "UseCnameLookup" \
  -d "1"
To disable via the ctxreg tool, run:
sudo /usr/local/bin/ctxreg update \
  -k "HKLM/Software/Citrix/VirtualDesktopAgent" \
  -v "UseCnameLookup" \
  -d "0"
Note that if CNAME support is enabled or disabled via the ctxreg tool, the Linux Broker Agent service (ctxvda) must be restarted for the change to take effect. To restart run:
sudo /sbin/service ctxvda restart
As of Linux VDA v1.1, a DDC name may be specified via a DNS alias (CNAME record). Using this feature can reduce a network administrator’s burden by minimizing the number of DNS records that need to be updated when the IP address of a server that is hosting the DDC is changed.
Enabling support for CNAME records within the Linux VDA can be as simple as answering “Y” in response to the question “Allow DDC names to be specified via CNAMEs?” when the Linux VDA is first configured.
To read more from the Linux Virtual Desktop Team, please refer to the Linux Virtual Desktop Team blog here. | <urn:uuid:90beb7e3-4ccc-49e9-8cb9-3cec96b1e4a4> | CC-MAIN-2017-04 | https://www.citrix.com/blogs/2015/11/04/ddc-names-and-cname-support/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00363-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.754951 | 1,703 | 3.234375 | 3 |
This whitepaper is part of a three-part series covering a wide breadth of topics on passwords, security, next-generation authentication, and plenty more. In this installment, we look at the modern two-fold weaknesses of passwords and easy methods to fix these problems. In the next and last installment we look at the innovation and evolution of passwords and authentication mechanisms.
In the previous article Passwords vs. Pass Phrases - An Ideological Divide we looked at several factors that weaken password-based authentication security, namely on the side of the end-user. The concept of a password in and of itself is inherently flawed, and many of the surrounding security or enforcement strategies are equally flawed and antiquated. We – being both the content providers and end users alike – operate on a password ideology that is decades old and utilizes some principles no one in the security industry can reasonably justify anymore. (Maximum password length, anyone?) But given the progressive nature of the Internet and the unforgiving speed at which everything changes therein, this is something we can easily change.
Password-based authentication need not be such an archaic pillar of security any longer. Indeed, as we previously went over, content providers must inspire an ideology of passphrases, and end users must deeply understand and implement this concept. However, a good security engineer worth his weight in firewall appliances knows that a proper and functional security posture of a well-built and maintained system requires multiple layers of security. A modernized approach to password ideology is only one of several necessary steps for a highly-secured system. Next, content providers must ensure that the underlying technology can survive a data breach when – not if – it happens.
Authentication – Why it is Important to Protect Every Bit of Data
Unless you are a developer or a systems administrator, not a whole lot of thought goes into a common password-based authentication mechanism. You enter your username or email, your password often masked by black dots, a button click starts some magical voodoo that happens behind the curtain, and voila! You are now logged in and your session is validated for some predetermined length of time. But as you, our well-learned reader, certainly will know, far more goes into an authentication system.
When an authentication occurs, the user's supplied data is submitted as-is, most often in plaintext form. This presents an extremely important, yet often overlooked challenge to developers and administrators: securing the user's data before it even makes it to you. More often than not, you will find many large-scale organizations that use plaintext, unprotected authentication systems. A simple glance at the address bar shows many do not even use SSL – or TLS, as it should properly be called – on their authentication systems. In fact, some smaller self-hosted online stores may know such security is both wise and required for PCI compliance – a topic we have previously discussed at length – but they will for some inexplicable reason completely gloss over end user authentication. Logically, they should ask themselves: Why would a hacker desire only credit card details, but never any login information?
The First Layer: A Secured Line
Consider you run a website that caters to students attending a higher-education institution, such as a university. As has become commonplace among many businesses and organizations, college campuses often provide free WiFi internet access, usually restricted to their students and staff only. Oftentimes these WiFi connections are provided in two varieties: encrypted and unencrypted. The encrypted option usually requires some level of setup on your local computer before it works properly, which is why many universities offer the open, unencrypted option for those who either cannot make it work properly or need to obtain the instructions for doing so. Unfortunately, once students have connected to the unencrypted internet access, many of them are going to mindlessly forego the encrypted path and just keep surfing.
Now say some black hat hacker in the student lobby of whatever university is your largest visitor has a wireless network auditing tool such as WiFi Pineapple. This device allows him to mimic the WiFi signal the university provides, inspect the traffic, and pass it along potentially compromised. He quietly sits and sniffs all the unencrypted traffic in that lobby and passes it along, those students being blissfully unaware all of their unencrypted traffic is plaintext treasure for this hacker. He accomplishes this using Firesheep, a Firefox plugin that allowed hackers to capture other users' WiFi network traffic from within Firefox (prompting Facebook, Gmail, and many others to employ SSL-only connections). If your hypothetical website for students is running an authentication mechanism unencrypted – no end-to-end TLS certificate handshake of any sort – you now have exposed your end users at that campus to potential sniffing attacks made as simple as a dongle and a browser addon. It may not seem that important, but what if you had the next Facebook (which started on college campuses) and were taken down by poor authentication measures? That billion dollar dream is now gone.
It is critically important that the security of password-based authentication start right at the moment an end user arrives at your website. Before they even begin to provide any data to your servers, the channel between you and them must be secure. This is almost always done using an end-to-end TLS certificate (they're commonly called "SSL", but that is actually a misnomer). Trusted TLS certificates can be easily obtained for free and are accepted by most all browsers and other systems that honor such security certificates. However, while end-to-end traffic security is critical and crucial, there are still more layers to a reasonably secured infrastructure.
The Second Layer: Secured Storage
In order to ensure a valid authentication occurs, the system the end-user is authenticating into must compare the challenge password with a previously established comparison. This is often stored in some secured format. (Of course, some organizations choose to store all passwords in plaintext form – not a good idea, obviously – but we will get more into that later.) Typically this storage security is completed using a mathematical algorithm called a cryptographic hash function, a formula that takes in an arbitrary length password and returns a fixed-size string in the form of a password hash. For example, if we take a lesson from the previous article in this series and generate a passphrase – This is a password. – then we are left with a password hash of 07997f833c2d709d2e5fcd7666858d8c.
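The comparison hash described above can be produced in a couple of lines of Python; MD5 is used here only because it is what the example uses, not because it is a recommended choice.

import hashlib

passphrase = "This is a password."
digest = hashlib.md5(passphrase.encode("utf-8")).hexdigest()
print(digest)   # a 32-character hexadecimal password hash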
Commonly, web and web-like password-based authentication mechanisms utilize simple hashing functions, such as MD5 (used in the previous example) or SHA1, even after both have been proven considerably weak for many years. This is likely due to the simplicity of hash creation and comparison with both functions, requiring only hashing the plaintext password supplied by the user and performing a direct string comparison to the stored hash in the user table. In fact, this manner of hash comparison can be, and often mistakenly is, completed within the database query that fetches the stored hash itself. It seems secure enough, and it is incredibly easy to implement in code, so why bother with anything more complex? Indeed, that is apparently the common and acceptable approach to password storage security, but unfortunately it is a dangerously lazy one, too.
What is Wrong With Simple Hashing?
First, when generating a password hash, you absolutely want each hash to be unique. No two differing passwords should ever generate the same password hash. However, both MD5 and SHA1 have been found to have an uncomforting likelihood of two passwords generating the same password hash – known as a hash collision. MD5 can have no more than 3.4 x 10^38 possible unique password strings before a collision will occur. SHA1 even has a probability formula to determine collision likelihood. The fact that these are known severe mathematical flaws with both cryptographic hash functions should be reason enough to abandon use of them with password hashing. However, the extremely minimal modern cost of both functions is truly the most damning element.
In the security industry, a password hash's strength is determined by the cost of the cryptographic hash function itself. The term 'cost,' in this sense, implies the direct difficulty or iteration count of an encryption or hashing algorithm. In terms of difficulty, this can be considered to basically be an exponential curve relative to time for each additional character in a password (essentially as steep as f(x) = 2^x – see Figure 1 below). Cost is also used in terms of how much additional mathematical work is applied to a cryptographic hash function. In a SHA256 cryptographic hash function, for example, the default of 5,000 iterations over the hashing formula implies a cost of repeating and applying the SHA256 mathematical formula 5,000 consecutive times – thus the more iterations, the more 'expensive' (and often, more secure) an encryption or hashing scheme is. However, neither MD5 nor SHA1 in typical web system deployments contain the ability to iterate over any formula for additional security, therefore their cost lies solely in the amount and type of characters the end user types. This is the first among simple hashing functions' many flaws.
Figure 1 - Example of the exponential-like curve of a password's cost in terms of character length versus time
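To make the notion of iteration count concrete, the sketch below uses PBKDF2 from Python's standard library with two different iteration counts; the iteration count is exactly the 'cost' knob that a single round of MD5 or SHA1 lacks (the salt and the counts are illustrative only).

import hashlib, os, time

password = b"This is a password."
salt = os.urandom(16)

for iterations in (5_000, 500_000):
    start = time.perf_counter()
    derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - start
    # Higher iteration counts cost the attacker (and the server) more time per guess.
    print(f"{iterations:>7} iterations -> {derived.hex()[:16]}... in {elapsed:.3f}s")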
For quite some time, it was considered that MD5 and SHA1 were reasonably secure since the technology was markedly limited in terms of brute-force cracking the hash itself. The cost of an MD5 or SHA1 hash was substantial enough to hinder common-day Intel or AMD CPUs at that time from directly deciphering an MD5 or SHA1 hash itself or brute-force guessing at a hash. In 2007, however, nVidia released a C programming library for their Cuda and Tesla series graphics processors. This led to all sorts of new projects being designed for GPU usage, linear algebra being a considerably large one. It was not until 2010 that the most frightening aspect of GPU technology became serious headline news, when researchers at Georgia Tech published findings that GPU technology was extremely successful at password hash cracking. And not just extremely successful, but so much so that all prior conceived notions of cost have been rendered wholly obsolete now.
Various usages of Hashcat – a password hash cracking utility – have shown CPU to GPU comparisons with Radeon GPUs pushing upwards of 90 times faster than top-of-the-line Intel or AMD CPUs in comparative MD5 brute force tests. When you put this into terms of the exponential amount of time it takes to brute force a password hash for each additional character, the results are staggering. Where an MD5 password hash may take 450 years with a higher powered multicore CPU, a modern GPU may be able to do it in 5 years at worst. A 20 year wait on a CPU is less than 2 months on a GPU. Today, however, GPUs fare far better than this. Much of the GPU research data available is around three years old, which is centuries in terms of Moore's Law. This has proven to be a potential nightmare for content providers utilizing standard password storage methods.
Consider that the IGHASHGPU password hash brute forcing software projects the ability to brute force attempt 3.7 billion MD5 hashes or 1.4 billion SHA1 hashes, per second. If you assume a passphrase akin to the xkcd joke of 44 bits (in their example, "correct horse battery staple", a 28-character password), a SHA1-encoded password hash by this measure could conceivably be cracked in a little over an hour, at worst. Simply put, using simple password hashing is a welcome invitation for mass password compromise. As assuming as that statement is, it is quite true.
Password Data Mass Compromises – Even the Mighty Can Fall
Over the past five years, several dozen major organizations, corporations, and even government entities have fallen victim to attackers infiltrating their servers and extracting massive password hash dumps. This has become such a common and recurring event that public projects have begun to appear to document these rapid occurrences and provide a database of the dumps. Many of these victims even had advance warning of the impending attacks, and employed highly skilled teams of security engineers, yet they still could not inhibit their attackers from obtaining password data. Truly, if a hacker (or group thereof) has a strong enough desire to gain entry into your systems, they most likely will eventually find a way in.
In December 2010, Gawker Media—one of the most popular social media blog networks, consisting of a conglomeration of eight different websites—found itself the unfortunate victim of a massive password database compromise. LinkedIn—a very large social media networking website tailored specifically to professional relationships—found itself in the same unfortunate circumstance in June of 2012. The social media gaming giant RockYou found its thirty million users' passwords compromised in December 2009 (this one was unique due to the fact that RockYou stored all of its passwords in plaintext, not cryptographically secured). Just from January through April 2014 alone, over ten million cryptographic password hashes—possibly more—were released to the public from hacks against enormous media behemoths like Comcast, Yahoo!, and AOL. Now, as of June 2014, even eBay has found itself victim of a mass compromise, reporting a whopping potential 145 million compromised password hashes.
Indeed, if a hacker is persistent and skilled enough, they may invariably gain access to their target at some point or another. Even with hundreds of thousands of dollars of equipment, personnel, monitoring, and everything else watching the front gate—which, unarguably, are efficient and necessary tactics big players like Comcast and eBay utilize—eventually someone may be able to break through that hole in the fence that no one is looking at—no one except the attacker, that is. So much focus is put on the common entry point of a website that no one considers to continue layering the security on deeper. If not the actual authentication system itself, then how else are hackers able to gain entry, and why are they able to obtain such large treasure troves?
A Firewall Behind the Firewall: Protect the Data at the Database Itself
When your cryptographic password hash data is stored, it is just as critically important to isolate the hashes as it is to have secure hashes. The purpose of encrypting user passwords is indeed to inhibit the ability of an attacker from learning of your users' passwords and potentially compromising other accounts they hold elsewhere. But as we have seen with the widespread use and failures of MD5 and SHA1, even securing your users' passwords is not enough. Looking past the strength of the password hash used, why should an attacker even have access to the password hashes to begin with?
This all starts primarily with poor database security, which can come from any number of bad (in)security habits: unsanitized user input, dangerous or buggy code, non-segregated data, poor access control lists, and many more. Unsanitized user input – known in the industry as a SQL Injection, a topic we have discussed at great length previously – is the crux of all web security critical failures, especially ones that yield database treasure troves of password hash dumps. Since its inception, the Open Web Application Security Project (OWASP) has assembled a top ten list of web security vulnerabilities. Every year that list has been assembled, SQL injections have made the list. Furthermore, nearly every single compromise of a password hash database in the past several years has been possible at least in part because of SQL injections. We have exhausted this topic before – as have many hundreds of other organizations, corporations, even governments – and yet it still remains consistently the most damaging attack vector.
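The standard defence against this class of attack is to keep user input out of the query text entirely and pass it as bound parameters; below is a minimal illustration using Python's built-in sqlite3 module (the table and column names are hypothetical).

import sqlite3

def find_user(conn, username):
    # The "?" placeholder ensures the username is treated strictly as data,
    # never as SQL, regardless of what the user typed.
    cur = conn.execute(
        "SELECT id, password_hash FROM users WHERE username = ?",
        (username,),
    )
    return cur.fetchone()

# The vulnerable counterpart builds the query by string concatenation, which
# lets input such as  ' OR '1'='1  rewrite the query itself. Do not do that.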
Before we cover SQL injections much further, we must once again and briefly harken back to our two-part series on PCI compliance – a merchant regulatory security standard organized by the major credit card corporations of the world: Visa, MasterCard, American Express, Discover, and Japan Credit Bureau – to revisit some topics that are incredibly important to every aspect of web security. Whether a content provider's data is as simple as RockYou's, or as critical as multi-million dollar banking, the six categories of PCI compliance are highly applicable to nearly any line of business that has a web-facing authentication portal. Of course, some PCI compliance requirements are potentially inapplicable – not every website can restrict data access at a digital or physical level, depending on their hosting scenario – but the core concept still holds valid: restrict and secure the data with multiple layers of security.
Indeed, securing the code that runs the website should be the only step required. However, perhaps it is impractical or infeasible to completely secure the SQL queries in the code used (by one comparison, Drupal has had over 20,000 lines of code committed, WordPress has had over 60,000 lines, and Joomla! has had over 180,000 lines). (The recent HeartBleed bug in the OpenSSL library is an excellent example of software with thousands of lines of code being used without inspection by thousands of users.) Or, it may simply be impossible to do so because the code is encoded, such as with SourceGuardian or ZenCrypt. Even with all these impracticalities, a content provider can still potentially shield against many of these attacks by using layers of firewalls.
Typically this might include some adaptive solution that rides on top of iptables or ipfw (depending if you are using Linux or a BSD variant, respectively), or perhaps a reactive Host Intrusion Detection System (HIDS) such as OSSEC, although these are often more complicated than desired and not exactly purpose-built for these uses. Instead, a content provider may wish to utilize a Web Application Firewall, which is designed specifically for these tasks. While there exist several enterprise-level solutions that are both a WAF and database firewall (sitting between your web application and your database), there are many open-source solutions, such as ModSecurity and IronBee, that perform remarkably well.
That said, a Web Application Firewall is not always secure either, and may still allow a SQL injection or other method of attack to penetrate through. A common theme you may notice in this paper by now is our frequent mention of the word "layers," and for good reason, too. A Web Application Firewall itself is not enough, nor is just securing the code a website runs, nor just monitoring, and so forth. However, when a content provider combines all of these approaches, frequently performs thorough web security audits and penetration scans, and encourages end-users to practice strong and modern security standards, the probability of a mass compromise drops significantly.
Wrap-up: Making Intrusions Fruitless to Attackers
Of course, no one will ever be able to prevent every single type of attack or have 100% assurance that no one may ever gain unauthorized access into their systems. However, a content provider can implement layers of strong security standards to make any such intrusions fruitless for an attacker. Sure, they may be able to deface a website or mess with some content, but if a content provider employs strong layers of security, they may be able to keep the damage limited within that scope, or even less. A highly-secured communication pathway to the user, use of very expensive cryptographic password hash functions, layers of firewalls and data integrity/security checks from user to database and every step in between – all of these and more are critical components to ensuring your systems do not meet the same fate and embarrassment that even the largest organizations have unfortunately suffered.
From The Editor
Today's Internet is comprised of numerous interconnected Internet Service Providers (ISPs), each serving many constituent networks and end users. Just as individual regional and national telephone companies interconnect and exchange traffic and form a global telephone network, the ISPs must arrange for points of interconnection to provide global Internet service. This interconnection mechanism is generally called "peering," and it is the subject of a two part article by Geoff Huston. In Part I, which is included in this issue, he discusses the technical aspects of peering. In Part II, which will follow in our next issue, Mr. Huston continues the examination with a look at the business arrangements (called "settlements") that exist between ISPs, and discusses the future of this rapidly evolving marketplace.
In the early 1990s, concern grew regarding the possible depletion of the IP version 4 address space because of the rapid growth of the Internet. Predictions for when we would literally run out of IP addresses were published. Several proposals for a new version of IP were put forward in the IETF, eventually resulting in IP version 6 or IPv6. At the same time, new technologies were developed that effectively slowed address depletion, most notably Classless Inter-Domain Routing (CIDR) and Network Address Translators (NATs). Today there is still debate as to if and when IPv6 will be deployed in the global Internet, but experimentation and development continues on this protocol. We asked Robert Fink to give us a status report on IPv6.
We've already discussed the historical lack of security in Internet technologies and how security enhancements are being developed for every layer of the protocol stack. This time, Marshall Rose and David Strom examine the state of electronic mail security. We clearly have a way to go before we see "seamless integration" of security systems with today's e-mail clients.
Our first Letter to the Editor is included on page 46. As always, we would love to hear your comments and questions regarding anything you read in this journal. Please contact us at firstname.lastname@example.org
-Ole J. Jacobsen, Editor and Publisher email@example.com . | <urn:uuid:0704502f-c31d-4b86-9f81-27a6f0de9920> | CC-MAIN-2017-04 | http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents/from-editor.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00171-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948963 | 447 | 2.765625 | 3 |
Archives, USGS to co-manage geospatial archive
- By Kathleen Hickey
- Jun 17, 2008
The National Archives and Records Administration and the U.S. Geological Survey have signed an agreement to ensure preservation of and access to Earth imagery and geospatial data currently archived by USGS at its Earth Resources Observation and Science Center in Sioux Falls, S.D.
Under the agreement, the EROS archive will become an affiliated archive within the NARA system. The agencies will work together to ensure that NARA has legal custody and ultimate responsibility for the preservation of the archived imagery and that USGS meets NARA's stringent preservation and access standards.
The records will remain at the EROS Center under the day-to-day control of USGS, which has already created an advanced information management system that gives the public electronic access to historical Earth observation data.
The EROS archive of satellite imagery and aerial photography is the largest civilian archive of such data in the United States. It occupies more than 40,000 square feet and totals nearly 3 petabytes (3,000 terabytes) of electronic data and millions of film frames, said USGS Director Mark Myers.
Included in the records are aerial photographs dating from the 1930s and satellite images dating from the 1960s. Both are essential for documenting geography and understanding climate change, according to USGS.
Researchers in the public and private sectors use the images to understand natural resources, hazards and long-term changes. For example, imagery of Hurricane Katrina's impact on the south-central coast of the United States in 2005 was provided to researchers and agencies responsible for damage assessment and recovery efforts.
EROS imagery was also used to assess the extent of the damage of the 2004 tsunami in southeastern Asia, and researchers use aerial photography and satellite data to evaluate, prevent, and manage recovery from forest and grassland fires. Data and information are accessible and searchable online.
'This agreement' is a guarantee that our nation's collections of aerial and satellite images of the world's land areas will be permanently maintained, preserved and accessible to the public,' said Allen Weinstein, archivist of the United States. 'These records are crucial to scientists and policy-makers around the world in understanding how man and society affect the natural landscape.'
Kathleen Hickey is a freelance writer for GCN. | <urn:uuid:6c279c37-bc48-471b-a332-266ab23c0d6f> | CC-MAIN-2017-04 | https://gcn.com/articles/2008/06/17/archives-usgs-to-comanage-geospatial-archive.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00292-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9182 | 481 | 2.59375 | 3 |
Fiber optic splicing is becoming an increasingly common skill requirement for cabling technicians. Fiber-optic cables may have to be spliced together for many reasons - for example, to produce a link of a particular length, or to repair a damaged cable or connection. A link of 10 km may be installed by splicing several fiber-optic cables together; the installer can then meet the distance requirement without purchasing a new fiber-optic cable. Splices may be required at building entrances, wiring closets, couplers, and virtually any intermediate point between a transmitter and receiver. When splicing fiber optic cable with a fusion splicer, the greatest concern is preserving the quality of the signal.
A careful touch is necessary to splice fiber optic cable, because the glass fibers are encased in fiber insulation sealed inside a plastic coating. Unlike copper, the fibers are delicate and are easily broken by applying too much pressure when cutting back the casing while splicing cables to fiber connectors.
The splicing process begins by preparing each fiber end for fusion. Fusion splicing requires that all protective coatings be removed from the end of each fiber. The fiber is then cleaved using the score-and-break method, and each fiber face is cleaved and polished to achieve a good optical finish. Before the connection is made, the end of each fiber must have an even finish that is free of defects such as hackles, lips, and fractures. These defects, along with other impurities and dirt, alter the geometrical propagation patterns of light and cause scattering. The quality of each fiber end is inspected using a microscope. In fusion splicing, splice loss is a direct function of the angles and quality of the two fiber-end faces.
Fusion splicing is one method of splicing cables. The basic fusion-splicing apparatus consists of two fixtures on which the fibers are mounted, plus two electrodes. An inspection microscope assists in placing the prepared fiber ends into the fusion-splicing apparatus. The fibers are positioned in the apparatus, aligned, and then fused together. Early fiber optic fusion splicers used nichrome wire as the heater to melt or fuse fibers together; today the heater is almost always an electric arc that softens the two butted fiber ends and permits the fibers to be fused together.
In mechanical splicing, mechanical splices are simply alignment devices, designed to hold the two fiber ends in a precisely aligned position and thus enable light to pass from one fiber into the other. Mechanical splicing is performed at an optical junction where the fibers are precisely aligned and held in place by a self-contained assembly rather than a permanent bond. This method aligns the two fiber ends to a common centerline, aligning their cores so that light can pass from one fiber to another. It is typically accomplished with a portable workstation used to prepare each fiber end; that preparation includes stripping a thin layer of plastic coating from the fiber before splicing.
Connecting two fiber-optic cables requires precise alignment of the mated fiber cores, or spots in a single-mode fiber-optic cable. This is required so that virtually all the light is coupled from one fiber-optic cable across the junction to the other fiber-optic cable. Actual contact between the fiber-optic cables is not even mandatory.
Splices can also be used as optical attenuators if there is a requirement to attenuate a high-powered signal. Splice losses of up to 10.0 dB can be programmed and inserted in the cable if desired; in this way, the splice can act as an in-line attenuator with the characteristic non-reflectance of a fusion splice. Typical fusion-splice losses can be estimated at 0.02 dB for loss-budget calculation purposes. Mechanical splices are easily implemented in the field, require no tooling, and give losses of approximately 0.5 to 0.75 dB.
FiberStore provides a comprehensive range of hand tools, network tool kits and consumables for the installation and maintenance of LAN, fibre optic and copper networks. Whether you require a punchdown tool, RJ45 / Cat 5 Crimping tool, fiber splicer or automatic wire stripper or a complete network tool kit, FiberStore has the right tools for your needs. We provide fully automatic fibre optic fusion splicers from Fujikura for multimode and singlemode optical fibre cables, ensuring the best fibre termination possible whether an expert or a novice. | <urn:uuid:bf29e5e0-7b39-4aa6-b1cb-04098867b023> | CC-MAIN-2017-04 | http://www.fs.com/blog/more-and-more-important-of-fiber-optic-splicing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00318-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919325 | 953 | 3.1875 | 3 |
With Moore’s law type advances showing signs of stagnation and decline, researchers around the world are hard at work innovating techniques to improve the speed of computing. A research duo from Northeastern University has come up with a breakthrough that could lay the groundwork for a new generation of fast, powerful computing devices.
Assistant professor of physics Swastik Kar and associate professor of mechanical and industrial engineering Yung Joon Jung have created a device that uses optical and electronic signals to perform basic switching operations more efficiently.
At the most essential level, computing is comprised of a series of on-off switches. It takes billions of these operations to carry out even the simplest of computing tasks, so making this switching process even the tiniest bit faster can have a strong net positive effect on overall efficiency and productivity.
“People believe that the best computer would be one in which the processing is done using electrical signals and the signal transfer is done by optics,” Kar said. “It would save precious nanoseconds.”
The partnership began a couple years ago. Kar's speciality was graphene, an emerging carbon-based material prized for its strength and conductivity, and Jung's research centered on carbon nanotubes, nanometer-sized cylinders of carbon atoms.
Early on, the research team made a startling discovery. They found that by taking the metal out of traditional nanotube photodiode devices and replacing it with carbon, light-induced electrical currents rose much more sharply. "That sharp rise helps us design devices that can be turned on and off using light," Kar said.
To better understand the curious phenomenon, the Northeastern team collaborated with Young-Kyun Kwon, a professor from Kyung Hee University, in Seoul, Korea, on the computational modeling. They then got to work building logic circuits that could be manipulated both electrically and optically. The resulting prototype marks the first time that electronic and optical properties have been integrated onto a single electronic chip.
“What we’ve done is built a tiny device where one input can be a voltage and the other input can be light,” Kar told IEEE Spectrum.
The team actually developed three devices: an AND Gate, which requires both an electronic and an optical input to generate an output; and an OR Gate, which can generate an output if either sensor is engaged. The third device works like the front-end of a camera sensor and consists of an array of 250,000 photoactive elements assembled over a centimeter-scale wafer. It functions as a four-bit digital-to-analog converter.
The nanotubes are created in a solution and placed on a patterned silicon/silicon oxide substrate, which should make the technology compatible with existing CMOS processes, according to Jung.
By using light for data movement and some of the logic operations, the technique could pave the way for a new generation of faster computing chips, according to the researchers. Computers process billions of steps each second, so improving their capability begins with the “demonstration of improving just one,” notes Kar.
A paper describing their research appears in a recent edition of journal Nature Photonics. | <urn:uuid:dd29f878-9246-418d-b65b-1ea4ce502ea0> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/02/27/computing-speed-light/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00136-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934353 | 661 | 3.59375 | 4 |
Canada’s Role in WW1
Clara Cerdeira Correia Carballal, Block 4
Causes of WW1
Militarism- Denoted a rise in military expenditure, an increase in military and naval forces, more influence of the military men upon the policies of the civilian government, and a preference for force as a solution to problems.
Alliances – An alliance is an agreement made between two or more countries to give each other help if it is needed. When an alliance is signed, those countries become known as Allies.
Nationalism- Nationalism (as defined by dictionary.com) is "a sentiment based on common cultural characteristics that binds a population and often produces a policy of national independence or separatism." Nationalism, although it can serve as a unifying force within a country, can also cause intense competition among nations.
Imperialism - Imperialism is a system where a powerful nation controls and exploits one or more colonies. In most cases the imperial nation, euphemistically referred to as the ‘mother country’, establishes control over its colonies by coercion – for example, through infiltration and annexation, political pressure, war and military conquest.
Assassination - The assassination of Austria's Archduke Franz Ferdinand set into motion a series of international events that led to World War I.
Triple Entente- association between Great Britain, France, and Russia, the nucleus of the Allied Powers in World War I. It developed from the Franco-Russian Alliance that gradually took shape and was formalized in 1894, the Anglo-French Entente Cordiale of 1904, and an Anglo-Russian agreement of 1907, which brought the Triple Entente into existence.
Triple Alliance- was a secret agreement between Germany, Austria-Hungary, and Italy formed on 20 May 1882 and renewed periodically until World War I. Germany and Austria-Hungary had been closely allied since 1879. Italy sought support against France shortly after it lost North African ambitions to the French. Each member promised mutual support in the event of an attack by any other great power. The treaty provided that Germany and Austria-Hungary were to assist Italy if it was attacked by France without provocation. In turn, Italy would assist Germany if attacked by France. In the event of a war between Austria-Hungary and Russia, Italy promised to remain neutral.
Trench warfare is a type of land warfare using occupied fighting lines consisting largely of trenches, in which troops are significantly protected from the enemy's small arms fire and are substantially sheltered from artillery. The most famous use of trench warfare is the Western Front in World War I. It has become a byword for stalemate, attrition, sieges and futility in conflict.
Trench warfare occurred when a revolution in firepower was not matched by similar advances in mobility, resulting in a grueling form of warfare in which the defender held the advantage. On the Western Front in 1914–18, both sides constructed elaborate trench and dugout systems opposing each other along a front, protected from assault by barbed wire, mines, and other obstacles. The area between opposing trench lines (known as "no man's land") was fully exposed to artillery fire from both sides. Attacks, even if successful, often sustained severe casualties.
With the development of armoured warfare, emphasis on trench warfare has declined, but still occurs where battle-lines become static.
Rifle - The main weapon used by British soldiers in the trenches was the bolt-action rifle. 15 rounds could be fired in a minute and a person 1,400 metres away could be killed.
Machine guns-needed 4-6 men to work them and had to be on a flat surface. They had the fire-power of 100 guns. Large field guns had a long range and could deliver devastating blows to the enemy but needed up to 12 men to work them. They fired shells which exploded on impact .
Gas- The German army were the first to use chlorine gas at the battle of Ypres in 1915. Chlorine gas causes a burning sensation in the throat and chest pains. Death is painful - you suffocate! The problem with chlorine gas is that the weather must be right. If the wind is in the wrong direction it could end up killing your own troops rather than the enemy.
Zeppelin- The Zeppelin, also known as blimp, was an airship that was used during the early part of the war in bombing raids by the Germans. They carried machine guns and bombs. However, they were abandoned because they were easy to shoot out of the sky.
Tanks were used for the first time in the First World War at the Battle of the Somme. They were developed to cope with the conditions on the Western Front. The first tank was called 'Little Willie' and needed a crew of 3. Its maximum speed was 3mph and it could not cross trenches.
Planes- were also used for the first time. At first they were used to deliver bombs and for spying work but became fighter aircraft armed with machine guns, bombs and sometimes cannons. Fights between two planes in the sky became known as 'dogfights'.
Weapons of war
During World War One, propaganda was employed on a global scale. Unlike previous wars, this was the first total war in which whole nations and not just professional armies were locked in mortal combat. This and subsequent modern wars required propaganda to mobilise hatred against the enemy; to convince the population of the justness of the cause; to enlist the active support and cooperation of neutral countries; and to strengthen the support of allies.
The homefront- roles of women
Working class women also took in paid 'piece work' at home, as they had for generations. Carrying out piece work meant that women were paid depending on how much they produced. They undertook tasks such as washing, ironing, sewing, lace-making and assembling toys or boxes. Women also worked hard as housewives, taking care of their families and homes. Women carried out many jobs in the countryside, supporting men on farms by milking cows and helping with the harvest. They also often kept chickens and sometimes geese.
Jobs outside the home
Many employers refused to let married women work for them, so single and widowed women were more likely to have a job outside the home. Women worked in a variety of roles but their jobs were less manual than those carried out by men. Some women worked as school teachers or as governesses, teaching children at home. Well-off families would employ a nursemaid to care for their babies, a nanny to look after children and a governess to teach them until the boys went away to boarding school. Girls usually continued to be educated at home in these types of families.
A British recruitment poster urging women to work in the munitions factories
Some women worked as nurses before the war and a very small number worked as doctors. Many more women began to train and work in medicine and education during the war.
In the early 1900s, there was a rise in the number of women taking jobs in offices. Their duties were mainly limited to small administrative tasks. Other women worked in cotton factories where some of the roles involved labour-intensive work. Women prepared the cotton fibre for spinning and worked on weaving machines. The larger machines were thought to be too heavy for women to operate and were mostly worked by men.
Life for women changed dramatically during the war because so many men were away fighting. Many women took paid jobs outside the home for the first time. By 1918 there were five million women working in Britain. The money they earned contributed to the family's budget and earning money made working women more independent. Many enjoyed the companionship of working in a factory, office or shop rather than doing 'piece work' at home.
The federal government decided in 1917 to conscript young men for overseas military service. Voluntary recruitment was failing to maintain troop numbers, and Prime Minister Sir Robert Borden believed in the military value, and potential post-war influence, of a strong Canadian contribution to the war.
The Battle of the Somme- also known as the Somme Offensive, was a battle of the First World War fought by the armies of the British and French empires against the German Empire. It took place between 1 July and 18 November 1916 on both sides of the upper reaches of the River Somme in France. The battle was intended to hasten a victory for the Allies and was the largest battle of the First World War on the Western Front. More than one million men were wounded or killed, making it one of the bloodiest battles in human history.
The Battle of Ypres- refers to the fighting during the First World War in the general area of the Belgian city of Ypres, where the German and the Allied armies (Belgian, French, British Expeditionary Force and Canadian Expeditionary Force) clashed. There were hundreds of thousands of casualties. The term "Battle of Ypres" could mean all the fighting that occurred in that area, but it could also refer more specifically to any one of five battles which have been separately identified and named (and which themselves can be subdivided into smaller named battles).
The Battle of Vimy Ridge- was a military engagement fought primarily as part of the Battle of Arras, in the Nord-Pas-de-Calais region of France, during the First World War. The main combatants were the Canadian Corps, of four divisions, against three divisions of the German Sixth Army. The battle, which took place from 9 to 12 April 1917, was part of the opening phase of the British-led Battle of Arras, a diversionary attack for the French Nivelle Offensive.
Important Canadian Battles
The airplane, regarded by military authorities in 1914 as little more than a novelty, became over the next four years a military necessity. Remarkable technical advances in aerial warfare enabled the aircraft to fulfill ever-expanding functions. In the early stages of the war aircraft were used largely for reconnaissance, to observe enemy troop movements and spot artillery, and to obtain photographs and motion pictures. Then came the bombers and fighters as airmen sought to destroy railroad centres and industrial targets far behind enemy lines, to destroy Zeppelin bases, and to hunt submarines at sea.
The war in the air offered to the airman and to the public a glimpse of the fame and glory once expected of war, at a time when mud and shells turned battlefields into nightmares of horror and revulsion.
The flyer became a new kind of warrior - a chivalric, twentieth century, knight-errant. Men went up in rickety planes with few instruments and no parachutes. The fighter pilot was one of the elite, one of the most daring, and his job was one of the most dangerous. What started out as a hazardous adventure developed into a science of killing. One third of all the fliers died in combat, among them 1,600 Canadians.
Canadian airmen played a particularly significant and brilliant role in the air. No less than 25,000 Canadians served with the British air service as pilots, observers and mechanics, in every theatre of the war. Canadian airmen won more than eight hundred decorations and awards for valour, including three Victoria Crosses. The names of such Canadian flyers as W.A. "Billy" Bishop, W.G. Barker, Raymond Collishaw and A.A. McLeod became household names in Canada, and they left a record of daring and devotion that was famous everywhere.
War in the air
The struggle at sea was chiefly between the British effort to strangle Germany by naval blockade and the German attempt to cut off Britain's source of food and supply by submarine warfare.
Vigilance of the British navy kept most of the German fleet bottled up in home ports, and at the same time British warships freed the seas of German commerce raiders. The rival fleets met only once, in the battle of Jutland off the coast of Denmark. The British suffered heavily in this encounter, but the decisive result was that the German battle fleet never again dared to leave its bases.
Deprived of the use of surface ships Germany increasingly resorted to submarine warfare to bring Britain to her knees. The German U-boat fleet preyed on enemy and often neutral ships, sank merchantmen on sight, and threatened the supply lines on which the survival of the Allies depended. Protests from the United States brought a reluctant promise in 1915 not to sink ships without warning, but this greatly reduced the effectiveness of the submarine as a weapon.
By the end of 1916 the British blockade was beginning to be felt severely in Germany. In January 1917 the Germans, convinced they could starve Britain in five months, prepared to risk the American entry into the war. They resumed unrestricted submarine warfare.
The policy was initially spectacularly effective. Allied shipping losses mounted, reaching a peak in April 1917 of 869,000 tons. However, the submarine campaign did not achieve the expected speedy victory. New anti-submarine devices, together with the Allied adoption of the convoy system, gradually overcame the submarine menace.
On the other hand, by the middle of 1918, the effects of the British blockade were such that Germany could not continue the war for much longer.
War at sea
War ends, Armistice day
Armistice Day is commemorated every year on November 11 to mark the armistice signed between the Allies of World War I and Germany at Compiègne, France, for the cessation of hostilities on the Western Front of World War I, which took effect at eleven o'clock in the morning—the "eleventh hour of the eleventh day of the eleventh month" of 1918. The date was declared a national holiday in many allied nations, and coincides with Remembrance Day and Veterans Day, public holidays.
IBM releases world's smallest movie: atomic-scale memory holds the promise of data storage 100 times greater than current hard disk technology
“A Boy and His Atom” was created by a team at IBM's Almaden Research Center in California.
This brief film doesn't have much of a plot and doesn't garner many laughs, but it's so, so fascinating.
The scientists used a scanning tunneling microscope as their animation tool. The pixels are individual atoms, nudged into place to form a picture. (The Guinness folks have certified this as the smallest movie ever made.)
What's a scanning tunneling microscope, you ask? It's an instrument for imaging surfaces at the atomic level; its development in 1981 earned its inventors, Gerd Binnig and Heinrich Rohrer (at IBM Zurich), the Nobel Prize for Physics in 1986.
In the film, a blocky figure interacts with an individual atom in a variety of ways before the credits, which spell out both “THINK” and “IBM” using individual atoms to form the words. | <urn:uuid:cc6536ed-8569-4637-b942-4c923651b623> | CC-MAIN-2017-04 | https://www.ibm.com/developerworks/community/blogs/869bac74-5fc2-4b94-81a2-6153890e029a/entry/ibm_releases_world_s_smallest_movie_atomic_scale_memory_holds_the_promise_of_data_storage_100_times_greater_than_current_hard_disk_technology?lang=pt_br | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00190-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930635 | 247 | 2.90625 | 3 |
Charge-coupled devices (CCDs) are sensors for recording images in digital cameras. These devices consist of an integrated circuit containing an array of linked or coupled capacitors acting as many small pixels. Light falling on a pixel is converted into a charge, which is then measured by the CCD electronics and represented by a number. The number usually ranges from 0 (no light) to 65,535 (very intense light). Each CCD chip is composed of an array of metal-oxide-semiconductor (MOS) capacitors, and each capacitor is a pixel. When electrical charges are applied to the CCD top plates, they can be stored within the chip structure. Digital pulses applied to the top plates can then shift these charges from pixel to pixel, reading out a picture built from the charge collected at each pixel. These photoelectronic image sensors have made digital photography possible and revolutionized astronomy, space science, and consumer electronics. The CCD is a crucial component of fax machines, digital cameras, and scanners.
CCDs have various applications and are used in digital cameras, optical scanners, and video cameras as light-sensing devices. CCD cameras are widely used in astrophotography and are typically sensitive to infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video recording and photography. CCDs are used to record exposures of galaxies and nebulae. They commonly respond to about 70% of the incident light (a quantum efficiency of about 70%), making them far more efficient than photographic film, which captures only about 2% of the incident light. As a result, CCDs were rapidly adopted by astronomers. This report covers the entire spectrum of CCDs, which are used in applications such as consumer electronics, automotive, medical, industrial, and security and surveillance.
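As a rough illustration of the arithmetic above, the short sketch below compares how many incident photons a CCD and photographic film would actually register at the quoted quantum efficiencies, and shows how a raw 16-bit pixel value maps to a relative intensity. The input figures are illustrative only, not taken from the report.

#include <stdio.h>

int main(void)
{
    double incident_photons = 10000.0;           /* illustrative exposure */
    double qe_ccd = 0.70, qe_film = 0.02;        /* quantum efficiencies quoted above */

    printf("Photons registered by CCD:  %.0f\n", incident_photons * qe_ccd);
    printf("Photons registered on film: %.0f\n", incident_photons * qe_film);

    unsigned int raw = 45000;                    /* a 16-bit ADC reading, 0..65535 */
    printf("Pixel value %u -> relative intensity %.2f\n", raw, raw / 65535.0);
    return 0;
}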
The market is segmented into four major geographic regions; namely the Americas, Europe, Asia-Pacific, and the Rest of the World (RoW). The current and future trends of each region have been analyzed in this report. The market share of the major players and the competitive landscaping is also included in the report.
The report highlights drivers, restraints and opportunities for the global market. Further, the report covers all the major companies involved in this segment, including their entire product offerings, financial details, strategies, and recent developments.
Along with the market data, you can also customize MMM assessments that meet your company’s specific needs. Customize to get comprehensive industry standards and deep-dive analysis of the following parameters:
Raw Material/Component Analysis
- In-depth trend analysis of raw materials in competitive scenario
- Raw material/Component matrix which gives a detailed comparison of Raw material/Component portfolio of each company mapped at country level
- Comprehensive coverage of regulations followed in North America (U.S., Canada, Mexico)
- Fast turn-around analysis of manufacturing firms with response to market events and trends
- Opinions from different firms about various components, and standards from different companies
- Qualitative inputs on macro-economic indicators, mergers and acquisitions
- Tracking the values of raw materials/components shipped annually in each country
1.1 Analyst Insights
1.2 Market Definitions
1.3 Market Segmentation & Aspects Covered
1.4 Research Methodology
2 Executive Summary
3 Market Overview
4 CCD Image Sensor by Submarkets
Using WDM (Wavelength Division Multiplexing) to expand the capacity of a fiber to carry multiple client interfaces is highly advisable, since physical fiber optic cabling is not cheap. Because WDM is so widely used, you are probably already familiar with it: it is a technology that combines several streams of data, storage, video or voice protocols on the same physical fiber-optic cable by using several wavelengths (frequencies) of light, with each frequency carrying a different type of data.
Two types of WDM architecture are available: Coarse Wavelength Division Multiplexing (CWDM) and Dense Wavelength Division Multiplexing (DWDM). CWDM/DWDM multiplexers and demultiplexers and OADMs (Optical Add-Drop Multiplexers) are common passive components. With the use of optical amplifiers and the development of the OTN (Optical Transport Network) layer equipped with FEC (Forward Error Correction), fiber optic communication links can reach thousands of kilometers without the need for regeneration sites.
With CWDM, each wavelength typically supports up to 2.5Gbps and can be expanded to support 10Gbps. CWDM is limited to 16 wavelengths and is typically deployed in networks up to 80km, since optical amplifiers cannot be used due to the large spacing between channels. CWDM uses a wide spectrum and accommodates eight channels. This wide channel spacing allows for the use of moderately priced optics, but limits capacity. CWDM is typically used for lower-cost, lower-capacity, shorter-distance applications where cost is the paramount decision criterion.
The CWDM Mux/Demux (CWDM multiplexer/demultiplexer) is a flexible plug-and-play network solution that helps carriers and enterprises affordably implement point-to-point or ring-based WDM optical networks. A CWDM Mux/Demux is well suited to transporting PDH, SDH/SONET and Ethernet services over WDM, CWDM and DWDM in optical metro edge and access networks. CWDM multiplexer modules are available in 4, 8 and 16 channel configurations. These modules passively multiplex the optical signal outputs from four or more devices, send them over a single optical fiber, and then de-multiplex the signals into separate, distinct signals for input into devices at the other end of the fiber optic link.
Typically CWDM solutions provide 8 wavelengths capability enabling the transport of 8 client interfaces over the same fiber. However, the relatively large separation between the CWDM wavelengths allows expansion of the CWDM network with an additional 44 wavelengths with 100GHz spacing utilizing DWDM technology, thus expanding the existing infrastructure capability and utilizing the same equipment as part of the integrated solution.
DWDM is a technology allowing high throughput capacity over longer distances commonly ranging between 44-88 channels/wavelengths and transferring data rates from 100Mbps up to 100Gbps per wavelength.
DWDM systems pack 16 or more channels into a narrow spectrum window very near the 1550nm local attenuation minimum. Decreasing channel spacing requires the use of more precise and costly optics, but allows for significantly more scalability. Typical DWDM systems provide 1-44 channels of capacity, with some new systems, offering up to 80-160 channels. DWDM is typically used where high capacity is needed over a limited fiber resource or where it is cost prohibitive to deploy more fiber.
DWDM multiplexer/demultiplexer modules are designed to multiplex multiple DWDM channels onto one or two fibers. Depending on the type, a DWDM Mux/Demux unit, with optional expansion, can transmit and receive as many as 4, 8, 16 or 32 connections of various standards, data rates or protocols over a single fiber optic link without the channels disturbing one another.
Ultimately, choosing between CWDM and DWDM is a difficult decision, so first we should clearly understand the difference between them.
CWDM vs DWDM
CWDM scales to 18 distinct channels, while DWDM scales up to 80 channels (or more), allowing vastly more expansion. The main advantage of CWDM is the cost of the optics, which is typically one third of the cost of the equivalent DWDM optic. CWDM products are popular because they use less precise, lower-cost optics and un-cooled lasers with lower power consumption and lower maintenance requirements. This difference in economic scale, the limited budget that many customers face, and typical initial requirements not to exceed 8 wavelengths mean that CWDM is a more popular entry point for many customers.
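To make the channel-count difference concrete, the sketch below prints the CWDM wavelength grid, assuming the ITU-T G.694.2 layout of 18 channels spaced 20nm apart from 1271nm to 1611nm; DWDM, by contrast, packs channels roughly 0.8nm (100GHz) apart.

#include <stdio.h>

int main(void)
{
    /* 18 CWDM channels, 20 nm apart: 1271, 1291, ..., 1611 nm */
    for (int ch = 0; ch < 18; ch++) {
        int wavelength_nm = 1271 + 20 * ch;
        printf("CWDM channel %2d: %d nm\n", ch + 1, wavelength_nm);
    }
    return 0;
}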
Buying CWDM or DWDM is driven by the number of wavelengths needed and future growth projections. If you only need a handful of waves and use 1Gbps optics, CWDM is the way to go. If you need dozens of waves or 10Gbps speeds, DWDM is the only option.
Aluminum Battery Research Shows 1-Minute Smartphone Charging in Lab
For smartphone users, the available battery life of their oft-used devices is never quite enough. Smartphones gain more features, more style and thinner cases every year, while their components tend to run harder while seemingly demanding more power. And battery life claims often don't seem to keep up, despite our wildest hopes. You can imagine being halfway through a workday, or out on the town for the evening and the worst scenario pops up—your phones are dead again.
It's enough to make smartphone owners carry spare batteries when possible, while loading up on power storage bricks and plug-in charger cables almost everywhere they go, all to keep their devices running.
But research being done on innovative aluminum storage batteries at Stanford University could help to change all of that in the future. Through trial and error, scientists at Stanford have created high-performance aluminum-ion batteries that offer long-lasting power, fast charging and lower costs than traditional lithium-ion or alkaline batteries, according to an April 6 announcement from the university.
What's more, the new aluminum-ion batteries are safer to store and can be recharged many more times than batteries built using existing technologies, Stanford reported.
"We have developed a rechargeable aluminum battery that may replace existing storage devices, such as alkaline batteries, which are bad for the environment, and lithium-ion batteries, which occasionally burst into flames," Hongjie Dai, a professor of chemistry at Stanford, said in a statement. "Our new battery won't catch fire, even if you drill through it."
While others have experimented with aluminum battery designs in the past, Dai and his colleagues accidentally overcame the past deficiencies seen by others by using graphite for a battery component known as a cathode, which carries a positive charge. A key challenge in past aluminum battery research was to find materials that could produce sufficient voltage after repeated cycles of charging and discharging, according to Stanford.
In its experiments, the researchers placed an aluminum anode (which carries a negative charge) and a graphite cathode, along with an ionic liquid electrolyte, inside a flexible polymer-coated pouch (pictured) that held it all together, the report continued.
The experiments quickly showed safety and performance improvements compared to traditional rechargeable batteries, including "unprecedented charging times" of about one minute with the aluminum prototype, Stanford reported. Previous experiments with aluminum batteries developed at other laboratories usually died after just 100 charge-discharge cycles, but the Stanford batteries hold up fine after more than 7,500 charging cycles without any loss of capacity, the report continued. Typical lithium-ion batteries can be recharged only about 1,000 times.
Dai and his colleagues did not reply to an eWEEK inquiry seeking more details, but their research is truly compelling and intriguing.
Imagine for a moment if smartphone, laptop and tablet users could get truly long battery life—a whole day or more—out of devices that would be equipped with aluminum batteries. For mobile users, it would be ground-breaking, life-changing and long overdue.
This is certainly work worth watching. | <urn:uuid:f6d3f033-6629-4144-8b13-ce034fe03e38> | CC-MAIN-2017-04 | http://www.eweek.com/blogs/first-read/aluminum-battery-research-shows-1-minute-smartphone-charging-in-lab.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00272-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948026 | 649 | 2.953125 | 3 |
From the space shuttle to your living room, here’s how it all began.
This month marks the 45th anniversary of the Apollo 11 mission to the moon, when Neil Armstrong and Buzz Aldrin became the first men to walk on the surface of our satellite. Considering the years that have passed, it is no surprise that the technology that took Apollo 11 to the moon is now less powerful than that found in the average smartphone. NASA and its sister space organisations around the world have traditionally been a hotbed for trialling new technologies, but did you know that the following were first used in space?
Recent advances in telecommunications now allow us to call people on the other side of the planet without having to worry about the distance – and it's thanks to NASA that the technology was developed.
The technology actually has its roots in several NASA inventions, which took place over several decades and formed an important part of communicating with the Apollo missions.
Before the missions, NASA sent a series of satellites into orbit to learn what the conditions of outer space were truly like, and using similar satellite technology, around 200 communication satellites currently orbit the globe. NASA monitors the locations and health of many of these satellites to ensure that we can continue to talk to people around the corner or overseas.
Today, hundreds of satellites remain in orbit around the Earth, accompanied by thousands of similar objects covering weather, television, and other media signals which make it possible for us to stay in touch with friends all over the world.
Mobile phone cameras
From selfies to cat pictures, the mobile phone camera has quickly become the most important tool for many users. However, one in every three mobile phone cameras on the planet uses technology that was invented for NASA spacecraft.
The concept of digital photography was developed in the 1960s by Eugene Lally, an engineer at NASA’s Jet Propulsion Laboratory (JPL), in Pasadena, California, who investigated ways of using mosaic photosensors to digitise light signals that could then be used to capture still images.
Lally’s work spurred decades of further NASA research, as engineers sought ways to create small, lightweight image sensors that could withstand the harsh environments in space, eventually leading to the creation of miniature imaging system prototypes in the 1990s, which formed the forefront of small-scale digital cameras.
This wasn’t NASA’s only contribution to the world of digital photography, either, as it was a JPL engineer named Frederic Billingsley who first published the word "pixel" (short for "picture element"), in 1965.
The computer mouse
Now an everyday part of our work and home PC usage, the humble computer mouse was invented by Stanford researcher Douglas Engelbart in the early 1960s. However, it was just part of a much larger project, and it was thanks to NASA funding that the mouse developed and became the tool we know and love today.
The testing and implementation of the mouse was a key part of turning the computers used in the space program from basic arithmetic machines into something resembling the computers we use today, allowing the user much simpler and more direct control over the device.
The mouse didn’t have an easy birth, however, and was nearly trumped by a light pen, which was favoured by the many of the astronaut test subjects, something which would have transformed the way we interact with our computers today.
Now vital for many of us to navigate every day, mapping services such as Google Maps also have their roots in space technology. Global Positioning System (GPS) satellites, which provide the necessary tracking services to place where a user is, were originally launched in 1978, with the first satellites launching from Vandenberg Air Force Base using Atlas rockets that were converted intercontinental ballistic missiles.
However, the system expanded into commercial markets in the 1980s, and since 1994 has been made up of network of 24 satellites placed into orbit and overseen by the U.S. Department of Defence.
The solar-powered GPS satellites circle the earth twice a day, travelling at around 7,000 mph, in a very precise orbit and transmit signal information to earth. GPS receivers take this information and use triangulation to calculate the user’s exact location. | <urn:uuid:6b9a085b-9076-418c-b8eb-fe67d0b44d22> | CC-MAIN-2017-04 | http://www.cbronline.com/news/10-everyday-technologies-that-began-in-space-4315573 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00484-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967549 | 868 | 3.6875 | 4 |
14 Amazing DARPA Technologies On Tap
Go inside the labs of the Defense Advanced Research Projects Agency for a look at some of the most intriguing technologies they're developing in computing, electronics, communications, and more.
When GPS is unavailable, the military uses bulky, expensive sensors for navigation. DARPA is working on fabrication techniques for "microscale inertial sensors" as part of its Micro-Technology for Positioning, Navigation, and Timing program. During an early phase, the work focused on 3-D microfabrication methods using bulk metallic glasses, diamond and ultra-low expansion glass. Small 3-D structures were fabricated. Image credit: University of Michigan
The US Department of Energy wants researchers and scientists to "think outside the box" and come up with "highly disruptive Concentrating Solar Power technologies that will meet 6¢/kWh cost targets by the end of the decade."
The DOE's "SunShot Concentrating Solar Power R&D" is a multimillion dollar endeavor that intends to look beyond incremental near-term improvements and support research into transformative technologies that will break through performance barriers known today, such as efficiency and temperature limitations.
The SunShot initiative expects researchers to demonstrate and prove new concepts in the solar collector, receiver, and power cycle subsystems, including associated hardware. The DOE says the CSP realm is composed of a variety of technologies, which convert sunlight into thermal energy, and then use this thermal energy to generate electricity.
A CSP plant comprises four demonstrated subsystems: the collector field, receiver, thermal storage, and power block. All of them are involved in converting sunlight into thermal energy for use in a heat-driven engine, and all must be revolutionized if the cost of solar energy is to be reduced. The DOE noted that the collector field typically represents the largest single capital investment in a CSP plant and is typically composed of many individual collectors; as such, advanced manufacturing, assembly, and installation processes will be considered for SunShot.
"The overarching goal of the SunShot Initiative is reaching cost parity with baseload energy rates, estimated to be 6¢/kWh without economic support, which would pave the way for rapid and large-scale adoption of solar electricity across the United States. SunShot aims to reduce the total costs of solar energy systems by about 75% by the end of the decade. Beyond the technical goal of reducing total cost by 75%, the objectives of the SunShot Initiative are to boost the US economic competitiveness and manufacturing of solar technologies within the US," the DOE stated.
SunShot-level cost reductions likely include an increase in system efficiency by moving to higher-temperature operation, such as maximizing power-cycle efficiency without sacrificing efficiency elsewhere in the system (minimizing optical and thermal efficiency losses). Likewise, reducing the cost of the solar field and developing high-temperature thermal energy storage compatible with high-efficiency, high-temperature power cycles are critical to driving costs down further, the DOE stated.
The DOE cited a few examples of potential project areas for development:
- Alternative or optimized collector support structures.
- Novel materials for collector structures.
- Low-cost drives and accurate controls.
- Autonomous collector power and control.
- Alternative receiver designs for high-temperature operation.
- Novel receiver materials and selective coatings.
- High-efficiency, high-temperature power cycles.
- Innovative combined-cycle configurations.
- High temperature heat exchangers compatible with advanced power cycles.
- Advanced designs and materials for hardware (e.g. pumps, valves/packing, piping).
- Highly automated collector field manufacturing facilities and equipment.
- Rapid field installation and minimal site preparation techniques.
- Novel CSP components and systems.
Basic file system architecture
The Linux file system architecture is an interesting example of abstracting complexity. Using a common set of API functions, a large variety of file systems can be supported on a large variety of storage devices. Take, for example, the read function call, which allows some number of bytes to be read from a given file descriptor. The read function is unaware of file system types, such as ext3 or NFS. It is also unaware of the particular storage medium upon which the file system is mounted, such as AT Attachment Packet Interface (ATAPI) disk, Serial-Attached SCSI (SAS) disk, or Serial Advanced Technology Attachment (SATA) disk. Yet, when the read function is called for an open file, the data is returned as expected. This article explores how this is done and investigates the major structures of the Linux file system layer.
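To make that point concrete, here is a minimal user-space sketch: the same open/read/close sequence works whether the file lives on ext3, NFS, or a loop-mounted image, because the VFS hides the difference. The path used is purely illustrative.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    char buf[256];
    /* Could be ext3 on a SATA disk, NFS over the network, or a mounted
     * loop file -- the calling code neither knows nor cares. */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n >= 0) {
        buf[n] = '\0';
        printf("read %zd bytes: %s", n, buf);
    }
    close(fd);
    return 0;
}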
What is a file system?
I'll start with an answer to the most basic question, the definition of a file system. A file system is an organization of data and metadata on a storage device. With a vague definition like that, you know that the code required to support this will be interesting. As I mentioned, there are many types of file systems and media. With all of this variation, you can expect that the Linux file system interface is implemented as a layered architecture, separating the user interface layer from the file system implementation from the drivers that manipulate the storage devices.
Associating a file system with a storage device in Linux is a process called mounting. The mount command is used to attach a file system to the current file system hierarchy (root). During a mount, you provide a file system type, a file system, and a mount point.

To illustrate the capabilities of the Linux file system layer (and the use of mount), create a file system in a file within the current file system. This is accomplished first by creating a file of a given size using dd (copy a file using /dev/zero as the source) -- in other words, a file initialized with zeros, as shown in Listing 1.
Listing 1. Creating an initialized file
$ dd if=/dev/zero of=file.img bs=1k count=10000
10000+0 records in
10000+0 records out
$
You now have a file called file.img that's 10MB. Use the losetup command to associate a loop device with the file (making it look like a block device instead of just a regular file within the file system):

$ losetup /dev/loop0 file.img
$
With the file now appearing as a block device (represented by /dev/loop0), create a file system on the device with mke2fs. This command creates a new second extended (ext2) file system of the defined size, as shown in Listing 2.
Listing 2. Creating an ext2 file system with the loop device
$ mke2fs -c /dev/loop0 10000
mke2fs 1.35 (28-Feb-2004)
max_blocks 1024000, rsv_groups = 1250, rsv_gdb = 39
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
2512 inodes, 10000 blocks
500 blocks (5.00%) reserved for the super user
...
$
The file.img file, represented by the loop device (/dev/loop0), is now mounted to the mount point /mnt/point1 using the mount command. Note the specification of the file system as ext2. When mounted, you can treat this mount point as a new file system by using the ls command, as shown in Listing 3.
Listing 3. Creating a mount point and mounting the file system through the loop device
$ mkdir /mnt/point1
$ mount -t ext2 /dev/loop0 /mnt/point1
$ ls /mnt/point1
lost+found
$
As shown in Listing 4, you can continue this process by creating a new file within the new mounted file system, associating it with a loop device, and creating another file system on it.
Listing 4. Creating a new loop file system within a loop file system
$ dd if=/dev/zero of=/mnt/point1/file.img bs=1k count=1000
1000+0 records in
1000+0 records out
$ losetup /dev/loop1 /mnt/point1/file.img
$ mke2fs -c /dev/loop1 1000
mke2fs 1.35 (28-Feb-2004)
max_blocks 1024000, rsv_groups = 125, rsv_gdb = 3
Filesystem label=
...
$ mkdir /mnt/point2
$ mount -t ext2 /dev/loop1 /mnt/point2
$ ls /mnt/point2
lost+found
$ ls /mnt/point1
file.img lost+found
$
From this simple demonstration, it's easy to see how powerful the Linux file system (and the loop device) can be. You can use this same approach to create encrypted file systems with the loop device on a file. This is useful to protect your data by transiently mounting your file using the loop device when needed.
File system architecture
Now that you've seen file system construction in action, I'll get back to the architecture of the Linux file system layer. This article views the Linux file system from two perspectives. The first view is from the perspective of the high-level architecture. The second view digs in a little deeper and explores the file system layer from the major structures that implement it.
While the majority of the file system code exists in the kernel (except for user-space file systems, which I'll note later), the architecture shown in Figure 1 illustrates the relationships between the major file system-related components in both user space and the kernel.
Figure 1. Architectural view of the Linux file system components
User space contains the applications (for this example, the user of the file system) and the GNU C Library (glibc), which provides the user interface for the file system calls (open, read, write, close). The system call interface acts as a switch, funneling system calls from user space to the appropriate endpoints in kernel space.
The VFS is the primary interface to the underlying file systems. This component exports a set of interfaces and then abstracts them to the individual file systems, which may behave very differently from one another. Two caches exist for file system objects (inodes and dentries), which I'll define shortly. Each provides a pool of recently-used file system objects.
Each individual file system implementation, such as ext2, JFS, and so on, exports a common set of interfaces that is used (and expected) by the VFS. The buffer cache buffers requests between the file systems and the block devices that they manipulate. For example, read and write requests to the underlying device drivers migrate through the buffer cache. This allows the requests to be cached there for faster access (rather than going back out to the physical device). The buffer cache is managed as a set of least recently used (LRU) lists. Note that you can use the sync command to flush the buffer cache out to the storage media (force all unwritten data out to the device drivers and, subsequently, to the storage device).
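The same flushing is available programmatically. The sketch below (the path is illustrative) uses fsync to push one file's dirty buffers down to the device and sync to flush everything, mirroring what the sync command does from the shell.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int fd = open("/tmp/example.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, "hello\n", 6) != 6)
        perror("write");
    if (fsync(fd) != 0)        /* flush this file's dirty data and metadata */
        perror("fsync");
    close(fd);
    sync();                    /* ask the kernel to flush all dirty buffers */
    return 0;
}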
That's the 20,000-foot view of the VFS and file system components. Now I'll look at the major structures that implement this subsystem.
Linux views all file systems from the perspective of a common set of objects. These objects are the superblock, inode, dentry, and file. At the root of each file system is the superblock, which describes and maintains state for the file system. Every object that is managed within a file system (file or directory) is represented in Linux as an inode. The inode contains all the metadata to manage objects in the file system (including the operations that are possible on it). Another set of structures, called dentries, is used to translate between names and inodes, for which a directory cache exists to keep the most-recently used around. The dentry also maintains relationships between directories and files for traversing file systems. Finally, a VFS file represents an open file (keeps state for the open file such as the write offset, and so on).
Virtual file system layer
The VFS acts as the root level of the file-system interface. The VFS keeps track of the currently-supported file systems, as well as those file systems that are currently mounted.
File systems can be dynamically added to or removed from Linux using a set of registration functions. The kernel keeps a list of currently-supported file systems, which can be viewed from user space through the /proc file system. This virtual file also shows the devices currently associated with the file systems. To add a new file system to Linux, register_filesystem is called. This takes a single argument defining the reference to a file system type (file_system_type), which defines the name of the file system, a set of attributes, and two superblock functions. A file system can also be unregistered.
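As a rough outline of that registration step, the sketch below shows a hypothetical "myfs" module announcing itself to the VFS. It is written against the 2.6-era in-kernel interface this article describes (a file_system_type with get_sb and kill_sb callbacks); exact prototypes differ between kernel versions, so treat it as a shape rather than drop-in code.

#include <linux/module.h>
#include <linux/init.h>
#include <linux/fs.h>

/* Fill in a new superblock; a real file system would also set sb->s_op and
 * build the root inode and dentry here (omitted to keep the sketch short). */
static int myfs_fill_super(struct super_block *sb, void *data, int silent)
{
    sb->s_magic = 0x4D594653;   /* arbitrary magic number for "MYFS" */
    return 0;
}

static int myfs_get_sb(struct file_system_type *fs_type, int flags,
                       const char *dev_name, void *data, struct vfsmount *mnt)
{
    /* Helper for file systems that are not backed by a block device. */
    return get_sb_nodev(fs_type, flags, data, myfs_fill_super, mnt);
}

static struct file_system_type myfs_type = {
    .owner   = THIS_MODULE,
    .name    = "myfs",          /* shows up in /proc/filesystems */
    .get_sb  = myfs_get_sb,
    .kill_sb = kill_anon_super,
};

static int __init myfs_init(void)
{
    return register_filesystem(&myfs_type);   /* add to the file_systems list */
}

static void __exit myfs_exit(void)
{
    unregister_filesystem(&myfs_type);
}

module_init(myfs_init);
module_exit(myfs_exit);
MODULE_LICENSE("GPL");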
Registering a new file system places the new file system and its pertinent information onto a file_systems list (see Figure 2 and linux/include/linux/mount.h). This list defines the file systems that can be supported. You can view this list by typing cat /proc/filesystems at the command line.
Figure 2. File systems registered with the kernel
Another structure maintained in the VFS is the mounted file systems list (see Figure 3). This provides the file systems that are currently mounted (see linux/include/linux/fs.h). It links to the superblock structure, which I'll explore next.
Figure 3. The mounted file systems list
The superblock is a structure that represents a file system. It includes the necessary information to manage the file system during operation. It includes the file system name (such as ext2), the size of the file system and its state, a reference to the block device, and metadata information (such as free lists and so on). The superblock is typically stored on the storage medium but can be created in real time if one doesn't exist. You can find the superblock structure (see Figure 4) in ./linux/include/linux/fs.h.
Figure 4. The superblock structure and inode operations
One important element of the superblock is a definition of the superblock operations. This structure defines the set of functions for managing inodes within the file system. For example, inodes can be allocated with alloc_inode or deleted with destroy_inode. You can read and write inodes with read_inode and write_inode, or sync the file system with sync_fs. You can find the super_operations structure in ./linux/include/linux/fs.h. Each file system provides its own inode methods, which implement the operations and provide the common abstraction to the VFS layer.
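A skeleton of such a table might look like the sketch below. The myfs_* functions are hypothetical stubs marking where a real file system would allocate inodes, write them back, and sync itself; the prototypes again follow the 2.6-era interface and have changed in later kernels.

#include <linux/fs.h>
#include <linux/slab.h>

static struct kmem_cache *myfs_inode_cache;   /* assumed to be created at module init */

static struct inode *myfs_alloc_inode(struct super_block *sb)
{
    /* Hand the VFS a freshly allocated, file-system specific inode. */
    return kmem_cache_alloc(myfs_inode_cache, GFP_KERNEL);
}

static void myfs_destroy_inode(struct inode *inode)
{
    kmem_cache_free(myfs_inode_cache, inode);
}

static int myfs_write_inode(struct inode *inode, int wait)
{
    /* Push the inode's metadata back to the on-disk (or in-file) image. */
    return 0;
}

static int myfs_sync_fs(struct super_block *sb, int wait)
{
    /* Flush file-system wide state: allocation bitmaps, superblock copy, ... */
    return 0;
}

static const struct super_operations myfs_super_ops = {
    .alloc_inode   = myfs_alloc_inode,
    .destroy_inode = myfs_destroy_inode,
    .write_inode   = myfs_write_inode,
    .sync_fs       = myfs_sync_fs,
};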
inode and dentry
The inode represents an object in the file system with a unique identifier. The individual file systems provide methods for translating a filename into a unique inode identifier and then to an inode reference. A portion of the inode structure is shown in Figure 5 along with a couple of the related structures. Note in particular the inode_operations and file_operations. Each of these structures refers to the individual operations that may be performed on the inode. For example, inode_operations define those operations that operate directly on the inode, and file_operations refer to those methods related to files and directories (the standard system calls).
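The sketch below shows minimal versions of the two tables for a hypothetical read-only file: inode_operations handling name lookup in a directory, and file_operations backing the read() system call. simple_lookup, simple_read_from_buffer, and generic_file_llseek are generic helpers the kernel provides for trivial file systems; everything prefixed myfs_ is invented for the example.

#include <linux/fs.h>

static ssize_t myfs_file_read(struct file *filp, char __user *buf,
                              size_t count, loff_t *ppos)
{
    static const char msg[] = "hello from myfs\n";
    /* Copy the requested slice of a fixed message out to user space. */
    return simple_read_from_buffer(buf, count, ppos, msg, sizeof(msg) - 1);
}

static const struct file_operations myfs_file_ops = {
    .read   = myfs_file_read,        /* backs the read() system call */
    .llseek = generic_file_llseek,
};

static const struct inode_operations myfs_dir_inode_ops = {
    .lookup = simple_lookup,         /* resolve names to inodes through the dcache */
};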
Figure 5. The inode structure and its associated operations
The most-recently used inodes and dentries are kept in the inode and directory cache, respectively. Note that for each inode in the inode cache there is a corresponding dentry in the directory cache. You can find the inode and dentry structures defined in ./linux/include/linux/fs.h.
Except for the individual file system implementations (which can be found at ./linux/fs), the bottom of the file system layer is the buffer cache. This element keeps track of read and write requests from the individual file system implementations and the physical devices (through the device drivers). For efficiency, Linux maintains a cache of the requests to avoid having to go back out to the physical device for all requests. Instead, the most-recently used buffers (pages) are cached here and can be quickly provided back to the individual file systems.
Interesting file systems
This article spent no time exploring the individual file systems that are available within Linux, but they are worth noting here, at least in passing. Linux supports a wide range of file systems, from older file systems such as MINIX, MS-DOS, and ext2, to newer journaling file systems such as ext3, JFS, and ReiserFS. Additionally, Linux supports cryptographic file systems such as CFS and virtual file systems such as /proc.
One final file system worth noting is the Filesystem in Userspace, or FUSE. This is an interesting project that allows you to route file system requests through the VFS back into user space. So if you've ever toyed with the idea of creating your own file system, this is a great way to start.
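To give a flavor of what that looks like, here is a minimal read-only FUSE file system sketched against the libfuse 2.x API: it exposes a single file, /hello, with fixed contents. The names are hypothetical, error handling is pared down, and the callback signatures have changed in later FUSE releases, so check the current documentation before building on it.

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

static const char *hello_str  = "Hello from user space!\n";
static const char *hello_path = "/hello";

/* Report attributes: "/" is a directory, "/hello" is a read-only file. */
static int hello_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = strlen(hello_str);
    } else {
        return -ENOENT;
    }
    return 0;
}

/* List the root directory: ".", ".." and "hello". */
static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t offset, struct fuse_file_info *fi)
{
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, hello_path + 1, NULL, 0);
    return 0;
}

/* Copy the requested byte range of the fixed string to the caller. */
static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                      struct fuse_file_info *fi)
{
    size_t len = strlen(hello_str);
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((size_t)offset >= len)
        return 0;
    if (offset + size > len)
        size = len - offset;
    memcpy(buf, hello_str + offset, size);
    return (int)size;
}

static struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    /* fuse_main() parses the mount point from argv and runs the event loop. */
    return fuse_main(argc, argv, &hello_ops, NULL);
}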
While the file system implementation is anything but trivial, it's a great example of a scalable and extensible architecture. The file system architecture has evolved over the years but has successfully supported many different types of file systems and many types of target storage devices. Using a plug-in based architecture with multiple levels of function indirection, it will be interesting to watch the evolution of the Linux file system in the near future.
- The proc file system provides a novel scheme for communicating between user space and the kernel through a virtual file system. "Access the Linux kernel using the /proc filesystem" (developerWorks, March 2006) introduces you to the /proc virtual file system and demonstrates its use.
- The Linux system call interface provides the means to transition control between user space and the kernel to invoke kernel API functions. "Kernel command using Linux system calls" (developerWorks, 2007) explores the Linux system call interface.
- Yolinux.com maintains a great list of Linux file systems, clustered file systems, and performance compute clusters. You can also find a complete list of Linux file systems in the File systems HOWTO.
- For more information on programming Linux in user space, check out GNU/Linux Application Programming, written by the author of this article.
- In the developerWorks Linux zone, find more resources for Linux developers, and scan our most popular articles and tutorials.
Get products and technologies
- The Filesystem in Userspace (FUSE) is a kernel module that enables development of file systems in user space. The file system driver implementation routes requests from the VFS back to user space. It's a great way to experiment with file system development without resorting to kernel development.
Savannah Kunovsky was working toward a computer science degree when she learned of Hack Reactor, a coding boot camp in San Francisco. She applied, got in, and ended up walking away from the four-year degree program. At first, she intended to go back to school after sharpening her coding skills. But – a year later – she doesn't think college will happen any time soon. “It was life changing," she says of the immersive twelve-week program. It saved her the cost of two more years of college and landed her a well-paying job she loves. “You can earn the cost of college in one year after this program," she says. But that's not the only reason she did it. “College was an awesome experience. I grew socially. I figured out how to work hard and find balance in my life. But here? I am constantly stimulated and get to meet people from all backgrounds. College seems stagnant by comparison." (Disclosure: She works as a software engineer at Hack Reactor.)
Savannah is part of a growing number of computer science students being lured away -- sometimes right from high school -- from a traditional four-year degree path directly into an IT job. Instead of investing four years and as much as $100K in a college degree, they learn to code at a boot camp or by taking online classes and go directly into lucrative and interesting work. No one's path is exactly like anyone else's but an ecosystem has sprung up – especially in the high-tech corridor of the San Francisco Bay Area – where there is so much demand for programmers that it is the actual skills – not a diploma that indicates they have those skills – that gets you in the door.
It's easy to see why some companies are looking past degrees for people with the skills to get the work done. According to Code.org, there are 80,000 positions requiring a computer science degree going unfilled – a situation that promises only to get worse. By 2018, US computing occupations will grow about 800,000 new jobs and US college graduates will be able to fill only 29% of them. It's a matter of supply and demand. Companies need to get the work done. There aren't enough people with the skills to do that work. So that filter – a degree from a good college in the right subject – isn't always applied. Google, for example, has some engineering teams where as many as 14% of members do not have a college degree, according to an internal Google PR rep. This is, by no means, a common way in the door at Google – or anywhere else (See "How to get a job at Google"). But it is a way, especially for people with the right talent who can't afford a computer science degree. “There are two ways to look at this," says Hack Reactor's Chief Strategy Officer Ruan Pethiyagoda. “If your goal for going to college is to spend four formative years immersing yourself in an academic atmosphere, learning from people with a variety of backgrounds, and you can afford to do that, there is significant value to that. But we have found that some people are willing to sacrifice that experience and – for a five-figure investment – go directly to a six-figure salary." (Hack Reactor claims it places 99% of graduates and that the average starting salary is $105K.)
Fredric Paul, editor-in-chief at a San Francisco software company watched as his son, Grant, make this "sacrifice" to go to work – right out of high school – for Facebook. Grant didn't even stop at a code school on the way. He had started hacking his Dad's cast-off technology when he was only in grade school; he started building apps in high school. Some of his apps became so popular that, when he was fifteen, Facebook contacted him asking if he'd like to bring his app building skills in house. But when Grant confessed he was only a sophomore in high school, the company invited him to do an internship before his senior year instead. Grant applied to college along with his peers. “Probably because his parents insisted," laughs Fredric. “He got accepted to a few schools, nothing that thrilled him. Probably because he wasn't that focused on trying to get in." But, after he completed the Facebook internship, Grant turned the schools down and took a job. Fredric worries that Grant is missing an important experience by not going to college. “I think skipping college is almost always not the right choice," he says. But there are worse concerns to have when you are the father of an eighteen-year old. Grant is happy, making good money, and doing what he loves. Most people take a lot longer to figure out how to do that in their lives. “I am not worried he won't learn," says Fredric. “He is working with some really smart people. And if he wants to go to college down the road, this experience will probably make him a better candidate."
Fredric raises good questions, though, about people who are going into white collar work after what is, essentially, a vocational school education. It's hard not to wonder if this fast-track is a good long-term career -- or life -- strategy. What about the people you meet in college? What about those late nights cramming that lead to lifelong friends? How about athletics or hobbies or extra-curricular interests? How about subjects other than coding? “I don't recommend students skip college to learn to code," offers Hadi Partovi, Co-founder & CEO of Code.org. “But it's worth noting that computer science degrees are the best-paying college degrees, because of the dearth of students and the growth in opportunity. And if you can't afford the cost of college, you can still learn many of these skills online or at a coding boot camp." It is certainly a decision to forgo a luxury that many consider a rite of passage and a desirable -- if not necessary -- entre into the adult world of work. But, in an economy desperate for coders, it's more like a challenge that can be overcome -- with a bit of resourcefulness -- than a setback. “These engineers enjoy very generous comfortable lives with reasonable hours and generous vacation packages," says Hack Reactor's Pethiyagoda. “They find that sort of activity– rock climbing, kayaking, and other hobbies -- with their peers outside of a college environment. The difference is they are earning good money while doing it."
But won't they someday find themselves thwarted in the pursuit of opportunity by a failure to produce a framed copy of that degree? “I have only heard of one instance of a grad being told he could not have a promotion because of his lack of degree," says Pethiyagoda. And in that case, the manager who made that decision lost a valuable programmer to the competition. “In most cases, this is a meritocracy because it is quite easy to measure performance." But it's true, he says, that someone else will probably have to do work that requires an understanding of hardware engineering. And if the lack of supply and intense demand for software engineers that currently exists changes, it is likely that the students who put in four years to get a degree will have an easier time getting doors to open.
But this is a situation that's been going on for many years and looks as if it will continue for as many more. Sonja Erickson, is VP of Engineering at tech startup, We Heart It. She started working in technology in 1987 -- without a college degree. Nearly thirty years later, that decision has not stifled her career. “Experience and results speak the loudest," she says.
That's not to say, though, that it has never come up. “I've had some awkward moments where it would have been easier for me -- and for others -- if I'd gone to college," she admits. “In fact, there is one memorable one when a very prominent Silicon Valley VC brought in an entourage of Ivy League students to show off ‘his' company. At the meeting, he asked me to share some college experiences with the students. It was awkward for everyone."
THERE ARE A LOT OF WAYS TO LEARN TO CODE. HERE ARE A FEW OPTIONS TO GET YOU STARTED (in alpha sort).
App Academy (San Francisco)
An immersive, full-time, twelve-week web development and job-placement program that does not charge tuition. If you find a job upon graduation, you pay the school a placement fee of 18% of your first year salary.
Find a selection of online classes that will get your mind in gear, teach you some basics, and set you on your way to either teach yourself to code or prepare you for one of the boot camps or at least help you see if you want to pursue a computer science degree.
EdX.com offers a two-course XSeries ($90) – courses that can be taken together and earn an EdX.org certificate of completion -- in computer science that you can take from anywhere. Plus it's taught by Eric L. Grimson, the Chancellor of MIT and a professor there of computer science and engineering so that's not too shabby.
General Assembly (New York, Boston, San Francisco, Los Angeles, and other cities; online.)
Immersive, full-time classes that run simultaneously in eight cities worldwide. The Web Development Immersive ($11,500) is a 12-week program designed to turn beginners into work-ready junior developers. According to GA, more than 90% of the job seekers who complete this program have jobs within 90 days of graduation.
Hack Reactor (San Francisco)
An immersive program ($17,780) that's not for complete beginners (though you can learn enough to apply during the application process) teaches software engineering, creates an ecosystem of fellow-coders to return to, and even encourages students to go to the gym. 99% of graduates get a job on graduation (the school helps with this and tracks graduates) with an average salary of $105K.
HackBright Academy (San Francisco)
A 10-week fellowship ($15K) in coding for women only. Includes an education in software engineering that assumes no previous experience, mentorship, job fairs, and an introduction to companies that are hiring. | <urn:uuid:073bbd01-f5db-41c4-9c69-534c3fc445f9> | CC-MAIN-2017-04 | http://www.itworld.com/article/2695433/it-management/drop-out-of-college--earn-a-six-figure-salary-coding.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00538-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97768 | 2,158 | 2.859375 | 3 |
Traffic jams are caused when too many cars all try to go to the same place at the same time. The roads are overwhelmed, traffic comes to a standstill, and no one gets to their destination on time. A distributed denial of service (DDoS) cyberattack works the same way. Someone purposely forces too many bits of information to a server all at once, rendering it functionless. The system is overwhelmed.
In late March, Spamhaus, an organization based in Europe dedicated to tracking Internet spammers, was hit with the largest DDoS attack in history. The alleged responsible party, CyberBunker, had been listed by Spamhaus as a source of spam. CyberBunker was angry about what it considered to be an illegitimate blacklisting and as retaliation, attacked Spamhaus with a massive flood of 300Gbps of data, making the system unavailable.
The chart above shows how a DDoS attack like the one delivered to Spamhaus can work. It’s a surprisingly simple concept that, when executed, hijacks a benign Internet lookup tool for its own nefarious purpose.
For this particular attack the criminals first amassed a list of vulnerable computers and home routers that could be unwittingly tricked into carrying out a DDoS attack. In the case of Spamhaus, the attackers generally targeted off-the-shelf home routers and servers that were connected to home and corporate networks and were configured out-of-the-box to respond to all queries for web page addresses.
After amassing a list of routers and computers that were susceptible to compromise, the attackers used the Internet's directory system to launch the attack: they forged the Spamhaus IP addresses as the apparent source of their lookup requests, so each open device sent its reply, a substantial data payload, to Spamhaus rather than back to the attackers. Because a small request triggers a much larger response, the traffic was amplified along the way, and the combined flood from tens of thousands of unwitting devices was so enormous when it arrived that it overloaded the target servers and legitimate traffic was unable to access the site. Attacks like this have the potential to cause severe, prolonged damage to businesses and governments and to indiscriminately affect websites other than the intended target.
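To get a feel for the arithmetic behind a flood of that size, here is a back-of-the-envelope sketch in Python. Every number in it is an assumption chosen for illustration (none of the figures are measurements from the Spamhaus incident), but it shows how small forged requests, much larger responses, and tens of thousands of open devices multiply into hundreds of gigabits per second.

```python
# Back-of-the-envelope illustration of reflection/amplification.
# All values are assumptions for illustration, not data from the Spamhaus attack.

query_bytes = 64           # size of one small, forged lookup request
response_bytes = 3000      # size of the much larger answer an open device sends back
queries_per_second = 500   # forged requests aimed at each open device every second
open_devices = 25000       # misconfigured routers/servers being abused as reflectors

amplification = response_bytes / query_bytes
attacker_bps = open_devices * queries_per_second * query_bytes * 8
victim_bps = open_devices * queries_per_second * response_bytes * 8

print(f"Amplification factor: roughly {amplification:.0f}x")
print(f"Traffic the attacker must generate: {attacker_bps / 1e9:.1f} Gbps")
print(f"Traffic arriving at the victim:     {victim_bps / 1e9:.1f} Gbps")
```

The takeaway is that an attacker only has to source a few gigabits of forged traffic to bury a target under hundreds, which is why properly configured routers and resolvers matter so much.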
DDoS attacks are effective and efficient. But there are steps that users can take to protect their routers and computers from being hijacked. By reducing the vulnerability of your computer and ensuring that your routers and servers are properly configured, you can help protect them from being taken over and used in an attack.
In addition, our collective efforts to identify and respond to attacks such as these can be greatly aided by measures that support real-time sharing of information about cyber-threats. In the case of the Spamhaus attack, coordinated sharing of information among all the parties affected could have reduced the impact of the attack or even prevented it. But because they failed to communicate, the attack went as planned and caused severe damage not just to Spamhaus but, because DDoS attacks are indiscriminate, to many other individuals and businesses as well. It seems logical that this kind of information should be shared, but today it often doesn't occur due to legal uncertainties.
This week the House of Representatives will consider the Cyber Intelligence Sharing and Protection Act, H.R. 624, which would allow organizations like tech companies, social media sites, and your ISP to share cyber-threat information to help better protect your data and the Internet’s infrastructure. By promoting more effective information sharing, efforts like this may be able to help mitigate and possibly prevent damage during a DDoS attack as well as identify the attacker.
In a connected world, all members of the Internet ecosystem share the responsibility of ensuring businesses, schools, governments, and individuals can work online safely. By opening channels of communication, establishing systems of knowledge sharing, and ensuring our homes and businesses are using secure devices, coordinated attacks like the DDoS attack on Spamhaus can be prevented. | <urn:uuid:0c017e1f-3ebd-47fb-9307-ac57c0bc01f0> | CC-MAIN-2017-04 | https://www.ncta.com/platform/public-policy/anatomy-of-a-cyberattack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00538-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962627 | 793 | 2.78125 | 3 |
With the increase in cybercrime focused on businesses, high-impact organizations are starting to recognize the importance of implementing cybersecurity awareness training programs to protect and secure their intellectual property. One-off cybersecurity training initiatives are not enough to prevent your organization from being a victim of cybercrime. Establishing BYOD policies, data loss prevention strategies and consistent, agile cybersecurity awareness training is critical to ensuring confidential data is difficult to obtain.
According to research conducted by Citrix, councils in the UK are spending nine times as much money on health and safety training as on IT security training. While mindfulness training in the workplace is essential to ensuring employee engagement, retention and productivity, it is even more critical to focus on cybersecurity awareness training.
ISACA's Global Cybersecurity Status Report revealed that 83 percent of organizations cited cyberattacks as being among the top three threats they face; however, only 38 percent said they were prepared to deal with them.
According to the National Cybersecurity Institute, “there has been a problematic disconnect and lack of both integration and collaboration between the C-suite and IT departments.” All employees play a key role in safeguarding an organization’s digital assets, and as such, should be well equipped with the awareness and knowledge to keep from being hacked or transmitting company-wide viruses.
Here are five steps to improve cybersecurity awareness within your organization, according to Jack Danahy, co-founder and CTO of Barkly.
User Adoption: Oftentimes, individual employees outside of the IT department don't understand the security measures in place to prevent cyberattacks. It is important that cybersecurity initiatives are explained in a way that is easily understandable and retainable so employees can better protect against cyber threats.
Computer-Based Training: Organizations can require employees to take a cybersecurity awareness training program once or twice a year, but is that truly effective? Instead, organizations should make it a top priority to implement security awareness training on a consistent basis to expose employees to real-world phishing and hacking scenarios, as well as emphasize security best practices.
Make It Personal: Sometimes it doesn’t quite have the same impact if it’s not personal. Tell your employees about real-world data breaches that have occurred to people they might know, the consequences and how it could have been prevented.
Outline the Consequences: Just like any other training initiative, security training takes time away from employees' day-to-day jobs. However, a lack of cybersecurity awareness can result in a loss of confidential data, ultimately leading to poor business performance and possibly organizational failure. More specifically, it can put an employee's own job at risk.
Be Clear: Security awareness training needs to be clear, concise and to the point. Leave out the technological jargon and avoid dense training materials.
In a world with increasing digital capabilities, organizations are faced with the challenge of safeguarding their intellectual property with a rapidly growing number of cyber threats. Implementing cybersecurity initiatives should not just be about increasing rules and regulation. Instead, high-impact organizations create an effective security awareness culture that empowers employees to make the best judgment based on their learned knowledge of cyber threats.
The most effective security awareness training program is relevant, applicable, retainable, consistent and up to date. Although no amount of training can fully guarantee data security and protection against cyber threats, training employees to be conscientious of their actions can be the most effective form of security control.
In May 2005, a lab in California performed a genetic screen on blood drawn from a newborn girl. The screen uncovered a metabolic disorder so rare that only 32 other cases had ever been documented.
Had the baby been born one week earlier, the lab wouldn't have screened for that particular condition, and she probably would have died.
As it was, she received the appropriate medical care and lived.
The infant was lucky to be born just as the California Department of Health Services (CDHS) started piloting the Screening Information System (SIS), a computer system developed to replace an obsolete information platform and support the state's newly expanded genetic screening program.
California started using the SIS statewide in July 2005.
Huey, Dewey and Louie
Under California law, all newborns must be screened for genetic diseases, and every pregnant woman must have the opportunity to choose or decline prenatal screening.
Blood samples are processed in one of eight state-contracted labs where computer-supported equipment performs several tests. The labs then transmit the results to a central state lab, where professionals assess the results -- examining demographic data along with information from the tests -- to determine if the baby suffers from a genetic disease.
If that's the case, the CDHS alerts the child's doctor and parents, and the department follows up until the case is resolved or the baby starts receiving treatment.
Since the early 1980s, the information system that managed this process had been a set of three computers -- officially mid-tier machines, but they were so bulky they filled an entire room.
"We called them Huey, Dewey and Louie," said Catherine Camacho, deputy director of Primary Care and Family Health at the CDHS.
The problem was that the older the information system grew, the less effective it was in supporting the state's genetic screening program.
"It was obsolete technology," said Christy Quinlan, deputy director of the Information Technology Services Division and CIO of the CDHS. "The fear was we couldn't patch it. We couldn't upgrade it."
The hardware and the software were no longer supported by a vendor, and if the system suffered a serious breakdown, there might be no way to get it running again.
"Every time they had a problem with it, it was no joke -- they had to go to old computer graveyards," Camacho said. "We ran a fabulous system that everybody knew was very comprehensive and highly respected, but we were duct-taping and rubber-banding it together."
Not only were Huey, Dewey and Louie limping into advanced age, they also performed too slowly, couldn't easily produce the management reports the CDHS required, and couldn't be upgraded to contemporary security standards.
"When you're taking input from external sources," Quinlan said, "you want to ensure that you have the latest security installed."
In 2000, officials at the CDHS launched a project to retire the old machines. The original plan was simply to bring in a new system with modern capabilities.
"It would be more nimble. It would be faster. It would be able to sort."
The project encountered many delays, ranging from political opposition to the Y2K conversion, Camacho said. The holdups seemed like bad news at the time, but they proved to be a stroke of good fortune.
The CDHS was still in the middle of planning a new information system to replace the old one when, in 2004, the California Legislature passed a law that turned the implementation program upside down. The CDHS would have to incorporate a new technology, called tandem mass spectrometry, into its genetic screening regimen. The department would also have to start screening newborns for many more genetic conditions.
"Going to the tandem mass spectrometry was a radical change in the design of the system," Quinlan said. "It needed a completely different technology."
So the CDHS scrapped its program-in-progress and started planning all over again.
The new law came at a perfect time, Camacho said. "Had it come much later, we would have had to backtrack. We were at a point where, ideally, it was a great time to stop and incorporate that piece into it."
The CDHS could have conducted two separate technology projects -- one to adopt a modern computer platform, and one to incorporate tandem mass spectrometry and more genetic tests.
"We decided to go for broke," Quinlan said, and department officials determined it would be much less costly to wrap both upgrades into a single initiative.
It would also be a great deal of work -- especially with the Legislature's Aug. 1, 2005 deadline less than a year away.
"We had frank discussions about, once we start, it's a point of no return," Camacho said. "We told staff, 'This will mean people can't take vacations. Around the holidays will be some of our busiest times.' We were going to have to run the marathon at a sprint."
After a couple of small pilots -- including the one that saved the baby's life -- the department started a statewide pilot implementation in June, running both the new and old systems. Then it started shutting down Huey, Dewey and Louie and relying entirely on the SIS.
"We flipped the switch in mid-July, a couple of weeks early," Quinlan said.
Developed in conjunction with Deloitte Consulting, the SIS is a Web-based system, built on Microsoft .NET technology and running on the CDHS's extranet. It receives data from the labs in batch files and uses Business Objects software to produce reports.
The new system supports tandem mass spectrometry and lets the state screen newborns for 75 genetic conditions -- up from 39 in the days before the SIS.
"It allows us to evaluate results using newborn birth weight, which was not possible with the legacy system," said John Sherwin, acting chief of the CDHS's Genetic Disease Branch.
In addition, the SIS has streamlined and improved many processes that are part of the state's genetic screening program.
Unlike the previous system, it supports the entry of demographic data using intelligent character recognition/optical character recognition, Sherwin said. "There are a number of management reports that are much more distributed and more easily available directly to authorized users. It has shortened the time for our staff to identify that patients have gotten into appropriate follow-up care."
In all, the system supports more than 150 reports as well as ad hoc reporting, according to a description published by the CDHS.
The SIS can also match the results of prenatal and newborn genetic screens -- a function that was previously unavailable with the old system.
"A portion of our quality assurance program is the ability to identify if the mother of an affected infant had prenatal screening and what was the outcome," Sherwin said. The SIS also tracks data that enhances the value of genetic counseling in later pregnancies, such as whether a woman previously gave birth to a child with a genetic disorder.
The SIS will soon help the CDHS manage a new genetic screening challenge. In September 2006, Gov. Arnold Schwarzenegger signed a bill that expands the state's genetic screening program to include two more tests, for cystic fibrosis and biotinidase deficiency.
"We're programming for more screening," Quinlan said.
From the start, the CDHS designed the SIS so it could easily add screens for new conditions, Camacho said. "We didn't want to have to go in and rebuild the system." | <urn:uuid:8cda5c76-b2c1-4db8-bcbb-04bfab59c08a> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/California-Screening.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00163-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972458 | 1,557 | 2.890625 | 3 |
Improve your resilience to the cyber threats facing the energy sector with MWR
A widely cited 2012 report from the World Energy Council acknowledged that the global energy sector is facing a 'trilemma', trying to reconcile three problems: ensuring future energy security while simultaneously reducing carbon footprint and keeping consumer bill rises to an acceptable minimum. All are 'must solves' that unfortunately also conflict with each other.
These are fundamental issues of high profile political importance, affecting the prosperity and future livelihoods of all. So it’s easy to understand why cyber security often sits a little further down the roll call of energy company priorities.
Effective cyber security of energy supplies also affects the other two aspects of the trilemma. Without it, energy providers will be less able to deploy technologies such as smart metering, integration with the Internet of Things (IoT) or ultra-responsive grid switching and management systems. All these are key aspects of reducing emissions and ensuring maximum efficiency for customers.
The trilemma is also set against a background of profound technological change for the industry, with ever more Industrial Control Systems (ICS) and services being networked and connected to the internet. These changes are reflected within the consumer sector too, with the IoT revolution heralding an era of new efficiencies and monitoring benefits for domestic energy products, yet also bringing a number of new challenges along with it.
The energy and utilities sector forms a key part of critical national infrastructure, which makes it a high value target for state or non-state actors seeking to gain military or political advantage or cause chaos and disruption.
Being able to remotely disrupt a national electricity grid would have devastating effects. Therefore, defending the grid from cyber attack is a core part of ensuring energy security.
While these potential attackers might be seeking to control or disrupt energy supply and distribution, they may also have other motives, such as to cause embarrassment or compromise customer data or transactions. Thus energy companies face a diverse range of threats that differ both in goals and execution from the traditional threat map.
Energy companies and grid organizations need to be aware of the various cyber threats that face them, and accept that their strategic role in society places them in the firing line of some particularly skilled and motivated attackers, including state actors.
Due to the speed of these changes, traditional cyber security measures have been found wanting, as evidenced by the growing level of cyber breaches reported in this sector. Forward-thinking organizations must build on the effective parts of their cyber security programmes with practical solutions in order to stay one step ahead.
In our times, cyber security in the energy sector is not just a strategic issue but also an existential one. Energy companies have become prime targets for attackers, including state and non-state actors. Information security will also define the energy sector's ability to meet future challenges such as carbon reduction.
MWR provides an ideal security partner for any energy company, oil & gas firm or IoT provider confronted by the specific challenges peculiar to the energy sector.
Our research-driven approach provides a deeper understanding of attacker methodologies. We have a track record in enabling established businesses to adopt a security culture by delivering security programmes that improve business competitiveness. Effective cyber security strategies within the sector need to be fully aligned to your business risk appetite and threat profile. We will review these and ensure that your approach contains some of the following key components:
We offer a range of solutions through our dedicated OT Security practice, including Security Assessments for ICS environments; both at the design stage and also for established systems.
For energy product manufacturers, MWR can also assess embedded devices for applications such as home automation or IoT, working with designers and developers to make sure that the product is designed at source to protect its critical assets.
In power grids, utility networks and industrial facilities, safety always trumps security. And often where effective security is required, the pressure to maintain uptime means that new features cannot be added to systems deemed too fragile to modify. Understanding this issue, MWR has developed Vision, a tool that can passively scan your ICS systems for security issues, with far less risk than before.
And for more traditional security environments within the energy sector, MWR’s Cyber Defense solutions can also secure an energy organization’s most valuable assets: its customer base and intellectual property. It’s for this reason that we use a threat-based approach to help you build a realistic view of your security posture, adopting programmes that are highly effective in practice.
Experience has taught us that if your business can resist targeted cyber-attacks from advanced nation states, it can resist cyber-attacks from almost all threat actors. The energy sector has seen more than its fair share of targeted attacks that have been attributed to nation state threat actors.
With solutions such as Targeted Attack Simulations and Countercept, delivered by consultants that truly understand the mind of an attacker, your organization can be safe in the knowledge they are using the most advanced defences to resist the most advanced attackers.
These are just a number of solutions offered by MWR to help firms in the energy sector overcome the security challenges they are facing. | <urn:uuid:990d438b-49a7-45a4-80a9-22819e34dee9> | CC-MAIN-2017-04 | https://www.mwrinfosecurity.com/work/industries/energy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00191-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952164 | 1,048 | 2.546875 | 3 |
They say that curiosity is the cure for boredom, and if you’ve ever tried to find specific and detailed information using one of the popular search engines, you’ve probably spent your share of hours in front of the computer screen frustrated. It seems as though you should be able to find relevant websites and databases, but your search results return many different websites with the exact same information. What many people don’t know is that beyond their online presence on Facebook, YouTube and Google there is a vast wealth of knowledge waiting to be discovered. This information is commonly referred to as big data, and is largely inaccessible to conventional web users.
What is big data?
The term big data refers to the part of the internet that is not indexed by regular search engines. Search engines like Google or Yahoo! rely on web spiders: automated programs that crawl through trails of hyperlinks and build the index of pages your searches run against. This works well for searching the surface of the web, but the internet goes much deeper, and there are many places that web spiders cannot enter.
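To see why unlinked material stays invisible, consider the toy Python sketch below. It walks a small, made-up link graph the way a spider would, starting from a seed page and following hyperlinks; any page that nothing in that connected web links to is simply never reached, and so never indexed.

```python
from collections import deque

# A made-up link graph: each page maps to the pages it links to.
links = {
    "homepage": ["news", "products"],
    "news": ["article-1", "article-2"],
    "products": ["catalog"],
    "article-1": [], "article-2": [], "catalog": [],
    "library-database": ["record-42"],   # nothing above links here
    "record-42": [],
}

def crawl(seed):
    """Breadth-first walk of the link graph, the way a web spider builds its index."""
    indexed, queue = set(), deque([seed])
    while queue:
        page = queue.popleft()
        if page in indexed:
            continue
        indexed.add(page)
        queue.extend(links.get(page, []))
    return indexed

print(sorted(crawl("homepage")))
# ['article-1', 'article-2', 'catalog', 'homepage', 'news', 'products']
# 'library-database' and 'record-42' are never reached, so they are never indexed.
```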
Tip of the iceberg
What web crawlers are able to access is really only the tip of the iceberg, and no one knows for sure how deep the iceberg goes. Most of big data is raw data that does not include the hyperlinks that web spiders rely on to index sites. Library databases, websites that are password protected or have time-limited access, private networks of organizations, and websites that are new are all excluded from search engine results. These pages are not necessarily inaccessible, but a typical web user is not aware of their existence, or where to find them, so they do not have access to the sites.
Big data is a ‘big deal’
Undoubtedly, there are some people out there who are happy with what they can find on the popular search engines. But for those of you who are looking for complex or obscure information, big data could prove to be an invaluable resource. With the number of hidden databases connected to the web estimated to be in the millions, big data consists of research statistics, records, documents and files that could be invaluable to businesses and web users alike. Fortunately, there is a way for you to be able to gain access to the wealth of information contained in big data. Through the use of specialized web analytic tools, you can gain entry to the hidden corners of the internet and access a wealth of information that most people don’t even know exists.
Photo courtesy of AlphachimpStudio | <urn:uuid:2f7d5c4b-f948-4b38-a95e-f64071f73fb7> | CC-MAIN-2017-04 | https://brightplanet.com/2012/06/what-is-big-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00219-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938977 | 516 | 3.21875 | 3 |
Gain an understanding of the core components that make up the information technology (IT) landscape while preparing for the CompTIA IT Fundamentals exam.
In this introductory course, you will learn the basics of computer hardware, software, mobile computing, networking, troubleshooting, and emerging technologies. You will learn about configuring operating systems, file and folder management, networks and network configuration, and the role of the OSI model in networking and troubleshooting. This course will also prepare you for the CompTIA IT Fundamentals certificate exam.
Through presentations, demonstrations, and knowledge-based exercises, you will gain a fundamental understanding of computer hardware, operating systems, computer application software, networking technologies and protocols, web browsers, identifying security risks, troubleshooting errors, and system maintenance. You will also learn about cutting-edge technologies such as cloud computing and virtualization.
Note: You will receive an exam voucher at the end of class so that you may take the exam at a later date. | <urn:uuid:ac5a2bb4-75f4-406e-800a-e000de508a5f> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/121121/comptia-it-fundamentals/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00127-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.884696 | 203 | 3.09375 | 3 |
Cambridge, United Kingdom, February 27, 2001 - Kaspersky Lab, an international data-security software-development company, announces the discovery of a new worm "Mandragore" spreading across the popular Gnutella file exchange network that uses the Peer-to-Peer (P2P) technology.
The opportunity for malicious code of this type to exist in a P2P network was discovered in early May 2000 by Seth MacGann, who posted the results of his research to the respected BugTraq electronic conference. Despite the passing of almost a year, not a single piece of malicious code exploiting this possibility had been discovered "in the wild." Yet, in only a few days since "Mandragore" was discovered, Kaspersky Lab has received information pertaining to nearly 20 computers being infected by this very worm.
"Mandragore" is an EXE file written in the Assembler programming language, and 8192 bytes in size. After the infected file is executed, the worm registers itself as an active node within the Gnutella network, and intercepts all requests for files searching. If a request is detected, the worm returns a positive result, and offers a user to download the requested file even if it is not presented in the infected system. In order to disguise its malicious intentions, "Mandragore" renames its copy according to the intercepted request. For example, if a user is asked for a file containing such words as, "How to become a millionaire," then the infected system will offer to download the file, "How to become a millionaire.exe" It is important to emphasize that the worm cannot penetrate into computers that have no Gnutella-compatible software installed, such as Gnotella, BearShare, LimeWire or ToadNode.
When infecting, "Mandragore" copies itself to the Windows-startup folder under the name of "GSPOT.EXE", and applies the "system" and "hidden" attributes to this file. As a result, each time a computer boots up, the worm automatically takes control and remains in the system memory as an active process.
"This particular worm has no payload except for a minor increase of outgoing traffic and additional consumption of system resources. Mandragore's main danger is not destroying important files or unveiling confidential information, but severe damage to the reputation of a private user and companies that weren't able to repel the worm's attack. I doubt that an infection with even harmless malicious code could stimulate business growth, and attract new customers," said Denis Zenkin, Head of Corporate Communications for Kaspersky Lab.
Infection prevention and removal
To prevent the worm from penetrating your computer, you should under no circumstances open EXE files of 8192 bytes in length, particularly those offered for download via the Gnutella network. We also recommend you use the anti-virus monitor included in the Kaspersky™ Anti-Virus standard package. Checking all the files being accessed in real time will effectively block the worm's attack and prevent infection even if you have accidentally launched the infected file (please do not).
In case "Mandragore" has managed to get into your computer, we advise you delete the GSPOT.EXE file from the Windows startup folder and reboot the system.
Protection against "Mandragore" has already been added to the daily update of the Kaspersky Anti-Virus virus-signatures database. More details about the worm are available at Kaspersky's Virus Encylopedia.
Kaspersky Anti-Virus can be purchased in the Kaspersky Lab online store or from a worldwide network of Kaspersky Anti-Virus distributors and resellers.
Ben Houston, "A P2P Virus: The "GnutellaMandragore" Virus" | <urn:uuid:f878481a-a2ea-4b11-9166-5b5a876a71e8> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2001/Gnutella_Users_Warning_Beware_of_the_Mandragore_Worm_ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929273 | 795 | 2.703125 | 3 |
Today marks the 35th anniversary of the launch of the Voyager 2 spacecraft, and NASA's Jet Propulsion Laboratory has a cool video update.
Still going strong after 35 years, the video discusses the mission control area for the spacecraft and the latest goings-on, including the discussion of when the ship will break out of the sun's heliosphere.
The spacecraft's twin, Voyager 1, was launched 16 days later and is also heading out of the solar system. In case you're wondering, here's why Voyager 2 was launched before Voyager 1.
Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+.
Watch some more cool videos:
- James Bond meets My Little Pony: Mashup gold
- This 13-foot Japanese robot is packing heat
- The Legend of Zelda as a Western
- Friday Funnies: Batman rants against the Dark Knight haters
- Did this 1993 film predict Google Glasses and iPads?
NASA opens photo-sharing site
Visitors can leave comments on 180 historic NASA photos
- By Alice Lipowicz
- Aug 31, 2010
NASA has a new partnership with Flickr Commons that expands public access to the space agency's historic photos and allows online comments.
The agency has launched the NASA on The Commons photo archive on the Flickr Commons photo-sharing Web site. Visitors can leave comments, tags and keywords on the photos, as well as offer identifying or descriptive information or general remarks.
To date, NASA on The Commons contains 102 images of NASA rocket launches, 44 images of NASA buildings and construction, and 32 images of NASA officials and dignitaries. The images had recorded a total of more than 20,000 views as of today.
Many of the published images have attracted comments. “Cool!” wrote one visitor. “What an incredible moment in the history of space exploration,” wrote another guest, referring to a photograph of President John F. Kennedy visiting a NASA space center. “That is Dr. Evil sitting behind JFK,” joked another visitor.
The capability to interact with photos is the result of a partnership between NASA, Flickr from Yahoo! and the Internet Archive, a non-profit digital library based in San Francisco, according to a NASA news release dated Aug. 30.
Flickr Commons was established in cooperation with the Library of Congress to expand access to publicly held photography collections. The library maintains thousands of its own photographs on Flickr.
The New Media Innovation Team at NASA's Ames Research Center in Moffett Field, Calif., worked with photo and history experts to compile the images for NASA on The Commons. Additional images will be added over time.
In a related project, NASA created NASAimages.org to provide thousands of photographic images and videos to the public. The Internet Archive was chosen through a competitive process to organize that collection.
"NASA's long-standing partnership with Internet Archive and this new one with Yahoo!'s Flickr provides an opportunity for the public to participate in the process of discovery," Debbie Rivera, project lead for the NASA Images project at the agency's headquarters in Washington, said in the news release. "In addition, the public can help the agency capture historical knowledge about missions and programs through this new resource and make it available for future generations."
Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week. | <urn:uuid:21b61d7f-4776-4a62-857c-79ff461e9c6c> | CC-MAIN-2017-04 | https://fcw.com/articles/2010/08/31/nasa-launches-new-photo-sharing-site-on-flickr-commons.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00577-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.908861 | 511 | 2.640625 | 3 |
What is a cost estimation DSS?
by Dan Power
Predicting costs for a project or task is often difficult. One approach is to use historical cost data. Another approach is to use data and quantitative models to adjust for changes in costs or cost uncertainty. Cost estimation models are most often algebraic models where an analyst or cost estimator can adjust parameters. Not many years ago, cost estimation was a vague, heuristic task, but data and model-driven DSS are now commonly used to improve the accuracy of the cost estimate or prediction. Specialized cost estimation DSS are used for many tasks including construction cost estimating, custom software development bids and custom manufacturing bids.
Many managers want examples of DSS and students are often interested in "building" a real DSS. Finding small-scale, yet interesting, DSS projects for students can however be difficult. Building and discussing cost estimation DSS is one possibility. A cost estimation DSS is a software application that helps a person estimate cost elements and finalize a bid for a prospective customer. "Cost estimation" refers to the purpose of the Decision Support System and does not constrain how the system is implemented. The generic task is subtle and semi-structured and it can be approached in many ways. A cost estimation DSS may be a model-driven or a data-driven DSS. Data-driven DSS help "add up" cost elements from a database and usually provide limited analytics. Cost estimation DSS are frequently model-driven and spreadsheet-based, but other types of DSS are developed and marketed for assisting in this task (see TechComm Associates, 2003). Successfully estimating costs is important to the survival and profitability of many firms in many different industries. A common problem is underestimating costs and losing money on a project. A hidden problem is overestimating costs and losing a bid when the project could have been completed profitably.
According to the U.S. Department of Labor (BLS, 2010-11), "cost estimators develop the cost information that business owners or managers need to make a bid for a contract or to determine if a proposed new product will be profitable". In some businesses, cost estimates are prepared on the back of an envelope or on a simple "bid" sheet. As the complexity of the estimating task increases computerized decision support becomes increasingly important. "Cost estimators held about 217,800 jobs in 2008. About 59 percent of estimators were in the construction industry and another 15 percent were employed in manufacturing. The remainder worked in a wide range of other industries." Currently, most estimators DO NOT use computerized decision support.
So what is involved in preparing a cost estimate? What is the decision process? A general description suggests the importance of the task. The BLS handbook notes "The methods of and motivations for estimating costs can vary greatly, depending on the industry. On a construction project, for example, the estimating process begins with the decision to submit a bid. After reviewing various preliminary drawings and specifications, the estimator visits the site of the proposed project. The estimator needs to gather information on access to the site and availability of electricity, water, and other services, as well as on surface topography and drainage ... After the site visit is completed, the estimator determines the quantity of materials and labor the firm will need to furnish. This process, called the quantity survey or "takeoff," involves completing standard estimating forms, filling in dimensions, number of units, and other information. A cost estimator working for a general contractor, for example, will estimate the costs of all items the contractor must provide. Although subcontractors will estimate their costs as part of their own bidding process, the general contractor's cost estimator often analyzes bids made by subcontractors as well. Also during the takeoff process, the estimator must make decisions concerning equipment needs, sequence of operations, and crew size. Allowances for the waste of materials, inclement weather, shipping delays, and other factors that may increase costs also must be incorporated in the estimate. On completion of the quantity surveys, the estimator prepares a total project-cost summary, including the costs of labor, equipment, materials, subcontracts, overhead, taxes, insurance, markup, and any other costs that may affect the project. The chief estimator then prepares the bid proposal for submission to the owner."
The BLS report notes "In manufacturing and other firms, cost estimators usually are assigned to the engineering, cost, or pricing departments. The estimators' goal in manufacturing is to accurately estimate the costs associated with making products."
For many years, students in my DSS course worked in teams to analyze, design and then build a spreadsheet-based DSS for cost estimation. This project was small-scale and varied in purpose, yet students could use a readily available spreadsheet package, like Excel, to build a "real" DSS. The project provides many opportunities for student creativity and initiative; teams work on an important, non-trivial task; and students can apply Excel skills they have learned on a small-scale "real" project. Also, students go through the steps in analysis and development and they create and submit deliverables. I encourage teams to follow a decision-oriented design approach and begin by studying a specific cost estimating process in a specific business.
Teams pick an estimating situation and then research, plan, and develop a specific DSS for that situation. The team develops a model-driven DSS for estimating the cost of an event/project and preparing a competitive bid to submit to the person requesting a proposal. The specific DSS supports a person working as a cost estimator or bid specialist or a similar job title. The specific model-driven DSS that is developed should help an estimator input data, apply a detailed quantitative estimating model, conduct sensitivity and "what if" analyses, and prepare a formal bid proposal. Project teams submit 4 deliverables during the semester. Deliverable 1 is a project analysis, specification and research summary report; Deliverable 2 is a model specification and project plan; Deliverable 3 is the completed Spreadsheet-based DSS; and Deliverable 4 is the documentation.
An algebraic model provides the decision support functionality, but the model-driven DSS application needs to facilitate elicitation of values and estimates and then help the estimator complete "what if?" and sensitivity analysis. Some teams break the estimating task into phases or separable divisions. Some teams try to identify standard cost data to compare to model estimates. Occasionally a team will propose calculating a bid from an established, fixed "price sheet". This approach neglects all of the cost estimation issues and provides no information to the decision maker about the profitability of a job or project. Creating a fixed "price sheet" application is NOT a decision support system even though a spreadsheet might be used to help with calculations. In general, teams should receive negative feedback about this simplistic type of application. Understanding all the costs in an estimating situation is usually a major challenge and teams need to face this challenge to build a successful DSS.
Occasionally development teams try to help an estimator answer the question "Should we bid?" in addition to "How much should we bid?" Rarely do teams grapple with the complexity of bidding in the context of a portfolio of bids. In general, a model-driven DSS focuses on a "fixed" price or a "not to exceed" bid situation. Teams need to determine how much detail should be in the cost estimate and how overhead should be allocated. A major issue facing estimators is assessing profitability and keeping the bid amount competitive. Also developers need to determine if it is more appropriate to provide for a profit markup or a markdown. Should profit be an across-the-board percentage or should the DSS provide for selective adjustments to cost elements? Markup pricing usually covers overhead and profit contribution so the issue becomes how much markup? In some situations, labor time estimates are especially difficult to forecast. Perhaps both labor productivity and labor costs need to be considered in an estimate. Also, some teams neglect "what if?" analysis and sensitivity analysis. In a model-driven DSS this capability is important. Also, developers need to determine if common size percentages of cost categories will help the estimator. Is it helpful to show the estimator a bar chart of amounts for major cost elements? Each cost estimating process has its own demands, nuances and idiosyncrasies. The development team needs to make design decisions that accommodate the specific estimator and estimating situation.
Teams are encouraged to look for projects in three industry situations: construction cost estimating, convention and meeting cost estimating, and software development cost estimating. It is important that the project involve sufficient complexity to justify building and using a spreadsheet-based DSS in the estimating situation. Some representative cost estimation DSS project titles from the past few years include: 1) "Cost estimation for a major event on a college campus", 2) "Light industrial construction cost estimating", 3) "Meetings and banquets cost estimating for a hotel", 4) "New home construction estimating", 5) "Prepare attestation bids for a medium-sized accounting firm", 6) "Provide cost estimates for weddings", 7) "Custom software project cost estimating", and 8) "Web site development cost estimating"
Typically the elements of a cost estimate include: 1) Quantity Takeoff: quantities of various materials needed, 2) Labor Hours: for crews or on a unit man-hour basis, 3) Labor Rates and payroll burden: cost per hour, 4) Material Prices, 5) Equipment Costs, 6) Subcontractor Quotes, 7) Allocation of Indirect Costs, 8) Profit Amount or Percentage (cf. Manfredonia, Majewski and Perryman, 2010).
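To make the structure of such a model concrete, here is a minimal, spreadsheet-style sketch in Python. All of the cost figures, rates and percentages are invented placeholders rather than recommendations; the point is how the elements listed above roll up into a bid, and how a simple "what if?" run on one assumption (labor hours, in this case) supports the sensitivity analysis a model-driven DSS should provide.

```python
def estimate_bid(material_cost, labor_hours, labor_rate, equipment_cost,
                 subcontractor_quotes, overhead_pct, profit_pct):
    """Roll the basic cost elements up into a total bid (all inputs are placeholders)."""
    direct = (material_cost
              + labor_hours * labor_rate
              + equipment_cost
              + sum(subcontractor_quotes))
    overhead = direct * overhead_pct            # indirect costs allocated as a % of direct cost
    profit = (direct + overhead) * profit_pct   # markup applied on top of total cost
    return direct + overhead + profit

base = dict(material_cost=42000, labor_hours=800, labor_rate=35.0,
            equipment_cost=6500, subcontractor_quotes=[9000, 4200],
            overhead_pct=0.12, profit_pct=0.08)

print(f"Base bid: ${estimate_bid(**base):,.0f}")

# Simple "what if?": how sensitive is the bid to the labor-hours estimate?
for hours in (700, 800, 900, 1000):
    bid = estimate_bid(**{**base, "labor_hours": hours})
    print(f"Labor hours = {hours:4d} -> bid = ${bid:,.0f}")
```

In a spreadsheet-based DSS the same model would live in cells and named ranges, with a data-entry area for the estimator and a separate what-if table for the sensitivity runs.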
The Occupational Outlook Handbook (BLS, 2010-11) reports that "Computers play a vital role in cost estimation because the process often involves complex mathematical calculations and requires advanced mathematical techniques. For example, to undertake a parametric analysis (a process used to estimate costs per unit based on square footage or other specific requirements of a project), cost estimators use a computer database containing information on the costs and conditions of many other similar projects. Although computers cannot be used for the entire estimating process, they can relieve estimators of much of the drudgery associated with routine, repetitive, and time-consuming calculations. New and improved cost estimating software has lead to more efficient computations, leaving estimators more time to visit and analyze projects."
Cost estimation DSS can help cost estimators prepare bids faster and more accurately. A sophisticated DSS can help insure that when a company wins a bid that it will be able to profitably complete the event/project.
Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, 2010-11 Edition, Cost Estimators, on the Internet at http://www.bls.gov/oco/ocos006.htm, accessed March 11, 2011.
Department of Energy, "Practical Cost Estimating and Validation: Lessons Learned Workshop", www.em.doe.gov/aceteam/workpdfs.html.
Manfredonia, Bill, Joseph P. Majewski, Joseph J. Perryman, "Cost Estimating," http://www.wbdg.org/design/dd_costest.php, Last updated: 05-28-2010, accessed March 11, 2011.
Roetzheim, W., "Estimating Software Costs", Software Development Magazine, October 2000, www.sdmagazine.com/documents/s=821/sdm0010d/ .
TechComm Associates Staff, "Estimating software yields higher profits at Liberty Brass", Micro Estimating Systems, 2001, posted at DSSResources.COM
The above is modified from Power, D., What is a cost estimation DSS? DSS News, Vol. 5, No. 8, April 11, 2004, last updated March 11, 2011
Last update: 2011-03-14 01:16
Author: Daniel Power
Frame Relay -- Summary
Network Consultants Handbook - Frame Relay
by Matthew Castelli
Frame Relay is a Layer 2 (data link) wide-area network (WAN) protocol that works at both Layer 1 (physical) and Layer 2 (data link) of the OSI model. Although Frame Relay services were initially designed to operate over ISDN service, the more common deployment today involves dedicated access to WAN resources.
Frame Relay networks are typically deployed as a cost-effective replacement for point-to-point private line, or leased line, services. Whereas point-to-point customers incur a monthly fee for local access and long-haul connections, Frame Relay customers incur the same monthly fee for local access, but only a fraction of the long-haul connection fee associated with point-to-point private line services.
Frame Relay was standardized by two standards bodies -- internationally by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and domestically by ANSI (American National Standards Institute).
Frame Relay is a packet-switched technology, meaning that each network end user, or end node, will share backbone network resources, such as bandwidth. Connectivity between these end nodes is accomplished with the use of Frame Relay virtual circuits (VCs).
Frame Relay WAN service primarily comprises four functional components:
- Customer premise Frame Relay access device (FRAD).
- Local access loop to the service provider network.
- Frame Relay switch access port. Link Management Interface parameters are defined here.
- Frame Relay VC parameters to each end site.
DLCIs are of local significance, unless an agreement has been made with the network service provider to deploy global DLCIs. Local significance means that DLCIs are of use only to the local Frame Relay network device. Frame Relay DLCIs are analogous to an organization's telephone network utilizing speed-dial functions.
Two types of Frame Relay VCs exist:
- Permanent virtual circuits (PVCs) -- These are permanently established, requiring no call setup, and utilize DLCIs for endpoint addressing.
- Switched virtual circuits (SVCs) -- These are established as needed, requiring call setup procedures and utilizing X.121 or E.164 addresses for endpoint addressing.
Two types of congestion-notification mechanisms are implemented with Frame Relay:
- Forward explicit congestion notification (FECN) -- The FECN bit is set by a Frame Relay network to inform the Frame Relay networking device receiving the frame that congestion was experienced in the path from origination to destination. Frame relay network devices that receive frames with the FECN bit will act as directed by the upper-layer protocols in operation. The upper-layer protocols will initiate flow-control operations, depending on which upper-layer protocols are implemented. This flow-control action is typically the throttling back of data transmission, although some implementations can be designated to ignore the FECN bit and take no action.
- Backward explicit congestion notification (BECN) -- Much like the FECN bit, the BECN bit is set by a Frame Relay network to inform the DTE that is receiving the frame that congestion was experienced in the path traveling in the opposite direction of frames. The upper-layer protocols will initiate flow-control operations, depending on which upper-layer protocols are implemented. This flow-control action is typically the throttling back of data transmission, although some implementations can be designated to ignore the BECN bit and take no action.
- Committed information rate (CIR) -- This is the amount of bandwidth that will be delivered as best-effort across the Frame Relay backbone network.
- Discard eligibility (DE) -- This is a bit in the frame header that indicates whether that frame can be discarded if congestion is encountered during transmission.
- Virtual circuit identifier
- Data-link connection identifiers (DLCIs) for PVCs -- Although DLCI values can be 10, 16, or 23 bits in length, 10-bit DLCIs have become the de facto standard for Frame Relay WAN implementations.
- X.121/E.164 addressing for SVCs -- X.121 is a hierarchical addressing scheme that was originally designed to number X.25 DTEs. E.164 is a hierarchical global telecommunications numbering plan, similar to the North American Number Plan (NANP, 1-NPA-Nxx-xxxx).
Table 15-17: Summary of Network Topology Formulae
Note: N is the number of locations
|Fully meshed|[(N (N - 1)) / 2]|
|Partial-mesh (approximation)|[N² / (N - 1)]|
|Partial-mesh (guideline)|[((N (N - 1)) / 2) X (N - 1)]|
|Hub-and-Spoke|[N - 1]|
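As a quick sanity check on the fully meshed and hub-and-spoke formulas in Table 15-17, the short Python sketch below evaluates both as the number of locations grows. (The partial-mesh figures are approximations by definition and are omitted here.)

```python
def full_mesh_vcs(n):
    """Every site connects to every other site: N(N - 1) / 2 virtual circuits."""
    return n * (n - 1) // 2

def hub_and_spoke_vcs(n):
    """Every remote site connects only to the hub: N - 1 virtual circuits."""
    return n - 1

for sites in (5, 10, 20, 50):
    print(f"{sites:3d} sites: full mesh = {full_mesh_vcs(sites):5d} VCs, "
          f"hub-and-spoke = {hub_and_spoke_vcs(sites):3d} VCs")
```

The quadratic growth of the full mesh is the usual argument for hub-and-spoke or partial-mesh designs once the site count climbs.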
Local Management Interface (LMI) is a set of enhancements to the basic Frame Relay specification. LMI includes support for keepalive mechanisms, verifying the flow of data; multicast mechanisms, providing the network server with local and multicast DLCI information; global addressing, giving DLCIs global rather than local significance; and status mechanisms, providing ongoing status reports on the switch-known DLCIs.
Three types of LMI are found in Frame Relay network implementations:
- ANSI T1.617 (Annex D) -- The maximum number of connections (PVCs) supported is limited to 976. LMI type ANSI T1.617 (Annex D) uses DLCI 0 to carry local (link) management information.
- ITU-T Q.933 (Annex A) -- Like LMI type Annex-D, the maximum number of connections (PVCs) supported is limited to 976. LMI type ITU-T Q.933 (Annex A) also uses DLCI 0 to carry local (link) management information.
- LMI (Original) -- The maximum number of connections (PVCs) supported is limited to 992. LMI type LMI uses DLCI 1023 to carry local (link) management information.
Protocol suites and services commonly implemented over Frame Relay include the following:
- TCP/IP Suite
- Novell IPX Suite
- IBM SNA Suite
- Voice over Frame Relay (VoFr)
Novell IPX implementations over Frame Relay are similar to IP network implementation. Whereas a TCP/IP implementation would require the mapping of Layer 3 IP addresses to a DLCI, Novell IPX implementations require the mapping of the Layer 3 IPX addresses to a DLCI. Special consideration needs to be made with IPX over Frame Relay implementations regarding the impact of Novell RIP and SAP message traffic to a Frame Relay internetwork.
Migration of a legacy SNA network from a point-to-point infrastructure to a more economical and manageable Frame Relay infrastructure is attractive; however, some challenges exist when SNA traffic is sent across Frame Relay connections. IBM SNA was designed to operate across reliable communication links that support predictable response times. The challenge that arises with Frame Relay network implementations is that Frame Relay service tends to have unpredictable and variable response times, for which SNA was not designed to interoperate or able to manage within its traditional design.
Voice over Frame Relay (VoFr) has recently enjoyed the general acceptance of any efficient and cost-effective technology. In the traditional plain old telephone service (POTS) network, a conventional (with no compression) voice call is encoded, as defined by the ITU pulse code modulation (PCM) standard, and utilizes 64 kbps of bandwidth. Several compression methods have been developed and deployed that reduce the bandwidth required by a voice call to as little as 4 kbps, thereby allowing more voice calls to be carried over a single Frame Relay serial interface (or subinterface PVC).
That concludes our serialization of Chapter 15 from Cisco Press' Network Consultants Handbook. | <urn:uuid:b4880bc7-3870-4008-b5a1-6ba3c8d33a11> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/974881/Frame-Relay--Summary.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00513-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.872137 | 1,663 | 2.96875 | 3 |
Most cables are wrapped in what amounts to a protective tube around the wire to improve safety and extend the operating life of the cable. A cable stripper, also called a wire stripper, is a tool used to strip the outside coating of a cable to expose the active wire underneath for installation. Usually, this tool is used when two wires need to be connected or when a connector needs to be applied to the end of a coated wire.
Cable stripping tools are usually hand-held metal tools that look similar to pliers, except they feature a cutting end and stripping hole instead of a grip. Most cable strippers come with cable cutters already built into the tool. Other cable strippers may have specifically-sized holes that are designed to perfectly strip wires of varying sizes. Using a specially-sized cable stripper can make a proper cable stripping job easier to carry out. Professionals such as electricians or cable installation specialists often use size-specific cable strippers that are designed to strip a precise type of cable commonly used in their industries.
Stripping a cable with a cable stripper is typically easy to do, and mistakes in stripping can be corrected easily. Older, brittle, or weathered wires inside a cable can make a successful stripping job more difficult than usual. Unless the amount of available cable is limited, a botched cable strip job can be remedied simply by clipping the badly stripped edge off with a wire cutter and re-stripping the cable.
Cable strippers come in a variety of designs to support the diversity of their usefulness. Handheld designs are most convenient and inexpensive, but benchtop varieties are manufactured to assist in high-volume stripping applications. Articulation of the stripper ranges between manual, electrical, and pneumatic power sources.
A simple manual wire stripper is a pair of opposing blades much like scissors or wire cutters. The addition of a center notch makes it easier to cut the insulation without cutting the wire. This type of wire stripper is used by rotating it around the insulation while applying pressure in order to make a cut around the insulation. Since the insulation is not bonded to the wire, it then pulls easily off the end. This is the most versatile type of wire stripper.
Another type of manual wire stripper is very similar to the simple design previously mentioned, except this type has several notches of varying size. This allows the user to match the notch size to the wire size, thereby eliminating the need for twisting. Once the device is clamped on, the remainder of the wire can simply be pulled out, leaving the insulation behind.
The quality of a stripped wire is largely determined by the quality of the tooling selected. Properly-sized tools are the easiest way to provide a high-quality exposed conductor, and many wires are required to be labelled with gauge information. Another determination of wire quality includes the strip length. Devices like switches and receptacles will have a strip gauge, and automated machines will have a clear adjustment mechanism to ensure accurate wire and strip lengths.
Fiberstore provides a range of fiber tools; our complete line of fiber optic strippers features more than 40 products. To check which of our wire strippers will suit your particular wire sizes, view the full range below or contact our sales team. And if you can’t find it here, we can custom build one for you! | <urn:uuid:69e4f5a2-54cd-4f72-8630-2b6d60d0e4c8> | CC-MAIN-2017-04 | http://www.fs.com/blog/strip-cables-outside-coating-with-a-wire-stripper.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00329-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921499 | 688 | 2.65625 | 3 |
How many queens can you put on a regulation chess board without any of them being in a position to attack another? This question is the foundation of the N-Queens problem, where you ask that question of larger and larger square boards. At a certain size, it becomes a logic problem impossible for an unaided human mind to solve, making it a perfect test case for parallel computing, the hallmark of HPC today.
Computing rookies Ruan Pethiyagoda, Cameron Boehmer, John S. Dvorak, and Tim Sze, all of whom were trained at San Francisco’s Hack Reactor, an institute designed for intense, fast-paced programming instruction, put together a program based on the N-Queens algorithm designed by the University of Cambridge’s Martin Richards and modified it to run in parallel across multiple machines.
“We were able to scale it across every device in the building, including everyone’s laptop, iPhone, Android phone. Even my BlackBerry ran it, which surprised me,” Pethiyagoda said of their project, which they called Smidge.
They then got in touch with the Pivotal Initiative, a big data startup run by EMC and VMware, and managed to run the N-Queens algorithm on a 1000-node Hadoop cluster. Last week, they solved the 27-by-27 version of the problem, setting the world record.
While the N-Queens problem isn’t exactly one of the pressing scientific issues to be answered through cloud or cluster computing, it still represents an intriguing computational challenge, where the number of permutations increases exponentially each time the grid size grows by one.
Here, the chess board they solved was 27 squares wide and deep. That comes out to 729 squares total, each of which either holds a queen or does not. This means the total number of possible board configurations is 2 to the power of 729.
Of course, placing just one queen precludes at least 78 squares from being occupied (26 in its row, 26 in its column, and at least 26 more along its diagonals). That both significantly reduces the permutations and adds an intricate, complex layer to the total computation. It is from that point that the parallelism, which can be replicated in a cloud-based system, takes over.
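For readers who want to see the underlying algorithm, the sketch below is a minimal sequential backtracking counter written in C; it is illustrative only and is not the Smidge code or Richards' implementation. It places one queen per row and prunes any column or diagonal already under attack. Because each first-row placement starts an independent subtree, the same search parallelizes naturally, which is what makes the problem such a good fit for clusters.

#include <stdio.h>

/* Count N-Queens solutions by backtracking: place one queen per row and
   skip any column or diagonal already attacked by a queen above. */
static int n;          /* board size; keep small, the work grows explosively */
static int cols[32];   /* cols[c] is nonzero if column c is occupied */
static int diag1[64];  /* "/" diagonals, indexed by row + col */
static int diag2[64];  /* "\" diagonals, indexed by row - col + n - 1 */

static long place(int row)
{
    if (row == n)
        return 1;                       /* all rows filled: one solution */
    long count = 0;
    for (int col = 0; col < n; col++) {
        int d1 = row + col, d2 = row - col + n - 1;
        if (cols[col] || diag1[d1] || diag2[d2])
            continue;                   /* square is attacked, skip it */
        cols[col] = diag1[d1] = diag2[d2] = 1;
        count += place(row + 1);
        cols[col] = diag1[d1] = diag2[d2] = 0;   /* undo, try next column */
    }
    return count;
}

int main(void)
{
    n = 8;                              /* the classic 8x8 board has 92 solutions */
    printf("%d-queens solutions: %ld\n", n, place(0));
    return 0;
}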
Aside from producing a potentially useful method for the cloud computing paradigm, this news could have an arguably bigger impact on cloud HPC that reaches beyond the purely technical realm. Big data today is experiencing a problem in training and recruiting talent. That comes down to big data being a relatively new phenomenon, with the top minds and institutions in the industry not yet fully understanding the optimal way to survive and utilize the data deluge.
HPC is not new, but trying to set up and run HPC applications in a virtualized setting is, relatively speaking. That a group relatively new to the subject was able to port a parallel problem onto a virtualized system is impressive. | <urn:uuid:0ab63666-709b-41ef-a189-796d12e61d7c> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/06/19/hacking_into_the_n-queens_problem_with_virtualization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00173-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947486 | 614 | 2.703125 | 3 |
New OpenMP features
StevenPerron
OpenMP is an add-on to the C, C++, and FORTRAN programming languages that is meant to give the programmer an easy and portable way to parallelize their programs. This is done by adding directives (or pragmas), run-time routines, and environment variables that allow the programmer to control how a program is parallelized.
The XL Compilers support OpenMP and, in this latest release, have added support for two important parts of the OpenMP 3.0 specification and improved their implementation of another. That final change is the reimplementation of OpenMP thread-private variables: global variables of which each thread has its own copy. In previous releases, these were implemented using a library (the smp run-time), but in this release they are implemented using calls to the operating system. This decreases the time needed to access such variables and, as a result, increases the performance of programs that use them heavily. Nothing special needs to be done to use this new feature; it is the default for thread-private variables. However, you can revert to the old behavior by using -qsmp=noostls.
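For illustration, a minimal C example of a thread-private global follows. It shows the standard OpenMP threadprivate directive rather than anything specific to the XL compilers, and whether access goes through operating-system thread-local storage or a run-time table is a compiler implementation detail.

#include <omp.h>
#include <stdio.h>

int hits = 0;                        /* one private copy of this global per thread */
#pragma omp threadprivate(hits)

int main(void)
{
    #pragma omp parallel
    {
        /* Each thread updates only its own copy, so no locking is needed. */
        for (int i = 0; i < 1000; i++)
            hits++;
        printf("thread %d counted %d hits\n", omp_get_thread_num(), hits);
    }
    return 0;
}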
The first is the finalization of the task implementation for FORTRAN. The main advantage of the task construct is that it can be used to parallelize recursive algorithms and loops where the iteration count cannot be determined without running the loop. The standard example is to parallelize an algorithm that traverses a linked-list and does something to each node. In a future post, we will go over cases where tasks are useful, and how to use them.
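Without getting ahead of that future post, the following C sketch shows the linked-list pattern the task construct was designed for; the same idea applies in FORTRAN. One thread walks the list and creates a task per node, and the rest of the team executes those tasks concurrently, so the node count never needs to be known in advance.

#include <omp.h>
#include <stdio.h>

typedef struct node {
    int value;
    struct node *next;
} node;

static void process(node *p)
{
    p->value *= 2;                   /* stand-in for real per-node work */
}

static void walk(node *head)
{
    #pragma omp parallel
    #pragma omp single               /* a single thread traverses the list */
    {
        for (node *p = head; p != NULL; p = p->next) {
            #pragma omp task firstprivate(p)
            process(p);              /* each node becomes one task */
        }
        #pragma omp taskwait         /* wait for every generated task */
    }
}

int main(void)
{
    node nodes[4];
    for (int i = 0; i < 4; i++) {
        nodes[i].value = i + 1;
        nodes[i].next = (i < 3) ? &nodes[i + 1] : NULL;
    }
    walk(&nodes[0]);
    for (int i = 0; i < 4; i++)
        printf("%d ", nodes[i].value);
    printf("\n");
    return 0;
}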
The other new feature is the ability to privatize a FORTRAN allocatable array that is in the allocated state, and to use such arrays in reductions. In previous releases, the programmer had to take care of allocating the array for each thread and deallocating it, and had to jump through hoops to mimic the firstprivate and lastprivate clauses. This makes things easier for the programmer. However, the real advantage is the ability to use an allocatable array in a reduction clause: if you want an array reduction, you no longer have to use a fixed-size array; it can be whatever size you need it to be. | <urn:uuid:caa08368-2e70-4ed8-be98-0b86a65f4680> | CC-MAIN-2017-04 | https://www.ibm.com/developerworks/community/blogs/5894415f-be62-4bc0-81c5-3956e82276f3/entry/new_openmp_features10?lang=pt_br | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00173-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93609 | 487 | 2.78125 | 3 |
Alan Turing was famous for believing computers could act like humans. He devised what is now known as the Turing test, whereby a computer would supply responses to human questioning. If the questioner could not distinguish the answers from those of a real live person, the computer would have passed the test.
His premise was that since human thinking is logical and computers are based on logic, behavior should be computable. Nice idea. But it hasn’t quite panned out — at least not yet. Even with recently developed AI-type technology such as automated customer service assistants, most people are aware when they are talking to a computer.
A recent article in Wired summed up the problem with Turing’s thinking:
That simplistic idea proved ill-founded. Cognition is far more complicated than mid-20th century computer scientists or psychologists had imagined, and logic was woefully insufficient in describing our thoughts. Appearing human turned out to be an insurmountably difficult task, drawing on previously unappreciated human abilities to integrate disparate pieces of information in a fast-changing environment.
But as the Wired piece suggests, the technology might finally be catching up to Turing. Recent advances in AI-type machines, like Google’s search engine and IBM’s Watson, point to how that might come about.
Current AI relies on connection and probability algorithms. This technology drives language recognition found in both Google searches and in IBM Watson’s DeepQA technology. The systems understand, to a degree, what a human is requesting and produce search results or the most likely answers for a Jeopardy question.
However, those systems have to rely on their limited datasets, which confines their answers to particular domains. Neither Google nor Watson can supply ad lib responses like a human. But, as the Wired article points out, the ability to ingest large datasets and correlate the information seems to be the approach that could scale up.
Robert French, cognitive scientist at the French National Center for Scientific Research, theorized that a massive dataset could be the final key. This dataset would contain every memory, including olfactory, audio, visual and sensory data, from millions of people. Says French: “These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions.”
Recent advancements in big data, analytics, and language recognition are enabling the creation of much more intelligent machines than even just a few years ago. At the right scale these technologies may indeed lead to systems that could pass the Turing test. While such machines would only be able to imitate human behavior, they would impact nearly every industry of the modern era. | <urn:uuid:e7290b5a-5e21-43d6-955c-a910dd5d7caf> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/04/16/getting_close_to_turing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00294-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961244 | 540 | 3.546875 | 4 |
Cloud hosting represents a general shift of computer processing, storage, and software delivery away from the desktop and local servers, across the network, and into next-generation data centers run by cloud computing companies with large infrastructure. Just as the electric grid revolutionized business, cloud computing is revolutionizing information technology (IT). This revolution is helping corporations free themselves from large IT capital investments while enabling them to plug into the powerful computing resources that cloud hosting offers over the network. Cloud hosting lets businesses focus on their core business rather than worry about the associated IT tasks.
| <urn:uuid:45580740-2048-45de-b088-bd29a65f99db> | CC-MAIN-2017-04 | http://www.myrealdata.com/blog/457_cloud-hosting-helps-businesses-to-focus-better | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00110-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930094 | 144 | 2.609375 | 3 |
Do you feel compelled to wear a Richard Nixon mask or a baseball hat equipped with infrared signal emitters on the brim when you leave the house? If so, you may be trying to prevent a passerby on the street from guessing your name, interests, Social Security number, or credit score using only a pair of face-scanning glasses and an iPhone. This is not science fiction—law enforcement has been using facial recognition technology for years. Through advances in facial recognition software and the convergence of the vast amount of personal information on social networks (especially photographs), smartphones, the power of cloud computing, and statistical re-identification, the use of this technology has the potential to become widespread. The potential ubiquitous use of facial recognition technology raises critical concerns regarding privacy, security, and basic freedom.
Facial recognition technology traces its origin to government-funded research in the 1960s. The technology works by using an algorithm to create a unique numerical code from distinguishable landmarks on faces, sometimes called nodal points. The technology measures approximately 80 nodal points, such as the distance between eyes, nose width, eye socket depth, and jaw line length. The unique code or “biometric template” created by facial recognition software from a photograph can be stored in a database and later compared to other photographs to create a match.
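As a rough illustration of the matching step only, the toy C sketch below compares two hand-made "templates" by Euclidean distance against an arbitrary threshold. The measurements, template length, and threshold are invented for the example; production systems use far richer templates, normalization for pose and lighting, and thresholds tuned against false-match rates.

#include <math.h>
#include <stdio.h>

#define TEMPLATE_LEN 4   /* e.g. eye distance, nose width, socket depth, jaw length */

/* Euclidean distance between two biometric templates. */
static double template_distance(const double a[], const double b[])
{
    double sum = 0.0;
    for (int i = 0; i < TEMPLATE_LEN; i++)
        sum += (a[i] - b[i]) * (a[i] - b[i]);
    return sqrt(sum);
}

int main(void)
{
    double probe[TEMPLATE_LEN]   = {62.1, 38.4, 27.9, 121.5};  /* invented values */
    double gallery[TEMPLATE_LEN] = {61.8, 38.9, 28.3, 120.9};
    double threshold = 2.0;      /* arbitrary; trades matches against false matches */

    double d = template_distance(probe, gallery);
    printf("distance %.2f -> %s\n", d, d < threshold ? "possible match" : "no match");
    return 0;
}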
There are several applications of facial recognition technology in law enforcement that most would agree are useful. Police in Tampa, Florida have made over 500 arrests after identifying suspects by taking photographs at a traffic stop and comparing the images to a mugshot database. In 2010, the Massachusetts state police obtained over 100 arrest warrants for creating false identities and revoked 1,860 licenses using facial recognition software against the state’s driver’s license registry. In Britain, Scotland Yard is using facial recognition software to identify suspects from the recent riots in London.
Facial recognition can also provide modern convenience. Since 2002, Australians have been able to use self-processing e-passports at airport customs checkpoints. Advertisers have generated more relevant billboard advertisements based on the age and gender of passers-by. Even Facebook uses facial recognition to suggest the identity of friends to tag in a photo, and programs like iPhoto and Picassa allow users to organize photographs by faces.
The technology is not foolproof, and there are other applications that are outright alarming. The ability to successfully identify a person by matching two photographs is dependent on the quality of the images. If the person in the photograph is not directly facing the camera with open eyes and in front of a plain, light-colored background, the performance of the facial recognition software declines. Thus, while you can obtain a high-quality picture from a driver’s license database, pictures taken without the cooperation of the subject (e.g. through surveillance cameras) rarely meet the ideal standard. Although the technology has improved over the last ten years, there is an inherent error rate because it is reliant on statistics. Accordingly, either matches that should be made do not occur or false identifications happen.
A driver in Boston recently had his license revoked because his picture closely matched the picture of another driver. Although his license was returned, it took days of wrangling for him to prove his identity. At least 34 other states are using similar technology. There are no current reported statistics on the number of false positives, but Massachusetts alone issues 1,500 suspension letters per day using the system.
On August 4, 2011, researchers from Carnegie Mellon’s CyLab presented the results of three experiments from which they concluded that it is possible to use facial recognition software to identify strangers and then determine sensitive information about that person, including their Social Security number.
In one experiment, the researchers were able to identify members of Match.com, who used pseudonyms on the dating site to protect their identities, by comparing their profile photograph to photographs on Facebook.
In the second experiment, they took photographs of college students that they were able to successfully match one-third of the time to the student’s Facebook profile (in less than three seconds).
In the third experiment, the researchers used a custom iPhone application to predict a stranger’s Social Security number (generally just the first five digits) by matching a photograph to a Facebook profile picture in conjunction with information about the stranger’s state and year of birth gathered online. The lead researcher, Alessandro Acquisti, said: “A person’s face is the veritable link between their offline and online identities.”
In addition to the obvious privacy concerns, there are security and personal liberty concerns. According to a report, one in 750 passengers scanned at an international airport in the United States is falsely identified, and some of the falsely identified individuals may have been temporarily detained by the FBI. In locations where biometric data like facial recognition is used to gain entry to a secured area or through customs, the failure of those institutions to safeguard that data in a computer system can lead to unauthorized persons gaining access.
Although it is not yet possible to consistently and accurately identify all of the faces in a crowd, the technological limitations are likely to continue to fade. The billions of images tagged on social networking sites and associated data provide an easily accessible source of personal information to match with other offline data collected by data aggregators, which can be turned into detailed personal profiles and sold to companies for use in behavioral advertising targeted directly to you through your smartphone or cable box. It may become possible to search for a person online using an image of their face just as easily as it is now to enter a name in a search engine. On the law enforcement side, the FBI will begin testing its Next Generation Identification facial recognition system in January 2012 in four states. The system, which will also use biometric indicators (e.g. iris scans and voice recordings) to identify suspects, will match a photo of an unknown person against mug shots.
Facial recognition technology has not gone unnoticed by lawmakers and regulators. The FTC is hosting a workshop to explore beneficial uses of the technology and the associated privacy and security concerns on December 8, 2011. And U.S. Senator John Rockefeller has asked the FTC to provide a report on the findings from its workshop to his Commerce Committee.
This article, which was published in the December 2011 CBA Report, is republished with permission. | <urn:uuid:b5425221-66da-4006-90dd-fd5aacf83d1c> | CC-MAIN-2017-04 | https://www.dataprivacymonitor.com/federal-legislation/facial-recognition-the-end-of-privacy-or-a-precursor-for-new-laws/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00138-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947497 | 1,293 | 3.140625 | 3 |
On June 6th of this year, some of the biggest names in tech joined together for World IPv6 Launch Day to herald the transition from IPv4. Now that the fanfare has died down, what does this mean for you? Many think it will mean more headaches than reward with administrators complaining that IPv6 has longer addresses, and users complaining of application delays due to IPv6 to IPv4 fallback.
Let’s take a look at these two potential issues and how to solve them.
Problem: The Addressing Trap
Ask anyone what the benefit of IPv6 is, and most will say more available addresses, which means longer addresses. Initial reaction from many engineers is that these new addresses are too long to remember. This is only partially right; understanding how the IPv6 addresses work, along with some planning, results in addressing that is as simple as current IPv4 addressing.
IPv6 addresses, like IPv4 addresses, are composed of 3 parts:
- The global routing prefix, assigned by the Regional Internet Registry (RIR) like ARIN in the US or RIPE in Europe
- The local subnet, locally administered at the router level, and
- The individual node, administered by either DHCP, self-assigned addresses (SLAAC), or manually (static).
Essentially, everything past the prefix is under local administrative control.
While IPv6 addresses are longer than IPv4 addresses, they include an innovation that will increase their ease of use over IPv4 as much as CIDR notation did over explicit subnet masks. In IPv6, a single run of consecutive all-zero 16-bit blocks can be written as a double colon (‘::’). Therefore, the key to ease of use in IPv6 addressing is to assign addresses that create the longest possible run of zeroes.
Solution: Assign Addresses
Create subnets from the left, and use DHCP to assign node addresses from the right. For subnets, assign the left-most sedectet (16-bit block) first, starting from the lowest number. For nodes, simply configure DHCP to start counting at 2 – assuming that 1 is the default gateway.
For example, with the global prefix 2001:db8::/32, the worst-case scenario would be to start subnetting from the right, and use SLAAC to self-assign nearly random numbers. That would be 2001:db8:0:1:xxxx:xxxx:xxxx:xxxx. Instead, start subnetting from the left (which leaves room for later hierarchical expansion) and use sequential DHCP. The result is an address more like 2001:db8:1:0:0:0:0:2 – which shortens to 2001:db8:1::2 – which is much easier to manage and remember. Administrators will soon start to omit the global routing prefix, much as they do for IPv4, and casually call this node “1::2”.
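A quick way to check how a given plan compresses is to let the operating system format the address. The short C sketch below uses the POSIX inet_pton and inet_ntop calls to print the canonical zero-compressed form; the addresses are documentation-prefix examples based on the plan above.

#include <arpa/inet.h>
#include <stdio.h>

/* Print the zero-compressed form of an IPv6 address. */
static void show(const char *addr)
{
    struct in6_addr raw;
    char out[INET6_ADDRSTRLEN];

    if (inet_pton(AF_INET6, addr, &raw) != 1) {
        printf("%-42s -> invalid\n", addr);
        return;
    }
    inet_ntop(AF_INET6, &raw, out, sizeof(out));
    printf("%-42s -> %s\n", addr, out);
}

int main(void)
{
    /* Sequential DHCP addressing from the right compresses well... */
    show("2001:0db8:0001:0000:0000:0000:0000:0002");   /* prints 2001:db8:1::2 */
    /* ...while SLAAC-style interface IDs leave little to compress. */
    show("2001:0db8:0000:0001:aaaa:bbbb:cccc:dddd");
    return 0;
}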
Problem: Failed Connections with IPv6 To IPv4 Fallback
With the world quickly running out of new IPv4 addresses, the time to incorporate IPv6 is now. When adding IPv6 to your network, it is important to be prepared for problems that may arise.
The most common issue for the next few years will likely be IPv6 to IPv4 fallback. Any node that has IPv6 enabled with a valid address will prefer to use IPv6 over IPv4. However, until IPv6 deployment becomes more widespread and stable, full connectivity may not be possible. Therefore, the node will attempt to connect via IPv6, fail, and typically re-try the connection via IPv4. Depending on the OS and the application, that re-try can be nearly instant, or it can range from 20 to 180 seconds or even more!
Solution: Switch Web Browsers
The fallback problem has been studied, tested, and solved by some web browsers much better than others. On Windows, Internet Explorer still relies on the OS to time out, leading to 20-second fallback delays. The best browsers for a mixed IPv4/IPv6 environment are Chrome and Firefox, which have sub-second failover. On MacOS, Safari has also solved the problem. (On some versions of Firefox, this feature may be turned off by default, so open the about:config page and set "network.http.fast-fallback-to-IPv4" to True.)
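On the application side, the usual defense is to try every address family instead of stalling on one. The C sketch below asks getaddrinfo for both IPv6 and IPv4 candidates and falls back to the next address when a connect fails; a real client should also cap each attempt with a short timeout, or race the families in parallel the way the fast-fallback browsers do. The host name is a placeholder.

#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try each address returned by the resolver, IPv6 before IPv4 if the
   system prefers it, and fall back to the next candidate on failure. */
static int connect_any(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;         /* both IPv6 and IPv4 candidates */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                       /* connected */
        close(fd);                       /* this address failed; try the next */
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

int main(void)
{
    int fd = connect_any("www.example.com", "80");   /* placeholder host */
    printf(fd >= 0 ? "connected\n" : "all addresses failed\n");
    if (fd >= 0)
        close(fd);
    return 0;
}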
Problem: Fragmented Packet Blackholes
One feature included in IPv6 to make the network run faster is to do fragmenting on the sender side, rather than on routers along the packet path. If one of the network links between the client and server can’t forward large packets, in IPv4 it breaks the packets into fragments, but in IPv6 it simply sends an ICMPv6 “Packet Too Big” message back to the sender to retransmit in smaller packet sizes. While most transit on the modern Internet allows Ethernet-sized frames, there are still some ways to shrink the effective MTU, like an IPSec VPN. IPv6 nodes need to be able to receive the “Packet Too Big” messages, otherwise the large packets they send along that path to that destination will simply disappear, as with a blackhole route.
The problem is that, in recent years, it has become common practice to block ICMP on the firewall to prevent shenanigans like route hijacking, source squelching, or destination black-holing. While these issues will not go away with IPv6, IPv6 relies on the ICMPv6 “Packet Too Big” message to be returned to the sender for automatic Path MTU Discovery. Blocking those messages can lead to the frustrating situation where small packets can go through, but big packets can’t. This means that ping will likely work, and that the TCP 3-way handshake will likely work to open a socket and establish the connection. However, data won’t flow. A network administrator not aware of this issue might incorrectly deduce that the problem is therefore in the application, or on the server, leading to longer resolution time.
Solution: Know what you’re blocking
Firewalling some ICMPv6 messages may still be a good idea from a security perspective, but “Packet Too Big” is mandatory for a healthy network.
Problem: IPv6 in Unexpected Places
I personally ran into an issue with IPv6 in a hotel room. Given my previous experiences with hotel Wi-Fi, I wasn’t surprised that I couldn’t get online, but I was astonished when OmniPeek showed me that my laptop was trying to send and receive IPv6. Normally, the captive portal at a hotel or coffee shop will intercept your web connection, and send a HTTP redirect to show you the registration screen. However, on this particular network, IPv6 was enabled on the router but not supported by the captive portal. Therefore, my web requests were sent by my laptop via IPv6, giving me partial connectivity, but all IPv4 connections were captured by the captive portal. The result was that I couldn’t use the web because the portal kept interfering, but I couldn’t get to the portal to authenticate because it wouldn’t capture IPv6. Once I finally used our OmniPeek network analyzer to see what was happening, I turned off IPv6 and everything worked just fine.
Public Wi-Fi hotspots, like hotel networks, are unlikely yet to have IPv6, which makes them vulnerable to a security attack using fake IPv6 Router Advertisements. If users can see each other’s packets, an attacker can enable an IPv6-to-IPv4 gateway on his/ her computer, and use ICMPv6 Router Advertisements to convince the other computers that it is the local default gateway for IPv6. Given the preference that PCs have for IPv6 over IPv4, any IPv6-enabled PCs will happily start sending traffic through that fake router, giving the attacker the ability to intercept information, redirect traffic to malicious websites, and possibly even pretend to be the actual user.
Solution: Don’t Use IPv6 in Public
While I want to recommend that you use IPv6 wherever possible, the sad truth is that most locations aren’t using it, and it can cause connection and/or security problems if it’s not set up right. For now, only use IPv6 at home, at work, or on networks you trust. Once enough of us have IPv6 set up on the client side, there will hopefully be enough demand to make its use mainstream. Until then, use it sparingly and decrease your risk.
As IPv4 addresses continue to dwindle, adding more IPv6 addresses to your network is essential for preparing for the time when adding IPv4 addresses is no longer an option. Because the most common IPv6 problem today stems from incomplete deployment, turning IPv6 on now will help advance the total IPv6 traffic levels, leading to more consistent deployment and greatly reduced problems.
Addressing and IPv4 fallback are two common problems, but there are plenty more, like neighbor discovery and message control. What are you seeing as your organization deploys IPv6? Got any tips on how to avoid unnecessary headaches?
Author Profile - Jim MacLeod is a Product Manager at WildPackets. He has been in the networking industry since 1994, and started doing protocol analysis in 1996. His experience includes positions in firewall and VPN setup and policy analysis, log management, Internet filtering, anti-spam, intrusion detection, network monitoring and control, and of course packet sniffing. | <urn:uuid:c3cddf22-107e-4ba3-ae55-1e97acc9e524> | CC-MAIN-2017-04 | http://www.lovemytool.com/blog/2012/08/ipv6-means-more-interim-headaches-by-jim-macleod.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00560-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927547 | 2,000 | 2.53125 | 3 |
Intro to Political Geography
Gerrymandering (Internal Borders) Found within states The intentional manipulation of borders to benefit one political group or organization Based on voting records, race, and anticipated future voting Using the two blank maps provided you will complete each map as noted. Each district must contain five (5) voters each Map 1 – Majority Republican Map 2 – Majority Democrat Follow Up Questions What impact does Gerrymandering have on legislative districts? Is this fair to voters? Why or why not? What is a realistic solution to Gerrymandering? Gerrymandering Exercise Part 1 - Answer the following questions once your group has been formed: Choose what your nation will be Industrial – needs coal, iron ore, timber Military – gold, population to recruit, areas for bases Trade – anything that can be traded of value Part 2 – Build your nation, Name, Flag, Slogan – should represent the idea from above Part 3 – Chose up to 6 territories that you want to claim for your nation Part 4 – Turn in maps Borders and Boundaries Game Wallerstine’s World-System’s Theory Assumes that all nations are a part of a world economic system void of independent economies and based on capitalism Create a world-economy of control and dependency Core: High wealth, political and economic control Semi-Periphery: economically strong and growing, influence over the periphery but follows the core Periphery: Dependent and responsive economically and politically to core and, to an extent, semi-periphery nations Three Tier Structure Core Processes that incorporate higher levels of education, higher salaries, and more technology * Generate more wealth in the world economy Semi-periphery Places where core and periphery processes are both occurring. Places that are exploited by the core but then exploit the periphery. 
* Serves as a buffer between core and periphery Periphery Processes that incorporate lower levels of education, lower salaries, and less technology * Generate less wealth in the world economy * * * Core=US, Canada, Western Europe, Australia and Japan Semi-Periphery-Mexico, Venezuela, Argentina, Uraguay, Brazil, South Africa, Russia and Eastern Europe, Turkey, Saudi Arabia, India and China-exploited by Core and in turn exploit the Periphery Periphery=rest of Africa, rest of South American and Central America, Central and Asia and most of Middle East-exploited by everyone * Landlocked Non-Landlocked A State that borders on any type of water system (river, ocean, lake) that allows them access to large seas or the oceans A State that has no access to water and must go through another State in order to reach a port UNCLOS United Nations Law of the Sea which determines territorial vs international waters Territorial Waters Up to 12 nautical miles out from the coast Contiguous Zone Up to 12 nautical miles from the edge of territorial waters Exclusive Economic Zones Up to 200 nautical miles, states determine what economic activity may or may not take place International Waters Water outside of a specific nations control which is shared by all nations without limitations A border above the basic surface of the earth and extending into the upper atmosphere Territorial Airspace Airspace up to 12 nautical miles from the designated coast line that is exclusively controlled by a specific nation with a vertical ceiling that is undefined (depends on the county and ranges from 19-99 miles) International Airspace Airspace outside of a specific nations control which is shared by all nations without limitations Air Space Sovereignty - Complete control over a territory’s political & military affairs Territoriality – The attempt by an individual or group to affect, influence, or control people, phenomena, and relationships, by delimiting and asserting control over a geographic area Territorial Integrity – A government has the right to keep the borders and territory of a state intact and free from attack Statehood Vocabulary A separate entity composed of three or more States that forge an association and form an administrative structure for mutual benefit in pursuit of shared goals. An alliance system that binds political decisions and economies together. 
History: Created Post-WWII to rebuild Europe economically, Counter the Soviet threat, and create a place to hold a dialogue to settle disputes Modern Threats: Donald Trump – NATO, NAFTA, TPP Brexit – EU Supranationalism Economic An agreement between States that offers all parties economic opportunity through trade and commerce Examples OPEC – Organization of Petroleum Exporting Countries – controls oil prices by controlling production WTO – World Trade Organization – set trade rules for member nations NAFTA – North Atlantic Free Trade Agreement – removes tariffs (import taxes) in North America (Mexico, United States and Canada) on agreed upon products Military An agreement whereby two or more States pledge to aid each other if attacked or to share military material and technology Examples NATO – North Atlantic Treaty Organization – member nations pledge to protect each other in case of attack (counters the Warsaw Pact which did the same for Communist Eastern Europe) UN – United Nations – Can authorize the use of member nations military forces to act as peace keepers or an aggressive force AU – African Union – Same as UN, except only in Africa Political An agreement whereby two or more States work together for economic and military benefits – a pledge to help each other Examples UN – United Nations – A world forum to discuss global issues AL – Arab League (also has military powers like the UN and AU) – North African and Middle Eastern states working to promote growth and stability G20 – Global 20 – The 20 largest economies discussing political and economic issues Genetic Boundary Classification Antecedent - physical landscape defined the boundary without any human modification Ex: Mongolia and China (Desert) Subsequent – A boundary that has undergone a regular modification process Ex: China and Vietnam Superimposed-forcibly drawn boundary that cuts across a unified cultural boundary Ex: Kurds and the modern Middle East Relict boundary – A boundary no longer serves a purpose but still affects the lives of people living there Ex: East-West Germany Movement of power from the central government to regional governments within the state. Conflicts within a State (centrifugal forces) cause friction among the population which leads to the break in unity of the State’s government and potentially to the breaking up of the State itself Examples Scotland-England-Wales-Northern Ireland: Autonomy within the United Kingdom Italy – Sardinia has some economic independence USSR – post 1991, breaks into 13 separate States Devolution Examples of Devolution - Economic Brazil – Southern Brazil attempted (unsuccessfully) to break away from the north. Wealth in the south supports poverty in the north United States – Northern manufacturing and Southern agriculture lead to conflicts on wealth distribution and slavery European Union – Great Britain refuses to use the Euro. They maintain their own currency and economic independence and have currently voted to leave the EU Israel-Palestine Cause: Superimposed Borders Issues: As per British mandate in 1948, a territory is created for Jewish settlement. Palestinian (Muslim) settlement already exist leading to continuous conflict between these two sides still today Ukraine and Russia Cause: Cultural Devolution Issues: Protests in Ukraine removed the democratically elected pro-Ruissian president and replaced him with a pro-western president. 
Crimea – invaded and absorbed by Russia, Eastern Ukraine – war between Russian separatists and Ukrainians Basque Region in Spain Cause: Cultural Devolution Issues: Basque separatists have fought (physically and politically) to separate from Spain. Basque’s speak their own language and have a culture separate from Spain proper Conflicts within States United Nations Peace Keepers – Soldiers under the control of the United Nations move into an area to act as local police and prevent future conflicts Example: Golan Heights in Syria Demilitarized Zone (DMZ) – An area that separates two groups (usually States) where no military personnel or weaponry is allowed to be placed thus separating the militaries Example: North and South Korea No Fly Zone – Air space restriction for military aircraft to prevent the use of them against groups within a nation Example – Iraq post-1991 UN attack Responses to Devolution Reading Quiz #1 Nov 21st/22nd – Ch. 8, Sec 1-2, pg. 261-275 Reading Quiz #2 Nov 30th/Dec 1st – Ch. 8, Sec 3-4, pg. 276-295 Unit Essay Exam and Map Quiz December 12th (Odd) – December 13th (Even) Multiple Choice/End of Unit December 14th (Odd) – December 15th (Even) Final Review December 19th (Even) – December 21st (Odd) Final Exam December 20th (Even) – December 22nd (Odd) Unit 4 and End of Semester Calendar A State is a politically organized territory that is run by an independent government and is recognized by a large portion of the world Ex: The United States, Mexico, Russia, Syria State A political entity within a State. A way of dividing up a State into smaller sections. Ex: Nebraska and Iowa in the United States or Quebec in Canada State/Province The state has a governor who works in unison with the state’s legislature to pass laws within the framework of the federal and state’s constitution The negotiated treaty between the two states will allow for joint military exercises to help the two states be prepared to counter any threats, both internal and external, in the region state vs. state (Its all in the context) Nation - Refers to a group of people with similar cultural traits whose boundaries may or may not follow political lines Nation-State – A major portion of the state’s population share a common culture Multinational State –A state that contains two or more ethnic groups with traditions of self determination that agree to coexist peacefully by recognizing each other as distinct nationalities Multiethnic State – A state containing multiple ethnic groups Stateless Nation – A state with a nation of people with no political or cultural control Political - A separation based on a negotiated settlement between two different States or states Ex: Manmade lines such as latitude and longitude. Straight lines don’t exist in nature Physical – A boundary based on a geographical barrier Ex: Lake, river, ocean, mountain range Cultural – A boundary based on cultural grouping Ex: Mongolia and China, Kosovo and Serbia Compact A small, condensed shape where no single point is far from the center of the nation Ex: Poland, Belgium Fragmented A nation that is broken into multiple pieces and may or may not be spread out over distances. 
Most nations are separated because of water Ex: Denmark, Indonesia Elongated A nation whose territory is significantly longer than it is wide Ex: Vietnam, Chile Perforated A nation whose territory contains another territory that it completely surrounds Ex: South Africa – Lesotho, Italy – San Marino and Vatican City Protruded A nation where a portion of its territory extends in an elongated fashion from the main territory. Sometimes referred to as a panhandle Ex: Thailand, Oklahoma, Nebraska Enclave Exclave A country or part of a country that is surrounded by another country A territory legally or politically attached to a territory with which it is not touching | <urn:uuid:311d75eb-a6b5-48bd-86f5-11ee0b87f682> | CC-MAIN-2017-04 | https://docs.com/anthony-razor/7164/intro-to-political-geography | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00376-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916731 | 2,494 | 2.859375 | 3 |
Running a Task Outside AutoMate (Command-line Operation)
Occasionally users may have a need to run a task created in AutoMate from outside of AutoMate. Usually the user wishes to run the task from one of the following:
- A batch file
- An external program
- The command line
- A desktop icon
The files AMTask.exe and AMTaskCm.exe (collectively AMTask) exist for this purpose and can be found in the AutoMate folder which is installed (by default) at "C:\Program Files\AutoMate 6\".
How to use AMTask
To use AMTask.exe or AMTaskCm.exe, enter the full installation path of the executable enclosed in quotation marks, followed by the path of the .AML file associated with the task, also enclosed in quotation marks (see the syntax examples below). Once started, AMTask does not end until the task specified on the command line has finished processing.
Command Line Options
AMTask accepts several command-line parameters to control its operation:
The filename of the task to run. If the task file name includes spaces it must be surrounded in quotes or improper operation will result. The first parameter must be "taskname".
Specifies variable(s)/value(s) to pass to the task. Format is semi-colon delimited name=value pairs. For example: /v:varname1=value1;varname2=value2;varname3=value3.
- /password <password>
If the task is password protected, use <password> to decrypt the task before execution.
Prompt for the password if the task is password protected and the /password parameter has not been specified. NOTE: This parameter is only available in AMTask.exe
Display a message box (or output to standard output) the usage and syntax help.
Since the variable list is semi-colon delimited, semi-colons are not allowed in the variable name or value to pass. This can be worked around by replacing a semi-colon with another special character before passing it to AMTask and configuring the task to replace it back to a semi-colon at run time using an embedded expression in the task. For example, if an exclamation point were used as a replacement character for a semi-colon, a Set Variable action at the beginning of your task using the expression Replace$(var1, "!", ";") as the new variable data would convert the exclamation points back to semi-colons.
The difference between AMTask.exe and AMTaskCm.exe
The two files AMTask.exe and AMTaskCm.exe work exactly the same with the exception of one characteristic. AMTask.exe is a pure Windows application and is designed to run a task and return to a Windows application when the task specified has been completed, whereas AMTaskCm.exe is a "console application" and is designed to be run from a command prompt or batch file. Additionally, AMTaskCm.exe emits proper return codes: 0 for task success, 1 for task failure and 2 if the task stops.
Why two files?
True Windows applications return immediately when run from the command prompt, regardless of when they actually finish running; thus, using AMTask.exe alone, one would not be able to determine when the launched task finished or retrieve a return code to determine its success or failure. To avoid this behavior, use AMTaskCm.exe, which is designed for use in a console (command-line or batch-file) environment.
Why not always use AMTaskCm.exe then?
When AMTaskCm.exe (a console application) is invoked from a true Windows application (not from the command prompt) it causes a command prompt (AKA DOS box) to appear if one was not already open. This is not visually appealing and can confuse users. The rule to remember is:
- Use AMTask.exe when launching from a Windows application, macro/script, or windows itself.
- Use AMTaskCm.exe when launching from the command prompt, a DOS based application or most importantly, a batch file.
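For example, a batch file can branch on the return codes described above using standard ERRORLEVEL checks. The task path below reuses the sample task from the syntax examples that follow; the echo messages are placeholders for whatever handling your batch file needs.

"C:\Program Files\AutoMate 6\AMTaskCm.exe" "C:\Documents and Settings\All Users\Documents\My AutoMate Tasks\check email.aml"
if errorlevel 2 echo Task was stopped
if errorlevel 1 if not errorlevel 2 echo Task failed
if not errorlevel 1 echo Task succeeded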
The following are syntax examples that can be used when running a task outside of the Task Administrator.
"C:\Program Files\AutoMate 6\AMTask.exe" /?
Run a task
"C:\Program Files\AutoMate 6\AMTask.exe" "C:\Documents and Settings\All Users\Documents\My AutoMate Tasks\check email.aml"
Run a task and pass variables
"C:\Program Files\AutoMate 6\AMTask.exe" "C:\Documents and Settings\All Users \Documents\My AutoMate Tasks\check email.aml" /v:MyName=MrJones;Phone=213-738-1700 | <urn:uuid:e6f19c5a-c6ed-49bb-a72a-8c25dc0dcd20> | CC-MAIN-2017-04 | http://www.networkautomation.com/urc/knowledgebase/running-a-task-outside-automate-command-line-operation/7E2FECDF-A644-B1BE-0E0E9520C83F05AC/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00008-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.799138 | 1,042 | 2.53125 | 3 |